I respect Armin's opinions on the state-of-the-art in programming a lot. I'm wondering if he finds that "vibe coding" (or vibe engineering) is particularly pleasant and effective in Rust compared to, say, Python.
I bet it would probably be even nicer. I've been programming DSP in C++ with JUCE. My C++ is very rusty from years ago, but it's getting me through a lot of it, and I feel pretty comfortable. Maybe my ignorance is bliss, and I'm really just putting out bad shit.
I've loved and used Django ORM and SQLAlchemy for many years. It got me a long way in my career. But at this point I've sworn-off using query-builders and ORMs. I just write real, hand-crafted SQL now. These "any db" abstractions just make for the worst query patterns. They're easy and map nicely to your application language, but they're really terrible unless you want to put in the effort to meta-program SQL using whatever constructs the builder library offers you. CTEs? Windows? Correlated subqueries? It's a lot. And they're always lazy, so you never really know when the N+1s are going to happen.
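The lazy-loading N+1 pattern described above is easy to demonstrate; a minimal sketch in Python with sqlite3 (schema and data invented purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'third');
""")

# The N+1 shape a lazy ORM tends to emit: one query for the parent rows,
# then one extra query per row for the related records.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, name in authors:
    posts = conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,)
    ).fetchall()  # one round trip per author

# The hand-written alternative: a single JOIN, one round trip total.
rows = conn.execute("""
    SELECT a.name, p.title
    FROM authors a JOIN posts p ON p.author_id = a.id
    ORDER BY a.id, p.id
""").fetchall()
print(rows)
```

On a local in-memory database both versions are instant; against a real server over a network, the per-row round trips are exactly where the 2+ second page loads come from.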
Just write SQL. I figured this out when I realized that my application was written in Rust, but really it was a Postgres application. I use PG-specific features extensively. My data, and database, are the core of everything that my application does, or will ever do. Why am I caring about some convenient abstractions to make it easier to work with in Rust, or Python, or whatever?
Anytime this topic comes up, this opinion is invariably at the top of the comments. However, I've never seen a non-trivial application made this way. Mind sharing one? More than the query generation, I think people reach for ORMs for static typing, mapping, migrations, transactions, etc.
I'm not doubting that it can be done, I'm just curious to see how it's done.
I formerly worked for a travel company. It was the best codebase I've ever inherited, but even so there were SELECT N+1s everywhere and page loads of 2+ seconds were common. I gradually migrated most of the customer-facing pages to use hand-written SQL and Dapper, getting most page loads below 0.5 seconds.
The resulting codebase was about 50kloc of C# and 10kloc of SQL, plus some cshtml and javascript of course. Sounds small, but it did a lot -- it contained a small CMS, a small CRM, a booking management system that paid commissions to travel agents and payments to tour operators in their local currencies, plus all sorts of other business logic that accumulates in 15+ years of operation. But because it was a monolith, it was simple and a pleasure to maintain.
That said, SQL is an objectively terrible language. It just so happens that it's typically the least of all the available evils.
YouTube is one from my experience. The team there had a pretty strong anti-orm stance. DB performance was an existential necessity during the early scaling. The object fetching and writing tended to be focused through a small number of function calls with well scrutinized queries and write through memcaching.
Anytime this topic comes up, I ask: Why not both? I don't want to modify my SQL strings every time I change a column. Django ORM lets me combine custom SQL snippets with ORM code. I never hesitate to use custom SQL, but it's just not a reasonable default for basic CRUD operations that my IDE can autocomplete. Not only that, but ORMs also provide nice features like named arguments, walking relationships, sanitization, etc. At the same time, I can do UNIONs, CTEs, anything I want. I just don't understand why it's worth arguing against ORMs, when no one is forcing you to stop using raw SQL.
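For what it's worth, the named arguments and sanitization you get from the ORM are also available from a plain DB-API driver; a small sketch with Python's sqlite3 (table and data invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO users (name, age) VALUES (:name, :age)",
    [{"name": "Ann", "age": 52}, {"name": "Bob", "age": 31}],
)

# Named placeholders: the driver escapes the values, so user input can't
# inject SQL, and the call site reads almost like the ORM version would.
rows = conn.execute(
    "SELECT name FROM users WHERE age > :min_age ORDER BY name",
    {"min_age": 45},
).fetchall()
print(rows)
```

What the driver can't give you is the column autocomplete and relationship walking, which is exactly the part the ORM earns its keep on.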
I completely agree, it is absolutely essential to understand what SQL is emitted, and how SQL works. Perhaps the strawman argument against ORMs is that they preclude you from knowing SQL. They don't.
The company I work for is one such example. We write inline SQL in a Python Flask+Celery app which processes >$3bn of salaries a month. The stated goal from the CTO, who was an early engineer, is simplicity.
In addition to the great replies folks are sharing, I've found LLMs are quite good at authoring non-trivial SQL. Have effectively been using these to implement + learn so much about Postgres.
Many great SQL examples have long existed on stackoverflow and similar sources, but until the recent past were buried by lower quality questions and answers or SEO spam.
You will find that if you check sources they are lifted almost verbatim. LLMs are a way to cut through the noise, but they are rarely "authoring" anything here.
It's wild how far a little marketing can go to sell the same or an arguably worse product that used to be free and less unethical.
I worked for a publicly traded corporate elearning company that was written this way. Mainly sprocs with a light mapping framework. I agree this is better as long as you keep the sprocs for accessing data and not for implementing application logic.
ORMs are way more trouble than they’re worth because it’s almost easier to write the actual SQL and just map the resulting table result.
My current company is built like this, and it’s great. I can’t think of a single production bug that’s come from it, which was my main concern with the approach. It’s really, really nice to be able to see the SQL directly rather than having to reason about some layer of indirection in addition to reasoning about the query you’re actually trying to build.
I've worked on a few, nothing I can share. I don't mind using a data mapper like Dapper in C# that will give you concrete types to work against with queries. Easy enough with data types for parameterized inputs as well.
Every single time. Where are these developers? ORMs are a godsend 98% of the time. Sure, write some SQL from time to time, but the majority of the time just use the ORM.
We have a POS system where the entire business logic is Postgres functions.
There are many others as well. Sure, Rails/Laravel/Django people use the ORM supplied by their framework, but many of us feel it's unnecessary and limiting.
Limiting because, for example, many of them don't support CTE queries (Rails only added them a couple of years ago). Plus it gets weird when you sometimes have to use sql.raw because your ORM can't express what you want.
Also, transactions are way faster when done in a SQL function than in code. I have also seen people do silly things like call startTransaction in code and then do a network request, resulting in a table lock for the duration of that call.
Some people complain that writing postgres functions make testing harder, but with pglite it's a non issue.
As an aside I have seen people in finance/healthcare rely on authorization provided by their db, and just give access to only particular tables/functions to a sql role owned by a specific team.
I worked at a company where we used Dapper with plain SQL. Like the sibling commenter said, simplicity. There were never [ORM] issues to debug and queries could easily be inspected.
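Dapper's core trick, mapping rows onto typed objects by column name, is small enough to sketch in any language; an illustrative Python version using dataclasses (the `Booking` type and `query_as` helper are made up for the example, not part of any library):

```python
import sqlite3
from dataclasses import dataclass, fields

@dataclass
class Booking:
    id: int
    customer: str
    total: float

def query_as(conn, cls, sql, params=()):
    """Run a query and map each row onto the dataclass by column name."""
    cur = conn.execute(sql, params)
    cols = [d[0] for d in cur.description]
    names = {f.name for f in fields(cls)}
    return [cls(**{c: v for c, v in zip(cols, row) if c in names})
            for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER, customer TEXT, total REAL)")
conn.execute("INSERT INTO bookings VALUES (1, 'Ann', 99.5)")

bookings = query_as(conn, Booking, "SELECT id, customer, total FROM bookings")
print(bookings)
```

The SQL stays fully inspectable; the only "magic" is a one-function mapping layer.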
I love SQL and use it all day long to answer various business questions, but I would never use raw SQL in my code unless there is a good reason for it (sometimes there is). ORMs are there for maintainability, composability, type safety, migrations, etc. Trying to do all that with raw SQL strings doesn't scale in a large code base. You need something that IDE tools can understand to allow things like 'find all references', 'rename instances', and compile-time type checks. Raw SQL strings can't get you that. And managing thousands of raw SQL strings in a code base is not sustainable.
ORMs are one of those things that a lot of people think are a replacement for knowing SQL, or that they're used as a crutch. That has nothing to do with it. Very similar to how people here talked about TypeScript 10 years ago in a very dismissive way, not really understanding its purpose. Most people haven't used something like Entity Framework either, which is a game-changing ORM. Massive productivity boost, and LINQ rivals SQL itself in that you can write very small yet powerful queries equivalent to much more complex SQL.
SQL is such a joy to work with compared to all the baggage ORMs bring. I’m not against ORMs but I like to keep them as thin as possible (mostly to map columns to data objects). I’ve been happily using JDBC and Spring Data JDBC (when I needed to use Repository pattern) for a long time in Java.
I'm curious if you have tried SeaORM? I've used it a little bit (not too extensively) and really like it. It's like sqlalchemy in that you can declare your tables and have a type checked query builder, which is a big win IMO. It's nice to add/change a field and have the compiler tell you everywhere you need to fix things.
I've definitely had issues when using sqlalchemy where some REST API type returns an ORM object that ends up performing many queries to pull in a bunch of unnecessary data. I think this is harder to do accidentally with SeaORM because the Rust type system makes hiding queries and connections harder.
Most of my usage of SeaORM has been as a type query builder, which is really what I want from an ORM. I don't want to have to deal with lining my "?" or "$1" binds or manually manipulate strings to build a query. IMO a good query builder moves the experience closer to writing actual SQL, without a query builder I find myself writing "scripts" to write SQL.
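The bind bookkeeping the comment above describes is the part worth automating; a toy sketch (not SeaORM, and with no type checking, just an illustration of the idea) that builds the WHERE clause and the parameter list together so they can never drift apart:

```python
def build_select(table, columns, filters):
    """Build a parameterized SELECT so the caller never counts '?' binds.
    `filters` maps column -> value; identifiers are assumed trusted
    (they come from code, not from user input)."""
    where = " AND ".join(f"{col} = ?" for col in filters)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql, list(filters.values())

sql, params = build_select("users", ["id", "name"], {"age": 45, "city": "Paris"})
print(sql)     # SELECT id, name FROM users WHERE age = ? AND city = ?
print(params)  # [45, 'Paris']
```

A real query builder layers static types on top of this, which is where the "compiler tells you everywhere to fix things" win comes from.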
You may not need to use an ORM, but hand writing SQL, especially CRUD, should be a terminable offense. You _cannot_ write it better than a process that generates it.
ORMs come with a lot of baggage that I prefer to avoid, but it probably depends on the domain. Take an e-commerce store with faceted search. You're pretty much going to write your own query builder if you don't use one off the shelf, seems like.
I once boasted about avoiding ORMs until an experienced developer helped me see that 100% hand-rolled SQL and custom query builders is just you writing your own ORM by hand.
Since then I've embraced ORMs for CRUD. I still double-check its output, and I'm not afraid to bypass it when needed.
Not really. ORMs have defining characteristics that hand-rolled SQL with mapping code does not. Eg, something like `Users.all.where(age > 45)` creates queries from classes and method calls, while hand-rolled SQL queries are...well...hand-written.
I've been using django & duckdb together, which keeps me from using the ORM. Was this a happy accident for me? For background, I come from a science background; I don't have as much experience with software and designing database apps.
Dapper is fantastic, and I'm happy to see it getting some love. It does exactly what I want: provides strongly-typed mapping and protects against SQL injection. It makes it easy to create domain-specific repositories without leaking anything.
In contrast, every company I've joined that used Entity Framework had enterprise products that ended up being a tightly coupled mess from IQueryable<T> being passed around like the world's favourite shotgun.
The cargo-cult shibboleth of "never put business logic in your database" certainly didn't help, since a lot of developers just turned that into "never use stored procedures or views, your database is a dumb store with indexes."
A lot of people probably think it's better to keep the database "easy to swap". Which is silly: it's MUCH easier to change your application layer than your database.
There's value in not having to hunt in several places for business logic, having it all in one language, etc. I was ambivalent on the topic until I encountered a 12-page query that contained a naive implementation of the knapsack problem. As with most things, dogma comes with a whole host of issues, but in this case I think it's largely benign and likely did more good than harm.
But that is the result of having multiple applications needing to enforce valid states in the database.
"Business logic" is a loose term. The database is the effective store for state so it must enforce states, eg by views, triggers, and procedures.
Other "business logic" can happen outside of the db in different languages. When individual apps need to enforce valid states, then complexity, code, etc grows exponentially.
genuinely curious, can you steelman stored procedures? views make intuitive sense to me, but stored procedures, much like meta-programming, need to be used sparingly IMO.
At my new company, the unchecked use of stored procedures has really hurt part of the company's ability to build new features, so I'm surprised to see what seems like sound advice, "don't use stored procedures", called out as a cargo cult.
My hunch is that the problems with stored procedures actually come down to version control, change management and automated tests.
If you don't have a good way to keep stored procedures in version control, test them and have them applied consistently across different environments (dev, staging, production) you quickly find yourself in a situation where only the high priests of the database know how anything works, and making changes is painful.
Once you have that stuff in git, with the ability to run automated tests and robust scripting to apply changes to all of your environments (I still think Django's migration system is the gold standard for this, though I've not seen that specifically used with stored procedures myself) their drawbacks are a lot less notable.
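A low-tech way to get database-side definitions into that workflow is to treat each one as a migration file applied exactly once, in order, from version control. A minimal sketch (the function names and file layout are invented; SQLite stands in here, and since it has no stored procedures a view takes their place, whereas in Postgres the migration body would be `CREATE OR REPLACE FUNCTION ...`):

```python
import sqlite3

def apply_migrations(conn, migrations):
    """Apply (name, sql) pairs exactly once, recording what ran, so dev,
    staging, and production all converge on the same database-side code."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {r[0] for r in conn.execute("SELECT name FROM schema_migrations")}
    for name, sql in migrations:
        if name not in applied:
            conn.executescript(sql)
            conn.execute("INSERT INTO schema_migrations VALUES (?)", (name,))

conn = sqlite3.connect(":memory:")
migrations = [
    ("0001_tables", "CREATE TABLE orders (id INTEGER, total REAL);"),
    # In Postgres this entry would hold a stored function definition.
    ("0002_totals_view",
     "CREATE VIEW order_totals AS SELECT SUM(total) AS t FROM orders;"),
]
apply_migrations(conn, migrations)
apply_migrations(conn, migrations)  # re-running is a no-op
```

Once the migration files live in git, testing a procedure is just "apply migrations to a throwaway database, call it, assert on the result".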
You give no reasons why you think it's sound advice.
My experience is the following:
1) Transactions are faster when executed in a SQL function, since you cut down on network roundtrips between statements. It also prevents users from doing fancy shenanigans with the network after calling startTransaction.
2) It keeps your business logic separated from your other code that does caching/authorization/etc.
3) Some people say it's hard to test sql functions, but since pglite it's a non issue IMO.
4) Logging is a little worse, but `raise notice` is your friend.
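Point 1 is easy to see from the client side: each statement inside an open transaction is a separate round trip, and anything slow between statements holds locks for the whole duration. An illustration in Python (the mid-transaction network call is simulated with a sleep; SQLite stands in for a real server):

```python
import sqlite3
import time

# autocommit mode so we can manage the transaction explicitly
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")

# Anti-pattern: open a transaction, then do slow work between statements.
# Against a real server, the first row stays locked for the whole call.
conn.execute("BEGIN")
conn.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
time.sleep(0.05)  # stand-in for a network request mid-transaction
conn.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
conn.execute("COMMIT")

# Better: ship the whole transfer as one batch (or, in Postgres, one
# server-side function call), so locks are held only briefly.
conn.executescript("""
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    UPDATE accounts SET balance = balance + 10 WHERE id = 2;
""")
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
```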
> At my new company, the use of stored procedures unchecked has really hurt part of the companies ability to build new features
Isn't it just because most engineers aren't as well versed in SQL as they are in other programming languages?
It’s about what you want to tie to which system. Let’s say you keep some data in memory in your backend, would you forbid engineers from putting code there too, and force it a layer out to the front end - or make up a new layer in between the front end and this backend just because some blogs tell you to?
If not, why would you then avoid putting code alongside your data at the database layer?
There are definitely valid reasons to not do it for some cases, but as a blanket statement it feels odd.
Stored procedures can do things like smooth over transitions by having a query not actually know or care about an underlying structure. They can cut down on duplication or round trips to the database. They can also be a nightmare like most cases where logic lives in the wrong place.
I think people would sympathize more if it was something like "Apple makes choosing a different default browser or email client unnecessarily cumbersome" --
instead of "Apple makes you double-opt-in to sharing your private data with even more advertisers"
But that's not the story here. I hate ads as much as anyone, but this action is a matter of market competition, not privacy. They're completely different fights and intelligent people ought to be able to distinguish between the two. Anti-competitive behavior by Google, Apple, Meta, etc. is what got us into this mess with tracking and privacy violations in the first place.
It's the market for privacy violations. I'd go so far as to say that improving competitiveness in this market probably makes the world worse, by making privacy violations more profitable. If they had fined them for not allowing sideloading, or not allowing third-party payments, it would be a different story. Those are markets I want to see grow and thrive.
They received a complaint, they investigated and issued a fine. You're asking them to selectively enforce laws based on their subjective opinion of some industry, which would be highly illegal.
The entire advertising industry needs to die and I'll support every fight in pursuit of that goal, but this isn't about that. You don't dismantle an industry by picking a winner and letting them get away with crime.
And yes, there needs to be an EU-wide action over all of those other issues you mentioned too but that has nothing to do with this particular case.
Can you do me a favor and familiarize yourself with the executive summary document instead of just replying "nuh uh" out of ignorance? See paragraphs 5, 10, and 12 in particular.
They broke competition law. The fact that did so in the advertising industry as opposed to any other is irrelevant to this case.
When Apple introduced these changes, rates for Apple Search Ads tripled.
Because Apple Search Ads are offered by the same company that sold you the device, they are legally not a “third party” service. Apple still tracks your installs, your revenue, your retention period, etc, and uses it for Apple Search Ads. Developers can see these metrics for their own apps.
EU privacy regulations and the GDPR are a complete farce. You'll notice that the EU's own government websites are littered with cookie banners. They want the data just as bad as everybody else.
The goal was not in any way to protect privacy, but rather to extract rent from American tech companies.
Big cookie banner. Wait. What's that. It's not a modal? And a big "Accept only essential cookies" button with the same visual weight as the "Accept all cookies" button? Surely everybody does it this way because it's literally what EU law requires - surely nobody would try to trick people into clicking "accept all" by hiding the alternative behind multiple layers of opaque options and checkboxes.
Technical cookies... functional cookies... boring - most of these are just for handling logins and preferences. Ooh, analytics! But what's Europa Analytics? Let's check: https://european-union.europa.eu/europa-analytics_en
Oh, they are not only opt-in, they even respect DNT headers. And they're masking the IP addresses before processing them further. Damn, they must really want that data just as bad as "everybody else".
I get what you're saying, but OS vendors could prevent themselves from running arbitrary code, even from themselves, without the user's authorization if they really wanted to. I'm not sure it is in anyone's best interest since it would affect everything from security updates to automatically installing device drivers (e.g. people would be left with insecure systems or would claim Windows is broken since most would not understand the prompts). It would also be difficult to prevent Microsoft's marketing department from sneaking a trojan horse into things like security update.
The average user is not able to understand the code that is running and the 99th percentile user does not want to spend the time to understand the code.
Make it do the security stuff out-of-the-box, allow the user to change ANYTHING they want, including turning off the security stuff. Linux! It's in everyone's best interest.
I mean... how is this different from any OS distribution? Apple can push whatever. So can Red Hat or Ubuntu or Gentoo. Unless I'm literally running Linux From Scratch, I'm at the mercy of maintainers to do whatever they want.
I'm not sure what the current state of most distributions is, but I remember update applications providing an option to accept or reject individual packages. Even without that, you could preview the list of pending updates and delay them indefinitely, do manual updates of individual packages, or configure it to ignore particular packages during updates. Historically, I believe that you could block certain updates on Windows as well - or maybe you could just rollback and update. Of course none of this is considered user friendly so things may have changed.
But where does the original compiler come from? Reproducible builds are only as good as the compiler used to compile them. That's the point of Trusting Trust. If you build with a backdoored compiler and I reproduce your build with the same backdoored compiler, that solves nothing. This is why full-source bootstrap is important[0].
It would be very very hard to actually accomplish something like that on mainstream x86/arm compilers. And hide it from every debugger in the world. If it diminishes the value of reproducible builds, it's by something like 1%.
> Reproducible builds are only as good as the compiler used to compile them.
Which is so so so much better than "as good as nothing".
"Ubuntu will apply security updates automatically, without user interaction. This is done via the unattended-upgrades package, which is installed by default."
Right, but it's a minor annoyance, get rid of it with:
sudo apt-get remove --purge unattended-upgrades
(doesn't trigger removal of anything else, and you'll enjoy 420kb of additional disk space).
OTOH the real issue with Ubuntu is snap(d). Snap packages definitely do auto-update. You may want to uninstall the whole snap system - it's (still?) perfectly possible, if a little bit convoluted, due to some infamous snaps like firefox, thunderbird, chromium, or eg. certbot on servers
Or just use Debian or any snap-free fork, for that matter.
There are a lot more distros than RH, Ubuntu, Gentoo and LFS. And none of them will show you ads except maybe Ubuntu. Plus you can also look at *BSD.
None of them comes close to what Microsoft is doing. To me, your comment looks like you do not understand the Linux eco-system. Plus IIRC, LFS can now come with compiled binaries.
> Apple can push whatever. So can Red Hat or Ubuntu or Gentoo
In the case of Ubuntu and Debian, and to a lesser extent RedHat, I trust the developers not to do that because they have a history of not "just pushing whatever".
Also in many cases I actually know these developers, and I can go round and ask them / remonstrate with them / put a brick through their window / other response if required about it.
What are you talking about? It's my machine. I authorized the running of certain kinds of software from Microsoft. It's not supposed to be a running authorization for them to reach in and do whatever they want on it.
> It's the same reason you don't want Chinese equipment in your telecommunications infrastructure. You can't trust what the Chinese government will do to it or with it.
Doesn't Europe actually have a lot of Chinese equipment in their telecom infrastructure? Is this an effort just to try not to make that mistake again?
They're both very expensive, and the carriers primarily care about cost and features. And Huawei will take a dozen devs, give them a one-way ticket, and put them in a hotel room near a customer to grind out whatever feature is needed to seal the deal.
I remember years ago talking to some EU telecom VP who was on the engineering side that said "id buy from North Korea if the price was right".
We live in new times anyway; most of the carriers have outsourced a lot of the tech stuff to the vendors.
Yes. I worked for a Cambodian telco, when there was a range of Alcatel, Nokia, etc switching equipment across 10 carriers. Huawei swept the lot within 2 years, and Alcatel staff told me they were losing everywhere - they couldn't match the price or technology. This was before the US decided to sanction Huawei.
I moved to the UK in late 2013 and, to be fair, from my observation the cellular coverage in the country has always been more than just a bit shitty.
Incidentally, the voice call quality in the UK is also really crappy. Operators compress/downsample the audio stream to the very edge of recognisability, because investing in sufficient infrastructure to support higher bandwidths is expensive.
Europe will just end up doing whatever is cheapest. It's the same story as always. They'll say some stuff publicly but they'll quietly come back to American tech once they see the price tag difference. They're very cost sensitive and their investors are extremely risk-averse.
Would (gently) note that we’re commenting on an article re: American tech risk. :)
Not sure it’s really sunk in for my fellow Americans what’s going on; we’re not exactly used to consequences, and it’s still considered, at best, impolite to treat a holistic evaluation of policies as something beyond debate.
But look at solar adoption across Europe since 2022. It’s going gangbusters, and now with sodium batteries coming online next year, cheap home energy storage is about to boom as well.
Europe doesn’t want to buy Russian gas, but there is also the very real political reality of what happens if your citizens freeze to death. I will be very surprised if any EU state is reliant on Russian gas by 2035.
When people start talking about battery technology that has not even reached scale as any kind of political solution, you know people have lost the plot.
Taking one look at just the cost required for the network, even setting aside the cost of any generation at all, you realize this is insane, and slapping a few solar panels down is far from a solution.
And also, let's not ignore that places that have done a lot of the 'let's just build renewables and hope for the best' approach have very high energy prices. And sodium batteries that may possibly show up will not solve these issues.
I calculated the costs of covering the needs of Germany for a 2-day low-production event (as happened December 6-9) and you would need about a trillion dollars.
That's for something that cannot even guarantee you more than 48h of runtime for half the country's needs.
You would need at least 4 times that to be safe.
Even if battery prices are halved (very unlikely; there are large fixed costs), you would need trillions of dollars for a single country.
That's just not happening any time soon and even in 30 years time, I doubt it will be that prevalent of a solution.
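A back-of-envelope version of that calculation (the consumption and storage-price figures below are rough assumptions, not sourced numbers):

```python
# Rough sizing of grid storage for a 2-day wind/solar lull in Germany.
annual_consumption_twh = 500   # assumption: ~500 TWh/year of electricity
daily_twh = annual_consumption_twh / 365
lull_days = 2
storage_needed_gwh = daily_twh * lull_days * 1000  # TWh -> GWh

price_per_kwh_usd = 300        # assumption: installed grid-scale battery cost
cost_usd = storage_needed_gwh * 1e6 * price_per_kwh_usd  # GWh -> kWh

print(f"storage needed: {storage_needed_gwh:,.0f} GWh")
print(f"cost: ${cost_usd / 1e12:.2f} trillion")
```

Under these assumptions the result lands just under a trillion dollars, consistent with the figure in the comment; halving the battery price still leaves it in the hundreds of billions for one country.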
US says that Europe is their number one enemy. Using American tech is the most risky thing you can do since Trump declared that they are now a hostile enemy with intents of overthrowing European democracies.
Without getting hung up on the exact phrase “number 1”. It’s very literally one of the biggest things in official US national security strategy right now and some leaks of the non-public version talk about explicit plans to try and destroy the EU. So semantics aside, the overall point stands on solid ground.
That whole thing is just incoherent. There's lots about forming a trading alliance against China, and then loads about destroying the EU. You can't have both of those at the same time.
How was it a mistake? Europe got a lot of good telecom infrastructure for a low price. There's no evidence it was compromised.
It was actually the US that was pressuring Europe to get rid of Chinese telecom equipment, as part of the first Trump administration's broader strategy against China.
It seems like Mistral is just chasing around sort of "the fringes" of what could be useful AI features. Are they just getting out-classed by OAI, Google, Anthropic?
It seems like EU in general should be heavily invested in Mistral's development, but it doesn't seem like they are.
Yep. I saw the title and got excited.... this is a particular problem area where I think these things can be very effective. There are so many data entry class tasks which don't require huge knowledge or judgement... just clear parsing and putting that into a more machine digestible form.
I don't know... this sort of area, while not nearly as sexy as video production or coding (etc.), seems like one where reaching a better-than-human performance level should be easier for these kinds of workloads.
Following the leaders too closely seems like a bad move, at least until a profitable business model for an AI model training company is discovered. Mistral’s models are pretty good, right? I mean they don’t have all the scaffolding around them that something like chatGPT does, but building all that scaffolding could be wasted effort until a profitable business model is shown.
Until then, they seem to be able to keep enough talent in the EU to train reasonably good models. The kernel is there, which seems like the attainable goal.
Devstral 2 is free via the API. That has to be a bigger point in what makes it better. The price-to-performance ratio is practically better in every way.
Does it matter if the performance is slightly worse when it is practically free?
Yes, but if it's actually competitive that won't last that long. Mistral will do the same as google (cut their free tier by 50x or so) if they ever catch up. Financially anything else would make no sense.
Of course currently Mistral has an insane free tier, 1 billion tokens for each(?) of their models per month.
They can't hire the best talent because the most experienced people will not leave their homes to chase a high-risk role with questionable remuneration by relocating their whole life to Paris or London.
This goes to show how the leaders at Mistral don't quite get that they are not as special as they seem to think. Anthropic or OpenAI also require their talent to relocate, but with stakes that at least offer a high reward: $500k or $1M a year is a good start that may be worth the move.
If somebody is in the EU already that calculation completely flips. We have a strong software startup industry in the US, would it really be that surprising if there was more unallocated talent in the EU, at this point?
> If somebody is in the EU already that calculation completely flips.
Would you find it compelling to move your whole life for ~100k EUR when you can make as much or more in your home city, with a job that is almost certainly more stable?
And I meant the Europeans. People in the EU don't have a culture of moving between cities or countries unless they have a really strong reason to, e.g. they can't find a job at home.
> would it really be that surprising if there was more unallocated talent in the EU, at this point?
I am pretty sure there is. It has changed over the course of the last few years, primarily because of COVID and companies being willing to offer remote contracts, but it's far from being able to utilize the talent.
> They can't hire the best talent because the most experienced people will not leave their homes to chase a high-risk role with questionable remuneration by relocating their whole life to Paris or London.
The best talent has been regularly leaving Paris, London, India, and China for decades. With the US closing its borders, they definitely have a chance to lure some.
Mistral is pursuing B2B use cases. That's because they're releasing open models, and the big thing about B2B is they HATE sending their data off-prem. OCR'ing and organizing old docs is a huge feature in B2B. Mistral's strategy seems smart to me.
> It seems like EU in general should be heavily invested in Mistral's development, but it doesn't seem like they are
The EU is extremely invested in Mistral's development: half of the effort is finding ways to tax them (hello Zucman tax), the other half is wondering how to regulate them (hello AI act)
The Zucman tax targets rich individuals (100m€+), not Mistral. The AI Act rules are not that difficult for GPAI model providers to comply with, as long as the model doesn't become a systemic risk... They have to spend a lot more time on PR and handshaking with French politicians than on AI compliance. They probably don't even have a single FTE for that... So that's just prejudice, I believe.
I think there is a lot of broad support, but they're just kind of hamstrung by EU regulation on AI development at this stage. I think the end game will ultimately be getting acquired by an American company, and then relocating.
> And then the rest of the world won’t even have older chips.
This basically just means Europe won't have older chips.
TSMC is already producing a significant percentage of its chips in Arizona. And they've even slated ~30% of their total production of 2nm-and-better chips to be produced in the USA by 2028-2029.
The article mentions the language and cultural barriers, and the rigid hierarchy with old bosses who demand blind obedience, even when the rules are counterproductive.
I have been living in Japan for years now and I have had the same experience, so I am inclined to believe the article. Mixing Western workers with East Asian management is extremely difficult, to put it mildly.
I'm guessing it will be wildly successful. Companies don't really care about middlemen between them and their users. They just want to reach them wherever, however they can.
Have there been any successful app stores since the mobile app stores, which made developers fortunes selling fart apps, thus making it highly appealing for others to try and chase the same?
Every one I can think of since gets a bit of initial interest hoping to relive the mobile app store days, but interest wanes quickly when they realize that nobody wants to buy fart apps anymore. That ship sailed a long time ago.
And ChatGPT apps are in a worse position, as they don't even have a direct monetization strategy. It suggests that maybe you can send users to your website to buy products, but they admit that they don't really know what monetization should look like...
I can certainly think of some organizations that fit into that bucket. I can also name organizations that are hyper controlling and micromanage every aspect of the interaction with their core products and services because they value consistency above all else.
I wonder if we'll have a situation where out of two competing organizations only one is elected to use this and the other one staunchly opposes. That will be telling.
Atlas being created is kinda the shot across the bow. You can integrate with us willingly, or we'll hook into your web apps anyways. One retains at least some control. Same outcome as Disney's deal with Sora.