I suspect that these orbital data centers aren't entirely about dollars (no doubt dollars are important).
I suspect it is about the regulatory environment, which is moving quickly for data centers. Data centers used to be considered a small portion of the economy, and thus benign and not worth extorting/controlling. That seems to be changing, rapidly.
Given that data centers only exchange information with their consumers, they are a natural candidate for using orbit as a way to escape regulators.
Further, people are likely betting that regulators will take considerable time to adjust since space is multinational.
True, but businesses don't care about regulations except where they cost money. Also, remember that time is money, so any regulatory delay costs a business real money.
My point is that you can actually reduce it all to dollars. And I believe that the cost of orbital data centers will come down due to technological advances, while the cost of regulation will only go up, because of local and global opposition.
"My point is that you can actually reduce it all to dollars."
I'm not sure. A couple of points:
1) The regulatory landscape is enormous. It is unknown from which angle regulators will "slow you down."
2) As I mentioned, the regulatory frameworks in this area are evolving very quickly. It is unknown what the regulations will be in 1, 2, or 5 years, and how that will impact your business.
I think it is also about security. It is practically impossible for ordinary people to physically break into such a data center.
It’s a bit like the cyberpunk future where the ultra-rich live in moon bases or undersea bases and ordinary people fight for resources on a ruined Earth.
Well, the argument some of these companies are making is that it would be cheaper over 10 years (some things, like power, can be cheaper in space, since you can get solar nearly 24 hours a day). It seems likely to me (as it does to many other people) that it won't be cheaper, but if it's the same price or mildly more expensive, there might be a regulatory incentive to train an ML model in space instead of a place like the EU.
IMHO, I doubt they were holding much back. Obviously, they're always working on 'next improvements' and rolled whatever was done enough into this, but I suspect the real difference here is throwing significantly more compute (hence investor capital) at improving quality, right now. How much? While the price currently stays the same for most users, API costs seem to be ~40% higher.
The impetus was the serious threat Gemini 3 poses. Perception of ChatGPT was starting to shift; people were speculating that maybe OAI is more vulnerable than assumed. This caused Altman to call an all-hands "Code Red" two weeks ago, triggering a significant redeployment of priorities, resources, and people. I think this launch is the first 'stop the perceptual bleeding' result of the Code Red. Given the timing, I think this is mostly akin to overclocking a CPU or running an F1 race car engine too hot: a quick performance boost at the cost of being unsustainable and unprofitable. To placate serious investor concerns, OAI has recently been trying to gradually make current customers profitable (or at least less unprofitable). I think we just saw the effort to reduce the insane burn rate go out the window.
1) LLM advances stop
2) Chinese companies release open-source/weight models which are as good as or better than the West's
3) Apple somehow turns it around with AI
Apple is done for.
AI is going to be central to the next generation of phones and the next form factor.
Their complete failure on AI has been ... shocking. I'm not sure if they don't have the data to train a leading-edge model or if they have some kind of personnel issue; it has just been shocking to see their lack of progress.
No doubt Apple has rested on their laurels for a long time. I just would not have expected this.
It looks like she buys and holds, unlike Nancy. And it looks like she bought some AMD, which had great returns (oh, how I wish I had bought it), and held it.
Maybe. Depends upon whose hype. But I think it is fine to say that we don't have AGI today (however that is defined) and that some people hyped that up.
> 2) LLMs haven't failed outright
I think that this is a vast understatement.
LLMs have been a wild success. At big tech, over 40% of checked-in code is LLM-generated. At smaller companies the proportion is larger. ChatGPT has over 800 million weekly active users.
Students throughout the world, and especially in the developed world, are using "AI" at rates of 85-90% (according to some surveys).
Between 40% and 90% of professionals (depending upon the survey and profession) are using "AI".
This is three years after the launch of ChatGPT (and the capabilities of ChatGPT 3.5 were so limited compared to today that it is a shame they get bundled together in our discussions). Instead of "failed outright," I would say they are the most successful consumer product of all time (so far).
from what I've seen in a several-thousand-eng company: LLMs generally produce vastly more code than is necessary, so they quickly outpace human coders. they could easily be producing half or more of all the code even if only 10% of teams use them, particularly because huge changes often get approved with just a "lgtm", and LLM-coding teams also often use/trust LLMs for reviews.
but they do that while making the codebase substantially worse for the next person or LLM: large code size, inconsistent behavior, duplicates of duplicates of duplicates strewn everywhere with little to no pattern, so you might have to fix something a dozen times in a dozen ways for a dozen reasons before it actually works, and nothing handles it efficiently.
the only thing that matters in a business is value produced, and I'm far from convinced that they're even break-even if they were free in most cases. they're burning the future with tech debt, on the hopes that it will be able to handle it where humans cannot, which does not seem true at all to me.
Measuring the value is very difficult. However, there are proxies (of varying quality) which are measured, and they show that AI code is clearly better than copy-pasted code (which used to be the #1 source of lines of code) and at least as "good" (again, I can't get into the metrics) as human code.
Hopefully one of the major companies will release a comprehensive report to the public, but they seem to guard these metrics.
> At big tech over 40% of checked in code is LLM generated.
Assuming this is true though, how much of that 40% is boilerplate or simple, low effort code that could have been knocked out in a few minutes previously? It's always been the case that 10% of the code is particularly thorny and takes 80% of the time, or whatever.
Not to discount your overall point, LLMs are definitely a technical success.
Before LLMs, I used whatever autocomplete tech came with VSCode and the plugins I used. Now with Cursor, a lot of what the autocomplete did is replaced with LLM output, at much greater cost. Counting this in the "LLM generated" statistic is misleading at best, and I'm sure it's being counted.
I thought Ilya said we have more companies than we have ideas. He also noted that our current approaches are resulting in models which are very good at benchmarks but have some problems with generalization (and gave a theory as to why).
But I don't recall him actually saying that the current ideas won't lead to AGI.
Then he starts to talk about the other ideas, but his lawyers/investors prevent him from going into detail: https://youtu.be/aR20FWCCjAs?t=1939
The worrisome thing is that he openly talks about whether to release AGI to the public. So, there could be a world in which some superpower has access to wildly different tech than the public.
To take Hinton's analogy of AGI to extraterrestrial intelligence, this would be akin to a government having made contact but withholding the discovery and the technology from the public: https://youtu.be/e1Hf-o1SzL4?t=30
It’s also weird to think that, if there is extraterrestrial contact, it will most definitely happen on the specific land mass known as the United States, and only the US government will be collecting said technology and hiding it. Out of the entire planet, contact is possible only in the USA.
I'm not sure if you're jabbing at the concept of American supremacy, or Hinton's idea, or my position. I don't live in the USA right now, but I am happy to participate in conversation. That's why I am here.