My (paranoid) unpopular take: the AI boom we’re currently experiencing is a concerted effort by the billionaires to maintain operational agency (the ability to think and do at a massive scale) once society begins to collapse due to climate change.
~~ edit ~~
Thank you for the sane responses. I’m reconsidering how much I believe this.
This is also my reasoning for why I think AI alignment is not going to be a problem for humanity any time soon.
By the time AI is capable of maintaining the whole supply chain required to keep itself running, sufficient time will have passed for us to come up with something viable.
I think Billionaire alignment is a much larger problem than AI alignment. To use Bostrom's language, it's not full-on owl domestication, but sparrows with owl-like powers that we need to worry about.
Respectfully disagree. An AI with full access to robots could do everything on its own that it would need to "survive" and grow. I argue that humans are actually in the way of that.
The highlighted parts are a kind of TL;DR, but in this context actually reading the whole thing (it is not much) is required to get anything out of the arguments used here.
Anything technological is orders of magnitude more complex.
Pointing to any single part really makes no sense, the point is the complexity and interconnectedness of everything.
Some AI doing everything is harder than the East Bloc countries' attempt to centrally plan their whole economies. Their economies were much simpler than what such a mighty AI would require for itself and its robot minions. And that's just the organization.
I did like "Gaia" in Horizon Zero Dawn (game) because it made a great story though. This would be pretty much exactly the kind of AI fantasized about here.
Douglas Adams hints at hidden complexity towards the end of HHGTTG, talking about the collapse of Golgafrincham's society.
You overlook just one single tiny thing and it escalates to failure from there. Biological systems don't have that problem: they are self-assembling no matter how you slice and dice them. You may end up with a very different ecosystem, but as long as the environment is not completely outside the useful range, it will grow and reorganize. Human-made engineered things, on the other hand, will just fail and that's it; they will not rise on their own from nothing. Human-made systems are much, much more fragile than biological ones (even if you can't guarantee what kind of biological system you will get after rounds of growth and adaptation).
> Pointing to any single part really makes no sense, the point is the complexity and interconnectedness of everything
Doesn’t it though?
The bauxite mine owners in Pinjarra could purchase hypothetical robotic mining and smelting equipment. The mill owners in Downey, the coca leaf processor in New Jersey, the syrup factory in Atlanta, and others could purchase similar equipment. Maybe they all just buy humanoid robots and surveil their workers for a while to train the robots, then replace the workers.
If all of those events happen, the Coca-Cola supply chain has been automated. Also, since e.g. the aluminum mill probably handles more orders than just Coke cans, other supply chains for other products will now be that much more automated. Thereby the same mechanism that built these deep supply chains will (I bet) also automate them.
> Biological systems don't have that problem, they are self-assembling no matter how you slice and dice them.
If the machines used to implement manufacturing processes are also built in an automated way, the system is effectively self-healing as you describe for biological systems.
> I did like "Gaia" in Horizon Zero Dawn (game) because it made a great story though. This would be pretty much exactly the kind of AI fantasized about here.
Perhaps the centralized AI “Gaia” becomes an orchestrator in this scheme, rather than the sole intelligence in all of manufacturing? Not too familiar with this franchise to make a more direct comparison, but my larger point is that the complexity of the system doesn’t need to be focused on one single greenfield entity.
Man made stuff does not self-repair and self-replicate.
So, no. You are not thinking far enough, only one step ahead. It is a vast, complex network, and every single thing in it except the humans has that man-made deficiency: decay without renewal.
Miss even the repair of the tiniest item, which in turn requires repairing the repairers, and everything eventually stops.
Humans have to intervene fixing unforeseen problems all the time! It is humans that hold all those systems together.
Even if you had AGI, human brains are far from perfect too, so that would not change anything in the end; we have biology to the rescue (of us in general, not necessarily the individual, of course) when we miss stuff.
Let us assume, at some point in the near future, it is possible to build a humanoid robot that is able to operate human-run machines and mimic human labor:
> Man made stuff does not self-repair and self-replicate.
If robots can repair a man-made object or build an entirely new one, the object is effectively self-repairing and self-replicating for the purposes of a larger goal to automate manufacturing.
> You miss even repairs of the tiniest item - which in turn requires repairing he repairers, everything eventually stops
So… don’t? Surely the robots can be tasked to perform proactive inspections and maintenance of their machines and “coworkers” too.
> But it is a complex vast network
…that already exists, and doesn’t even need to be reimagined in our scenario. If one day our hypothetical robots become available, each individual factory owner could independently decide the next day to purchase them. If all of the factories in the “supply chain graph” for a particular product do this, the complex decentralized system they represent doesn’t require human labor to run. It doesn’t even need to happen all at once. By this mechanism I propose the supply chain could rapidly organically automate itself.
Yeah? How many robots? What kind of robots? What would the AI need to survive? Are the robots able to produce more robots? How are the robots powered? Where will they get energy from?
Sure it's easy to just throw that out there in one sentence, but once you actually dig into it, it turns out to be a lot more complicated than you thought at first. It's not just a matter of "AI" + "Robots" = "self-sustaining". The details matter.
This makes no sense. It takes a complex industrial society to keep that tech going. The supply chain to make GPUs would not survive even a modest disruption in the world economy. It's probably the most fragile thing we currently manufacture.
If you're an AI company and you believe your own hype (like Musk seems to), you'll probably believe that you can automate everything from digging minerals out of the ground all of the way up to making the semiconductors in the robots that dig the minerals.
As you may infer from my use of the word "hype", I do not think we are close to such generality at a high enough quality level to actually do this.
This presumes that the surviving humans will not actively disrupt or destroy these automated industries. Disruption seems highly likely, as survivors will want to scavenge the industries for anything of value or repurpose them for their own ends.
There are lots of implicit assumptions, or this would be a book, but remember that Musk has a rocket and wants to colonise Mars, and that Mars is so bad that it is currently 100% populated by robots.
For the billionaires without rockets, there's also a whole bunch of deserts conveniently filled with lots of silicon.
(Or as Mac(Format|World|User) put it sometime in the 90s, when they were considering who might bail out Apple and suggested one of the Middle East oil barons: a "silly con".)
In his lifetime? I agree, unlikely. But I also think that lifetime will be short: he's pissed off too many other powerful people and will get the Western equivalent of Russian oligarchs "falling out of a window".
The economics he talks about are all nonsense. No bank will lend someone $200k for the ticket to go to Mars on the off chance they might be a successful pizza restaurateur.
But like I said, if you're (e.g.) him and you buy your own hype…
(His grandkids' lifetimes are another question entirely. Things are changing too fast).
While I believe we’re in a slow takeoff, I believe we are in a takeoff. The important question to my mind is whether AGI comes before systemic societal collapse due to climate change. I think it does, and my tin foil hat grows a wider brim with each passing day. I hope I’m wrong!
My expectation is that a lot of social breakdown happens with AI that's not quite capable of fully replacing human labour. A lot of angry unemployed people, or a lot of people who suddenly find they're unable to compete with data centres for electricity and can no longer afford to keep their freezer frozen. Groups like that may not be able to pull off a Butlerian Jihad, but they're absolutely relevant to the timelines, and I think they show up before fully automated security bots that are worth bothering to install.
This is also why I'm skeptical of claims that it would be impossible (or nearly so) for governments to meaningfully regulate AI R&D/deployment (regardless of whether or not they should). The "you can't regulate math" arguments. Yeah, you can't regulate math, but using the math depends on some of the most complex technologies humanity has produced, with key components handled by only one or a few companies in only a handful of countries (US, China, Taiwan, South Korea, Netherlands, maybe Japan?). US-China cooperation could probably achieve any level of regulation they want up to and including "shut it all down now." Likely? Of course not. But also not impossible if the US and China both felt sufficiently threatened by AI.
The only thing that IMO would be really hard to regulate would be the distribution of open-weight models existing at the time regulations come into effect, although I imagine even that would be substantially curtailed by severe enough penalties for doing so.
This is the best argument I’ve heard against it, so thanks.
My anxiety entirely orbits around the scale of AI compute we’ve reached and the sentiment that there is drastic room for improvement, the rapidly advancing state of the art in robotics, and the massive potential for disruption of middle/lower class stake in society. Not to mention the general sentiment that the economy is more important than people’s well being in 99.9% of scenarios.
Who's to say it has to keep moving forward? The companies are buying up massive amounts of GPUs in this AI race, a move that's widely questioned because next year's GPUs might render the current ones outdated[0], so there will probably be plenty of GPUs to go around if the CEO demands it (prior to collapse). Operating datacenters would probably be out of the question with a collapsed society as the power grid might be unreliable, global networks might be down and securing many datacenters would probably be difficult, but there's at least one public record of a billionaire building his own underground bunker with off-grid power generation and enough room to have his own little datacenter inside[1]. "Ordinary" people will acquire 32GB GPUs or Mac Studios for local open-source LLM inference, so it seems likely billionaires would just do the next step up for their bunker and use their company's proprietary weights on decommissioned compute clusters.
If there's an evil plot, its goal must surely be to accelerate environmental degradation.
First we had the blockchain, now AI to consume enormous amounts of resources and distract us from what we should be investing in to make the environment healthier.
it's very easy to achieve great things without coordination if you can just do what's best for yourself and help your peers achieve their collective goals.
but they do meet at Davos every now and again, without the democratic shackles.
I guess it's whoever was in that Doug Rushkoff meeting, with the whole idea that we'll have security forces wearing those exploding dog collars to keep them in line, and to keep revolutionary forces from killing us and taking our food supply!
FWIW I do agree with the operational agency at scale bit
and I’m always fascinated by these conspiracy theories, was genuinely hoping to get one (but also happy to see you’re challenging your own position). the idea of people coordinating on these things is very funny to me
I think, like all tech, people will use it for good and bad. those in power have more power, etc. etc. I think it tends to boil down to whether you believe people are, overall, good or bad. over time, that's what you'll get with use of tech