I agree and really empathize with you on this. It's frustrating how hard it is to get people to care; I've even had someone throw McLuhan's tetrad at me, as if this were the equivalent of the introduction of phone apps.
We're racing into a fundamentally deep and irreversible societal shift, at least on the same order of magnitude as the agricultural or industrial revolution. Maybe even many orders of magnitude deeper. Society will change so profoundly, it will be at least as unrecognizable as our lives would look to the average person from the Bronze Age. There's absolutely no reason to assume this will be a good change. If it's not something I personally will have to live with, my descendants most certainly will.
I'll admit, I also draw a blank when I try to imagine what the consequences of all this will be, but it's a blank as in "staring into a pitch-black room and having no idea what's in it" - not ignoring the darkness altogether. Mass psychosis is a good term for this, I think.
The collective blind spot: failing to understand that there's NOTHING that says we're gonna 'make it'.
There's no divine being out there watching out for us. This isn't a fucking fairy tale; you can't assume that things will always 'work out'. Obviously they've always worked out until now, or we wouldn't be here to have this conversation, but that's pure survivorship bias, and it does NOT mean that things will work out indefinitely into the future.
Baseless conjecture: I think we're biased towards irrational optimism because it's an adaptive trait. Thinking everything will work out beats thinking it won't, because it makes you more likely to attempt escaping a predator or whatever despite a minuscule chance of success (which is better than not trying at all). It's another entry in the list of instincts we've inherited from our ancestors that bite us in the ass today (like being omnivorous, liking sweets, tribalism, the urge to reproduce, etc.).
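To make that intuition concrete in crude expected-value terms (my toy framing, nothing rigorous, with p standing for the minuscule chance an escape attempt succeeds):

E[survival | attempt] = p > 0 = E[survival | give up]

Selection only sees that inequality, so any bias that gets you to attempt wins out, even if the optimism producing it is wildly miscalibrated.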
You seem like you've given this a bunch of thought, and I wanna chat more about this and pick your brain about a few things. Have you ever thought about whether this intersects with the Fermi paradox somehow?
Have you read Eliezer Yudkowsky or the LessWrong forum on AI existential risk? You share a couple of qualities with them: grasping the sheer magnitude of what future AI will be, and taking it seriously as a critical risk to humanity. (Their approach to addressing it is to figure out whether it's possible to build AI aligned with human values, so that it cares about helping us instead of letting us get killed.)
(The Fermi paradox is also the kind of thing discussed on LessWrong.)
i've created a twitter account for people to follow so we can organize around this issue, talk to each other, and coordinate political action. giving out my email to so many people is becoming untenable, so please contact me there. i'm always excited to even encounter someone who sees the issue this way, let alone get to chat. that's how few of us there are, apparently. @stop_AGI
one thought -- i agree with your sentiment towards ai, but i think the goal of stopping AGI is fruitless. even if we stop OpenAI, there will be companies and entities in other countries that will pick up where OpenAI left off.
there is zero chance of surviving AGI in the long term. if every human were aware of what's going on, the way they're aware of many other pressing issues, then stopping AGI would be easy. in comparison to surviving AGI, stopping it is trivial. training these models is hugely expensive in dollars and compute. we could easily inflate the price of compute through regulation. we could ban all explicit research concerning AI or anything adjacent. we could do many things. the fact of the matter is that AGI is detrimental to all humans, and that means the potential for drastic and widespread action does in fact exist, even if it sounds fanciful compared to what has come before.
a powerful international coalition similar to NATO could exclude the possibility of a rogue nation or entity developing AGI. it's a very expensive and arduous process for a small group -- you can't do it in your basement. the best way to think about it is that all we have to do is not do it. it's easy. if an asteroid were about to hit earth, there might be literally nothing we could do about it despite the combined effort of every human. this is way easier. i think it's really ironic that the worst disaster that might ever happen could also be the disaster that was the easiest to avoid.
the price of compute is determined by the supply of compute, and supply comes from a few key fabs that are very difficult to build, maintain, and keep supplied. highly susceptible to legislation.
how? the same way that powerful international coalitions do anything else... with overwhelming economic and military power.
You can't do it in your basement as of 2023. Very important qualification. It's entirely plausible that the continued evolution of ML architectures will lead to general AI that anyone can boot up on their phone or computer, and that can learn online from there.
Drop me a line here: l7byzw6ao at mozmail dot com