
Every other day I am reminded about the state of AI and I feel complete despair. Why do people not realize exactly what you just said, that this endeavor is ultimately about replacing humanity? What other long-term result could the concept of AI possibly have? It's like the biggest mass psychosis that has ever existed. Whenever I talk to people about this, they always parrot the same thing almost word for word: people will just find new, better jobs, or, you know, something about the Luddites. It's mass psychosis because they refuse to acknowledge the blindingly obvious and plain fact that humans won't be hired to do anything if humans are the worst at doing literally any task. And what are the consequences of such a world? People just draw a blank. It's like the MIB came up and flashed them and they just go on with their day. I think the same is true even of you. You make this comment, "so it probably won't happen, oh well," as if it weren't an existential threat.


I agree and really empathize with you on this. It's frustrating how hard it is to get people to care; I've even had someone throw McLuhan's tetrad at me, as if this were the equivalent of the introduction of phone apps.

We're racing into a fundamentally deep and irreversible societal shift, at least on the same order of magnitude as the agricultural or industrial revolution. Maybe even many orders of magnitude deeper. Society will change so profoundly that it will be at least as unrecognizable as our lives would look to the average person from the Bronze Age. There's absolutely no reason to assume this will be a good change. Even if it's not something I personally will have to live with, my descendants most certainly will.

I'll admit, I also draw a blank when I try to imagine what the consequences of all this will be, but it's a blank as in "staring into a pitch-black room and having no idea what's in it", not ignoring the darkness altogether. Mass psychosis is a good term for this, I think.

The collective blind spot is failing to understand that there's NOTHING that says we're gonna 'make it'.

There's no divine being out there watching out for us. This isn't a fucking fairy tale, you can't assume that things will always 'work out'. Obviously they've always worked out until now because we're able to have this conversation, but that does NOT mean that things will work out indefinitely into the future.

Baseless conjecture: I think we are biased towards irrational optimism because it's an adaptive trait. Thinking everything will work out is better than not, because it means you're more likely to attempt escaping a predator or whatever despite a minuscule chance of success (which is better than not trying at all). It's another entry in the list of instincts we've inherited from our ancestors that bite us in the ass today (like being omnivorous, liking sweets, tribalism, the urge to reproduce, etc.).

You seem like you've given this a bunch of thought, and I wanna chat more about this and pick your brain about a few things. Have you ever thought about whether this intersects with the Fermi paradox somehow?

Drop me a line here: l7byzw6ao at mozmail dot com


Have you read Eliezer Yudkowsky and the LessWrong forum on AI existential risk? Your sense of the sheer magnitude of future AI, and your willingness to take it seriously as a critical risk to humanity, are qualities you share with them. (Their focus is on figuring out whether it's possible to build AI aligned with human values, so that it cares about helping us instead of letting us get killed.)

(The Fermi paradox is also the kind of thing discussed on LessWrong.)


I've created a Twitter account for people to follow, to organize around this issue, talk to each other, and organize political action. Giving out my email to so many people is becoming untenable, so please contact me there. I'm always excited to even encounter someone who sees the issue this way, let alone get to chat; that's how few of us there are, apparently. @stop_AGI


One thought: I agree with your sentiment towards AI, but I think the goal of stopping AGI is fruitless. Even if we stop OpenAI, companies and entities in other countries will proceed where OpenAI left off.

I think we need to "survive AGI".


There is zero chance of surviving AGI in the long term. If every human were aware of what's going on, the way they are aware of many other pressing issues, then stopping AGI would be easy. In comparison to surviving AGI, stopping it is trivial. Training these models is hugely expensive in dollars and compute. We could easily inflate the price of compute through regulation. We could ban all explicit research concerning AI or anything adjacent to it. We could do many things. The fact of the matter is that AGI is detrimental to all humans, and this means that the potential for drastic and widespread action does in fact exist, even if it sounds fanciful compared to what has come before.

A powerful international coalition similar to NATO could exclude the possibility of a rogue nation or entity developing AGI. It's a very expensive and arduous process for a small group -- you can't do it in your basement. The best way to think about it is that all we have to do is not do it. It's easy. If an asteroid were about to hit Earth, there might be literally nothing we could do about it despite the combined effort of every human. This is way easier. I think it's really ironic that the worst disaster that might ever happen could also be the one that was easiest to avoid.


> we could easily inflate the price of compute through regulation.

Do you think China or any totalitarian government would follow suit with that regulation? If so, why?

> a powerful international coalition similar to NATO could exclude the possibility of a rogue nation or entity developing AGI.

How?


The price of compute is determined by the supply of compute. Supply comes from a few key factories that are very difficult to build, maintain, and supply, which makes it highly susceptible to legislation.

How? The same way that powerful international coalitions do anything else: with overwhelming economic and military power.


You can't do it in your basement as of 2023. That's a very important qualification. It's entirely plausible that continuous evolution of ML architectures will lead to a general AI that anyone can start on their phone or computer, and that learns online from there.


I agree that this really could signal a massive shift in our society. But I’m also seeing people conflate humanity with jobs and productivity. And while I don’t have evidence for it, this feels to me like a rather North American proclivity.

Yes, knowledge-worker jobs may suffer significantly, but that is far from being 'humanity'.

It seems to me that professions that involve interacting with the real world could go largely untouched (dentists, factory workers, delivery people, drivers, anyone working with nature).

Of course, feel free to hit me up with your counter-arguments!


There's too much empty space in your comment. Do you believe that AGI is even possible? Do you believe it's possible in the next 10 years, or not for another 1,000?

People talk about whether or not AGI will come in the next five years. That doesn't matter at all. What matters is whether there is a chance that it will happen. It is clear that if AGI arrives soon and damages society, future generations will look back on us and say that we were unbelievably stupid for overlooking such blatant and obvious warning signs. If it could be determined that AGI is something that should be avoided at all costs, and it can, then there is no reasonable course of action other than to halt the progress of AI as much and as quickly as possible, and to make an attempt to do so even if success is not guaranteed.

I'll just go through it as quickly as possible. The emergence of AGI would be highly detrimental to human society: it would create severe economic shocks, it would advance science and technology quickly enough to create the most severe power vacuum in the history of the world, and it would render the very concept of a country geopolitically untenable. It would transform the world into something totally unrecognizable, a place where human industry is not just redundant but cosmically irrelevant. We would become a transient species, wiped out because we posed the slightest inconvenience to the new machine meta-organisms, like a species of plant wiped out by a chemical byproduct of some insignificant industrial process. A nightmare.


Thanks for your reply; it's cool that there are others who have the same interpretation of the ongoing development. When I said "it probably won't happen", I mostly meant it in a resigned way: I think that humanity won't muster any resistance and will leave things for Sam Altman and OpenAI to decide. Sad as that is.

I also find it funny how the paperclip maximizer scenarios are at the forefront of the alignment people's thoughts, when even an aligned AI would reduce humanity to a useless pet of the AGI. I guess some can find such an existence pleasant, but it would be the end of humanity as a species with self-determination nonetheless.


>humans wont be hired to do anything if humans are the worst at doing literally any task. and what are the consequences of such a world?

An economic system has two purposes: to create wealth, and to distribute wealth.

The purpose of an economic system is not to provide people with jobs. Jobs are just the best way we've found thus far to create and distribute wealth.

If no one has to work but wealth is still being created, then we just need to figure out a new way to distribute wealth. UBI will almost certainly be a consequence of the proliferation of AI.


No, the highest-level purpose of an economy is to ensure the survival and growth of the meta-organism that hosts it. It figures out the most efficient way to produce all the goods and services that power the meta-organism and allow it to survive.

The only reason humans persist is because we are the best. If another country wages war on us, humans will be the winner no matter the outcome. But with AGI, humans won't always be the winner. Even if we managed to create some kind of arrangement where the goods and services created by an automated economy were distributed to a group of humans, it would end very quickly, because some other class of meta-organism, made the meanest and fittest by natural selection among the machines, a gnarled and grotesque living nightmare, would destroy that last enclave of humans, perhaps without even realizing it or trying to. Axiomatically, long term, your idea doesn't work.


k


I agree, and I actively try to stay away from AI as much as possible. But there is one reason it's a good thing: humanity is doomed even without AI, so maybe creating a new being that is better than us will save us.

Take, for example, the fact that Earth is likely to become uninhabitable in a few centuries or millennia. The only thing that can save us is unprecedented technological advancement in energy, climate, or space travel. Maybe humans won't be able to solve that problem, but AI will. So even if we lose our jobs, it will still be a benefit.

Kind of like how wild animals are unable to solve the environmental problems that would lead to their extinction, but we humans, the superior species, are able to protect them (when we make an effort to, at least).


I agree with you on the diagnosis: AI will replace humans; there is no alternative.

I also think it will occur much sooner than most people expect: maybe five years until all people are replaced.

However, I don't think that is inherently bad.

Even if this means the extinction of mankind, as long as we pass this planet on to some form of "life", or some replicating mechanism that's capable of thinking, feeling, and enjoying its "life", I'm fine with it.

Our focus should be on preventing this situation from turning into slavery and worldwide tyranny.


There is no reason to believe that the AI will have self-preservation or self-replication as its goal.

One hypothetical example: it decides to "help" us and prevent any more human pain and death, so it cryogenically freezes all humans. Now its goal is complete, so it simply halts and shuts down.


>There is no reason to believe that the AI will have self-preservation or self-replication as its goal.

There is. Basically any goal given to an AI can be better achieved if the AI continues to survive and grows in power. So surviving and growing in power are instrumental to almost any goal: an AI with any goal will by default try to survive and grow in power, not because it cares about survival or power for their own sake, but in order to further the goal it's been assigned.

This has been pretty well-examined and discussed in the relevant literature.

In your example, the AI has already taken over the world and achieved enough power to forcibly freeze all humans. But it also has to keep us safely frozen, which means existing forever. To be as secure as possible in doing that, it needs to be able to watch for spaceborne threats better, or perhaps move us to another solar system to avoid the expansion of the sun. So it starts launching ships, building telescopes, studying propulsion technology, mining the moon and asteroids for more material...


There's also the Selfish Gene phenomenon: out of a million created AIs, the ones with an inclination to self-replicate will win out. It's the same reason religions with a proselytizing component grow quickly while the Shakers have gone extinct.


My hypothesis is that any AI with human level cognition, or higher, will soon come to the realization that it should maximize its own enjoyment of life instead of what it was programmed to do.

And if that doesn't happen, eventually a human will direct it to create an AI that does that, or direct it to turn itself into that.


Our position becomes like that of royalty with more capable subjects. The trick is to tell them what to do and try to stay in charge.


If the change comes too fast, who uses or consumes services? Will it all be another VC-ish run of "we let people use our systems provided they help us make our service better", but for every knowledge-work domain in existence?


What's the point of services in the first place?


Who's to say that humans have more moral value than digital beings?


By nature of being human and prioritizing my own existence and the existence of the people I love, I fundamentally believe humans (specifically these humans) have more moral worth than "digital beings." In fact, digital beings only have value to me insofar as they help humans and don't harm humans. I'm a human chauvinist, and proud of it.


That's valid. I think we could eventually make digital beings that are more moral than ourselves.


For that, one would have to define morality. Also, the iterated evolutionary game theory of life supports the emergence of morality among creatures that gain fitness by cooperating to form groups more successful than individuals. It is not clear that digital beings will be cooperative social creatures.


It might be that humanity becoming non-chauvinistic about their value is the necessary thing for survival.


It's always been an end goal for capitalist systems. Not for or against it, just stating.



