
> health and safety seems irrelevant to me

Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.



Do you think the industry will stop because of your concern? If, for example, AI does what it says on the box but causes goiters for prompt jockeys, do you think the industry will stop then, or offshore the role of AI jockey?

It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction rather than accept the slightest upset or delay to progress as measured by consumer convenience.


> Do you think the industry will stop because of your concern?

I’m not sure what this question is addressing. I didn’t say it needs to “stop” or the industry has to respond to me.

> It's lovely that you care about health,

1) you should care too, 2) drop the patronizing tone if you are actually serious about having a conversation.


From my PoV you are trolling with virtue signalling and thought-terminating memes. You don't want to discuss why every(?) technological introduction so far has ignored priorities such as your sentiments, and any devil's advocate must be the devil.

The members of HN are actually a pretty strongly biased sample towards people who get the omelet when the eggs get broken.


> and any devil's advocate must be the devil.

No not the devil, but years ago I stopped finding it funny or useful when people "played" the part of devil's advocate because we all know that the vast majority of the time it's just a convenient way to be contrarian without ever being held accountable for the opinions espoused in the process. It also tends to distract people from the actual discussion at hand.


People not being assholes and having opinions is not "trolling with virtue signaling". Even where people do virtue signal, it is a significant improvement over the "vice signaling" which you seem to be doing and expecting others to do.


I for one have no idea what you mean by health and safety with respect to AI. Do you have an OSHA concern?


I have an “enabling suicidal ideation” concern for starters.

To be honest I’m kind of surprised I need to explain what this means, so my guess is you’re just baiting/being opaque, but I’ll give you the benefit of the doubt and answer your question at face value: there have been plenty of high-profile incidents in the news over the past year or two, as well as multiple behavioral health studies showing that we need to think critically about how these systems are deployed. If you are unable to find them I’ll locate them and link them for you, but I don’t want to get bogged down in “source wars.” So please look first (search “AI psychosis” to start) and then hit me up if you really can’t find anything.

I am not against the use of LLMs, but like social media and other technologies before it, we need to actually think about the societal implications. We make this mistake time and time again.


> To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque

Search for health and safety and see how many results are about work.


You're being needlessly prescriptive with language here. I am talking about health and safety writ large. I don't appreciate the game you're playing, and it's why these discussions rarely go anywhere. It can't all be flippant retorts and needling words. I am clearly saying that we as a society need to be willing to discuss the possible issues with LLMs and make informed decisions about how we want this technology to exist in our lives.

If you don't care about that, so be it, but just say it out loud then. I do not feel like getting bogged down in justifying why we should even discuss it while we circle around what this is really about.


All the AI companies are taking those concerns seriously, though. Every major chat service has guardrails in place that shut down sessions which appear to be violating such content restrictions.

If your concerns are things like AI psychosis, then I think it is fair to say that the tradeoffs are not yet clear enough to call this. There are benefits and bad consequences for every new technology. Some are a net positive on the balance, others are not. If we outlawed every new technology because someone, somewhere was hurt, nothing would ever be approved for general use.


> All the AI companies are taking those concerns seriously, though.

I do not feel they are, but also I was primarily talking about the AI evangelists who shout down people asking these questions as Luddites.


That's literally what the Luddites were doing though. It's a reasonable comparison.


"Luddite" is usually used as an insult based on a misunderstanding of the actual Luddites. That's the definition I'm responding to here.


I would disagree. "Luddite," to me, is a negative and pejorative label because history has shown Ned Ludd and his followers to have been a short-sighted, self-sabotaging reactionary movement.

I think the same thing of the precautionary movements today, including the AI skeptic position you are advocating for here. The comparison is valid, and it is negative and pejorative because history is on the side of advancing technology.



