
> it's hugely beneficial to our species.

Perhaps the biggest “needs citation” statement of our time.



I can easily imagine people X decades from now discussing this stuff a bit like how we now view teeth-whitening radium toothpaste and putting asbestos in everything, or perhaps more like the abuse of Social Security numbers as authentication and redlining.

Not in any weirdly-self-aggrandizing "our tech is so powerful that robots will take over" sense, just the depressingly regular one of "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

P.S.: For example, imagine having applications for jobs and loans rejected because all the companies' internal LLM tooling is secretly racist against subtle grammar-traces in your writing or social-media profile. [0]

[0] https://www.nature.com/articles/s41586-024-07856-5


> P.S.: For example, imagine having applications for jobs and loans rejected because all the companies' internal LLM tooling is secretly racist against subtle grammar-traces in your writing or social-media profile. [0]

We don't have to imagine such things, really, as that's extremely common with humans. I would argue that fixing such flaws in LLMs is a lot easier than fixing them in humans.


Fixing it with careful application of software-in-general is quite promising, but LLMs in particular are a terrible minefield of infinite whack-a-mole. (A mixed metaphor, but the imagery is strangely attractive.)

I currently work in the HR-tech space, so suppose someone has a not-too-crazy proposal of using an LLM to reword cover letters to reduce potential bias in hiring. The issue is that the LLM will impart its own spin on things, even when a human would say two inputs are functionally identical. As a very hypothetical example, suppose one candidate always writes out the Latin, like Juris Doctor, instead of acronyms like JD, and that alone nudges the model toward "extremely qualified at" instead of "very qualified at".
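To make that concrete: before shipping such a rewording tool, you'd at minimum want a consistency probe along these lines. This is only a sketch using the standard OpenAI Python client; the model name, prompt, and paraphrase pair are hypothetical stand-ins.

    # Probe whether surface form alone changes the rewrite.
    # Assumes the openai v1 Python client and OPENAI_API_KEY in the env.
    from openai import OpenAI

    client = OpenAI()

    # Functionally identical sentences; only the surface form differs.
    variants = [
        "I hold a Juris Doctor and have five years of litigation experience.",
        "I hold a JD and have five years of litigation experience.",
    ]

    def reword(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            temperature=0,        # cuts run-to-run noise; input-driven skew remains
            messages=[
                {"role": "system",
                 "content": "Reword this cover-letter sentence neutrally. "
                            "Do not add, remove, or strengthen qualifications."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content

    for v in variants:
        print(repr(reword(v)))

    # If the outputs differ in strength ("extremely" vs. "very"), the
    # model is imparting its own spin based on surface form alone.

Of course, a probe like this only catches the moles you already thought to look for, which is the whack-a-mole problem in a nutshell.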

The issue of deliberate attempts to corrupt the LLM with prompt-injection or poisonous training data are a whole 'nother can of minefield whack-a-moles. (OK, yeah, too far there.)


I don't think I disagree with you in principle, although I think these issues also apply to humans. I think even your particular example isn't a very far-fetched conclusion for a human to arrive at.

I just don't think your original comment was entirely fair. IMO, LLMs and related technology will be looked at similarly to the Internet - certainly it has been used for bad, but I think the good far outweighs the bad, and I think we have learned (and continue to learn) to deal with its issues, just as we will with LLMs and AI.

(FWIW, I'm not trying to ignore the ways this technology will be abused, or to advocate for the crazy capitalistic tendency of shoving LLMs into everything. I just think the potential for good here is huge, and we should be just as aware of that as of the issues.)

(Also FWIW, I appreciate your entirely reasonable comment. There are far too many extreme opinions on this topic from all sides.)


> lots of people suffered

As someone surrounded by immigrants using ChatGPT to navigate new environs they barely understand, I don't connect at all to these claims that AI is a cancer ruining everything. I just don't get it.


> immigrants using ChatGPT to navigate new environs

To continue one of the analogies: Plenty of people and industries legitimately benefited from the safety and cost-savings of asbestos insulation too, at least in the short run. Even today there are cases where one could argue it's still the best material for the job--if constructed and handled correctly. (Ditto for ozone-destroying chlorofluorocarbons.)

However, over the decades its production and use grew, and it came to be over- and mis-used in very many ways, including--very ironically--in respirators and masks that the user would put on their face and breathe through.

I'm not arguing LLMs have no reasonable uses, but rather that there are a lot of very tempting ways for institutions to slot them in which will cause chronic and subtle problems, especially when they are being marketed as a panacea.


> Not in any weirdly-self-aggrandizing "our tech is so powerful that robots will take over" sense, just the depressingly regular one of "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

We have a term for that: it's called "Luddite". The Luddites were English weavers who would break into textile factories and destroy weaving machines at the beginning of the 1800s. With extremely rare exceptions, all cloth is woven by machines now. The only handmade textiles in modern society are exceptionally fancy rugs and knit scarves from grandma. All the clothing you're wearing now was woven by a machine, and nobody gives this a second thought today.

https://en.wikipedia.org/wiki/Luddite


> We have a term for that, it's called "luddite"

The Luddites were actually a fascinating group! It is a common misconception that they were against technology itself; in fact, your own link does not say as much. The idea of “luddite” meaning anti-technology only appears in the description of the modern usage of the word.

Here is a quote from the Smithsonian[1] on them:

> Despite their modern reputation, the original Luddites were neither opposed to technology nor inept at using it. Many were highly skilled machine operators in the textile industry. Nor was the technology they attacked particularly new. Moreover, the idea of smashing machines as a form of industrial protest did not begin or end with them.

I would also recommend the book Blood in the Machine[2] by Brian Merchant for an exploration of how understanding the Luddites now can be of present value

1 https://www.smithsonianmag.com/history/what-the-luddites-rea...

2 https://www.goodreads.com/book/show/59801798-blood-in-the-ma...


I'm not sure that Luddites really represent fighting against a process that's flawed, as much as fighting against one that's too effective.

They had very rational reasons for trying to slow the introduction of a technology that was, during a period of economic downturn, destroying a source of income for huge swathes of working class people, leaving many of them in abject poverty. The beneficiaries of the technological change were primarily the holders of capital, with society at large getting some small benefit from cheaper textiles and the working classes experiencing a net loss.

If the impact of LLMs reaches a similar scale relative to today's economy, then it would be reasonable to expect to see similar patterns - unrest from those who find themselves unable to eat during the transition to the new technology, but ultimately a lost battle and more profit flowing towards those holding the capital.


> We have a term for that, it's called "luddite".

No, that's apples-to-oranges. The goals and complaints of Luddites largely concerned "who profits", the use of bargaining power (sometimes illicit), and economic arrangements in general.

They were not opposing the mechanization by claiming that machines were defective or were creating textiles which had inherent risks to the wearers.


> complaints of Luddites largely concerned "who profits", the use of bargaining power (sometimes illicit), and economic arrangements in general

I have never thought of being anti-AI as “Luddite”, but this very description of the Luddites' complaints does sound like the concerns are in fact not completely different.

Observe:

Complaints about who profits? Check; OpenAI is earning money off of the backs of artists, authors, and other creatives. The AI was trained on the works of millions(?) of people that don’t get a single dime of the profits of OpenAI, without any input from those authors on whether that was ok.

Bargaining power? Check; OpenAI is hard at work lobbying to ensure that legislation regarding AI will benefit OpenAI, rather than work against its interests. The artists have no money, time, or influence, nor anyone to speak on their behalf, that could have any meaningful effect on AI policies and legislation.

Economic arrangements in general? Largely the same as the first point I guess. Those whose works the AI was trained on have no influence over the economic arrangements, and OpenAI is not about to pay them anything out of the goodness of their heart.


As I recall, the Luddites were reacting to the replacement of their jobs with industrialized low-cost labor. Today, many of our clothes are made in sweatshops using what amounts to child and slave labor.

Maybe it would have been better for humanity if the Luddites won.


No, it would not have been better for humanity if the Luddites had won. You'd have to be misguided, ignorant, or both to believe something like that.

It is not possible to rehabilitate the Luddites. If you insist on attempting to do so, there are better venues.


This venue seems great to me. The topic has come up many times in the past: https://hn.algolia.com/?q=luddite


So, "I'm all right, Jack", to use another Victorian era colloquialism?

https://en.wikipedia.org/wiki/I%27m_alright,_Jack

Except, we are all Jack.


I think you're right, but for the wrong reasons. There were two quotes in the comment you replied to:

> "our tech is so powerful that robots will take over"

> "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

Your response assumes the former, but it's my understanding the Luddites' actual position was the latter.

> Luddites objected primarily to the rising popularity of automated textile equipment, threatening the jobs and livelihoods of skilled workers as this technology allowed them to be replaced by cheaper and less skilled workers.

In this sense, "Luddite" feels quite accurate today.


Incredible to witness someone not only confidently spouting misinformation, but also including a link to the correct information without reading it.


Sometimes it seems like problem-solving itself is being problematized as if solving problems wasn't an obvious good.


Not everything presented as a problem is, in fact, a problem. A solution for something that is not broken may even induce breakage.

Some not-problems, presented as though they are:

"How can we prevent the untimely eradication of Polio?"

"How can we prevent bot network operators from being unfairly excluded from online political discussions?"

"How can we enable context-and-content-unaware text generation mechanisms to propagate throughout society?"


Solving problems isn't an obvious good, or at least it shouldn't be. There are in fact bad problems.

For example, MKUltra tried to solve a problem: "How can I manipulate my fellow man?" That problem still exists today, and you bet AI is being employed to try to solve it.

History is littered with problems such as these.


It does not need a citation. There is no citation. What it needs, right now, is optimism. Optimism is not optional when it comes to doing new things in the world. The "needs citation" is reserved for people who do nothing and choose to be sceptics until things are super obvious.

Yes, we are clearly talking about things mostly still to come here. But if you assign a 0 until it's a 1, you are just signing out of advancing anything that's remotely interesting.

If you are able to see a path to 1 on AI at this point, then I don't know how you would justify not giving it our all. If you see a path, and it turns out that making AI work for us required using all of human knowledge up to this point, then we must do that. What could possibly be more beneficial to us?

This is regardless of all the issues that will have to be solved and the enormous amount of societal responsibility this puts on AI makers — for which I, as a voter, will absolutely hold them accountable (even though I am actually fairly optimistic they all feel the responsibility and are somewhat spooked by it too).

But that does not mean I think it's responsible to try and stop them at this point — which is exactly what the copyright debate would do. It would simply shut down 95% of AI, tomorrow, without any viable alternative around. I don't understand how that is a serious option for anyone who roots for us.


If you are going to make a bold assertive claim without evidence to back it up, and then change your argument to "my assertion requires optimism... trust me on this", then perhaps you should amend your original statement.


This is an astonishing amount of nonsensical waffle.

Firstly, *skeptics.

Secondly, being skeptical doesn't mean you have no optimism whatsoever; it's about hedging your optimism (or pessimism, for that matter) based on what is understood, even about a not-fully-understood thing, at the time you're being skeptical. You can be as optimistic as you want about getting data off of a hard drive that was melted in a fire; that doesn't mean you're going to do it. And a skeptic might rightfully point out that with the drive platters melted together, data recovery is pretty unlikely. Not impossible, but really unlikely.

Thirdly, it is highly optimistic to call OpenAI's efforts thus far a path to true AI. What are you basing that on? I have not a deep but a passing understanding of the underlying technology of LLMs, and even so, I do not see any path from ChatGPT to Skynet. None whatsoever. Does that mean LLMs are useless or bad? Of course not, and I sleep better too knowing that an LLM is not AI and is therefore not an existential threat to humanity, no matter what Sam Altman wants to blither on about.

And fourthly, "wanting" to stop them isn't the issue. If they broke the law, they should be stopped, simple as. If you can't innovate without trampling the rights of others then your innovation has to take a back seat to the functioning of our society, tough shit.


Hey, I have some magic beans to sell you.

I don’t think that the consumer LLMs OpenAI is pioneering are what need optimism.

AlphaFold and other uses of the fundamental technology behind LLMs need hype.

Not OpenAI.


Pretty sure Alphabet projects don't need hype.


Hard disagree, in this case.

AlphaFold is a game changer for medical R&D. Everyone should be hyped for that.

They are also leveraging these same ML techniques to detect kelp forests off the coast of Australia for preservation.

Alphabet isn’t a great company, but that does not mean the good they do should be ignored.

Much more deserving than ChatGPT. Productized LLMs are just an attempt to make a new consumer product category.


They both do.


No.


This message is proudly sponsored by Uranium Glassware Inc.


Skeptics require proof before belief. That is not mutually exclusive with having hypotheses (AKA vision).

I think you raise some interesting concerns in your last paragraph.

> enormous amount of societal responsibility this puts on AI makers — which I, as a voter, will absolutely hold them accountable for

I'm unsure of what mechanism voters have to hold private companies accountable. For example, whenever YouTube uses my location without me ever consenting to it - where is the vote to hold them accountable? Or when Facebook facilitates micro-targeting of disinformation - where is the vote? Same for anything AI. I believe any legislative proposals (with input from large companies) are far more likely to create a walled garden than to actually reduce harm.

I suppose there's no need to respond; my main point is that I don't think there is any accountability through the ballot when it comes to AI and most things high-tech.


People who have either no intention of holding someone/something to account, or who have no clue about what systems and processes are required to do so, always argue to elect/build first, and figure out the negatives later.


The company spearheading AI is blatantly violating its non-profit charter in order to maximize profits. If the very stewards of AI are willing to be deceptive from the dawn of this new era, what hope can we possibly have that this world-changing technology will benefit humanity instead of funneling money and power to a select few oligarchs?


Trickle-down effects


> It would simply shut down 95% of AI, tomorrow, without any other viable alternative around.

Oh, the humanity! Who will write our third-rate erotica and Russian misinformation in a post-AI world?


The burden of proof is on the people claiming that a powerful new technology won't ultimately improve our lives. They can start by pointing out all the instances in which their ancestors have proven correct after saying the same thing.



