
Being broadly against AI is a strange stance. Should we all turn off swipe-to-type on our phones? Are we supposed to boycott cancer testing? Are we to forbid people with disabilities from reading voicemail transcriptions or using text-to-speech? Make it make sense.


> Make it make sense.

Ok. They are not talking about AI broadly, but about LLMs, which have insane energy requirements and benefit from the unpaid labor of others.


These arguments are becoming tropes with little influence. Find better arguments.


Does the truth of the arguments have no bearing?


An argument can both be true and irrelevant.


Okay, you saying it's irrelevant doesn't make it so. You don't control how people feel about stuff.


Arguably you shouldn't trivialize your argument by decorating it when fundamentally it is rock solid. I wonder if the author would consider just walking away from tech when they realize what a useless burden it's become for everyone.


haha this sounds like a slave master saying “again, free the slaves? really? i’ve heard that 100s of times, be more original”


Thank you. The dismissals are getting more and more obvious.


Definitely a head scratcher.


What do LLMs have to do with typing on phones, cancer research, or TTS?

Deciding not to enable a technology that is proving destructive to all but the very few who benefit from it is a fine stance to take.

I won't shop at Walmart for similar reasons. Will I save money shopping at Walmart? Yes. Will my not shopping at Walmart bring about Walmart's downfall? No. But I refuse to personally be an enabler.


I don't agree that Walmart is a similar example. They benefit a great many people - their customers - through their large selection and low prices. Their profit margins are considerably lower than the small businesses they displaced, thanks to economies of scale.

I wish I had Walmart in my area, the grocery stores here suck.


It is a similar example. Just like you and I have different opinions about whether Walmart is a net benefit or net detriment to society, people have starkly different opinions as to whether LLMs are a net benefit or net detriment to society.

People who believe it's a net detriment don't want to be a part of enabling that, even at cost to themselves, while those who think it's a net benefit or at least neutral don't have a problem with it.


You really need to research "the Walmart effect" before spouting that again. There is literally a named phenomenon for what happens to communities after Walmart arrives.

If your goal is to not contribute to the community and to leave when it dries up, sure, Walmart is great short-term relief.


i think when ppl say AI they mean "LLMs in every consumer-facing product"


You might be right, and I think tech professionals should be expected to use industry terminology correctly.


There is not a single person in this thread that thinks of swiping on phones when the term "AI" is mentioned, apart from people playing the contrarian.


You take a pile of input data, use a bunch of code on it to create a model, which is generally a black box, and then run queries against that black box. No human really wrote the model. ML has been in use for decades, in various places. Google Translate was an "early" convert. Credit card fraud models as well.

The industry joke is: What do you call AI that works? Machine Learning.


counterexample: me! autocorrect, spam filters, search engines, blurred backgrounds, medical image processing, even revenue forecasting with logistic regression are "AI" to me and others in the industry

I started my career in AI, and it certainly didn’t mean LLMs then. some people were doing AI decades ago

I would like to understand where this moral line gets drawn — neural networks that output text? that specifically use the transformer architecture? over some size?


When Stable Diffusion and GitHub Copilot came out a few years ago is when I really started seeing this "immoral" mentality about AI, and like you it left me scratching my head: why now and not before? Turns out, people call it immoral when they see it threatening their livelihood, and they come up with all sorts of justifications that sound principled, but when you dig underneath, it's all about their economic anxiety, nothing more. Humans are not direct creatures; it's much more emotional than one would expect.


I believe it's more nuanced than that.

The immoral thing about gen-AI is how it's trained. Whether it's source code, images, or audio, disregarding licenses, treating everything as fair use, and ingesting it all is the most immoral part.

Then there's the environmental cost, and how it's downplayed to keep the hype pumped.

I'm not worried about the change AI will bring, but the process of getting there is highly immoral, especially when things are licensed specifically to prohibit that kind of use.

When the AI industry says "we'll be dead if we obey copyright and licenses", you know something is wrong. Maybe the whole industry shouldn't have built its business model on grabbing whatever it can and running with it.

Because of these zealots, I'm not sharing my photos anymore and am considering not sharing the code I write either. I share these for the users, with appropriate licenses, not for other developers or AI companies to fork, close, and do whatever they like with them.


I find copyright itself immoral. Intellectual "property" is a made-up fiction that shouldn't exist and only entrenches existing players; see Disney lobbying continuously for longer and longer copyright durations, all to keep Mickey under their control until very recently. Patents, too, are not filed by individual inventors anymore; it's massive corporations and patent trolls that serve no useful purpose. There is a reason many programmers like open source, and especially copyleft, the latter being an explicit battle against the copyright system through its own means. Information should be free to be used; it should not be hoarded by so-called copyright holders.


I believe I failed to convey what I'm trying to say.

I'm a strong believer in copyleft. I only share my code under GNU GPLv3+, no exceptions.

However, this doesn't allow AI companies to scrape it, remix it, and sell access to it. That is what I'm against.

If scraping, closing, and selling GPLv3 or other strongly copylefted material is fair use, then there's no point in having copyleft, since it can't protect what's intended to be open.

Protecting copyleft requires protecting copyright, because copyleft is built upon the copyright mechanism itself.

While I'm not a fan of a big media company monopolizing something for a century, we need this framework to keep things open as well. Copyright should be reformed, not abolished.


Consider regulatory capture, though. If copyright becomes so entrenched that only big companies can afford the licensing fees, then we'll never have truly democratized open-source models. Entrenched players often want regulation precisely because they know only they can comply with it, effectively turning the market into a de facto monopoly. That is why I want all information to be free, and why I allow anyone and everyone to copy my works. Copyleft exists only as a response to copyright; many who favor it would prefer no copyright at all, and use copyleft only because it's the sole way to enforce that wish within the current system. I'm looking at the higher-order effects of a world where only big players can pay for copyright, because it's not as simple as licensing it to them. Hopefully I have changed your mind about copyright; otherwise I'd be happy to continue the conversation.


I believe that's a bit of a shallow/narrow take.

Yes, copyleft exists as a response to copyright, but it builds something completely different from what copyright promises. While copyright protects creators, copyleft protects users. This part is widely misunderstood.

Deregulation to prevent regulatory capture is not a mechanism that works when there's money and a significant power imbalance involved. Media companies can always put up barriers to the consumption of their products through contracts and other mechanisms. Signing a contract not to copy the thing you get to see can get out of hand in very grim ways. Consumers are very weak compared to the companies providing the content, because of the desirability of the content alone, even if you ignore the monetary imbalance.

Moreover, copyleft doesn't only prevent that kind of exploitation; it actively protects the user by making it impossible to close off the thing you receive. Copyleft protects all the users of the thing in question. Viewed in the context of software, it not only allows the code to propagate indefinitely but also allows it to be properly preserved for the long run.

Leaving things free-for-all again not only fails to protect the user but also profits the bigger companies, since they have the power to hoard, remix, refine, and sell this work, which they get for free. So it only carries water for the big companies. Moreover, even permissive licenses depend on the notion of copyright to attribute the artifact to its original creator.

Otherwise, even permissively licensed artifacts can be embedded in the works of larger companies without credit, allowing those companies to make slight derivatives of the things they got for free and sell them to consumers on their own terms, without any guardrails.

So abolishing copyright will not only further un-democratize things, it will also make it impossible to credit the creators of the building blocks that companies use to erect their empires.

This is why I will always share my work under strong copyleft or non-commercial/share-alike (and no-derivatives, where it makes sense) licenses.

In short, I'm terribly sorry to tell you that you didn't convince me about abolishing copyright at all. The only thing you achieved was to make me think further on my stance, find the gaps in my train of thought, and fill them with even more support for copyleft. Also, it looks like my decision not to share my photos anymore is getting more concrete.


Intentionally or not, you are presenting a false equivalency.

I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.


One person's unethical AI product is another's accessibility tool. Where the line is drawn isn't as obvious as you're implying.


It is unethical to me to provide an accessibility tool that lies.


LLMs do not lie. That implies agency and intentionality that they do not have.

LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe none could ever exist. So take it or leave it.


There's no way to ever know in advance when being somewhat accurate is going to be good enough, and no way to know how accurate the thing is before engaging with it, so you have to babysit it. "Can do things" is carrying a lot of load in your statement. It builds you a car with no brakes, and when you tell it not to do that, it builds you one without an accelerator either.


>That implies agency and intentionality that they do not have.

No, but the companies have agency. LLMs lie, and they only get fixed when the companies are sued. Close enough.


So provide one that "makes a mistake" instead.


Sure https://www.nbcnews.com/tech/tech-news/man-asked-chatgpt-cut...

Not going to go back and forth on this as you inevitably try to nitpick "oh but the chatbot didn't say to do that"


If it was actually being given away as an accessibility tool, then I would agree with you.

It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.


1. Intellectual property is a fiction that should not exist.

2. Open source models exist.


Well yes on both counts.

The only thing worse than intellectual property is a special exception for people rich enough to use it.

I have hope for open source models, I use them.


Based.


How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around for quite some time, will continue to be, and that much of it will even be with LLMs.


You have reasonably available context here. "This year" seems more than enough on its own.

I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kinda awesome. As it is, they are basically just theft from the public and IP laundering.


>Consider my comment a reminder that ethical use of AI has been around for quite some

You can be in a swamp and say "but my corner is clean". This is the exact opposite of the rotten-barrel metaphor: you're trying to claim your one apple is somehow not rotten compared to the fermenting barrel it came from.


Putting aside the "useful" comment, because many find LLMs useful: let me guess, you're the one deciding whether it's ethical or not?


They are a marketing firm, so the stance within their craft is much narrower than cancer research.

Also, we clearly aren't prioritizing cancer research if Altman has shifted to producing slop videos. That's why sentiment is decreasing.

>Make it make sense.

I can't explain to one who doesn't want to understand.


There's a moral line that every person has to draw about what work they're willing to do. Things aren't always so black and white; we straddle that line. The impression I got reading the article is that they didn't want to work for bubble AI companies generating for the sake of generating, not that they hated anything with a vector DB.



