There is nothing worse than this feeling: like, fantastic, now I have to go read through this slop with incredible care and attention to minutiae. I may as well not read the slop and just redo all the work/thought myself; it will be easier that way.
From my understanding, the "vibe" part means you go along with the vibe of the LLM: you don't question the design choices the LLM makes, and you just go with the output it hands you.
It matters because the amount of influence something has on you is directly attributable to the amount of human effort put into it. When that effort is removed, so too is the influence. Influence does not exist independently of effort.
All the people yapping about LLMs keep fundamentally failing to grasp that concept. They think the output exists in a pure functional vacuum.
I don't know if I'm misinterpreting the word "influence", but low-effort internet memes have a lot more cultural impact than a lot of high-effort art. There are also botnets, which influence political voting behaviour.
> low-effort internet memes have a lot more cultural impact than a lot of high-effort art.
Memes only have impact in aggregate, due to emergent properties in a McLuhanian sense. An individual meme has little to no impact compared to (some) works of art.
I see what you're getting at, but I think a better framing would be: there's an implicit understanding amongst humans that, in the case of things ostensibly human-created, a human found it worth creating. If someone put in the effort to write something, it's because they believed it was worth reading. It's part of the social contract that makes it seem worth reading a book or listening to a lecture even if you don't receive any value from the first word.
LLMs and AI art flip this around, because potentially very little effort went into making things that potentially take lots of effort to experience and digest. That doesn't inherently mean they're not valuable, but it does mean there's no guarantee that at least one other person out there found them valuable. Even pre-AI it wasn't an iron-clad guarantee, of course -- copywriting, blogspam, and astroturfing existed long before LLMs. But everyone hates those precisely because they prey on the same social contract that LLMs do, just at a smaller scale and with a lower effort-in:effort-out ratio.
IMO though, while AI enables malicious / selfish / otherwise anti-social behavior at an unprecedented scale, it also enables some pretty cool stuff and new creative potential. Focusing on the tech rather than those using it to harm others is barking up the wrong tree. It's looking for a technical solution to a social problem.
Well, the LLMs were trained on data that required human effort to write; it's not just random noise. So the result they give is, indirectly and probabilistically, regurgitated human effort.
This has always been the case?