
The goal of AI is to make money. All the moralisation is very human, but also extremely naive.

BTW, I don't really understand what "social pressure" and "shame" have to do with your story. In my book, the person with a good memory isn't to blame. They're just demonstrating a security issue, which is a good thing.



In that example, the mnemonist should be demonstrating the security issue to the government, and not to their friend. We have social taboos for this reason. As an extreme example, I wouldn't greet a person by their penis size after noticing it in the locker room - some information should still be considered private, regardless of how we came to obtain it.

Same with an LLM: when sensitive information ends up in its weights, regardless of how it was obtained, I think we should apply pressure/shame/deletion/censorship (whatever you want to call it) to stop it from using that information in any future interactions.


I am probably too autistic to recognize remembering a personal datum as a taboo.

However, I am totally on your side regarding LLMs learning data they shouldn't have seen in the first place. IMO, we as a society are too chicken to act on the current situation. It's plain insane that everyone and their dog knows libgen has been used to train models, yet the companies that did this have faced NO consequences at all. After that, we shouldn't be surprised if things go downhill from here on.



