The golden rule of LLMs is that they can make mistakes, so you need to check their work. You're describing a situation where the intended user cannot check the LLM output for mistakes. That violates a basic safety constraint and is not a good use case for LLMs.