
You're kind of the opposite of a slow LLM. LLMs don't look anything up; they enthusiastically assert that they're correct. They have no desire to know anything.

LLMs don't look anything up; they enthusiastically assert that they're correct.

Says someone who lectures on how LLMs worked two years ago.

https://openai.com/index/why-language-models-hallucinate/
https://cdn.openai.com/gpt-5-system-card.pdf

According to OpenAI, the least likely model to hallucinate is gpt-5-thinking-mini, and it hallucinates 26% of the time. Seems to me the problem of LLMs boldly producing lies is far from solved. But sure, they lied years ago too.

According to OpenAI, the least likely model to hallucinate is gpt-5-thinking-mini, and it hallucinates 26% of the time.

You're not so bad at hallucinating, yourself. What the model card actually says: "We find that gpt-5-main has a hallucination rate (i.e., percentage of factual claims that contain minor or major errors) 26% smaller than GPT-4o ..."

That's the only reference to "26%" that I see in the model card.
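
To spell out the difference: "26% smaller" is a relative reduction, not a 26% hallucination rate. A quick sketch, using a made-up 10% GPT-4o baseline purely for illustration (the card only reports the relative figure):

    # Hypothetical numbers: the 10% GPT-4o baseline is invented for illustration.
    # The system card only says gpt-5-main's rate is 26% *smaller* than GPT-4o's.
    gpt4o_rate = 0.10                          # assumed baseline hallucination rate
    gpt5_main_rate = gpt4o_rate * (1 - 0.26)   # 26% relative reduction
    print(f"{gpt5_main_rate:.1%}")             # -> 7.4%, not "26% of the time"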
