You're kind of the opposite of a slow LLM. LLMs don't look anything up; they enthusiastically assert that they're correct. They have no desire to know anything.
according to openai, the least likely model to hallucinate is gpt-5-thinking-mini, and it hallucinates 26% of the time. Seems to me the problems of LLMs boldly producing lies are far from solved. But sure, they lied years ago too.
"according to openai, the least likely model to hallucinate is gpt-5-thinking-mini, and it hallucinates 26% of the time."
You're not so bad at hallucinating, yourself. The model card says:

"We find that gpt-5-main has a hallucination rate (i.e., percentage of factual claims that contain minor or major errors) 26% smaller than GPT-4o ..."
That's the only reference to "26%" that I see in the model card, and a rate 26% smaller than GPT-4o's is not the same thing as hallucinating 26% of the time.