I’ve found the complete opposite as recently as last week. When I ask a deep question about a book, the model will hallucinate whole paragraphs of bogus justification and even invent characters.
An LLM working on its own is just generative intelligence; you have to ground it if you want the real stuff. The Kindle app has the full text of the book, and I'd want it to put that text into the LLM's context.
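A minimal sketch of what that grounding could look like, assuming naive keyword retrieval over the book text and a hypothetical `ask_llm` call (this is not Kindle's actual API, just the shape of the idea):

```python
# Hypothetical grounding sketch: pull the most relevant passages from the
# book text and prepend them to the question before calling the model.

def split_into_chunks(book_text: str, chunk_size: int = 1000) -> list[str]:
    """Naive fixed-size chunking of the book text."""
    return [book_text[i:i + chunk_size] for i in range(0, len(book_text), chunk_size)]

def score(chunk: str, question: str) -> int:
    """Crude keyword overlap; a real system would use embeddings."""
    q_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q_words)

def build_grounded_prompt(book_text: str, question: str, top_k: int = 3) -> str:
    """Select the top_k chunks by overlap and wrap them in the prompt."""
    chunks = split_into_chunks(book_text)
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    passages = "\n---\n".join(best)
    return (
        "Answer using only the passages below. "
        "If they don't contain the answer, say so.\n\n"
        f"Passages:\n{passages}\n\nQuestion: {question}"
    )

# prompt = build_grounded_prompt(kindle_book_text, "Why does the narrator distrust her brother?")
# answer = ask_llm(prompt)  # ask_llm stands in for whatever model call you actually use
```

The point is just that the model answers from the passages it was handed rather than from whatever it half-remembers about the book.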