Hacker News

Seems like a great feature. What I’d really like is a “recap for me till here” for books I started reading and then stopped for whatever reason. I was reading Unsong for a bit (great book, very enjoyable) and then lately the baby has wanted a lot more attention, so I didn’t get much reading done. I just want to catch up quickly so I can continue.

LLMs are great for this, for the plot and character questions, etc.

Authors have nothing to do with it. It’s my device, my book that I bought. It would be like if YouTube banned a screen reader. These are at two different levels of the stack.





> LLMs are great for this, for the plot and character questions, etc.

The article links to a clear, direct counterexample of this claim. By Amazon, even.

https://gizmodo.com/fallout-ai-recap-prime-video-amazon-2000...


It's not that direct a counterexample. We have no idea what underlying data from the Fallout show they gave to the model to summarize. Surely it wasn't the scripts of the episodes. The nature of the error makes me think it might have been given stills of the show to analyze visually. In this case we know it is the text of the book.

> It's not that direct a counterexample.

Amazon made a video with AI summarizing their own show, and got it broadly wrong. Why would we expect their book analysis to be dramatically better, especially since far fewer human eyes are presumably on the summaries of some random book that sold 500 copies than on official marketing pushes for the Fallout show?


For the reason I gave in my answer: it would be answering based on the text of the book. I don't expect it to be particularly great regardless because these features always use cheap models.

> For the reason I gave in my answer: it would be answering based on the text of the book.

Why would that not also be true for the Fallout season one recap video?


Did you read a word past the part of my answer you quoted?

Of course. I’m just not sure how “the Kindle feature’s cheap models are gonna be even worse at the task” helps your point.

Because text analysis is substantially easier than video analysis?

Amazon has the Fallout scripts, subtitles, internal show bibles, etc. all available to them.

I’ve found the complete opposite as recently as last week. When I ask a deep question about a book, it will hallucinate whole paragraphs of bogus justification and even invent characters.

An LLM working on its own is just generative intelligence. You have to ground it if you want the real stuff. The Kindle app has the text of the book, and I'd want it to put that in the LLM context.
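A minimal sketch of what grounding could look like here: the book text goes into the prompt itself, and the model is told to answer only from it. The message format mimics common chat APIs; `build_grounded_prompt` is a hypothetical helper, not anything Kindle actually exposes.

```python
# Sketch of grounding an LLM answer in the book's actual text.
# `build_grounded_prompt` is a hypothetical helper; the dict format
# follows the common chat-API convention of role/content messages.

def build_grounded_prompt(book_text: str, question: str) -> list[dict]:
    """Return chat messages that pin the answer to the supplied text."""
    return [
        {"role": "system",
         "content": "Answer only from the book text provided. "
                    "If the answer is not in the text, say you don't know."},
        {"role": "user",
         "content": f"BOOK TEXT:\n{book_text}\n\nQUESTION: {question}"},
    ]

messages = build_grounded_prompt("Call me Ishmael. ...", "Who narrates?")
```

Without something like this, the model is answering from whatever it memorized (or didn't) during training, which is where the invented characters come from.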

The entire book in the context at once?

That's probably 100k-150k tokens for most novels.
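The back-of-the-envelope arithmetic behind that range, assuming a typical novel runs 80k–120k words and English averages roughly 1.3 tokens per word (a common rule of thumb; exact counts depend on the tokenizer):

```python
# Rough token estimate for putting a whole novel in context.
# 1.3 tokens/word is an approximation, not a tokenizer-exact figure.

TOKENS_PER_WORD = 1.3

def estimate_tokens(word_count: int) -> int:
    return round(word_count * TOKENS_PER_WORD)

print(estimate_tokens(80_000))   # shorter novel -> ~104,000 tokens
print(estimate_tokens(120_000))  # longer novel  -> ~156,000 tokens
```

That lands squarely in the 100k–150k range, which fits within the context windows of current frontier models.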

That’s more reasonable than I thought.

Did you give it the text of the book and tell it to answer based on that?

Does this feature put the entire text of the book into the context?

I hope not; it probably only includes text up to the page you're on, to avoid spoiling later content.


