dcre 3 days ago
It's not that direct a counterexample. We have no idea what underlying data from the Fallout show they gave to the model to summarize. Surely it wasn't the scripts of the episodes. The nature of the error makes me think it might have been given stills of the show to analyze visually. In this case we know it is the text of the book.
ceejayoz 3 days ago | parent
> It's not that direct a counterexample.

Amazon made a video with AI summarizing their own show, and got it broadly wrong. Why would we expect their book analysis to be dramatically better - especially since far fewer human eyes are presumably on the summaries of some random book that sold 500 copies than on official marketing pushes for the Fallout show?