ceejayoz 3 days ago
> It's not that direct a counterexample. Amazon made a video with AI summarizing their own show, and got it broadly wrong. Why would we expect their book analysis to be dramatically better, especially when presumably far fewer human eyes are on the summaries of some random book that sold 500 copies than on official marketing pushes for the Fallout show?
dcre 3 days ago | parent
For the reason I gave in my answer: it would be answering based on the text of the book. I don't expect it to be particularly good regardless, because these features always use cheap models.
catgary 3 days ago | parent
Because text analysis is substantially easier than video analysis?