▲ defrost a day ago
You're correct that I can't be sure, as I don't work at KPMG and haven't had any contact with their piles of documents, existing practices, or TaxBot summaries. What I do know as a fact is that KPMG is self-reporting satisfaction with its in-house work on putting such a thing together. The 'proof' will be the next five years of application to corporate clients.

> After all, LLMs are not very good with numbers.

The assumption, always, should be that neither are interns. Hence draft summaries should be reviewed and sanity-checked by senior, experienced people. I would assume (based on my prior work summarizing large volumes of data in the mineral and energy resources domain) that any report produced would have references back to source documents and pages, making the task of cross-checking the product simple and relatively straightforward.
▲ Neywiny 19 hours ago | parent
I think the concern is less about what it gathered and more about skepticism that it might have missed something. The same way so many AI tools just ignore commands, imagine it just ignoring a few sentences. Maybe like:

> We'll sell you our company for $100. But you have to do a handstand and spin around 5 times.

If the AI only puts the first sentence in the summary, you can see how it'd be a bad day for the client. Any human would go "huh, that's weird, I'll make sure that's noted in the summary," but in my experience, AIs just don't have that feeling.