arvindveluvali | 4 days ago
This is a really good point, but we don't think hallucinations pose a significant risk to us. You can think of Fresco as a really good scribe; we're not generating new information, just consolidating the information that the superintendent has already verbally flagged as important.
mayank | 4 days ago
This seems odd. If your scribe can lie in complex and sometimes hard-to-detect ways, how do you not see some form of risk? What happens when (not if) your scribe misses something and real-world damages ensue as a result? Are you expecting your users to cross-check every report? And if so, what's the benefit of your product?
lolinder | 4 days ago
This is the wrong response. It doesn't matter whether you've asked it to summarize or to produce new information; hallucinations are always a question of when, not if. LLMs don't have a "summarize mode": their mode of operation is always the same. A better response would have been "we run every response through a second agent that validates no content was added that wasn't in the original source." Saying that hallucinations simply don't apply to you tells me that you haven't spent enough time with this technology to be selling it to safety-critical industries.
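For what it's worth, that second pass doesn't have to be elaborate. A minimal sketch of the idea, assuming a hypothetical `call_llm` helper standing in for whatever model client you already use (this is not Fresco's actual pipeline, just the shape of the check):

```python
# Second-pass grounding check: ask a verifier model whether each sentence of a
# generated summary is actually supported by the source transcript.
# `call_llm` is a hypothetical placeholder for a real LLM client.

from dataclasses import dataclass


@dataclass
class GroundingResult:
    sentence: str
    supported: bool


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this to whatever client you already use."""
    raise NotImplementedError


def check_grounding(source: str, summary: str) -> list[GroundingResult]:
    results = []
    # Naive sentence split; a production system would use a proper segmenter.
    for sentence in filter(None, (s.strip() for s in summary.split("."))):
        prompt = (
            "Source transcript:\n"
            f"{source}\n\n"
            "Claim from summary:\n"
            f"{sentence}\n\n"
            "Answer YES only if the claim is fully supported by the source; "
            "otherwise answer NO."
        )
        verdict = call_llm(prompt).strip().upper()
        results.append(GroundingResult(sentence, verdict.startswith("YES")))
    return results


def flag_unsupported(source: str, summary: str) -> list[str]:
    """Return the summary sentences the verifier could not ground in the source."""
    return [r.sentence for r in check_grounding(source, summary) if not r.supported]
```

Even something that crude at least surfaces sentences the verifier can't ground in the transcript for human review, instead of shipping them to the superintendent as fact.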
joe_the_user | 4 days ago
"Concerns about medical note-taking tool raised after researcher discovers it invents things no one said..." https://www.tomshardware.com/tech-industry/artificial-intell... |