▲ atmosx a day ago
I agree. LLMs cannot and should not replace professionals, but there are huge gaps that can be filled by the introduction they provide, and the fact that you can dig deeper into any subject is huge. This is probably a field where MistralAI could use privacy and GDPR compliance as leverage to build LLMs.
▲ phatfish a day ago | parent
One of the big issues I have with LLMs is that when you start a prompting session with an easy question, it all goes great. The LLM brings up points you might not have considered and appears very knowledgeable. Fact-checking at this stage will show it is invariably correct.

Then you start "digging deeper" on a specific sub-topic, and this is where the risk of an incorrect response grows. But it is easy to continue with the assumption that the text you are getting is accurate. This has happened so many times with the computing/programming topics I usually prompt about that there is no way I would trust a response from an LLM on health issues I am not already very familiar with.

Given that the LLM will give incorrect information (after lulling people into a false sense of its accuracy), who is going to be responsible for the person who makes themselves worse off through self-diagnosis, even with a privacy-focused service?