empath75 | 4 days ago
I generally respond to stuff like this with "people do this, too", but given their specific examples, this result genuinely surprises me. It doesn't match my experience with LLMs in practice at all, where they frequently do ignore irrelevant data and provide a helpful response. I also think people focus far too much on 'happy path' deployments of AI, when there are so many ways things can go wrong with even badly written prompts, let alone intentionally adversarial ones.
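To make that concrete, here's a minimal sketch (my own, not from the paper) of the kind of robustness check being discussed: ask the same question with and without irrelevant filler injected, and flag cases where the answer changes. The `ask` callable is a stand-in for whatever LLM client you actually use; the toy model at the bottom exists only so the sketch runs.

    from typing import Callable

    def answer_changes(ask: Callable[[str], str], question: str, filler: str) -> bool:
        """Return True if prepending irrelevant text changes the model's answer."""
        clean = ask(question)
        noisy = ask(filler + "\n\n" + question)
        return clean.strip() != noisy.strip()

    if __name__ == "__main__":
        # Toy stand-in so the sketch runs; a real check would call an actual model.
        fake_model = lambda prompt: prompt.splitlines()[-1].upper()
        q = "What is 12 * 7?"
        junk = "My cat knocked over a plant this morning. Anyway:"
        print(answer_changes(fake_model, q, junk))  # False: the toy ignores the filler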
achierius | 4 days ago
> I generally will respond to stuff like this with "people do this, too"

But why? You're assuming that everyone using these things is trying to replace the "average human". If you're just trying to solve an engineering problem, then "humans do this too" isn't very helpful -- e.g. humans leak secrets all the time, but it would be quite strange to point that out in the comments on a paper outlining a new Spectre attack. And if I were trying to use an "average human" to solve such a problem, I would certainly have safeguards in place, using systems that we have developed and, over hundreds of years, shown to be effective.
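For what it's worth, a "safeguard" here can be as simple as scanning output before it leaves the system. A minimal sketch, assuming you only care about a couple of obvious secret shapes; the patterns are illustrative, and a real deployment would use a proper secret scanner:

    import re

    # Illustrative patterns only; real scanners cover far more shapes.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key id shape
        re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    ]

    def redact_secrets(text: str) -> str:
        """Replace anything matching a known secret pattern before output ships."""
        for pattern in SECRET_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        return text

    print(redact_secrets("token: AKIAABCDEFGHIJKLMNOP, rest is fine"))
    # -> token: [REDACTED], rest is fine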
| ||||||||||||||
JambalayaJimbo | 4 days ago
Autonomous systems have an advantage over humans in that they can be scaled to a much greater degree. We must naturally ensure that these systems do not make the same mistakes humans do.
Ekaros | 3 days ago
When I think about many of the use cases LLMs are planned for, the unhappy paths seem critical. A not-insignificant number of people will ramble about unrelated things to a customer support agent if given the opportunity, or lack the ability to state only what's needed without adding extra context. There might be a happy path when you're isolated to one or a few tasks, but not in general use cases... A sketch of that failure mode is below.
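As an illustration, here's a rough sketch of a rambling support message with one actionable request buried in irrelevant detail; the message, the helper name, and the regex are all made up for the example:

    import re

    RAMBLE = (
        "Hi, so my neighbor was telling me about his router, and by the way "
        "I loved your ad last week, anyway my invoice #4821 was charged twice, "
        "and also do you sell gift cards?"
    )

    def extract_invoice_ids(message: str) -> list[str]:
        """Pull only the invoice references; everything else is noise."""
        return re.findall(r"invoice\s*#(\d+)", message, flags=re.IGNORECASE)

    print(extract_invoice_ids(RAMBLE))  # ['4821']

A system built for the happy path would feed the whole ramble straight to the model; surviving real users means isolating the one thing that actually matters.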