Ucalegon 7 hours ago
But you can find that information regardless of an LLM? Also, why do you trust an LLM to give it to you versus all of the other ways to get the same information, ways that communicate the desired outcome with higher trust, like screenshots? Why are we assuming just because the prompt responds that it is providing proper outputs? That level of trust is an attack surface in and of itself.
setopt 4 hours ago | parent | next
> But you can find that information regardless of an LLM?

Do you have the same opinion if Google chooses to delist any website describing how to run apps as root on Android from their search results? If not, how is that different from lobotomizing their LLMs in this way? Many people use LLMs as a search engine these days.

> Why are we assuming just because the prompt responds that it is providing proper outputs?

"Trust but verify." It's often easier to verify that something the LLM spat out makes sense (and iteratively improve it when it doesn't) than to do the same thing in traditional ways. Not always, mind you, but often. That's the whole selling point of LLMs.
cachvico 6 hours ago | parent | prev
That's not the issue at hand here.