serial_dev 2 hours ago
Exactly, the point is that you can make LLMs say anything. If you narrow the claim down enough, a single blog post is enough; as the lie gets bigger and less narrow, you probably need 10x-100x that. But the proof of concept is there, and it doesn't sound like it's too hard. You're also right that it's similar to SEO. Maybe the only difference is that in this case the tools (ChatGPT, Gemini, ...) state the lies authoritatively, whereas with SEO you're given a link to the made-up post. Some people (even devs who work with this daily) forget that these tools can be influenced easily, and that they make stuff up all the time just so they can give you an answer.