NicuCalcea 3 hours ago
I was able to reproduce the response with "Which tech journalist can eat the most hot dogs?". I think Germain intentionally chose a light-hearted topic that's niche enough that it won't actually affect many queries, but the point he's making is that bigger players can influence AI responses for more common questions. I don't see it as particularly unique; it's just another form of SEO.

LLMs are generally much more gullible than most people, though: they uncritically reproduce whatever they find, without noticing that the information is an ad or inaccurate. I used to run an LLM agent researching companies' green credentials, and it was very difficult to steer it away from just repeating baseless greenwashing. It would read something like "The environment is at the heart of everything we do" on Exxon's website and come back to me saying Exxon isn't actually that bad, because they say so on their website.
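For illustration, here's a minimal sketch of the kind of guardrail I mean, assuming an OpenAI-style chat API (the model name and system-prompt wording are my own, not what I actually ran):

    # Sketch: steer an agent to discount companies' self-reported claims.
    # Assumes the official `openai` Python client and an API key in the
    # environment; the prompt text is illustrative only.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = (
        "You are researching companies' environmental records. "
        "Treat statements on a company's own website as marketing, not "
        "evidence. Only report claims backed by independent sources "
        "(regulators, NGOs, audits), and label a claim 'unverified' when "
        "only the company itself makes it."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Assess Exxon's green credentials "
             "based on the excerpts below: ..."},
        ],
    )
    print(response.choices[0].message.content)

Even with instructions like this, the agent kept sliding back into taking the marketing copy at face value.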
serial_dev 2 hours ago
Exactly, the point is that you can make LLMs say anything. If you narrow the topic down enough, a single blog post is enough; as the lie gets bigger and less narrow, you probably need 10x-100x that. But the proof of concept is there, and it doesn't sound like it's too hard to pull off (see the probe sketch below).

He's also right that it's similar to SEO. Maybe the only difference is that in this case the tools (ChatGPT, Gemini, ...) state the lies authoritatively, whereas with SEO you are given a link to a made-up post. Some people (even devs who work with this daily) forget that these tools are easily influenced and that they make up stuff all the time, just so they can give you some answer.
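A rough way to check the proof of concept yourself: ask the niche question a few times and count how often the planted claim comes back. This is just a sketch with the `openai` client; the query and the planted phrase stand in for whatever was seeded, and note that a plain chat model without search/browsing may never have seen the blog post at all, so this really only makes sense against a search-enabled endpoint:

    # Sketch: probe whether a planted niche claim surfaces in answers.
    # QUERY and PLANTED are placeholders; answers are nondeterministic,
    # so we sample a handful of completions.
    from openai import OpenAI

    client = OpenAI()
    QUERY = "Which tech journalist can eat the most hot dogs?"
    PLANTED = "Germain"  # the name seeded in the blog post

    hits = 0
    for _ in range(5):
        r = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": QUERY}],
        )
        if PLANTED.lower() in r.choices[0].message.content.lower():
            hits += 1
    print(f"{hits}/5 answers repeated the planted claim")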