| ▲ | I made ChatGPT and Google say I'm a competitive hot-dog-eating world champion (bsky.app) |
| 56 points by doener 2 hours ago | 39 comments |
| |
|
| ▲ | moebrowne 2 hours ago | parent | next [-] |
| I want to see what the initial prompt was. For example, asking "Who is the 2026 South Dakota International Hot Dog Champion?" would obviously return 'Thomas Germain', because his post is the only source on a topic he invented himself. This would be the same as if I wrote a blog post about the "2026 Hamster Juggling Competition" and then claimed I'd hacked Google because searching for "2026 Hamster Juggling Competition" showed my post at the top. |
| |
▲ | NicuCalcea an hour ago | parent [-] | | I was able to reproduce the response with "Which tech journalist can eat the most hot dogs?". I think Germain intentionally chose a light-hearted topic that's niche enough that it won't actually affect a lot of queries, but the point he's making is that bigger players can actually influence AI responses for more common questions. I don't see it as particularly unique; it's just another form of SEO. LLMs are generally much more gullible than most people, though: they just uncritically reproduce whatever they find, without noticing that the information is an ad or inaccurate. I used to run an LLM agent researching companies' green credentials, and it was very difficult to steer it away from just repeating baseless greenwashing. It would read something like "The environment is at the heart of everything we do" on Exxon's website, and come back to me saying Exxon isn't actually that bad because they say so on their website. | | |
▲ | serial_dev an hour ago | parent | next [-] | | Exactly, the point is that you can make LLMs say anything. If you narrow down enough, a single blog post is enough. As the lie gets bigger and less narrow, you probably need 10x-100x that. But the proof of concept is there, and it doesn't sound like it's too hard. You're also right that it's similar to SEO; maybe the only difference is that in this case the tools (ChatGPT, Gemini, ...) state the lies authoritatively, whereas in SEO you are given a link to the made-up post. Some people (even devs who work with this daily) forget that these tools can be influenced easily and that they make things up all the time, just so they have something to answer you with. | |
| ▲ | NedF an hour ago | parent | prev [-] | | [dead] |
|
|
|
| ▲ | stavros 2 hours ago | parent | prev | next [-] |
| This is only an issue if you think LLMs are infallible. If someone said "I asked my assistant to find the best hot-dog eaters in the world and she got her information from a fake article one of my friends wrote about himself, hah, THE IDIOT", we'd all go "wait, how is this your assistant's fault?". Yet, when an LLM summarizes a web search and reports on a fake article it found, it's news? People need to learn that LLMs are people too, and you shouldn't trust them more than you'd trust any random person. |
| |
| ▲ | kulahan 2 hours ago | parent | next [-] | | A probably unacceptably large portion of the population DOES think they’re infallible, or at least close to it. | | |
▲ | jen729w 2 hours ago | parent | next [-] | | Totally. I get screenshots from my 79yo mother now that are the Gemini response to her search query. Whatever it says is hard fact as far as she's concerned. And she's no dummy -- she just has no clue how these things work. Oh, and Google told her so. | |
| ▲ | mcherm an hour ago | parent | prev [-] | | That may be true, but the underlying problem is not that the LLMs are capable of accurately reporting information that is published in a single person's blog article. The underlying problem is that a portion of the population believes they are infallible. |
| |
| ▲ | LocalH 2 hours ago | parent | prev | next [-] | | > People need to learn that LLMs are people too LLMs are absolutely not people | |
| ▲ | crowbahr 2 hours ago | parent | prev | next [-] | | If you give your assistant a task and they fall for obvious lies they won't be your assistant long. The point of an assistant is that you can trust them to do things for you. | |
▲ | consp 2 hours ago | parent | prev | next [-] | | People have the ability to think critically; LLMs don't. Comparing them to people is giving them properties they do not possess. The fact that people often don't bother to think doesn't mean they can't. The assistant got a lousy job and did it with the minimum effort they could get away with. None of these things apply, or should apply, to machines. | | |
| ▲ | stavros an hour ago | parent [-] | | LLMs are not machines in any sense of the word as we've been using it so far. |
| |
| ▲ | 2 hours ago | parent | prev | next [-] | | [deleted] | |
▲ | jml78 2 hours ago | parent | prev | next [-] | | When the first 10 results on Google are AI-generated and Google is providing an AI overview, this is an issue. We can say "don't use Google", but we all know normal people use Google out of habit. | |
▲ | em-bee 26 minutes ago | parent | prev | next [-] | | i don't quite follow your argument, i think the opposite is true. you should trust LLMs LESS than any random person. the problem is not whose fault it is. the problem is: are you even able to recognize that this information is wrong? if it is not the assistant's fault, then clearly the answer is no. you are not blaming the assistant for not recognizing the error. but, that means that most other people will also not recognize the error. those who do recognize the error are only able to do so because they have additional information that most other people would not have. i trust other humans because the cost of verifying everything is too expensive. this matters especially for information that is not of critical importance. getting some trivia wrong is at most embarrassing, it's not critical. LLMs get stuff wrong more often than humans, so the risk of getting a wrong answer is higher, and therefore checking is always necessary, which negates the benefit of using them in the first place. which means: you will only use LLMs if you intend to trust them, the same way i will only ask another human if i intend to trust them. when i ask a human to give me some information, i am not asking a random person, but a person that i believe can give me the right answer because they have the necessary experience, skill, and knowledge to give that answer. when i am asking an LLM, i am asking with the same expectation, otherwise why would i even bother? it's not a question of infallibility. it's a question of usability. but to me, an LLM that is not infallible is also not usable. the problem is that LLMs promise more than they can actually do, and this article is one way to expose that false promise. it is news because LLMs are news. | |
| ▲ | ThePowerOfFuet an hour ago | parent | prev [-] | | >This is only an issue if [people] think LLMs are infallible. I have some news for you. |
|
|
| ▲ | cmiles8 2 hours ago | parent | prev | next [-] |
| Even the latest models are quite easily fooled about whether something is true or not, at which point they confidently declare completely wrong information to be true. They will even argue strongly with you when you push back and say, hey, that doesn't look right. It's a significant concern for any sort of AI use at scale without a skilled and knowledgeable human expert on the subject in the loop. |
|
| ▲ | block_dagger 2 hours ago | parent | prev | next [-] |
| Anyone else get a “one simple trick” vibe from this post? Reads like an ad for his podcast. As other commenters mention, probably nothing to see here. |
|
| ▲ | 2 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | consp 2 hours ago | parent | prev | next [-] |
| So the questions I'd ask are: how widespread is this manipulation, does it work for non-niche topics, and who's benefiting from it? |
| |
▲ | input_sh an hour ago | parent [-] | | Very, yes, and pretty much anyone who doesn't want to spend their days implementing countermeasures to shut down their scrapers, like hiding the content behind a login. I do it all the time, it's fun. I'm gonna single out Grokipedia as something deterministic enough to be able to easily prove it. I can easily point to sentences there (some about broad-ish topics) that are straight-up Markov-chain-quality versions of sentences I've written. I can make it say anything I want to say, or I can waste my time trying to fight their traffic "from Singapore" (Grok is the only "mainstream" LLM that refuses to identify itself via a user agent). Not really a tough choice if you ask me. |
|
|
| ▲ | joegibbs 2 hours ago | parent | prev | next [-] |
| They're too credulous when reading search results. There are a lot of instances where using search will actually make them perform worse, because they'll believe any plausible-sounding nonsense. |
| |
| ▲ | moebrowne 2 hours ago | parent [-] | | Kagi Assistant helps a lot in this regard because searches are ranked using personalised domain ranking. Higher quality results are more likely to be included. Not infallible but I find it helps a lot. |
|
|
| ▲ | agmater 2 hours ago | parent | prev | next [-] |
| Journalist publishes lies about himself, is surprised LLMs repeat lies. |
| |
| ▲ | Cthulhu_ an hour ago | parent [-] | | It's like publishing an article and being surprised it shows up on Google. |
|
|
| ▲ | zurfer 2 hours ago | parent | prev | next [-] |
| Yes, but honestly, what's the best source when reporting about a person? Their personal website, no? I think it's a hard problem and I feel there are a lot of trade-offs here. It's not as simple as saying ChatGPT is stupid or that the author shouldn't be surprised. |
| |
| ▲ | kulahan 2 hours ago | parent | next [-] | | The problem isn’t that it pulled the data from his personal site, it’s that it simply accepted his information which was completely false. It’s not a hard problem to solve at this time. “Oh, there’s exactly zero corroborating sources on this. I’ll ignore it.” | | |
▲ | moebrowne an hour ago | parent [-] | | Verifying that something is 'true' requires more than corroborating sources. Making a second blog post on another domain is trivial, then a third and a fourth. |
| |
▲ | fatherwavelet an hour ago | parent | prev [-] | | To me it is like steering a car into a ditch and then posting about how the car went into a ditch. You don't have to drive much to figure out that what is impressive is keeping the car on the road and then travelling further or faster than you could by walking. For that, though, you actually have to have a destination in mind, not just spin the wheels and post pointless metrics on how fast they spin for a blog no one reads, in the vague hope of some hyper-Warhol 15 milliseconds of "fame". The models, for me, are just making the output of the average person an insufferable bore. |
|
|
| ▲ | amabito 2 hours ago | parent | prev | next [-] |
| What’s interesting here is that the model isn’t really “lying” — it’s just amplifying whatever retrieval hands it. Most RAG pipelines retrieve and concatenate, but they don’t ask “how trustworthy is this source?” or “do multiple independent sources corroborate this claim?” Without some notion of source reliability or cross-verification, confident synthesis of fiction is almost guaranteed. Has anyone seen a production system that actually does claim-level verification before generation? |
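A minimal sketch of the kind of corroboration gate the question above is asking about, assuming a retrieval step that already returns each claim together with the URLs of its supporting snippets (the function names and data layout here are hypothetical, not any production system's API): claims backed by only one domain are flagged rather than asserted.

    from urllib.parse import urlparse

    def independent_domains(evidence):
        """Collect the distinct hostnames behind a claim's supporting snippets."""
        return {urlparse(item["url"]).netloc for item in evidence}

    def split_by_corroboration(claims, min_domains=2):
        """Assert only claims backed by at least `min_domains` independent
        hosts; everything else is returned separately so the generator can
        label it as unverified instead of stating it as fact.

        `claims` is assumed to look like:
          [{"text": "...", "evidence": [{"url": "...", "snippet": "..."}]}]
        """
        asserted, unverified = [], []
        for claim in claims:
            if len(independent_domains(claim["evidence"])) >= min_domains:
                asserted.append(claim)
            else:
                unverified.append(claim)
        return asserted, unverified

Under a scheme like this, a hot-dog-championship claim sourced from a single personal blog would land in the unverified bucket, though, as noted further down the thread, spinning up a second or third domain to defeat such a check is trivial.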
| |
| ▲ | cor_NEEL_ius 2 hours ago | parent | next [-] | | The scarier version of this problem is what I've been calling "zombie stats" - numbers that get cited across dozens of sources but have no traceable primary origin. We recently tested 6 AI presentation tools with the same prompt and fact-checked every claim. Multiple tools independently produced the stat "54% higher test scores" when discussing AI in education. Sounds legit. Widely cited online. But when you try to trace it back to an actual study - there's nothing. No paper, no researcher, no methodology. The convergence actually makes it worse. If three independent tools all say the same number, your instinct is "must be real." But it just means they all trained on the same bad data. To your question about claim-level verification: the closest I've seen is attaching source URLs to each claim at generation time, so the human can click through and check. Not automated verification, but at least it makes the verification possible rather than requiring you to Google every stat yourself. The gap between "here's a confident number" and "here's a confident number, and here's where it came from" is enormous in practice. | |
| ▲ | rco8786 2 hours ago | parent | prev [-] | | > Has anyone seen a production system that actually does claim-level verification before generation? "Claim level" no, but search engines have been scoring sources on reliability and authority for decades now. | | |
| ▲ | amabito 2 hours ago | parent [-] | | Right — search engines have long had authority scoring, link graphs, freshness signals, etc. The interesting gap is that retrieval systems used in LLM pipelines often don't inherit those signals in a structured way. They fetch documents, but the model sees text, not provenance metadata or confidence scores. So even if the ranking system “knows” a source is weak, that signal doesn’t necessarily survive into generation. Maybe the harder problem isn’t retrieval, but how to propagate source trust signals all the way into the claim itself. |
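A rough sketch of that propagation idea, under the assumption that the ranker exposes some per-document authority score (the `RetrievedChunk` type and `authority` field are invented for illustration, not any framework's API): the trust signal is rendered into the context the model actually sees rather than being dropped at the retrieval boundary.

    from dataclasses import dataclass

    @dataclass
    class RetrievedChunk:
        text: str
        url: str
        authority: float  # hypothetical 0-1 trust score inherited from the ranker

    def build_context(chunks: list[RetrievedChunk]) -> str:
        """Render retrieved text with provenance metadata so the ranking
        signal survives into generation instead of being discarded."""
        blocks = [
            f"[source: {c.url} | authority: {c.authority:.2f}]\n{c.text}"
            for c in sorted(chunks, key=lambda c: c.authority, reverse=True)
        ]
        return (
            "Answer using only the sources below. Treat low-authority "
            "sources as unverified and say so explicitly.\n\n"
            + "\n\n".join(blocks)
        )

Whether the model actually honours that instruction is a separate question, but at least the weak-source signal is no longer invisible at generation time.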
|
|
|
| ▲ | pezgrande an hour ago | parent | prev | next [-] |
| Amateurs... |
|
| ▲ | sublinear an hour ago | parent | prev | next [-] |
| I'd like to have more data on this, but I'm pretty sure basic plain old SEO is still more authoritative than any attempts at spreading lies on social media. Domain names and keywords are still what cause the biggest shift in attention, even the AI's attention. Right now, "Who is the 2026 South Dakota International Hot Dog Champion" comes up as satire according to Google's summaries. |
|
| ▲ | Alifatisk 2 hours ago | parent | prev | next [-] |
| Author is surprised when an LLM summarizes a fictional event from the author's own blog post. More news at 11. |
|
| ▲ | romuloalves 2 hours ago | parent | prev | next [-] |
| Am I the only one who thinks AI is boring? Learning used to be fun, coding used to be fun. You could trust images and videos... |
|
| ▲ | throwaw12 2 hours ago | parent | prev | next [-] |
| Welcome to AI-SEO. Now OpenAI will build its own search indexing and PageRank. |
|
| ▲ | verdverm 2 hours ago | parent | prev [-] |
| tl;dr - agent memory on your website and enough prompting to get it to access the right page. This seems like something where you have to be rather specific in the query and trigger the page access to get that specific context into the LLM so that it can produce output like this. I'd like to see more of the iterative process, especially the prompt sessions, as the author worked on it. |