| ▲ | andyjohnson0 10 days ago |
| > With LLMs being unable to count how many Bs are in blueberry, they clearly don't have any world model whatsoever. Is this a real defect, or some historical thing? I just asked GPT-5: How many "B"s in "blueberry"?
and it replied: There are 2 — the letter b appears twice in "blueberry".
I also asked it how many Rs in Carrot, and how many Ps in Pineapple, and it answered both questions correctly too. |
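As a sanity check on the answers quoted above, the letter counts are easy to verify deterministically with a few lines of Python (`count_letter` is a hypothetical helper name, not anything the models themselves run):

```python
def count_letter(word: str, letter: str) -> int:
    """Case-insensitive count of a single letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("blueberry", "b"))   # 2
print(count_letter("Carrot", "r"))      # 2
print(count_letter("Pineapple", "p"))   # 3
```

The case-insensitive comparison matters for the later subthread about whether a lowercase word contains any uppercase "B" at all.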
|
| ▲ | libraryofbabel 10 days ago | parent | next [-] |
| It’s a historical thing that people still falsely claim is true, bizarrely without trying it on the latest models. As you found, leading LLMs don’t have a problem with it anymore. |
| |
| ▲ | jijijijij 8 days ago | parent | next [-] | | The question is, did these LLMs figure it out by themselves, or has someone programmed a specific routine to address this "issue", to make them look smarter than they are? On a trillion-dollar budget, you could just crawl the web for the AI tests people came up with and solve them manually. We know it's a massively curated game. With that kind of money you can do a lot of things. You could feed every human on earth countless blueberries, for starters. Calling an algorithm to count letters in a word isn't exactly worth the hype tho, is it? The point is, we keep finding new ways these LLMs can't figure out the most basic shit about the world. Horses can count. Counting is in everything. If you read every text ever written and still can't grasp counting, you simply are not that smart. | |
| ▲ | pydry 10 days ago | parent | prev | next [-] | | Depends how you define historical. If by historical you mean more than two days ago then, yeah, it's ancient history. | |
| ▲ | pxc 8 days ago | parent | prev [-] | | Some LLMs do better than others, but this still sometimes trips up even "frontier" non-reasoning models. People were showing this on this very forum with GPT-5 in the past couple days. |
|
|
| ▲ | bgwalter 10 days ago | parent | prev | next [-] |
It is not historical: https://kieranhealy.org/blog/archives/2025/08/07/blueberry-h... Perhaps they have a hotfix that special-cases HN complaints? |
| |
| ▲ | AyyEye 10 days ago | parent [-] | | They clearly RLHF out the embarrassing cases and make cheating on benchmarks into a sport. | | |
| ▲ | Terr_ 8 days ago | parent [-] | | I wouldn't be surprised if some models are set up to identify that type of question and run the word through a string-processing function. | | |
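The routing Terr_ speculates about could be sketched in a few lines. This is purely illustrative: the regex, function names, and fallback behavior are assumptions, not a description of how any vendor actually does it.

```python
import re

# Speculative sketch: detect a "how many <letter>s in <word>" question
# and answer it with plain string code instead of the model.
PATTERN = re.compile(
    r'how many "?([a-z])"?s?\s+(?:are\s+)?in\s+"?([a-z]+)"?',
    re.IGNORECASE,
)

def maybe_count(prompt: str):
    """Return a letter count if the prompt matches, else None (fall through to the model)."""
    m = PATTERN.search(prompt)
    if not m:
        return None
    letter, word = m.group(1).lower(), m.group(2).lower()
    return word.count(letter)

print(maybe_count('How many "B"s in "blueberry"?'))  # 2
```

Anything this brittle would of course only cover the exact phrasings it was written for, which is part of why people keep finding new variants that fail.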
| ▲ | jijijijij 8 days ago | parent [-] | | Of course they do stuff like that, otherwise it would look like they are stagnating. Fake it till you make it. Tho, at this point, the world is in deep shit, if they don’t make it… | | |
| ▲ | pmg101 7 days ago | parent [-] | | What deep shit do you foresee? My prediction is that this will be like the 2000 dot-com bubble. Both dot-com and AI are real and really useful technologies, but hype and share prices have got way ahead of them, so they will need to readjust. | |
| ▲ | jijijijij 7 days ago | parent [-] | | A major economic crisis, yes. I think the web is already kind of broken because of AI, and it's going to get a lot worse. I also question its usefulness. Is it useful for solving any real problems, and if so, how long before we run out of those problems? Because we conflated a lot of bullshit with innovation right before AI. Right now people may be getting a slight edge, but it's like getting a dishwasher: once expectations adjust, things will feel like a grind again, and I really don't think people will like that new reality in regard to their experience of self-efficacy (which is important for mental health). I presume the struggle to get information, figuring things out yourself, may be a really important part of creating pressure toward process optimization, and of learning and cognitive development. We may collectively regress there. With so many major crises, and a potential economic crisis on top, I am not sure we can afford to lose problem-solving capabilities to any extent. And I really, really don't think AI has so far been worth the fantastical energy expenditure, waste of resources, and human exploitation. |
|
|
|
|
|
|
| ▲ | ThrowawayR2 10 days ago | parent | prev | next [-] |
It was discussed and reproduced on GPT-5 on HN a couple of days ago: https://news.ycombinator.com/item?id=44832908 The sibling poster is probably mistakenly thinking of the strawberry issue from 2024 on older LLM models. |
|
| ▲ | nosioptar 10 days ago | parent | prev [-] |
| Shouldn't the correct answer be that there is not a "B" in "blueberry"? |
| |
| ▲ | eru 8 days ago | parent [-] | | No, why? It depends on context. English is often not very precise and relies on implied context clues. And that's good: it makes communication more efficient in general. To spell it out: in this case I suspect you are talking about letter case? Most people don't care about case when they ask these questions, especially in an informal setting. |
|