| ▲ | blehn 3 days ago |
| Perhaps the absolute worst use-case for an LLM |
|
| ▲ | dragontamer 2 days ago | parent | next [-] |
| My mom was looking up church times in the Philippines, and Google AI was wrong pretty much every time. Why is an LLM unable to read a table of church times across a sampling of ~5 Filipino churches? Google's LLM (Gemini??) was clearly finding the correct page: after yet another wrong mass time, I grabbed my mom's phone and clicked the hyperlink myself. The LLM was seemingly unable to parse the table at all. |
| |
| ▲ | etherealG 3 minutes ago | parent [-] |
| Because Google's search and LLM teams are different, with different incentives. Search is the cash cow they've been squeezing for more revenue at the expense of quality since at least 2018, as revealed in court documents showing they did this on purpose to keep people searching more, which means more ads and more revenue. Google AI embedded in search has the same goal: keep you clicking on ads. My guess is that Gemini doesn't have the bad parts of enshittification yet, but it will come. If you think hallucinations are bad now, just wait until tech companies start tuning them up on purpose to get you to make more prompts so they can inject more ads! |
|
| ▲ | redundantly 2 days ago | parent | prev [-] |
| And one that likely happens often. |