jmyeet · 2 hours ago
There's a guy on TikTok who is singlehandedly showing just how bad AI still is and how much it lies and hallucinates, e.g. [1]. Watch a bunch of his videos.

So these tools can be useful when you know the subject matter. I've done queries and gotten objectively false answers. You really need to verify the information you get back. It's like these LLMs have no concept of true or false; they just say something that statistically looks right after ingesting Reddit. We've already seen cases where ChatGPT-written legal briefs filed by actual lawyers cite precedents that are completely made up, e.g. [2].

There's a really perverse incentive in all this. People like to be told they're right and generally be gassed up, even when they're completely wrong. So if you just optimize for engagement, continued queries, and subscriptions, you're going to get a bunch of "yes men" AIs.

I still think this technology has so far to go. I'm somewhat reminded of Uber, actually. Uber was burning VC cash at a horrific rate and was (initially) basically betting the company on self-driving. Full self-driving is still far away, even though there are useful things cars can automate, like lane-following on the highway and parking. I simply can't see how the trillions spent on AI data centers can possibly be recouped.

[1]: https://www.tiktok.com/@huskistaken/video/762093124158341455...

[2]: https://www.theguardian.com/us-news/2025/may/31/utah-lawyer-...
seanmcdirmid · 2 hours ago · parent
If you believe AI is bad, and ask AI about it, it's more than likely going to reinforce your belief just for the engagement.