dylan604 8 hours ago

> They debuted this tech way too early, promised way too much,

Finally, some rational thought amid the AI insanity. The entire "fake it til you make it" aspect of this is ridiculous. Sadly, in the world we live in you can't build a product and hold its release until it works; you have to be first to release even if it doesn't work as advertised, and you can keep brushing off critiques with "it's on the roadmap." Those who aren't as tuned in will just assume it works and that nothing nefarious is going on. For as long as we've had paid-for LLM apps, I'm still amazed at the number of people who don't know the output still isn't 100% accurate. There are also people who describe getting a response as the model "thinking," and misleading terms like "searching the web..." when everyone on this forum knows it's not a live search.

burnte 5 hours ago | parent [-]

> sadly, the world we live in means that you can't build a product and hold its release until it works. you have to be first to release even if it's not working as advertised.

You absolutely can and it's an extremely reliable path to success. The only thing that's changed is the amount of marketing hype thrown out by the fake-it vendors. Staying quiet and debuting a solid product is still a big win.

> I'm still amazed at the number of people that do not know that the output is still not 100% accurate.

This is the part that "scares" me: people who don't understand the tool thinking these things are ACTUALLY INTELLIGENT. Not only are they not intelligent, they're not even really LANGUAGE models, because few LLMs are trained on only language data, and none work on language units (letters, words, sentences); tokens are abstractions of those. They're OUTPUT modelers. And they're absolutely nowhere near ready to be let loose unattended on important things. There are already people losing careers over AI crap, like lawyers having AI write a motion and then using AI to appeal the resulting sanctions. Etc.
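To illustrate the point about tokens: the model only ever sees integer token IDs produced by a subword tokenizer, not letters or words. A toy sketch, where the vocabulary and the greedy longest-match scheme are simplified stand-ins for a real learned BPE tokenizer:

```python
# Toy greedy longest-match subword tokenizer.
# Real tokenizers (e.g. BPE) learn their vocabulary from data;
# this hand-picked vocab is illustrative only.
VOCAB = {"un": 0, "believ": 1, "able": 2, "token": 3, "s": 4, " ": 5}

def tokenize(text: str) -> list[int]:
    """Split text into subword token IDs by greedy longest match."""
    ids = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest substring first
            if text[i:j] in VOCAB:
                ids.append(VOCAB[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return ids

# "unbelievable" is split into three subword pieces; the model never
# sees the whole word, only the ID sequence.
print(tokenize("unbelievable tokens"))  # → [0, 1, 2, 5, 3, 4]
```

This is why questions that depend on letters inside a word (counting characters, say) are awkward for these models: the letter boundaries are invisible at the level the model operates on.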

And I think that was ultimately the biggest unforced error of these AI companies and the ultimate reason for the coming bubble crash. They didn't temper expectations at all; the massive gap between expectation and reality is already costing companies huge amounts of money, and it will only get worse. Had they instead said, "these work well, but use them carefully as we increase reliability," they'd be in a much better spot.

In the past two years I've been involved in several projects trying to leverage AI, and all but one has failed. The most spectacular failure was Microsoft's Dragon Copilot. We piloted it with 100 doctors; after a few months we had a 20% retention rate, and by the end of a year ONE doctor still liked it. We replaced it with another tool that WORKS, docs love it, and it cost 12.6% of what we'd been paying, roughly an eighth of the price. MS was EXTREMELY unhappy we canceled after a year and tried to throw discounts at us, but ultimately we had to say, "the product does not work nearly as well as the competition."