fragmede 2 days ago:
I've been totally AI-pilled, because I don't see why that's of questionable utility. How is a regexp going to tell the difference between "asdffghjjk" and "So, she cheated on me"? A mere byte count isn't going to do it either. If the computer can tell the difference and be less annoying, that seems useful to me.
slg 2 days ago:
Who said anything about regexp? I was literally talking about something as simple as "if (text.length > 100)". Also, the example provided was distinguishing "a 2-page essay or 'asdfasdf'", which can clearly be accomplished with a length check far more easily than with either an LLM or a regexp.

We should keep in mind that we're trying to optimize for users' time. "So, she cheated on me" takes less than a second to type. It would probably take the user longer to respond to whatever pop-up warning you show than to just retype that text. So what actual value do you think the LLM is contributing here that justifies the added complexity and overhead?

That benefit also needs to outweigh the undesired behavior an LLM would introduce: it will now present an unnecessary popup when people enter a little real data and intentionally navigate away from the page (and it should be noted, users will almost certainly navigate away intentionally far more often than accidentally). LLMs also aren't deterministic. If the LLM warns you 90% of the time you navigate away with text entered, then the 10% of times it doesn't will be a lot more frustrating than if a length check had warned you every single time.

And from a user-satisfaction perspective, it seems like a mistake to swap frustration caused by user mistakes (accidentally navigating away) for frustration caused by your design decisions (inconsistent behavior). Even if all those numbers fall exactly the right way and users end up slightly less frustrated overall, you're still trading users who were previously frustrated at themselves for users who are frustrated at you. That seems like a bad business decision. Like I said, this all just seems like a solution in search of a problem.
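The deterministic check slg describes can be sketched in a few lines. This is an illustration, not a real implementation: the 100-character threshold is just the commenter's own example, and the `shouldWarnBeforeUnload` name is made up here.

```typescript
// Warn on navigation only when the entered text exceeds a fixed length.
// The 100-character cutoff is the commenter's example, not a recommendation.
const MIN_CHARS = 100;

function shouldWarnBeforeUnload(text: string): boolean {
  // trim() so that whitespace-only input never triggers the warning
  return text.trim().length > MIN_CHARS;
}

// In a browser this would typically be wired to the beforeunload event:
// window.addEventListener("beforeunload", (e) => {
//   if (shouldWarnBeforeUnload(textarea.value)) e.preventDefault();
// });
```

Unlike an LLM call, this runs instantly, costs nothing, and behaves identically every time, which is the consistency point being made above.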
FridgeSeal a day ago:
Because in _what world_ do I want the computer making value judgements on what I do? If I want to close the tab of unsubmitted comment text, I will. I most certainly don't need a model going "uhmmm akshually, I think you might want that later!"
ori_b a day ago:
Because the computer behaving differently in different circumstances is annoying, especially when there's no clear cue to the user about what the hidden knobs controlling those circumstances are.
ChoGGi a day ago:
What about counting words based on the user's current language, and prompting off that? That seems close enough for this issue, and it can't be more expensive than asking an LLM.
MichaelRo 2 days ago:
We went from the bullshit "internet of things" to the "LLM of things", or as Sheldon from The Big Bang Theory put it, "everything is better with Bluetooth". Literally a "T-shirt with Bluetooth": that's what 99.98% of "AI" stickers advertise today.