slg · 2 days ago:
> For example, if you close a YouTube browser tab with a comment half written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a two-page essay or "asdfasdf". Ideally the alert would only fire if the comment seemed important, and would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is that I only have to deal with that annoying popup when I really am glad it is there.

The funny thing is that this exact example could also be used by AI skeptics. It forces an LLM into a product with questionable utility, making it cost more to develop, more resource-intensive to run, and less consistent and reliable in its behavior. Meanwhile, if there had ever been an incentive to tweak that alert based on how likely it is to be useful, a simple check on the length of the text was always available. Suggesting an LLM for this, as your specific example, is evidence that LLMs are solutions looking for problems.
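A minimal sketch of the non-LLM alternative slg describes, assuming a plain `beforeunload` handler. The `#comment-box` selector and the 20-character cutoff are illustrative assumptions, not YouTube's actual code:

```ts
// Assumed DOM hook and threshold; the real markup and cutoff would differ.
const MIN_MEANINGFUL_LENGTH = 20;

window.addEventListener("beforeunload", (event) => {
  const box = document.querySelector<HTMLTextAreaElement>("#comment-box");
  const text = box?.value.trim() ?? "";
  if (text.length >= MIN_MEANINGFUL_LENGTH) {
    // Modern browsers ignore custom messages and show a generic prompt;
    // calling preventDefault() is what triggers the confirmation dialog.
    event.preventDefault();
  }
});
```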
fragmede · 2 days ago:
I've been totally AI-pilled, because I don't see why that's of questionable utility. How is a regexp going to tell the difference between "asdffghjjk" and "So, she cheated on me"? A mere byte count isn't going to do it either. If the computer can tell the difference and be less annoying, that seems useful to me.
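For contrast, a hedged sketch of the LLM-gated version fragmede is imagining. The `/api/classify-draft` endpoint and its response shape are hypothetical stand-ins for whatever small classifier sits behind them. Because `beforeunload` handlers must stay synchronous, the draft is classified in the background as the user types, and the handler only reads the cached verdict:

```ts
// Hypothetical classifier endpoint; any small model behind it would do.
let draftSeemsMeaningful = false;

async function classifyDraft(text: string): Promise<void> {
  const res = await fetch("/api/classify-draft", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  const { meaningful } = await res.json(); // assumed response shape
  draftSeemsMeaningful = meaningful;
}

// Re-classify (debounced) whenever the draft changes, so the unload
// handler never has to wait on a network call.
let timer: number | undefined;
document.querySelector("#comment-box")?.addEventListener("input", (e) => {
  clearTimeout(timer);
  const text = (e.target as HTMLTextAreaElement).value;
  timer = window.setTimeout(() => void classifyDraft(text), 500);
});

window.addEventListener("beforeunload", (event) => {
  if (draftSeemsMeaningful) event.preventDefault();
});
```

One design note: defaulting `draftSeemsMeaningful` to `false` means no warning until the classifier answers; a more conservative variant would warn on any nonempty draft whenever the verdict hasn't arrived yet.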
| ||||||||||||||||||||||||||||||||