| > This is a pretty simple question to answer. Take two lists and compare them.
This continues a pattern as old as home computing: the author does not understand the task themselves, consequently "holds the computer wrong", and then blames the machine. No "lists" were being compared. The LLM does not have a "list of TLDs" sitting in its memory that it simply consults when you ask. If you haven't grokked this fundamental fact about how these LLMs work, then the problem is really, distinctly, on your end. |
| |
▲ | Dilettante_ 11 hours ago | parent | next [-] | | They absolutely could have accomplished the task. The task was purposefully or ignorantly posed in a way that is known to be ill-suited to the LLM, and then the author concluded "the machine did not complete the task because it sucks." | |
| ▲ | Blahah 11 hours ago | parent | prev [-] | | Not really. This works great in Claude Sonnet 4.1: 'Please could you research a list of valid TLDs and a list of valid HTML5 elements, then cross reference them to produce a list of HTML5 elements which are also valid TLDs. Use search to find URLs to the lists, then use the analysis tool to write a script that downloads the lists, normalises and intersects them.' Ask a stupid question, get a stupid answer. | | |
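For concreteness, here is a minimal sketch of the kind of script that prompt asks for, in Python purely for illustration (the prompt itself targets Claude's analysis tool). The IANA URL is the registry's published list of delegated TLDs; the HTML5 element set is hardcoded and deliberately partial, since there is no single canonical plain-text list of elements to fetch:

    # Sketch: intersect HTML5 element names with delegated TLDs.
    import urllib.request

    IANA_TLDS = "https://data.iana.org/TLD/tlds-alpha-by-domain.txt"

    # Illustrative, deliberately incomplete subset of HTML5 element names.
    HTML5_ELEMENTS = {
        "a", "article", "aside", "audio", "data", "link", "main",
        "menu", "nav", "section", "select", "style", "summary", "video",
    }

    with urllib.request.urlopen(IANA_TLDS) as resp:
        lines = resp.read().decode("utf-8").splitlines()

    # The first line of the IANA file is a "# Version ..." comment; the rest
    # are one upper-case TLD per line. Normalise to lower case before comparing.
    tlds = {ln.strip().lower() for ln in lines if ln and not ln.startswith("#")}

    print(sorted(HTML5_ELEMENTS & tlds))

Both audio and video, for instance, are HTML5 elements as well as delegated gTLDs, so they should survive the intersection.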
▲ | Lapel2742 10 hours ago | parent [-] | |
> This works great in Claude Sonnet 4.1: 'Please could you research a list of valid TLDs and a list of valid HTML5 elements, then cross reference them to produce a list of HTML5 elements which are also valid TLDs. Use search to find URLs to the lists, then use the analysis tool to write a script that downloads the lists, normalises and intersects them.'
OK, I only have to:
1. Generally solve the problem for the AI.
2. Make a step-by-step plan for the AI to execute.
3. Debug the script I get back and check by hand whether it uses reliable sources.
4. Run that script.
What do I need the AI for? | |
▲ | Dilettante_ 10 hours ago | parent | next [-] | | Try doing all of that by hand instead. The difference is about half an hour to an hour of work, plus having to give your attention to such a menial task. Also, you are literally describing how you are holding it wrong. If you expect the LLM to magically know what you want without you making the task understandable to the machine, you are standing in front of your dishwasher waiting for it to grow arms and do your dishes in the sink. | |
▲ | Lapel2742 8 hours ago | parent [-] | |
> you are standing in front of your dishwasher waiting for it to grow arms and do your dishes in the sink.
No. I'm standing in front of the dishwasher, and the dishwasher expects me to tell it in detail how to wash the dishes. This is not about whether you can find any use for an LLM at all. This is about:
> LLMs are still surprisingly bad at some simple tasks
And yes, they are bad if you have to hand-feed them each and every detail of an extremely simple task like comparing two lists. You even have to debug the result, because you cannot be sure the dishwasher really washed the dishes. Maybe it just said it did. | |
▲ | Dilettante_ 7 hours ago | parent [-] | |
> hand-feed them each and every detail of an extremely simple task like comparing two lists
You believe 57 words are "each and every detail", and that "produce two full, exhaustive lists of items out of your blackbox inner conceptspace/fetch those from the web" are "extremely simple tasks"? Your ignorance of how complex these problems are misleads you into believing there's nothing to them. You are trying to supply an abstraction to a system that requires a concrete specification. You do not even realize your abstraction is an abstraction. Try learning programming. | |
▲ | Lapel2742 6 hours ago | parent [-] | |
> You believe 57 words are "each and every detail", and that "produce two full, exhaustive lists of items out of your blackbox inner conceptspace/fetch those from the web" are "extremely simple tasks"?
Sure they are. I'm not interested in how difficult this is for an LLM. This is not the question. Go out there, get the information. That this is hard for an LLM proves the point: they are surprisingly bad at some simple tasks.
> Try learning programming.
I started programming in the early 1980s. | |
▲ | Dilettante_ 5 hours ago | parent [-] | |
> I'm not interested in how difficult this is for an LLM. This is not the question.
And neither was that my point. It is a complex problem, full stop. Again, your own inability to look past your personal abstractions ("just do the thing, it's literally one step, dude") is what makes it feel simple. Did you ever do that "instruct someone to make coffee" exercise when you started out? What you're doing is saying "just make the coffee", refusing to decompose the problem any further, and then complaining that the other person is bad at following instructions.
▲ | Blahah 10 hours ago | parent | prev [-] | | The work. It intelligently provides the labor; it doesn't replace your brain. It runs the script itself.