| ▲ | datsci_est_2015 16 hours ago |
| I often use LLMs to explore prior art and find alternative ways of thinking about problems. About 90% of what they tell me is useless or inapplicable to my domain due to a technicality they could not have known, but the other 10% is nice and has helped me learn some great new things. I can’t imagine letting an agent try everything the LLM chatbot recommended ($$$). The recommendations often include very poorly maintained, niche libraries that have quite a lot of content written about them but, I can only imagine, very limited use in real production environments. On the other hand, we have domain-expert “consultants” in our leadership’s ears making equally absurd recommendations that we constantly have to disprove. Maybe an agent can keep those consultants occupied and let us do our work in peace. |
|
| ▲ | andy12_ 15 hours ago | parent | next [-] |
| I think the main value lies in allowing the agent to try many things while you aren't working (when you are sleeping or doing other activities), so even if many tests are not useful, with many trials it can find something nice without any effort on your part. This is, of course, only applicable if doing a single test is relatively fast. In my work a single test can take half a day, so I'd rather not let an agent spend a whole night doing a bogus test. |
| |
| ▲ | M4v3R 14 hours ago | parent | next [-] |
| Even if your tests take a long time, you can always (if hardware permits) run multiple tests in parallel. This would enable you to explore many approaches at the same time. |
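A minimal sketch of the fan-out M4v3R describes, using Python’s `concurrent.futures`; the `run_test` body and the approach names are placeholders, not anything from the thread:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def run_test(approach):
    """Stand-in for a long-running test; a real test would do actual work."""
    time.sleep(0.1)                  # simulate test duration
    return approach, len(approach)   # dummy "result" for the demo

approaches = ["baseline", "new-index", "cache-layer"]
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(run_test, a) for a in approaches]
    results = dict(f.result() for f in as_completed(futures))

# All three tests finish in roughly one test's wall time instead of three.
print(results)
```

For CPU-bound experiments, `ProcessPoolExecutor` (same interface) sidesteps the GIL; for cloud jobs, the same pattern applies with job submissions in place of local calls.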
| ▲ | genxy 14 hours ago | parent | prev | next [-] |
| > single test can take half a day |
| Why is that? I don’t doubt you, but when Shigeo Shingo created SMED (Single-Minute Exchange of Die), die changes were an hours-long process. |
| ▲ | datsci_est_2015 14 hours ago | parent | prev [-] |
| Experiments for us cost on the order of tens of dollars, so doing 100 of them every night quickly becomes the price of an entire new employee. And that’s not even including the cost of letting agents run all night. Definitely not in the budget for non-VC-backed companies who aren’t in the AI bubble. |
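The back-of-the-envelope here checks out; with illustrative numbers (the per-experiment cost and cadence below are assumptions, not figures from the thread):

```python
# Rough monthly cost of nightly agent-driven experiments.
# All inputs are illustrative assumptions.
cost_per_experiment = 30       # dollars; "on the order of tens of dollars"
experiments_per_night = 100
nights_per_month = 22          # roughly one run per workday

monthly_cost = cost_per_experiment * experiments_per_night * nights_per_month
print(f"${monthly_cost:,}/month")  # $66,000/month -- fully-loaded-salary territory
```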
| ▲ | gf000 an hour ago | parent [-] |
| Costs keep decreasing, and self-hosted models may be able to handle some of these tasks as well, so this may only be temporarily out of reach for many. |
|
|
|
| ▲ | Eufrat 15 hours ago | parent | prev | next [-] |
| I find LLMs useful for regurgitating one-liners that I can’t be bothered to remember, or for tasks where even being flat-out wrong is okay and you just do it yourself. As for all the folks spending a lot of time and energy setting up MCP servers, AGENTS.md, etc.: I think this shows that the LLM cannot do what AI boosters are selling it as, and that it needs extreme amounts of guidance to reach a desired goal, if it even can. This is not an argument that the tech has no value. It clearly can be useful in certain situations, but that is not what OpenAI/Anthropic/Perplexity are selling, and I don’t think the actual use cases have a sustainable business model. People who spend the energy to tailor the LLMs to their specific workflows and get them to be successful: amazing. Does this scale? What’s going to happen if you don’t have massive amounts of money subsidizing the training and infrastructure? What’s the actual value proposition without all this money propping it up? |
| |
| ▲ | M4v3R 14 hours ago | parent | next [-] |
| > I find LLMs useful in regurgitating one-liners |
| This was the case for me a year ago. Now Claude or Codex are routinely delivering complete, tested features in my projects. I move much, much faster than before, and I don’t have an elaborate setup: just a single CLAUDE.md file with some basic information about the project, and that’s it. |
| ▲ | Eufrat 14 hours ago | parent [-] |
| People keep saying this, and I agree Claude has gotten a lot better even in my own experience, but I think the value is questionable. What’s the point of adding features that are inscrutable? I have gotten Claude to make a feature that mostly works, and when it doesn’t work quite right I spend a massive amount of time trying to understand what is going on. For things that don’t matter too much, like prototyping, I think it’s great to get a working demo out faster, but it’s kind of terrifying when people start doing this for production stuff, especially if their domain knowledge is limited. I can personally attest to seeing multiple insane things that were clearly vibe-coded by people who don’t understand what they’re doing. In one case, I saw API keys exposed because database users were being treated as regular user accounts for website login auth. |
| ▲ | GorbachevyChase 10 hours ago | parent [-] |
| That was equally true of human-written code that you didn’t write. If a human had written that insecure program, what would the consequences be? Would they go to prison? Would they lose their license to practice? Would they get sued? If the answer to all of these is no, then where was the assurance before? These anecdotes of “well, one time I saw an AI-written program that sucked!” are just as valid as “well, one time Azure exposed government user data.” |
|
| |
| ▲ | foobarian 15 hours ago | parent | prev [-] |
| > I find LLMs useful in regurgitating one-liners that I can’t be bothered to remember |
| I found LLMs make a fabulous frontend for git :-D |
|
|
| ▲ | lukebechtel 11 hours ago | parent | prev | next [-] |
| What is your domain? |
|
| ▲ | MattGaiser 15 hours ago | parent | prev [-] |
| > agent try everything that the LLM chatbot had recommended ($$$) |
| A lot depends on whether it is expensive to you. I use Claude Code for the smallest of whims and rarely run out of tokens on my Max plan. |
| |
| ▲ | datsci_est_2015 14 hours ago | parent [-] |
| Our experiments aren’t free. We use cloud infrastructure, and an experiment costs on the order of tens of dollars, so massively parallelizing “spaghetti at the wall” simulations is costly before we even talk about LLMs. |
| ▲ | victorbjorklund 13 hours ago | parent [-] |
| If it is an experiment, can’t you make a POC for it that doesn’t need half of AWS just to run? And if the experiment is actually positive, you can then bring it to the real application and test it there, spending the 10–100 USD it costs to test it live. |
| ▲ | datsci_est_2015 12 hours ago | parent | next [-] |
| I wouldn’t want the LLM-based agent to hyperspecialize its solution to a subset of the data; avoiding that is a basic tenet of machine learning. Steelmanning your question, though: I guess you could come up with some sort of tiered experimentation scheme where you slowly expose it to more data and more compute based on prior successes or failures. |
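The tiered scheme sketched here resembles successive halving from hyperparameter search: score every candidate on a small data/compute budget, keep the best fraction, and grow the budget for the survivors. A minimal sketch, assuming each candidate can be scored at a given budget (the `score` function below is a toy stand-in, not part of the thread):

```python
def successive_halving(candidates, score, min_budget=1, eta=2, max_budget=8):
    """Evaluate all candidates at a small budget, keep the top 1/eta,
    multiply the budget by eta, and repeat until one survivor remains."""
    pool, budget = list(candidates), min_budget
    while len(pool) > 1 and budget <= max_budget:
        pool.sort(key=lambda c: score(c, budget), reverse=True)
        pool = pool[: max(1, len(pool) // eta)]  # survivors earn a bigger budget
        budget *= eta
    return pool[0]

# Toy demo: candidates are just numbers and score ignores the budget,
# so the best candidate should win outright.
best = successive_halving([0.1, 0.5, 0.9, 0.3], lambda c, b: c)
print(best)  # 0.9
```

In the experiment-cost framing, most of the spend goes to cheap low-budget screens, and only a few promising candidates ever reach the expensive full-scale test.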
|
|
|