satvikpendem · 8 hours ago:
Sometimes I don't want creativity, though. I'm just not familiar enough with the solution space, and I use the LLM as a sort of gradient descent simulator toward the right solution to my problem (the LLM itself was trained with gradient descent; meta, I know). I'm not looking for wholly new solutions, just the one that fits the problem best. One could Google that information, but LLMs save even that searching time.

bandrami · 3 hours ago:
> I'm just not familiar enough with the solution space

Neither is the LLM.

jimbokun · 1 hour ago:
No, this is the kind of thing LLMs are very good at: knowing the specifics, details, and minutiae of technologies, programming languages, etc.

satvikpendem · 3 hours ago:
Oftentimes it is, though, good enough for my purposes.

bandrami · 3 hours ago:
If you're not familiar with the problem space, by definition you don't know whether that's the case. In the problem spaces I do know well, I know the LLM isn't good, so why would I assume it's better in spaces I don't know?

satvikpendem · 3 hours ago:
I said familiar enough, not familiar. For example, say I'm building an app that I know needs caching. The LLM is very good at telling me what types of caching to use, what libraries to use for each type, and so on. I can do more research if I really want to know which library is the best of them all, but oftentimes its top suggestion is, like I said, good enough for my purpose of, e.g., caching.

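(As an illustration of the kind of "good enough" top suggestion being described, here is a minimal sketch of in-process caching using Python's stdlib `functools.lru_cache`. The function names are hypothetical, not from the thread.)

```python
from functools import lru_cache


@lru_cache(maxsize=256)  # bounded in-memory cache with least-recently-used eviction
def fetch_config(key: str) -> str:
    # Hypothetical stand-in for an expensive lookup (database call, HTTP request, ...)
    return key.upper()


fetch_config("db_host")           # computed on the first call (a cache miss)
fetch_config("db_host")           # served from the cache on repeat calls (a hit)
print(fetch_config.cache_info())  # reports hits, misses, maxsize, and current size
```

Often a stdlib answer like this really is sufficient, and swapping in a dedicated library later is a contained change.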
bandrami · 3 hours ago:
I still don't get what you're saying. If you possess enough information to accurately judge the LLM's suggestions, you possess enough information to decide on your own. There's not really a way around that.

satvikpendem · 3 hours ago:
Of course I'm deciding on my own; I'm not letting the LLM decide for me (although some people do). But the point is that whatever it suggests is merely an implementation detail that either solves my problem or doesn't. Not sure what part of that is confusing. Replace LLM with glorified Google and maybe it's less confusing.

bandrami · 3 hours ago:
No: Google (at least back when it worked) ranked results based on the feedback of other users, so it was a useful signal.

satvikpendem · 3 hours ago:
Theoretically the LLM would weight more popular suggestions more heavily too. Regardless, you're reading too much into this: either use the LLM or don't; I'm not sure anyone else can convince you. As I said, for my purposes of getting shit done it works perfectly fine, and it works more like a research tool than anything else, especially since it can understand my specific use case, unlike general research tools like Google or Stack Overflow.

bandrami · 1 hour ago:
IDK, man, this sounds a lot like my junior devs saying "it works fine for me" as they hand in PRs that break prod.

satvikpendem · 1 hour ago:
If you don't review the code it generates, that's still on you. There's no excuse for handing in PRs that break prod, like your juniors do. It's a tool at the end of the day, and it's the user's responsibility to use it correctly.

jimbokun · 1 hour ago:
Do you use search engines, or do you just memorize all the world's information?

bandrami · 36 minutes ago:
I don't use search engines for much of anything nowadays (does anybody still?). At work I read the documentation if I need to learn something.

WCSTombs · 4 hours ago:
Absolutely: the whole point of the rubber duck is that it's inanimate. The act of talking to the rubber duck makes you first describe your problem in words, and secondly hear (or read) it back and reprocess it in a slightly different way. It's a completely free way to use more parts of your brain when you need to. LLMs are a non-free way to make use of less of your brain. It seems to me that these are not the same thing.

Waterluvian · 7 hours ago:
Maybe it's just a semantic distinction, which, sure. I guess I'd just call it research? It's basically the "I'm reading blogs, repos, issue trackers, API docs, etc. to get a feel for the problem space" step of meaningful engineering. But I definitely reach for a clear and concise way to describe that my brain and fingers are a firewall between the LLM and my code/workspace. I'm using it to help frame my thinking, but I'm the one making the decisions. And I'm intentionally keeping context in my brain, not the LLM, by not exposing my workspace to it.

dexwiz · 9 hours ago:
Sometimes people just need something else to tell them their ideas are valid. Validation is a core principle of therapeutic care, and procrastination is tightly linked to fear of a negative outcome. LLMs can help with both: they can validate ideas in the moment, which can help overcome some of that anxiety. Unfortunately, they can also validate some really bad ideas.

jrowen · 7 hours ago:
I feel I've had the most success with treating it like another developer: one that has specific strengths (reference/checklists/scanning) and weaknesses (big picture/creativity). But I definitely bounce actual questions off it that I would ask a person.

[deleted] · 9 hours ago