satvikpendem 3 days ago

Sometimes I don't want creativity though, I'm just not familiar enough with the solution space and I use the LLM as a sort of gradient descent simulator to the right solution to my problem (the LLM which itself used gradient descent when trained, meta, I know). I am not looking for wholly new solutions, just one that fits the problem the best, just as one could Google that information but LLMs save even that searching time.

bandrami 2 days ago | parent [-]

> I'm just not familiar enough with the solution space

Neither is the LLM

apsurd 2 days ago | parent | next [-]

(Trying to find where you might still see this)

I've read the thread and in my mind you're missing that LLMs increase the surface area of visibility of a thing. It's a probe. It adds known unknowns to your train of thought. It doesn't need to be "creative" about it. It doesn't need to be complete or even "right". You can validate the unknown unknown since it is now known. It doesn't need to have a measured opinion (even though it acts as it does), it's really just topography expansion. We're getting in the weeds of creativity and idea synthesis, but if something is net-new to you right now in your topography map, what's so bad about attributing relative synthesis to the AI?

bandrami 2 days ago | parent [-]

Because if that's it, we've made a ludicrously expensive I Ching.

imtringued 2 days ago | parent | prev | next [-]

If there is something LLMs are good at it's knowing some obscure fact that only 10 other people on this planet know.

bandrami 2 days ago | parent [-]

They're also very good at almost knowing an obscure fact that only 10 people know but getting a detail catastrophically wrong about it

jimbokun 2 days ago | parent | prev | next [-]

No, this is the kind of thing LLMs are very good at. Knowing the specifics and details and minutiae about technologies, programming languages, etc.

bandrami 2 days ago | parent [-]

Oh Lord, no. Not at all. That's what they're terrible at. They are ok-ish at superficial overviews and catastrophically bad at specific minutiae

darth_aardvark 2 days ago | parent [-]

Honest, non-confrontational, non-passive aggressive question: Have you used any of the latest models in the last 6 months to do coding? Or frankly, in the last year?

satvikpendem 2 days ago | parent | next [-]

They note in another comment they don't even use search engines so I don't think they're the right person to ask regarding frontier models.

darth_aardvark 2 days ago | parent [-]

I'd ask them what tools they do use, but I doubt they'll see my comment; I'll see if I can mail it to them.

bandrami 2 days ago | parent [-]

(Why wouldn't I see your comment?)

I just don't use the web much anymore because the experience has degraded so much over the past several years and it has become decreasingly useful at work as well. I do sometimes need to search for a document and find Kagi pretty good for that, but the old way of using a search engine to kind of explore and discover stuff just isn't viable anymore, unfortunately.

I administer software for a living, so I read a lot of documentation for that software, but it comes with the software so I don't ever really need to search for it; I also read and participate in some forums and use the relevant IRC channels.

bigstrat2003 2 days ago | parent | prev [-]

I have. And the people who say "use a frontier model" are full of it. The frontier models aren't any better than the free ones.

satvikpendem 2 days ago | parent [-]

What are you defining as free versus frontier, and for what purpose? For coding there is a big difference between Opus and GPT 5.3/4 versus Sonnet and other models such as open weight ones.

satvikpendem 2 days ago | parent | prev [-]

Oftentimes it is though, good enough for my purposes.

bandrami 2 days ago | parent [-]

If you're not familiar with the problem space, by definition you don't know whether or not that's the case. In the problem spaces I do know well, I know the LLM isn't good, so why would I assume it's better in spaces I don't know?

satvikpendem 2 days ago | parent [-]

I said familiar enough, not familiar. For example, say I'm building an app that I know needs caching. The LLM is very good at telling me what types of caching to use, what libraries to use for each type, and so on. I can do more research if I really want to know which library is best out of all of them, but oftentimes its top suggestion is, like I said, good enough for my purpose of e.g. caching.
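To make "good enough" concrete (this sketch is mine, not something from the thread): for simple in-process caching, an LLM will typically point you at the standard-library option first, e.g. Python's functools.lru_cache, before you ever need to evaluate third-party caching libraries.

```python
from functools import lru_cache

@lru_cache(maxsize=256)  # keep up to 256 results in memory, evict least-recently-used
def lookup(key: str) -> str:
    # stand-in for an expensive computation or remote call
    return key.upper()

lookup("a")  # first call: computed and stored
lookup("a")  # second call: served from the cache
print(lookup.cache_info().hits)  # 1 hit so far
```

Whether that beats a dedicated cache like Redis depends on your workload, which is exactly the kind of follow-up research the comment describes.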

bandrami 2 days ago | parent [-]

I still don't get what you're saying. If you possess enough information to accurately judge the LLM's suggestions you possess enough information to decide on your own. There's not really a way around that.

satvikpendem 2 days ago | parent | next [-]

Of course I'm deciding on my own, I'm not letting the LLM decide for me (although some people do). But the point is that whatever it suggests is merely an implementation detail that either solves my problem or not; not sure what part of that is confusing. Replace LLM with glorified Google and maybe it's less confusing.

bandrami 2 days ago | parent [-]

No, Google (at least back when it worked) ranked results based on the feedback of other users, so it was a useful signal.

satvikpendem 2 days ago | parent [-]

Theoretically the LLM would weight more popular suggestions more heavily too. Regardless, you're reading too much into this; either use the LLM or don't, I'm not sure anyone else can convince you. As I said, for my purposes of getting shit done it works perfectly fine, and works more like a research tool than anything else, especially since it can understand my specific use case, unlike general research tools like Google or Stack Overflow.

bandrami 2 days ago | parent [-]

IDK man this sounds a lot like my junior devs saying "it works fine for me" as they hand in PRs that break prod

satvikpendem 2 days ago | parent [-]

If you don't review the code it generates, that's still on you. There isn't an excuse for handing in breaking PRs like your juniors do. It's a tool at the end of the day, and it's the responsibility of the user to utilize it correctly.

jimbokun 2 days ago | parent | prev [-]

Do you use search engines or do you just memorize all the world’s information?

bandrami 2 days ago | parent [-]

I don't use search engines for much of anything nowadays (does anybody still?) At work I read documentation if I need to learn something.

imtringued 2 days ago | parent [-]

This is a very strange and contradictory situation. I'm not sure there's any point in engaging with you since there is nothing but a stream of weak dismissals farming for engagement.

You dismiss LLMs because of factual inaccuracy, which is fair, but now you're doubling down on an anti-search-engine stance, which is weird: the modern substitute is letting LLMs either use search engines on your behalf or learn the entire internet with some error, and you've dismissed both.

Yes, I'm the "backwards" guy who still uses search engines. We still exist.

satvikpendem 2 days ago | parent [-]

I've noticed that HN can attract some of the most extreme people I've ever seen, and I suppose there is precedent in the tech world: I'm reminded of the story of Stallman not using a browser but instead having webpages sent to his email, where he then reads the content. It's literally nonsensical for 99.9999% of the population, and I've read similar absurd things on HN as well.

This person not using LLMs is fine, I understand the argument like you said, but doubling down on not using search engines either makes me not take anything they say seriously. Not to be too crass, but it reminds me of this situation on the nature of arguing on the internet [0].

[0] https://www.reddit.com/r/copypasta/comments/pxb2kn/i_got_int...