specproc 4 days ago

What I always appreciate about SO is the dialogue between commenters. LLMs give one answer, or bullet points around a theme, or just dump a load of code in your IDE. SO gives a debate, in which the finer points of an issue are thrashed out, with the best answers (by and large) floating to the top.

SO, at its best, is numerous highly-experienced and intelligent humans trying to demonstrate how clever they are. A bit like HN, you learn from watching the back and forth. I don't think this is something that LLMs can ever replicate. They don't have the egos and they certainly don't have the experience.

Whatever people's gripes about the site, I learned a hell of a lot from it. I still find solutions there, and think a world without it would be worse.

NewJazz 4 days ago | parent | next [-]

The fundamental difference between asking on SO and asking an LLM is that SO is a public forum, while a conversation with an LLM is private. This has a lot of implications, most of which concern the ability of other people to review and correct bad information.

dbobbitt 4 days ago | parent | next [-]

The other major benefit of SO being a public forum is that once a question was wrestled with and eventually answered, other engineers could stumble upon it and benefit from it. With SO being replaced by LLMs, engineers are asking the same questions over and over, likely getting a wide range of different answers (some correct, others not), which is also an incredible waste of resources.

nprateem 4 days ago | parent | prev [-]

Surely the fundamental difference is that one asks actual humans who know what's right, while the other asks statistical models that are right by accident.

ijidak 4 days ago | parent | next [-]

Providing context to ask a Stack Overflow question was time-consuming.

In the time it takes to properly format and ask a question on Stack Overflow, an engineer can iterate through multiple bad LLM responses and eventually get to the right one.

The stats tell the uncomfortable truth. LLMs are a better overall experience than Stack Overflow, even after accounting for inaccurate answers from the LLM.

Don't forget, human answers on Stack Overflow were also often wrong or delayed by hours or days.

I think we're romanticizing the quality of the average human response on Stack Overflow.

matt_kantor 3 days ago | parent | next [-]

The purpose of StackOverflow was never to get askers quick answers to their specific questions. Its purpose is to create a living knowledge repository of problems and solutions which future folk may benefit from. Asking a question on StackOverflow is more like adding an article to Wikipedia than pinging a colleague for help.

If someone doesn't care about contributing to such a repository then they should ask their question elsewhere (this was true even before the rise of LLMs).

StackOverflow itself attempts to explain this in various ways, but obviously not sufficiently, as this is an incredibly common misconception.

fireflash38 4 days ago | parent | prev | next [-]

That's only because of LLMs consuming pre-existing discussions on SO. They aren't creating novel solutions.

specproc 4 days ago | parent | prev [-]

What I'm appreciating here is the quality of the _best_ human responses on SO.

There are always a number of ways to solve a problem. A good SO response gives both a path forward and an explanation of why, in the context of other possible options, this is the way to do it.

LLMs do not automatically think of performance, maintainability, edge cases, etc. when providing a response, in no small part because they do not think.

An LLM will write you a regex HTML parser.[0]

The stats look bleak for SO. Perhaps there's a better "experience" with LLMs, but my point is that this is to our detriment as a community.

[0] He comes, https://stackoverflow.com/questions/1732348/regex-match-open...
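
To make that concrete, here's a toy sketch of the failure mode (my own illustration, not code from the linked answer), using only the Python standard library: a naive regex happily "finds" a tag inside a quoted attribute value, while a real parser doesn't.

    # Toy illustration (not from the linked answer): naive regex vs. a real
    # parser for pulling start-tag names out of HTML.
    import re
    from html.parser import HTMLParser

    html = '<div><a title="x<b">link</a><p>para</p></div>'

    # Naive regex: also "matches" the <b inside the quoted attribute value.
    print(re.findall(r'<\s*([a-zA-Z][a-zA-Z0-9]*)', html))  # ['div', 'a', 'b', 'p']

    class TagCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.tags = []

        def handle_starttag(self, tag, attrs):
            self.tags.append(tag)

    collector = TagCollector()
    collector.feed(html)
    print(collector.tags)  # ['div', 'a', 'p'] -- the attribute value isn't mistaken for a tag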

stocksinsmocks 4 days ago | parent | prev [-]

Humans do not know what's right. What's worse is the phenomenon of people who don't actually know, but want to seem like they do, so they ask the person with the question for follow-up information that is meaningless and irrelevant to the question.

Hey, can you show me the log files?

Sure here you go. Please help!

Hmm, I don’t really know what I’m looking for in these. Good luck!

andy81 4 days ago | parent | prev | next [-]

SO also isn't afraid to tell you that your question is stupid and you should do it a better way.

Some people take that as a personal attack, but it can be more helpful than a detailed response to the wrong question.

baq 3 days ago | parent [-]

The problem is that the people who decide which questions are stupid are misaligned with the site's audience.

zahlman 4 days ago | parent | prev | next [-]

> What I always appreciate about SO is the dialogue between commenters.

Stack Overflow is explicitly not for "dialogue", recent experiments (which are generally not well received by the regulars on the meta site) notwithstanding. The purpose of comments on questions is to help refine the question and ensure it meets standards, and in some cases to serve other meta purposes, like pointing at different-but-related questions to help future readers find what they're looking for. Comments are generally subject to deletion at any time and were originally designed to be visually minimal. They are not part of the core experience.

Of course, the new ownership is undoing all of that, because of engagement metrics and such.

specproc 4 days ago | parent | next [-]

Heh, OK, dialogue wasn't the right word. I am a better informed person by the power of internet pedantry.

djfergus 4 days ago | parent | prev | next [-]

> I don't think this is something that LLMs can ever replicate. They don't have the egos and they certainly don't have the experience

Interesting question: the result is just words, so surely an LLM can simulate an ego. Feed it the Linux kernel mailing list?

Isn’t back and forth exactly what the new MoE thinking models attempt to simulate?

And if they don't have the experience, isn't that just a question of tokens?

ehnto 4 days ago | parent | next [-]

SO was somewhere people put their hard-won experience into words that an LLM could train on.

That won't be happening anymore, on SO or elsewhere. So all that hard-won experience, from actually doing real work, will be inaccessible to the LLMs. For modern technologies and problems, I suspect using an LLM will be a notably worse experience than it is for older ones.

It's already true, for example, when using the Godot game engine instead of Unity: LLMs constantly confuse what you're trying to do with Unity problems, offer Unity-based code solutions, etc.

sebastiennight 4 days ago | parent | prev [-]

> Isn’t back and forth exactly what the new MoE thinking models attempt to simulate?

I think the name "Mixture of Experts" might be one of the most misleading labels in our industry. No, that is not at all what MoE models do.

Think of it rather like this: instead of having one giant black box, we now have multiple smaller opaque boxes of various colors, and somehow (we don't really know how) we're able to tell whether your question is "yellow" or "purple" and send it to the matching opaque box to get an answer.

The result is that we're able to use fewer resources to solve any given question (by activating smaller boxes instead of the original huge one). The problem is that we don't know in advance which questions are of which color: it's not like one "expert" knows CSS and the other knows car engines.

It's just more floating-point black magic, so "How do I center a div" and "what's the difference between a V6 and a V12" are both "yellow" questions sent to the same box/expert, while "How do I vertically center a div" is a red question, and "which is more powerful, a V6 or a V12?" is a green question that activates a completely different set of weights.
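
For the curious, here's a minimal sketch of what that routing ("top-k gating") looks like. It's illustrative toy numpy, not any particular model's implementation, and in real models the routing typically happens per token at each MoE layer rather than once per whole question:

    # Toy MoE routing: a learned gate scores each expert for a token vector and
    # only the top-k experts actually run -- that's where the compute saving
    # comes from. The "colors" above are these gate scores: learned groupings,
    # not human topic labels.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 16, 4, 2

    # Each "expert" is just a small feed-forward layer here.
    experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
    gate = rng.standard_normal((d_model, n_experts)) * 0.1

    def moe_layer(token_vec):
        scores = token_vec @ gate                    # one score per expert
        chosen = np.argsort(scores)[-top_k:]         # keep only the top-k experts
        weights = np.exp(scores[chosen])
        weights /= weights.sum()                     # softmax over the chosen experts
        # Only the chosen experts are evaluated; the rest are skipped entirely.
        return sum(w * np.tanh(token_vec @ experts[i]) for i, w in zip(chosen, weights))

    out = moe_layer(rng.standard_normal(d_model))
    print(out.shape)  # (16,) -- same shape as the input, as a normal layer would give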

dpkirchner 4 days ago | parent | prev | next [-]

I don't know if this is still the case, but back in the day people would often redirect comments to some Stack Overflow chat feature, the links to which would always return 404 Not Found errors.

n49o7 3 days ago | parent | prev | next [-]

This comment and the parent one make me realize that people who answer probably value the exchange between experts more than the answer.

Perhaps the antidote involves a drop of the poison.

Let an LLM answer first, then let humans collaborate to improve the answer.

Bonus: if you can safeguard it, the improved answer can be used to train a proprietary model.

renrutal 3 days ago | parent [-]

> This comment and the parent one make me realize that people who answer probably value the exchange between experts more than the answer.

I'm more amused that ExpertsExchange.com figured out the core of the issue, 30 years ago, down to their site's name.

solumunus 4 days ago | parent | prev [-]

You can ask an LLM to provide multiple approaches to a problem and explore the pros and cons of each, then drill down into particular ones. It works very well.