sbt 14 hours ago

I have been using it for coding for some time, but I don't think I'm getting much value out of it. It's useful for some boilerplate generation, but for more complex stuff I find that it's more tedious to explain to the AI what I'm trying to do. The issue, I think, is a lack of big-picture context in a large codebase. It's not useless, but I wouldn't trade it for, say, access to Stack Overflow.

My non-technical friends are essentially using ChatGPT as a search engine. They like the interface, but in the end it's used to find information. I personally still just use a search engine, and I almost always go straight to Wikipedia, where I think the real value is. Wikipedia has added much more value to the world than AI, but you don't see it reflected in stock market valuations.

My conclusion is that the technology is currently very overhyped, but I'm also excited for where the general AI space may go in the medium term. For chat bots (including voice) in particular, I think it could already offer some very clear improvements.

oezi 12 hours ago | parent | next [-]

One of the issues, certainly, is that Stack Overflow is absolutely over. Within the last twelve months the number of users just fell off a cliff.

danbruc 11 hours ago | parent | next [-]

That might be a good thing after all, at least in a certain sense. Stack Overflow has been dying for the last ten years or so. In the first years there were a lot of good questions that were interesting to answer, but that changed with popularity: it became an endless sea of low-effort do-my-homework duplicates that were not interesting to answer and annoying to moderate. If those now get handled by large language models, the site could maybe become similar to how it was in the beginning: only questions that are not easily answerable by looking into the documentation or asking a chat bot would end up on Stack Overflow, and it could be fun again to answer them. On the other hand, if nobody looks things up on Stack Overflow, it will be hard to sustain the business, maybe even when downscaled accordingly.

birn559 10 hours ago | parent [-]

Is it really dying? There's only so much growth possible without having it flooded with low-effort stuff. They instead try to grow by introducing more topics, but that's limited as well.

I personally didn't use it much (meaning: writing content) because it always felt a bit over-engineered. From what I remember, the only possible entry point is writing questions that get upvoted; you're not allowed to even write comments or vote, and of course not allowed to answer questions. Maybe that's not correct, but that has always been my impression.

In general, Stack Exchange seems to be a great platform. I think "it's dying" has an unfortunate connotation: it's not like content just vanishes, it's just that the amount of new stuff is shrinking.

danbruc 9 hours ago | parent [-]

Maybe it has changed, but you used to be able to ask and answer questions right after signing up; only commenting, voting, and moderation rights required a certain number of points. I only answered questions and was among the top 100 contributors at my peak activity, but that was at least a decade ago.

I stopped contributing when the question quality fell off a cliff. Existing contributors got annoyed by the low-effort questions, new users got annoyed because their questions got immediately closed; it was no longer fun. There were a lot of discussions on meta about how to handle the situation, but I just left.

So admittedly, things might have changed again; I do not really know much about the development over the last ten or so years.

lithos 7 hours ago | parent | prev [-]

SO sold off their own data and made it insanely web-crawlable.

So it makes sense that SO-type users will use AI, not to mention that they get the benefit of avoiding the neurotic moderator community at SO.

sandworm101 12 hours ago | parent | prev | next [-]

Fun test: ask ChatGPT to find where Wikipedia is wrong about a subject. It does not go well, proving that it is far less trustworthy than Wikipedia alone.

(Most AI will simply find where Twitter disagrees with Wikipedia and spout ridiculous conspiracy junk.)

logicchains 12 hours ago | parent | prev | next [-]

>I have been using it for coding for some time, but I don't think I'm getting much value out of it.

I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did. This saves so much time, especially since multiple such LLM tasks can be run simultaneously. But maybe it's because I'm not working on giant, monolithic code bases.
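
Roughly, that setup can look like the sketch below (a minimal illustration; call_model and the task descriptions are hypothetical stand-ins, not any particular vendor's API):

    import concurrent.futures

    def call_model(prompt: str) -> str:
        # Stand-in for a real LLM client call; swap in your provider's SDK here.
        return f"<patch for: {prompt.splitlines()[-1]}>"

    def run_llm_task(description: str) -> str:
        # Send one change description to the model and return the proposed patch.
        prompt = "Apply the following change and add tests.\n" + description
        return call_model(prompt)

    tasks = [
        "Add pagination to the /users endpoint",
        "Extract the retry logic into a reusable decorator",
        "Add unit tests for the date-parsing helpers",
    ]

    # The tasks are independent, so they can be dispatched concurrently;
    # reviewing each returned patch is the real bottleneck afterwards.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        patches = list(pool.map(run_llm_task, tasks))

    for patch in patches:
        print(patch)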

surgical_fire 10 hours ago | parent | next [-]

> I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did.

I use it in much the same way you describe, but I find that it doesn't save me that much time. It may save some brain processing power, but that is not something I typically need to save.

I get more out of an LLM by asking it to write code I find tedious to write (unit tests, glue code for APIs, scaffolding for new modules, that sort of thing). Recently I started asking it to review the code I write and suggest improvements, try to spot bugs, and so on (which I also find useful).

Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.

Running tasks simultaneously doesn't help much unless you are giving it instructions so general that they take a long time to execute, and the bottleneck will be your ability to review all the output anyway. I also find that the broader the scope of what I need it to do, the less precise it tends to be. I have the most success by being more granular in what I ask of it.

My take is that while LLMs are useful, they are massively overhyped, and the productivity gains are largely overstated.

Of course, you can also "vibe code" (what awful terminology) and not inspect the output. I find that unacceptable in professional settings, where you are expected to release code of some minimum quality.

logicchains 6 hours ago | parent [-]

>Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.

Yep, but this takes much less time than writing the code, compiling it, fixing compiler errors, writing tests, fixing the code, fixing the compilation, all that busy-work. LLMs make mistakes, but with Gemini 2.5 Pro at least, most of these are due to under-specification, and you get better at specification over time. It's like the LLM is a C compiler developer and you're writing the C spec; if you don't specify something clearly, it's undefined behaviour and there's no guarantee the LLM will implement it sensibly.
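
To make that concrete (a toy illustration, not from any particular prompt): an instruction like "remove duplicates from the list" leaves the ordering unspecified, and two implementations can both satisfy it while disagreeing on exactly the part that wasn't pinned down:

    def dedupe_keep_first(items):
        # One valid reading of "remove duplicates": keep the first
        # occurrence of each element and preserve the original order.
        seen, out = set(), []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    def dedupe_via_set(items):
        # Another valid reading: a set also "removes duplicates",
        # but the resulting order is left unspecified.
        return list(set(items))

    data = [3, 1, 3, 2, 1]
    print(dedupe_keep_first(data))  # [3, 1, 2]
    print(dedupe_via_set(data))     # [1, 2, 3] here, but the order is not guaranteed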

I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.

surgical_fire an hour ago | parent | next [-]

> I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.

It's always the easy cop-out for whoever wants to hype AI. You can preface it with "I'd go so far as to say", but that is just a silly cover for the actual meaning.

Properly reviewing code, if you are reviewing it meaningfully instead of just glancing through it, takes time. Writing good prompts that cover all the ground you need in terms of specificity, also takes time.

Are there gains in terms of speed? Yeah. Are they meaningful? Kind of.

dwaltrip 5 hours ago | parent | prev [-]

Do you have any example prompts showing the level of specificity and the difficulty of tasks you usually tackle? I oscillate between finding LLMs useful and finding it annoying to get output that is actually good enough.

How many iterations does it normally take to get a feature correctly implemented? How much manual code cleanup do you do?

rurp 4 hours ago | parent | prev [-]

If you ever end up working on large, complicated code bases you'll likely have an easier time relating to the sentiment. LLMs are vastly better at small greenfield coding than at working on large projects. I think 100% of the people I've heard rave about AI coding are using them for small isolated projects. Among people who work on large projects, sentiment seems to range from mildly useful to providing negative value.

fauigerzigerk 13 hours ago | parent | prev [-]

I used to donate to Wikipedia, but it has been completely overrun by activists pushing their preferred narrative. I don't trust it any more.

I guess it had to happen at some point. If a site is used as ground truth by everyone while being open to contributions, it has to become a magnet and a battleground for groups trying to influence other people.

LLMs don't fix that of course. But at least they are not as much a single point of failure as a specific site can be.

notarobot123 12 hours ago | parent | next [-]

> at least they are not as much a single point of failure

Yes, network effects and hyper-scale produce perverse incentives. It sucks that Wikipedia can be gamed. That said, you'd need to be actively colluding with other contributors to maintain control.

Imagining that AI is somehow more neutral or resistant to influence is incredibly naive. Isn't it obvious that they can be "aligned" to favor the interests of whoever trains them?

fauigerzigerk 11 hours ago | parent [-]

>Imagining that AI is somehow more neutral or resistant to influence is incredibly naive

The point is well taken. I just feel that at this point in time the reliance on Wikipedia as a source of objective truth is disproportionate and increasingly undeserved.

As I said, I don't think AI is a panacea at all. But the way in which LLMs can be influenced is different. It's more like bias in Google search. But I'm not naive enough to believe that this couldn't turn into a huge problem eventually.

ramon156 12 hours ago | parent | prev | next [-]

Can I ask for some examples? I'm not that active on Wikipedia, so I'm curious where a narrative is being spread.

kristjank 12 hours ago | parent | next [-]

The Franklin Community Credit Union scandal is a good example, well outlined in this YouTuber's (admittedly dramatized) video: https://www.youtube.com/watch?v=F0yIGG-taFI

n4r9 10 hours ago | parent [-]

Is their argument documented anywhere in text, rather than an 8-minute video?

notarobot123 10 hours ago | parent | prev | next [-]

Here are some examples from which you can extrapolate the more serious cases: https://en.wikipedia.org/wiki/Wikipedia:Lamest_edit_wars

fauigerzigerk 12 hours ago | parent | prev [-]

I thought about giving examples because I understand why people would ask for them, but I decided very deliberately not to give any. It would inevitably turn into a flame war about the politics/ethics of the specific examples and distract from the reasons why I no longer trust Wikipedia.

I understand that this is unsatisfactory, but the only way to "prove" that the motivations of the people contributing to Wikipedia have shifted would be to run a systematic study for which I have neither the time nor the skills nor indeed the motivation.

Perhaps I should say that I am a politically centrist person whose main interests are outside of politics.

junek 11 hours ago | parent [-]

Let me guess: you hold some crank views that aren't shared by the people who maintain Wikipedia, and you find that upsetting? That's not a conspiracy, it's just people not agreeing with you.

fauigerzigerk 10 hours ago | parent [-]

Your guess is incorrect. I'm keeping well away from polarised politics as well as anti-scientific and anti-intellectual fringe views.

Panzer04 12 hours ago | parent | prev [-]

Single point of failure?

Yeah, you can download the entirety of Wikipedia if you want to. What's the single point of failure?

fauigerzigerk 11 hours ago | parent [-]

Not in a technical sense. What I mean is that Wikipedia is very widely used as an authoritative source of objective truth. Manipulating this single source regarding some subject would have an outsize influence on what is considered to be true.