nsonha a day ago

Why do you HAVE TO one-shot? No one says you have to code like those influencers. You are a software engineer, use AI like one, iteratively.

dilyevsky a day ago | parent | next [-]

The point is that because it generally produces crap code, you have to one-shot, or else iteration becomes hard. It's similar to how a junior tries to refactor their mess and just makes a bigger mess.

nsonha 20 hours ago | parent [-]

I find it hard to believe that when the AI generates crap code, there is absolutely nothing you can do (change the prompt, modify the context, add examples) to make it do what you want. That has not been my experience either. I only use AI to make small modules and refactor, instead of one-shotting.

Also I find "AI makes crap code so we should give it a bigger task" illogical.

mistrial9 a few seconds ago | parent [-]

It seems that there are really, really large differences between models: how well they do, what they respond to... even among the "best". The field does seem to be moving fast.

ramesh31 a day ago | parent | prev [-]

>No one says you have to code like those influencers. You are a software engineer, use AI like one, iteratively.

This is my issue with all the AI naysayers at this point. It seems to all boil down to "haha, stupid noob can't code so he uses AI" in their minds. It's like they are incapable of understanding that there could simultaneously be a bunch of junior devs pushing greenfield YouTube demos of vibe coding, while at the same time expert software engineers are legitimately seeing their productivity increase 10x on serious codebases through judicious use.

Go ahead and keep swinging that hammer, John Henry.

necovek 20 hours ago | parent | next [-]

> expert software engineers are legitimately seeing their productivity increase 10x

It's funny you would say this, because we are really commenting on an article where a self-proclaimed "expert" has done that and the "10x" output is terrible.

ManuelKiessling 17 hours ago | parent [-]

I have just checked my article — the word "expert" isn't in it, so not quite sure where you got this from.

I've been working in the field professionally since June 1998, and among other things, I was the tech lead on MyHammer.de, Germany's largest craftsman platform, and have built several other mid-scale online platforms over the decades.

How well I have done this, now that's for others to decide.

Quite objectively though, I do have some amount of experience — even a bad developer probably cannot help but pick up some learnings over so many years in relevant real-world projects.

However, and I think I stated this quite clearly, I am expressly not an expert in Python.

And yet, I could realize an actually working solution that solves an actual problem I had in a very real sense (and it has been nicely humming away for several weeks now).

And this is precisely where yes, I did experience a 10x productivity increase; it would have certainly taken me at least a week or two to realize the same solution myself.

necovek 9 hours ago | parent [-]

Apologies for implying you are claiming to be an expert software engineer: I took the "senior" in the title and "25 years of experience" in the post to mean similar things as "expert".

I don't doubt this is doing something useful for you. It might even be mostly correct.

But it is not a positive advertisement for what AI can do: just like the code is objectively crap, you can't easily trust the output without a comprehensive review. And without doubting your expertise, I don't think you reviewed it, or you would have caught the same smells I did.

What this article tells me is that when the task is sufficiently non-critical that you can ignore being perfectly correct, you can steer AI coding assistants into producing some garbage code that very well might work or appear to work (when you are making stats, those are tricky even with utmost manual care).

Which is amazing, in my opinion!

But not what the premise seems to be (how a senior will make it do something very nice with decent quality code).

Out of curiosity why did you not build this tool in a language you generally use?

ManuelKiessling 7 hours ago | parent [-]

Because I wanted exactly this experience: can I get to the desired result — functionality-wise, if not code-wise! — even if I choose the stack that makes sense in terms of technology, not the one that I happen to be proficient in?

And if I cannot bring language-proficiency to the table — which of my capabilities as a seasoned software&systems guy can I put to use?

In the brown-field projects where my team and I have the AI implement whole features, the resulting code quality — under our sharp and experienced eyes — tends to end up just fine.

I think I need to make the difference between the two examples clearer…

necovek 5 hours ago | parent [-]

Ok, I guess you shouldn't complain that you really got exactly what you wanted.

However, your writing style implied that the result was somehow better because you were otherwise an experienced engineer.

Even your clarification in the post sits right below your statement how your experience made this very smooth, with no explanation that you were going to be happy with bad code as long as it works.

ManuelKiessling 4 hours ago | parent [-]

I guess we are slowly but steadily approaching splitting-hairs-territory, so not sure if this is still worth it…

However, I'm not quite sure where I complained. Certainly not in the post.

And yes, I'm very convinced that the result turned out a lot better than it would have if an inexperienced "vibe coder" had tried to achieve the same end result.

Actually, I'm pretty sure that without my extensive and structured requirements and the guard rails, the AI coding session would have ended in a hot mess in the best case, and a non-functioning result in the worst.

I'm 100% convinced that these two statements are true and relevant to the topic:

That a) someone lacking my level of experience and expertise is simply not capable of producing a document like https://github.com/dx-tooling/platform-problem-monitoring-co...

And that b) using said document as the basis for the agent-powered AI coding session had a significant impact on the process as well as the end result of the session.

achierius 19 hours ago | parent | prev | next [-]

I think some of the suspicion is that it's really not 10x in practice.

Macha 15 hours ago | parent [-]

Even if AI could write code perfectly as soon as I thought of it, that would not improve my productivity 10x. Coding was never the slow part. Everything that goes around coding (like determining that the extra load here is not going to overload things, getting PMs to actually make their minds up about what the feature is going to do, etc.) means that there's simply not that much time to be saved on coding activities.

nsonha 14 hours ago | parent [-]

The same argument could be made for not using any tooling at all. "Tech is the easy part." No difference between typing code in Notepad and having zero process/engineering infrastructure, I guess. Because stakeholder management is the main engineering skill, apparently.

Btw, AI doesn't just code; there are AIs for debugging, monitoring, etc. too.

achierius 6 hours ago | parent | next [-]

There are two levels to this.

1. Tooling obviously does improve performance, but not by so huge a margin. Yes, if AI could automate more elements of tooling, that would very much help. If I could tell an AI "bisect this bug, across all projects in our system, starting with this known-bad point", that would be very helpful -- sometimes. And I'm sure we'll get there soon enough. But there is fractal complexity here: what if isolating the bug requires stepping into LLDB, or dumping some object code, or running with certain stressors on certain hardware? So it's not clear that "LLM can produce code from specs, given tight oversight" will map (soon) to "LLM can independently assemble tools together and agentically do what I need done".

2. Even if all tooling were automated, there's still going to be stuff left over. Can the LLM draft architectural specs, reach out to other teams (or their LLMs), sit in meetings and piece together the big picture, suss out what the execs really want us to be working on, etc.? I do spend a significant (double-digit) percentage of my time working on that, so if you eliminate everything else -- then you could get a 10x improvement, but going beyond that would start to run up against Amdahl's Law.

necovek 9 hours ago | parent | prev [-]

If you were to really measure the speed improvement of Notepad vs. a tricked-out IDE, it's probably not much. The problem would be the annoyance caused to an engineer who has to manually type out everything.

No, coding speed is really not the bottleneck to software engineer productivity.

nsonha 9 hours ago | parent [-]

> coding speed

> the annoyance caused to an engineer

No one said productivity is this one thing and not that one thing; only you say that, because it's convenient for your argument. Productivity is a combination of many things, and again, typing out code is not the only area where AI can help.

necovek 7 hours ago | parent [-]

The argument that "coding speed is not a bottleneck to productivity" is not in contradiction with "productivity is a combination": it even implies it.

Again, the context here was that somebody discussed speed of coding and you raised the point of not using any tooling with Notepad.

LtWorf 15 hours ago | parent | prev | next [-]

Weren't you the guy who only writes HTML? Maybe let domain experts comment on their domain of expertise.

johnnyanmac 17 hours ago | parent | prev | next [-]

My grievances are simple: an expert programmer utilizing AI will be a truly dangerous force.

But that's not what we get in this early stage of grifting. We get 10% marketing buzz about how cool this is, demoing things that cannot be recreated with the tool alone, and 89% lazy or inexperienced developers who just turn in slop with little or no iteration. The latter don't even understand the code they generated.

That 1% will be amazing; it's too bad the barrel is full of rotten apples hiding that potential. The experts also tend to keep to themselves, in my experience. The 89% includes a lot of Dunning-Kruger as well, which makes the outspoken "experts" questionable (maybe part of why real experts aren't commenting on their experience).

shove a day ago | parent | prev [-]

“Maybe you didn’t hear me, I said ‘good morning steam driver, how are you?’”