trjordan 3 days ago

I was talking with somebody about their migration recently [0], and we got to speculating about AI and how it might have helped. There were basically 2 paths:

- Use the AI and ask for answers. It'll generate something! It'll also be pleasant, because it'll replace the thinking you were planning on doing.

- Use the AI to automate away the dumb stuff, like writing a bespoke test suite or new infra to run those tests. It'll almost certainly succeed, and be faster than you. And you'll move on to the next hard problem quickly. (A rough sketch of that kind of throwaway test infra is below.)
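
For concreteness, here's roughly the shape of the throwaway harness I mean: run a fixed set of queries against the old and new systems and diff the answers. Everything here (the query strings, the ./search-old and ./search-new commands) is made up for illustration, not taken from the migration in [0]:

    #!/usr/bin/env python3
    """Throwaway golden harness: run the same queries against an old and a
    new system and fail on any mismatch. All names are placeholders."""
    import json
    import subprocess
    import sys

    QUERIES = ["repo:foo lang:go TODO", "symbol:ParseQuery", "file:README case:yes"]
    OLD_CMD = ["./search-old", "--json"]  # placeholder: old system's CLI
    NEW_CMD = ["./search-new", "--json"]  # placeholder: new system's CLI

    def run_query(cmd, query):
        out = subprocess.run(cmd + [query], capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    mismatches = 0
    for q in QUERIES:
        old, new = run_query(OLD_CMD, q), run_query(NEW_CMD, q)
        if old != new:
            mismatches += 1
            print(f"MISMATCH for {q!r}:\n  old={old}\n  new={new}")
    print(f"{len(QUERIES) - mismatches}/{len(QUERIES)} queries match")
    sys.exit(1 if mismatches else 0)

The point isn't this exact script; it's that an LLM can bang out this kind of scaffolding in minutes while you stay on the hard questions.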

It's funny, because these two things represent wildly different vibes. In the first one, work is so much easier. AI is doing the job. In the second one, work is harder. You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing, because all the easy work happens in the background via LLM.

If you're in a position where there's any amount of competition (like at work, typically), it's hard to imagine the people operating in the 2nd mode not wildly outpacing the people operating in the 1st, in both quality and volume of output.

But also, it's exhausting. Thinking always is, I guess.

[0] Rijnard, about https://sourcegraph.com/blog/how-not-to-break-a-search-engin...

klodolph 3 days ago | parent | next [-]

I’ve tried the second path at work and it’s grueling.

“Almost certainly succeed” requires that you mostly plan out the implementation for it, and then monitor the LLM to ensure that it doesn’t get off track and do something awful. It’s hard to get much other work done in the meantime.

I feel like I’m unlocking, like, 10% or 20% productivity gains. Maybe.

bluefirebrand 3 days ago | parent | next [-]

10-20% productivity gains at the expense of making it grueling sounds like a recipe for burnout

chain030 2 days ago | parent | next [-]

The smart ones saw this early on.

The rest are just catching up to the reality now.

dvfjsdhgfv 3 days ago | parent | prev [-]

And that's how many people feel now.

bluefirebrand 2 days ago | parent [-]

I think it's a bad strategy

Burning out a substantial portion of the workforce for short-term gains is going to cause far more long-term decline than those gains are worth

krapp 2 days ago | parent [-]

I think the long-term assumption is that the first path trjordan mentioned above, where AI does all the work, is the goal. The second path is a necessary evil until the first becomes feasible, which requires as-yet-unachieved improvements in AI (maybe approaching AGI, maybe not). Burning out employees doesn't matter, since they're still creating more value than they otherwise would, and they'll be replaced by AI anyway.

fluoridation 3 days ago | parent | prev | next [-]

Agreed. Either that, or the task has really, really broad success parameters.

BinaryIgor 3 days ago | parent | prev | next [-]

Exactly, same for me

rorylaitila 3 days ago | parent | prev | next [-]

Yeah, I think this is what I've tried to articulate to people, and you've summed it up well with "You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing." Most of the bottleneck with any system design is the hard things, the unknown things, the unintended-consequences things. The AIs don't help you much with those.

There is a certain amount of regular work that I don't want to automate away, even though maybe I can. That regular work keeps me in the domain. It leads to epiphanies about the hard problems. It adds time, and something to do, in between the hard problems.

photonthug 3 days ago | parent | next [-]

> There is a certain amount of regular work that I don't want to automate away, even though maybe I can. That regular work keeps me in the domain. It leads to epiphanies about the hard problems. It adds time, and something to do, in between the hard problems.

Exactly, some kinds of refactors are like this for me. Pretty mindless, kind of relaxing, almost algebraic. It's a pleasant way to wander around the code base, just cleaning and improving things while you walk down a data or control flow. If you're following a thread, you don't really even make decisions, but you get better acquainted with parts you don't know, and you subconsciously get practice holding some kind of gestalt in your head.

This kind of almost dream-like "grooming" seems important and useful, because it preps you for working on design problems later. Formatting and style-type trivia should absolutely be automated, and real architecture/design work requires active engagement. But there's a sweet spot in the middle.

Even before LLMs, you could automate some of these refactors with tools for manipulating ASTs or CSTs, if your language of choice had them. But automating everything that can be automated won't necessarily pay off if you're losing fluency that you might need later.
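
For the AST route, Python's stdlib is enough for a toy version. This is a made-up rename (fetch_data -> load_data), and note that plain ast drops comments and formatting, which is exactly why CST tools like libcst exist:

    import ast

    class RenameFetch(ast.NodeTransformer):
        """Mechanically rename calls to fetch_data() -> load_data().
        Hypothetical names; just the shape of a mechanical refactor."""
        def visit_Call(self, node):
            self.generic_visit(node)  # rewrite nested calls first
            if isinstance(node.func, ast.Name) and node.func.id == "fetch_data":
                node.func.id = "load_data"
            return node

    src = "result = fetch_data(url) + fetch_data(backup_url)"
    tree = RenameFetch().visit(ast.parse(src))
    print(ast.unparse(tree))  # result = load_data(url) + load_data(backup_url)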

wduquette 3 days ago | parent | prev | next [-]

In my experience, a lot of the hard thinking gets done in my back-brain while I'm doing other things, and emerges when I take up the problem again. Doing the regular work gives my back-brain time to percolate; doing hard thing after hard thing doesn't.

mrguyorama 3 days ago | parent | prev [-]

Also at the end of the day, humans aren't machines. We are goopy meat and chemistry.

You cannot exclusively do hard things back to back to back every 8 hour day without fail. It will either burn you out, or you will make mistakes, or you will just be miserable.

Human brains do not want to think hard, because millions of years of evolution built brains to be cheap, and they STILL use around 20% of our resting energy.

danenania 3 days ago | parent | prev | next [-]

I'd actually say that you end up needing to think more in the first example.

Because as soon as you realize that the output doesn't do exactly what you need, or has a bug, or needs to be extended (and has gotten beyond the complexity that AI can successfully update), you now need to read and deeply understand a bunch of code that you didn't write before you can move forward.

I think it can actually be fine to do this, just to see what gets generated as part of the brainstorming process, but you need to be willing to immediately delete all the code. If you find yourself reading through thousands of lines of AI-generated code, trying to understand what it's doing, it's likely that you're wasting a lot of time.

The final prompt/spec should be so clear and detailed that 100% of the generated code is as immediately comprehensible as if you'd written it yourself. If that's not the case, delete everything and return to planning mode.

Jensson 2 days ago | parent | next [-]

> I'd actually say that you end up needing to think more in the first example.

Yes, but you are thinking about the wrong things, so the effort gets spent poorly.

It is usually much more efficient to build your own mental model than to search externally for a solution that does exactly what you need. Without that mental model it is hard to evaluate whether the external solution even does what you want, so it's something you need to do either way.

jama211 3 days ago | parent | prev [-]

Depends how complex the task is. Sometimes I'm handed tasks that are simple but so tedious that AI means I can breeze through them instead of burning myself out on them. Sure, it doesn't speed things up much in terms of time, but I'm way less burnt out at the end, because it's doing all the fiddly stuff that would tire me out. I suspect the tasks I get aren't that typical, though.

danenania 3 days ago | parent [-]

Yeah, I think if it's simple enough that you can understand all the code that's generated at a glance, then it's fine. There are definitely tasks that fit this description; my comment was mainly speaking to more complex tasks.

jama211 2 days ago | parent [-]

Yeah fair

ryanobjc 3 days ago | parent | prev | next [-]

Regarding #2, "automate the dumb/boring stuff": I always think of The Big Short, when Michael Burry said, in effect, "yes, I read all the boring spreadsheets, and now I have a contrary position." And he ended up being RIGHT.

For example, I believe writing unit tests is way too important to be fully relegated to the most junior devs, let alone to LLM generation! In other fields, "test engineer" is an incredibly prestigious position: "lead test engineer, SpaceX/NASA/etc." is not a slouch job; you are literally responsible for some of the most important validation and engineering work done at the company.
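
To illustrate what I mean by tests encoding real knowledge: in the toy sketch below, the hard part isn't the code, it's knowing which invariants matter. (Made-up domain and business rule, purely illustrative.)

    import unittest

    def apply_discount(price_cents: int, pct: int) -> int:
        """Hypothetical function under test: discount a price in cents."""
        return price_cents * (100 - pct) // 100

    class DiscountInvariants(unittest.TestCase):
        def test_never_negative(self):
            self.assertGreaterEqual(apply_discount(1, 99), 0)

        def test_full_discount_is_free(self):
            self.assertEqual(apply_discount(12345, 100), 0)

        def test_rounds_down_to_whole_cents(self):
            # 999 * 0.9 = 899.1; we must round down, never up (the
            # kind of business rule only a domain expert would assert)
            self.assertEqual(apply_discount(999, 10), 899)

    if __name__ == "__main__":
        unittest.main()

Choosing those three assertions is the prestigious part of the job; typing them out is the part you can delegate.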

So I do question the notion that we can offload the "simple" stuff and just move on with life. It hasn't fully worked in other fields; for example, did outsourcing the boring stuff like manufacturing really make things way better? The best companies making the best things typically vertically integrate.

CuriouslyC 3 days ago | parent | prev | next [-]

I stay at the architecture, code-organization, and algorithm level with AI. I plan things at that level, then have the agent do the full implementation. I have tests (which have been audited both manually and by agents), and I have multiple agents audit the implementation code. The pipeline is 100% automated and produces very good results, and you can still get some engineering vibes from the fact that you're orchestrating a stochastic workflow DAG!
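
Not my actual pipeline, but a toy of the "stochastic workflow DAG" shape: stages with dependencies, where agent-backed stages can fail and get retried. All stage names and pass rates here are invented:

    import random

    def plan():      return True                    # deterministic human step
    def implement(): return random.random() > 0.3   # agent step, may fail
    def test():      return random.random() > 0.2
    def audit_a():   return random.random() > 0.1
    def audit_b():   return random.random() > 0.1

    STAGES = {"plan": plan, "implement": implement, "test": test,
              "audit_a": audit_a, "audit_b": audit_b}
    DAG = {"plan": [], "implement": ["plan"], "test": ["implement"],
           "audit_a": ["test"], "audit_b": ["test"]}  # stage -> dependencies

    def run(stage, done, retries=3):
        if stage in done:                 # memoize shared upstream stages
            return done[stage]
        if not all(run(dep, done, retries) for dep in DAG[stage]):
            return False                  # an upstream stage failed
        for attempt in range(1, retries + 1):
            if STAGES[stage]():
                done[stage] = True
                return True
            print(f"{stage} failed on attempt {attempt}")
        done[stage] = False
        return False

    done = {}
    ok = all(run(s, done) for s in ("audit_a", "audit_b"))
    print("pipeline", "passed" if ok else "failed")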

marcosdumay 3 days ago | parent | prev [-]

The problem with LLMs is that they are not good enough to do the dumb stuff by themselves, and they are still so dumb that they will bias you once you have to intervene.

But this is the idea behind compilers, type checkers, automated testing, version control, and so on. It's perfectly valid.