ffsm8 2 hours ago

The productivity boost entirely depends on the way the software was written.

Brownfield legacy projects with god classes and millions of lines of code that need to behave coherently across multiple channels, without anything in the written code actually linking them? That shit is not even gonna get a 20% boost; you'll almost always be quicker on your own. What you do get is a fatigue bonus, by which I mean you'll invest less of yourself for the same amount of output, while getting slightly slower, because nobody I've ever interacted with can keep such a codebase in their head well enough to branch out to multiple agents.

On projects that have been architected to be owned by an LLM? A modular monolith with hints linking all the channels together, etc.? Yeah, you're gonna get a massive productivity boost, and you'll also be using your brain a shitton, actually reasoning out how to get the LLM to work on the project beyond silly weekend-toy scope (100k-MM LOC).
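To make the "hints linking all channels together" idea concrete, here's one possible sketch (all names hypothetical, not from the comment): each channel module registers its handler for a feature in a single shared manifest, so an agent can read one place to find every channel a feature touches instead of inferring the links from scattered code.

```python
# Hypothetical sketch: a shared manifest that makes cross-channel links
# explicit, so an LLM agent can enumerate every channel a feature spans.
from typing import Callable, Dict, Tuple

# (feature, channel) -> handler; the single source of truth linking channels.
FEATURE_MANIFEST: Dict[Tuple[str, str], Callable[[dict], str]] = {}

def channel_handler(feature: str, channel: str):
    """Decorator: register a handler under (feature, channel)."""
    def register(fn):
        FEATURE_MANIFEST[(feature, channel)] = fn
        return fn
    return register

# Each channel module declares its piece of the same feature:
@channel_handler("refund", "web")
def refund_web(order: dict) -> str:
    return f"web: refunded order {order['id']}"

@channel_handler("refund", "email")
def refund_email(order: dict) -> str:
    return f"email: refund notice for order {order['id']}"

def channels_for(feature: str) -> list:
    """List every channel registered for a feature."""
    return sorted(ch for (feat, ch) in FEATURE_MANIFEST if feat == feature)

print(channels_for("refund"))  # ['email', 'web']
```

The point isn't the decorator itself; it's that the link between channels lives in code the model can grep, rather than only in someone's head.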

But let's be real here, most employees are working with codebases like the former.

And I'm still learning how to do the second. While I've improved significantly since I started a year ago, I wouldn't call myself a master at it yet. I keep trying things out, and I frequently try things that I ultimately decide to revert or (best case) discard before merging to main, simply because I ... notice significant impediments to modifying or adding features with a given architecture.

Seriously, this is currently bleeding edge. Things haven't even begun to settle yet.

We're way too early for the industry to normalize around LLMs.

cruffle_duffle 2 hours ago | parent [-]

“Seriously, this is currently bleeding edge. Things haven't even begun to settle yet. We're way too early for the industry to normalize around LLMs.”

That’s the exciting part, isn’t it? This stuff is completely uncharted water. And there are huge forests of low-hanging fruit to harvest, too.

The interesting thing is that this tickles all the same things “actual programming” does. It’s just that the programming language changed to precise English, and while the fundamental CS constraints are the same, there’s a new layer of constraints on top. Some, like speed of inference, I suspect will be solved (which will be incredible), but others, like context management and context size, will always exist for LLMs. Understanding how small adjustments to a prompt can dramatically change the output will never go away. Knowing what to put into context is a hugely important skill.
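"Knowing what to put into context" can be illustrated with a toy greedy packer (my own sketch, not anything from the comment): given candidate snippets already ranked by relevance, keep whatever fits a fixed budget. Word count stands in for token count here; real tokenizers differ.

```python
# Toy sketch of context management: greedily pack relevance-ranked snippets
# into a fixed "context budget". Cost is approximated by word count.
def pack_context(snippets: list, budget: int) -> list:
    chosen, used = [], 0
    for s in snippets:  # assumed pre-sorted, most relevant first
        cost = len(s.split())
        if used + cost <= budget:
            chosen.append(s)
            used += cost
    return chosen

docs = [
    "api spec for refunds",                # 4 words
    "long unrelated changelog entry here", # 5 words
    "db schema notes",                     # 3 words
]
print(pack_context(docs, 7))  # ['api spec for refunds', 'db schema notes']
```

Even this trivial version shows the trade-off: a lower-ranked but cheaper snippet can make the cut when a bigger one doesn't, which is exactly the kind of judgment call the skill is about.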