sarchertech 5 hours ago

I’m not convinced. I’ve been using AI pretty heavily for about 18 months and agents for a little over 6 months.

I’m currently working on a data migration for an enormous dataset. I’m writing the tooling in Go, a language I used to be very familiar with but hadn’t touched in about 12 years when I started this. The AI definitely helped me get back into Go faster.

But after the initial speedup, I found myself in the “last 10% takes the other 90% of the time” phase. And it definitely took longer for me to wrap my head around the code than it would have if I’d skipped the AI. I might have some overall speedup, but if so it’s on the order of 10-20%. Nothing revolutionary.

I have been able to vibe code a few little one-off tools that have made my life a little easier. And I have vibe coded a few iPad games for my kids for car trips, but for work I still have to understand the code, and reading code is still harder than writing it.

This is also not for lack of trying. I spent $1,000 last week during a company-wide “AI week”, mostly on trying to get AI to replicate my migration tooling, complete with verification agents, testing agents, quality gates, elaborate test harnesses, etc.

I’d let Claude (opus 4.7 max effort) crank away overnight, only to immediately find that it had added some horrible new bug or managed to convince the verification agent that it wasn’t really cheating to pass my quality tests.
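For what it’s worth, the gates that held up best were the ones that check the data itself rather than trusting any agent’s report. A minimal sketch in Go (function names and row format are illustrative, not my actual tooling): compare row counts and an order-independent checksum between source and target, so a “verification agent” can’t talk its way past it.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// checksumRows builds an order-independent fingerprint of a dataset:
// hash each row, sort the digests, then hash the sorted list.
func checksumRows(rows []string) string {
	digests := make([]string, len(rows))
	for i, r := range rows {
		h := sha256.Sum256([]byte(r))
		digests[i] = hex.EncodeToString(h[:])
	}
	sort.Strings(digests)
	combined := sha256.New()
	for _, d := range digests {
		combined.Write([]byte(d))
	}
	return hex.EncodeToString(combined.Sum(nil))
}

// verifyMigration passes only if row counts match and the
// order-independent checksums agree. No prose, no judgment calls.
func verifyMigration(source, target []string) error {
	if len(source) != len(target) {
		return fmt.Errorf("row count mismatch: source=%d target=%d",
			len(source), len(target))
	}
	if checksumRows(source) != checksumRows(target) {
		return fmt.Errorf("checksum mismatch")
	}
	return nil
}

func main() {
	src := []string{"id=1,name=a", "id=2,name=b"}
	ok := []string{"id=2,name=b", "id=1,name=a"} // reordered but complete
	bad := []string{"id=1,name=a", "id=2,name=X"} // silently corrupted

	fmt.Println(verifyMigration(src, ok))
	fmt.Println(verifyMigration(src, bad))
}
```

The point of sorting digests is that migrations routinely reorder rows, so a naive hash of the concatenated output would fail on perfectly good runs; this version only fails when rows are actually missing, duplicated, or altered.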

What I learned from last week is that we are so far away from not needing to understand the code that everyone who says otherwise is probably full of shit. Other people who I trust who have been running the same experiments have told me the same thing.

Until and unless we get to that point, it’s always going to be a 10-50% speedup (if that).

Havoc 5 hours ago | parent [-]

>if so it’s on the order of 10-20%. Nothing revolutionary.

For many businesses that is revolutionary.

Not sure that's enough magic to make the math work for the trillions being invested, but on a ground level within companies even small wins stack up. You may have burned through $1000 without getting much done, but from a company perspective they've probably got an employee with better instincts as to what does and doesn't work.

sarchertech 4 hours ago | parent | next [-]

I think the $1000 was worth spending just as a one-time experiment. And there are use cases where LLMs are fantastic. They’re great at debugging, because tracking down a bug usually takes much longer than verifying it once it’s pointed out.

Where I have a problem is with the FOMO, panic, and mania that has come down from up top. There are people in my company saying that we should be spending 3x our salaries in tokens.

But if you’re in a business where a 20% speedup is revolutionary, there are so many things that have been on the table for years that you could have been focusing on. I’ve seen at least 5 advances over the last 20 years that delivered that kind of boost.

That’s probably about what you’d get from spending time really learning Vim or Emacs.

tedd4u 4 hours ago | parent | prev [-]

How does that 10-20% change when the cost of tokens rises to meet post-IPO earnings targets? For example, if it increases 2, 5, or 10x, does this 10-20% gain still net out? (Rhetorical question)
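Rhetorical or not, the break-even is easy to sketch. With made-up but plausible numbers (a $15k/month fully-loaded engineer, a 15% speedup, $1k/month current token spend — every figure here is an assumption, not from the thread):

```go
package main

import "fmt"

// netGain estimates the monthly value of an AI speedup minus token spend.
// All inputs are illustrative assumptions: value of time saved is modeled
// as (fully-loaded monthly cost) x (fractional speedup).
func netGain(monthlyCost, speedup, baseTokenSpend, priceMultiplier float64) float64 {
	value := monthlyCost * speedup          // dollar value of time saved
	spend := baseTokenSpend * priceMultiplier // token bill after price rise
	return value - spend
}

func main() {
	const engineer = 15000.0 // assumed fully-loaded monthly cost
	const tokens = 1000.0    // assumed current monthly token spend
	for _, mult := range []float64{1, 2, 5, 10} {
		fmt.Printf("15%% speedup at %2.0fx token prices: net $%.0f/mo\n",
			mult, netGain(engineer, 0.15, tokens, mult))
	}
}
```

Under these assumptions the gain flips negative somewhere between 2x and 5x pricing, which is exactly the worry: the margin that looks comfortable today evaporates once the subsidized token prices do.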