postalcoder 3 hours ago

One of the nice things about the "dumber" models (like GPT-4) was that they were good enough to get you really far, but never enough to complete the loop. They gave you maybe 90%, 20% of which you had to retrace -- so you still had to do roughly 30% of the tough work yourself, which meant manually learning things from scratch.

The models are too good now. One thing I've noticed recently is that I've stopped dreaming about tough problems, be it code or math. The greatest feeling in the world is pounding your head against a problem for a couple of days and waking up the next morning with the solution sketched out in your mind.

I don't think the solution is to go full natty, but to work more alongside the code in an editor rather than doing everything through a CLI.

boredemployee 2 hours ago

The big issue I see coming is that leadership will care less and less about people, and more about shipping features faster and faster. In other words, those who are still learning their craft are fucked.

The amount of context switching in my day-to-day work has become insane. There's this culture of “everyone should be able to do everything” (within reason, sure), but in practice it means a data scientist is expected to touch infra code if needed.

Underneath it all is an unspoken assumption that people will just lean on LLMs to make this work.

Oras 3 hours ago

You still have the system design skills, and so far, LLMs are not that good in that area.

They can produce a plausible architecture, but most of the time it's not usable if you're starting from scratch.

When you design the system, you're an architect, not a coder, so I see no difference between handing the design to agents or to other developers -- you've done the heavy lifting.

From that perspective, I find LLMs quite useful for learning. But instead of coding, I find myself in long back-and-forth sessions asking questions, requesting examples, sequence diagrams, etc., to visualise the final product.

Thanemate an hour ago

I see this argument all the time, and while it sounds great on paper (you're an architect now, not a developer), people forget (or omit?) that a product needs far fewer architects than developers -- meaning the workforce does, in fact, get trimmed down by AI advancements.

simianwords 20 minutes ago

You can now access similar models at far lower prices. Grok 4.1 Fast is around 10x cheaper but performs slightly better.

queenkjuul 2 hours ago

Idk, I very much feel like Claude Code only ever gets me really far, but never all the way there. I do use it a fair bit, but I still write a lot myself, and almost never use its output unedited.

For hobby projects, though, it's awesome. It just really struggles to do things right in the big codebase at work.

dude250711 2 hours ago

> The greatest feeling in the world is pounding your head against a problem for a couple of days and waking up the next morning with the solution sketched out in your mind.

And then you find out someone else had already solved it. So you might as well use Google 2.0, aka ChatGPT.

griffzhowl an hour ago

Well, this is exactly the problem. This tactic works until you hit a problem nobody has solved before -- even a relatively minor one that remains unsolved simply because it's so specific that no one has tried. If you haven't built up the skills and knowledge to solve problems yourself, then you're stuck.