1a527dd5 4 hours ago

A year ago I would have agreed wholeheartedly; I was a self-confessed skeptic.

Then Gemini got good (around 2.5?), like I-turned-my-head good. I started using it every week or so, not to write code, but as a tool (the way you would use a calculator).

More recently, Opus 4.5 was released, and now I'm using it every day to assist with code. It regularly helps me take tasks that would have taken 6-12 hours down to 15-30 minutes, with some minor prompting and hand-holding.

I've not yet reached the point where I feel comfortable letting it loose to do an entire PR for me. But it's getting there.

kstrauser 4 hours ago

> I was a self-confessed skeptic.

I think that's the key. Healthy skepticism is always appropriate. It's the outright cynicism that gets me: "AI will never be able to [...]", when I've been sitting here at work doing two-thirds of those supposedly impossible things. Flawlessly? No, of course not! But I don't do those things flawlessly on the first pass, either.

Skepticism is good. I have no time or patience for cynics who dismiss the whole technology as impossible.

spaceywilly 4 hours ago

I would strongly recommend this podcast episode with Andrej Karpathy. I will poorly summarize it by saying his main point is that AI will spread like any other technology: it's not going to be a sudden flash where everything is done by AI. It will be a slow rollout where each year it automates more and more manual work, until one day we realize it's everywhere and has become indispensable.

It sounds like what you are seeing lines up with his predictions. Each model generation is able to take on a little more of the responsibilities of a software engineer, but it’s not as if we suddenly don’t need the engineer anymore.

https://www.dwarkesh.com/p/andrej-karpathy

daxfohl 14 minutes ago

Though I think it's a very steep sigmoid, and we're still far down on the bottom half of it.
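(To make the "steep sigmoid" picture concrete, a minimal sketch using the standard logistic curve; the notation here is mine, not from the thread:

    f(t) = L / (1 + e^(-k(t - t0)))

where L is the ceiling, t0 is the midpoint, and k is the steepness. A large k is what makes the middle of the "S" feel like an overnight jump, even though the curve is continuous; far below t0, progress looks nearly flat.)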

For math, it just solved its first "almost independent" Erdős problem. In a couple of months it'll probably do another, then maybe one each month for a while, then one morning we'll wake up and find that, whoom, it solved 20 overnight and is spitting them out by the hour.

For software it's been "curiosity ... curiosity ... curiosity ... occasionally useful assistant ... slightly more capable assistant" up to now, and it'll probably continue like that for a while.

The inflection point will be when OpenAI/Anthropic/Google releases an e2e platform meant to be driven primarily by the product team, with engineering just being co-drivers. It will probably start out buggy and need a lot of hand-holding (and grumbling) from engineering, but slowly and surely become more independently capable. Then at some point, product will become more confident in that platform than in their own engineering team, and begin pushing out features based on it alone.

Once that process starts (probably first at OpenAI/Anthropic/Google themselves, but spreading like wildfire across the industry), it's just a matter of time until leadership declares that all feature development goes through that platform, and retains only as many engineers as are required to support the platform itself.

sheeh 3 hours ago

AI, first of all, is not a technology.

Can people get their words straight before typing?

shawabawa3 2 hours ago

Is an LLM a technology? Are you complaining about the use of "AI" to mean "LLM"? Because I think that ship has sailed.