al_borland 11 hours ago

A principal engineer at Google posted on Twitter that Claude Code did in an hour what the team couldn’t do in a year.

Two days later, after people freaked out, context was added: the team had built multiple versions over that year, each with its own trade-offs. All that context was given to the AI, and it produced a "toy" version. I can only assume it had similar trade-offs.

https://xcancel.com/rakyll/status/2007659740126761033#m

My experience has been similar to yours, and I think a lot of the hype is from people like this Google engineer who play into the hype and leave out the context. This sets expectations way out of line from reality and leads to frustration and disappointment.

tw061023 25 minutes ago | parent | next [-]

That, uh, says a lot about Google, doesn't it?

keybored 4 hours ago | parent | prev | next [-]

> A principal engineer at Google posted on Twitter that Claude Code did in an hour what the team couldn’t do in a year.

I’ll bring the tar if you bring the feathers.

That sounds hyperbolic, but how can someone say something so outrageously false?

JimDabell 4 minutes ago | parent | next [-]

Who are you referring to here? If you follow the link, you will see that the Google engineer did not say that.

thornewolf 4 hours ago | parent | prev [-]

As someone who worked at the company, I understood the meaning behind the tweet without the additional clarification. I think she assumed too much shared context when making the tweet.

keybored 4 hours ago | parent [-]

A principal engineer at Google made a public post on the World Wide Web and assumed some shared Google/Claude-context. Do you hear yourself?

tokioyoyo 36 minutes ago | parent [-]

Working in a large-scale org gets you accustomed to general decision-making problems that aren't obvious from the outside. I totally understood what she meant, and in my head I nodded along: "yeah, that tracks."

dysleixc 5 hours ago | parent | prev | next [-]

May I ask about your level of experience and which AI you tried to use? I have a strong suspicion these two factors are rarely mentioned, which leads to miscommunication. For example, in my experience, up until recently you could get amazing results, but only if you had, say, 5+ years of experience AND were willing to pay at least $100/month for Claude Code AND followed some fairly trivial usage practices (e.g., the "ultrathink" keyword, planning mode, etc.) AND weren't too lazy to actually read the output. Quite often people wouldn't meet one of those criteria and would call out the AI hype bubble.

cocoto 4 hours ago | parent | next [-]

From the very beginning, everyone tells us "you are using the wrong model." Fast forward a year: the free models are as good as last year's premium models, the results are still bad, and you still hear the same message, "you are not using the latest model"… I stopped bothering to try the shiny new model each month and simply reevaluate the state of the art once a year, for my sanity. Or maybe my expectations are simply too high for these tools.

amoss 4 hours ago | parent | prev | next [-]

This discussion is a request for positive examples that demonstrate any of the recent grandiose claims about AI-assisted development. Attempting instead to attack the credentials of posters only supplies evidence that there are no positive examples, only hype. It doesn't add to the conversation.

consp 4 hours ago | parent | prev [-]

> would call out the AI hype bubble

Which is what it is, when a tool described as "replaces devs" instantly turns out to need thousands of dollars and years of time in learning fees. It is a tool, and when used sparingly by well-trained people, it works, to the extent that any large statistical text predictor would.

hahahahhaah 5 hours ago | parent | prev | next [-]

Yeah, that was bullshit (like most AI-related claims... lies, damned lies, statistics, AI benchmarks). It's like saying my 5-year-old said words that would solve the Greenland issue in an hour. But the words were never put to the test, just put on a screen, and everyone said "woah!" AI can't ship. That still needs humans.

srcport56445 7 hours ago | parent | prev | next [-]

Humans regularly design all of Uber, Google, YouTube, Twitter, WhatsApp, etc. in 45 minutes in system design interviews. So AI designing some toy version is meh.

edanm 7 hours ago | parent | prev [-]

You're choosing to focus on specific hype posts (which were actually just misunderstandings of the original confusingly-worded Twitter post).

While ignoring the many, many cases of well-known, talented developers who give more context and say that agentic coding does give them a significant speedup, like Antirez (creator of Reddit), DHH (creator of RoR), Linus (creator of Linux), Steve Yegge, and Simon Willison.

NoPicklez 6 hours ago | parent | next [-]

Why not, in that case, provide an example to rebut and contribute, rather than knocking someone else's example, even if it was against the use of agentic coding?

edanm 5 hours ago | parent [-]

Serious question - what kind of example would help at this point?

Here is a sample of (IMO) extremely talented and well-known developers who have said that agentic coding helps them: Antirez (creator of Reddit), DHH (creator of RoR), Linus (creator of Linux), Steve Yegge, Simon Willison. This is just off the top of my head; you can find many more. None of them claims that agentic coding does a year's worth of work in an hour, of course.

In addition, pretty much every developer I know has used some form of GenAI or agentic coding over the last year, and they all say it gives them some speedup, most of them a significant one. The "AI doesn't help me" crowd is, as far as I can tell, an online-only phenomenon. In real life, everyone has used it to at least some degree and finds it very valuable.

trashb an hour ago | parent | next [-]

Those are some high profile (celebrity) developers.

I wonder if they have measured their results? I believe the perceived speedup from AI coding often differs from reality. The following paper backs this up: https://arxiv.org/abs/2507.09089. Can you provide data that contradicts this view, based on these (celebrity) developers or otherwise?

embedding-shape 26 minutes ago | parent [-]

Almost off-topic, but got me curious: How can I measure this myself? Say I want to put concrete numbers to this, and actually measure, how should I approach it?

My naive approach would be to implement the same thing twice, once with an LLM and once without, but that has obvious flaws, most obviously that the order in which you do the two runs impacts the results too much.

So how would I actually go about and be able to provide data for this?
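One approach, roughly what the paper linked upthread does, is to avoid repeating the same task and instead randomize a pool of comparable tasks into AI-assisted and unassisted conditions, then compare completion times across conditions. A minimal sketch in Python; the task names, condition labels, and fixed seed are illustrative placeholders, not a prescribed protocol:

```python
import random
import statistics

def assign_conditions(tasks, seed=0):
    # Shuffle a pool of comparable tasks and split it in half, so each
    # task is done exactly once, under exactly one condition. This
    # sidesteps the ordering/learning confound of doing the same task
    # twice. A fixed seed makes the assignment reproducible.
    rng = random.Random(seed)
    shuffled = list(tasks)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {t: ("ai" if i < half else "no_ai") for i, t in enumerate(shuffled)}

def summarize(times_by_condition):
    # Median completion time per condition; the median is robust to the
    # occasional task that blows up.
    return {cond: statistics.median(ts) for cond, ts in times_by_condition.items()}
```

With enough tasks (a handful is too noisy; dozens start to be meaningful), comparing the per-condition medians gives a crude but honest estimate, and self-reported "felt faster" judgments can be recorded alongside to see whether perception matches the clock.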

Adrig 2 hours ago | parent | prev | next [-]

A lot of comments read like a knee-jerk reaction to the Twitter crowd claiming they vibe-coded apps making $1M in 2 weeks.

As a designer I'm having a lot of success vibe coding small use cases, like an alternative to Lovable for prototyping in my design system and sharing prototypes easily.

All the devs I work with use Cursor; one of them (a front-end dev) told me most of his code is written by AI. In the real world, agentic coding is used massively.

margorczynski 2 hours ago | parent | prev | next [-]

I think it is a mix of ego and fear: basically "I'm too smart to be replaced by a machine" and "what am I gonna do if I'm replaced?"

The second part is something I think a lot about now after playing around with Claude Code, OpenCode, Antigravity and extrapolating where this is all going.

menaerus an hour ago | parent [-]

I agree it's partly about ego. As for the other part, I am also trying to project a few scenarios in my head.

Wild guess no. 1: a large majority of software jobs will be complemented (mostly replaced) by AI agents, reducing the number of people needed to do the same job.

Wild guess no. 2: demand for software will increase, but demand for the engineers creating it will not grow by the same multiplier.

Wild guess no. 3: we will have the smallest teams ever, with only a few people on board, perhaps leading to more companies being founded than ever.

Wild guess no. 4: in the near future, the pool of software engineers as we know them today will be drastically downsized, and only those who can demonstrate substantial value over using the AI models will remain relevant.

Wild guess no. 5: getting a job in software engineering will be harder than ever.

akoboldfrying 4 hours ago | parent | prev [-]

Nit: s/Reddit/Redis/

Though it is fun to imagine using Reddit as a key-value store :)

edanm 2 hours ago | parent [-]

Aaarg I was typing quickly and mistyped. :face-palm:

Thanks for the correction.

NBJack 5 hours ago | parent | prev [-]

Citation needed. Talk, especially in the 'agentic age', is cheap.