The Gorman Paradox: Where Are All the AI-Generated Apps?(codemanship.wordpress.com)
137 points by ArmageddonIt 11 hours ago | 195 comments
perrygeo 10 hours ago | parent | next [-]

The link in the last paragraph provides some data to back up the claim. https://mikelovesrobots.substack.com/p/wheres-the-shovelware... - If the goal is to increase the rate of software production, there isn't much evidence that AI has moved the needle.

Sure, code gen is faster now. And the industry might finally be waking up to the fact that writing code is a small part of producing software. Getting infinitely faster at one step doesn't speed up the overall process. In fact, there's good evidence that rapid code gen actually slows down other steps in the process, like code review and QA.

decasia 9 hours ago | parent | next [-]

Strongly agreeing with this comment…

I realized early on in my enterprise software job that if I produce code faster than average for my team, it will just get stuck in the rest of our review and QA processes; it doesn’t get released any faster.

It feels like LLM code gen can exacerbate and generalize this effect (especially when people send mediocre LLM-generated code for review, which makes the reviews painful).

ninkendo 7 hours ago | parent | next [-]

It doesn’t even need to be the case that the LLM produces worse code. Software development is like a gas that expands to fill its container. If the schedule allows a fixed amount of time before shipping, and the time to write the code shrinks to zero, it just means the other parts of the process will fill the remaining time, even if the LLM did a decent job in the first place.

binary132 8 hours ago | parent | prev | next [-]

So much wasted time debating whether the 1000 lines of generated code are actually necessary when the actual transform in question is 3 of them. “But it works”, goes the refrain.

LeChuck 5 hours ago | parent | prev [-]

Theory of Constraints right there. Producing faster than the slowest resource in the chain is detrimental to the entire process. You waste resources and create difficulties upstream.

https://en.wikipedia.org/wiki/Theory_of_constraints
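
A toy sketch of the effect (Python; the stage capacities are made up, just to show the shape of the argument):

    # Throughput of a pipeline is capped by its slowest stage; speeding up
    # an earlier stage only grows the pile of work-in-progress sitting in
    # front of the bottleneck.
    capacities = {"codegen": 10, "review": 4, "qa": 5}  # items/day

    def throughput(caps):
        return min(caps.values())

    print(throughput(capacities))   # 4 items/day, limited by review
    capacities["codegen"] *= 10     # make code generation 10x faster
    print(throughput(capacities))   # still 4 items/day
    print(capacities["codegen"] - capacities["review"])  # review backlog now grows by 96 items/day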

maddmann 9 hours ago | parent | prev | next [-]

One alternative explanation for the lack of shovelware: people are deploying software at an individual level. Perhaps millions of new people are using vibe-coding tools to build tools that are personalized, and these folks aren’t interested in trying to sell software (the hardest part of many generic SaaS tools is marketing, etc.).

Perhaps looking at iOS, Steam, and Android releases is simply not a great measure of where software is headed. Disappointing that the article didn’t go a little more outside the box.

epiccoleman 7 hours ago | parent | next [-]

This is at least a little bit true for me. Examples:

https://github.com/epiccoleman/scrapio

https://github.com/epiccoleman/tuber

These are both projects which have repeatedly given me a lot of value but which have very little mass-market appeal (well, tuber is pretty fuckin' cool, imho, but you could just prompt one up yourself).

I've also built a handful of tools for my current job which are similarly vibe coded.

The problem with these kinds of things from a "can I sell this app" perspective is that they're raw and unpolished. I use tuber multiple times a week but I don't really care enough about it to get it to a point where I don't have to have a joke disclaimer about not understanding the code. If one of my little generated CLIs or whatever fails, I don't mind, I still saved time. But if I wanted to charge for any of them I'd feel wrong not polishing the rough edges off.

conartist6 9 hours ago | parent | prev | next [-]

You would still expect to see traces of the economic value they were creating elsewhere.

maddmann 9 hours ago | parent | next [-]

Can you be more specific? I am personally using AI coding tools to replace subscription tools. While that's valuable to me, in aggregate it would show up as a potential decline in economic activity for traditional services (and this would only play out years from now). We need to keep in mind that good AI coding tools like Claude Code or, to some extent, Lovable have barely come into existence.

t0mas88 8 hours ago | parent | next [-]

I've mostly seen this done for things where there is no perfect commercial tool, because it's a small market.

For example, a flight school that I work with has their own simple rental administration program. It's a small webapp with 3 screens. They use it alongside a SaaS booking/planning tool, but that tool never replaced their administrative approach, mainly because it wouldn't support local tax rules and a discount system that was in place there. So before the webapp they used paper receipts and a spreadsheet.

I think the challenge in the future with lots of these tools is going to be how they're maintained and how "ops" is done.

conartist6 7 hours ago | parent [-]

It just doesn't seem that different to me. The difficulty of building and maintaining a 3-screen webapp hasn't changed significantly. Flight schools are a niche, sure (and I've been around them; I'm a private pilot), but really all the innovation that lets a flight school own a webapp has nothing to do with AI; it happened in web browsers and in React and in lots of investment in abstractions, until we made it pretty trivial to build and own a simple webapp.

Somehow AI took over the narrative, but it's almost never the thing that actually created the value that it gets credit for creating.

maddmann 5 hours ago | parent [-]

Are you arguing that the difficulty of producing a fully functioning PoC is no different today than it was 2-3 years ago?!

Personally, I’ve been writing software professionally for 10 years. It is now much easier, especially for someone with little coding experience, to create a quite complex and fully featured web app.

It makes sense that AI models lean on frameworks like Next.js/React/Supabase; they are trained/tuned on a very clear stack that is more compatible with how models function. Of course those tools have high value regardless of AI. But AI has rapidly lowered the barrier to entry, and allows me to go much, much farther, much faster.

conartist6 4 hours ago | parent | next [-]

No, I'm arguing that it has gotten steadily easier and easier to build high-level projects over the last 20 years. React is obviously a huge part of that. There's a zillion React tutorials out there, and the value of making React accessible to beginners was, once again, created not by AI but by bloggers and youtubers and conversational evangelists.

I also just don't think "going fast" in that sense is such a big deal. You're talking about frantic speed. I think about speed in terms of growth. The goal is to build sturdy foundations so that you keep growing on an exponential track. Being in a frantic hurry to finish building your foundations is not a good omen for the quality of what will be built on them.

maddmann an hour ago | parent [-]

New software may end up being less about legacy foundations and more about bespoke software, fast iteration, throwaway single-purpose code, etc.

AI is likely to change fundamental paradigms around software design by significantly decreasing the cost of a line of code, a feature, a bugfix, or starting from scratch, and by enabling more stakeholders to help produce software.

ponector 4 hours ago | parent | prev [-]

Three years ago you could throw hundreds of dollars at Upwork and have an app as a result. Nowadays it's much cheaper/faster with an LLM, but the difficulty is pretty much the same.

conartist6 8 hours ago | parent | prev [-]

AI usually costs you the ability to pursue the right work. It blinds you and it numbs you. It will feed your ego while guiding you to spend your time doing ordinary stuff, the same ordinary stuff it is guiding everyone to do. People just can't see it because they all spend their time talking to AI now instead of talking to each other -- that's the blindness.

maddmann 8 hours ago | parent [-]

I was asking you a specific question and am curious about your answer. The impact of AI “blinding” people isn’t an “economic indicator”, and it's hardly something that has been proven. Of course there are major issues with how people use AI, just like with any technology.

The aggregate impact isn’t known yet and the tech is still in its infancy.

conartist6 7 hours ago | parent [-]

The economy looks normal-ish in graphs if you don't consider that the graph shows the AI sector thriving while all other sectors are in recession. It's the kind of graph you'd expect to see if there were one sector leeching the life out of all the others.

nbates80 4 hours ago | parent | prev | next [-]

Don’t tell my boss but I am producing code much faster than before. I just use most of the extra time for myself

NuclearPM 8 hours ago | parent | prev [-]

Where?

prymitive 7 hours ago | parent | prev [-]

Looking forward to the “just in vibe” software ecosystem, where your entire OS is an LLM coding agent that creates the tool you need when you need it.

pureliquidhw 9 hours ago | parent | prev | next [-]

The Goal was written 40 years ago and talks about, among other things, the paradox/illusion of local optima. This isn't new, AI coding assistants are at some level just another NCX-10. This isn't a book recommendation thread, but I highly recommend that book to anyone, even if you've read its IT equivalent, The Phoenix Project.

user_7832 8 hours ago | parent [-]

To add on: if you don't have the patience for the full book, watch the movie, it's not too long.

One day in our uni class, the prof played the movie instead of teaching. It is the only class I distinctly remember today in terms of the scope of what it taught me.

somenameforme 8 hours ago | parent [-]

Is the name of the movie 'The Goal'? A rather search-proof title, if so.

user_7832 5 hours ago | parent | next [-]

Yeah, that's the name. I think it was even on YouTube at some point in time; I bet one of the archival websites has a copy of it.

FrustratedMonky 7 hours ago | parent | prev [-]

Add Goldratt, the author.

"The Goal" is a movie based on Eliyahu Goldratt's best-selling business novel that teaches the principles of the Theory of Constraints.

rsynnott 7 hours ago | parent | prev | next [-]

> This was unsettling. It was impossible not to question if I too were an unreliable narrator of my own experience. Was I hoodwinked by the screens of code flying by and had no way of quantifying whether all that reading and reviewing of code actually took more time in the first place than just doing the thing myself?

I don’t understand why this was so surprising to people. People are _terrible_ at self-reporting basically anything, and “people think it makes them mildly more productive but it actually makes them mildly less productive” is a fairly common result for ineffective measures aimed at increasing productivity. If you are doing a thing that you believe is supposed to increase productivity, you’ll be inclined to think that it is doing so unless the result is dramatically negative.

mattacular 9 hours ago | parent | prev | next [-]

> And the industry might finally be waking up to the fact that writing code is a small part of producing software.

Typing code and navigating syntax is the smallest part. We're a solid 30 years into industrialized software development. Better late than never?

ponector 4 hours ago | parent | prev | next [-]

>> In fact, there's good evidence that rapid code gen actually slows down other steps in the process, like code review and QA

Luckily there is a solution, quite popular nowadays: lay off the QA team and say developers should test things themselves. Couple that with rubber-stamping merge requests and now you have higher velocity. All development metrics are up!

djeastm 8 hours ago | parent | prev | next [-]

Could it be that while there may not be an increase in the raw number of apps released, there's more functionality within existing apps?

I say this because I've had my own app for years and I am now using AI more and more to add features I wouldn't have attempted before (and a lot of UI enhancements)... but I haven't made a new domain name or anything.

brokensegue 8 hours ago | parent | prev | next [-]

What about this data? https://innovationgraph.github.com/global-metrics/git-pushes

rsynnott 5 hours ago | parent [-]

That only seems compelling if you assume that all git pushes are of equal worth. If you instead assume that LLM codegen tools push people into smaller PRs (because getting them to do anything big reliably is difficult), or into more mistaken PRs which then require further PRs to correct them (anecdotally, it sure _feels_ like everything's getting a lot buggier), that also scans.

rwmj 6 hours ago | parent | prev | next [-]

Amdahl's Law in action.

outside1234 7 hours ago | parent | prev | next [-]

The curse of Amdahl’s Law. You can only speed up a process so much by optimizing one part of it: if writing code is 10% of the process, the largest speedup you can get is 1.11x. Even if it is 50% of the process, the largest speedup is 2x.
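
The arithmetic, as a quick sketch (the 10% and 50% fractions are the examples above, not measurements):

    # Amdahl's Law: overall speedup when a fraction p of the work is
    # accelerated by a factor s; the limit as s -> infinity is 1/(1-p).
    def amdahl(p, s):
        return 1 / ((1 - p) + p / s)

    print(amdahl(0.10, float("inf")))  # ~1.11x if coding is 10% of the work
    print(amdahl(0.50, float("inf")))  # 2.0x if coding is 50%
    print(amdahl(0.10, 5))             # ~1.09x with a merely 5x codegen boost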

scotty79 9 hours ago | parent | prev | next [-]

> If the goal is to increase the rate of software production, there isn't much evidence that AI has moved the needle.

There is only one thing that triggers growth, and it is demand. Where there's a will, there's a way. But if there's no additional will, new ways won't bring any growth.

Basically AI will show up on the first future spike of demand for software but not before.

My software output has increased manyfold. But none of the software AI wrote for me shows up on the internet. It's all internal tools, often one-off, written for a specific task and then discarded.

rsynnott 5 hours ago | parent [-]

I don’t think I buy that; in particular, most Steam games are not demand-driven. They’re mostly created by hobbyists, who do not, and do not expect to, make much money on them.

api 9 hours ago | parent | prev | next [-]

Lol. Rapid “code gen” can slow down the process. Some of the best programmers have negative productivity when joining existing projects if you measure it by lines of code added.

Simplicity is harder than complexity. I want to tattoo this on my forehead.

Huge amounts of poor-quality code are technical debt. Before LLMs you frequently saw this from offshore lowest-bidder operations or teams of very junior devs with no greybeards involved.

AshamedCaptain 9 hours ago | parent [-]

LLMs have indeed increased developer productivity: now 1 developer with LLM can generate the same amount of technical debt as 100 junior developers.

jaxn 7 hours ago | parent [-]

I have been using AI to remove tech debt faster than ever before.

lkjdsklf 6 hours ago | parent [-]

Similar here, but most of what I've removed was pointless AI-generated code that made it through lazy reviewers.

So much dead and useless code generated by these tools... and tens of thousands of lines of worthless tests...

Honestly, I don't mind it that much... my changed-lines count is through the roof relative to my peers, and now that stack ranking is back...

13415 9 hours ago | parent | prev [-]

In my experience of using AI for development, the development speed is exactly the same as before. What you gain by letting AI write some simple rote code, you lose to increased debugging and to time wasted on sycophantic, enthusiastic suggestions that turn out to be totally wrong two hours later. It's kind of infuriating, because AI is so useful when it works, yet wastes your time in the worst way when it doesn't.

le-mark 9 hours ago | parent [-]

Is it really that useful, though? I spend a lot of my time with AI wondering how much of what I’m reading is BS and how to verify it. Some tools make checking easier than others.

rjh29 9 hours ago | parent | next [-]

It's best used as a glorified autocomplete or for refactoring. Autocomplete is useful, but coding time is like 10% of development; upfront design, debugging, discussions, and testing are the bigger issue. Refactoring needs to be checked, as AI can't be trusted, and it can end up costing as much time as it saves if it makes mistakes.

scotty79 8 hours ago | parent [-]

> glorified autocomplete

I feel like people who feel that about AI never really tried it in agentic mode.

I disable AI autocomplete. While it brings some value, it messes with the flow of coding and with normal autocomplete in ways I find annoying. (Although half of the problems would probably disappear if I just rebound it to CapsLock instead of Tab, which is the indent key.)

But when I switch to agentic mode, I start with a blank canvas and just describe the application I want, in natural language. I tell it which libraries to use. Then I gradually evolve it using domain language or software-development language, whichever best fits my desires about code or behavior. There are projects where I don't type any code at all and inspect the code very rarely. I'm basically a project manager and part-time QA while the AI does all the development, including unit testing.

And it uncannily gets things right. At least Gemini 3 Pro (High) does. Sonnet 4.5 occasionally gets things wrong, and the difference in behavior tells me that it's not a fundamental problem. It's something that gets solved with stronger LLMs.

rjh29 6 hours ago | parent [-]

Yes, it works for certain people, but the whole premise of the OP is that it isn't working that well for the majority of people. I wonder if you are a manager already, or aspire to be one. Because the workflow you describe - I've tried it, and I find it exhausting.

scotty79 an hour ago | parent | next [-]

Let me give you an example of my workflow from tonight:

1. I had two text documents containing plain text to compare. One with minor edits (done by AI).

2. I wanted to see what AI changed in my text.

3. I tried the usual diff tools. They diffed line by line and the result was terrible. I searched Google for "text comparison tool but not line-based".

4. As the second search result it found me https://www.diffchecker.com/

5. Initially it did an equally bad job, but I noticed it had a switch, "Real-time diff", which did exactly what I wanted.

6. I got curious what this algorithm was. So I asked Gemini in "Deep Research" mode: "The website https://www.diffchecker.com/ uses a diff algorithm they call real-time diff. It works really good for reformatted and corrected text documents. I'd like to know what is this algorithm and if there's any other software, preferably open-source that uses it."

7. As a first suggestion it listed diff-match-patch from Google. It had a Python package.

8. I started Antigravity in a new folder, ran uv init. Then I prompted the following:

"Write a commandline tool that uses https://github.com/google/diff-match-patch/wiki/Language:-Py... to generate diff of two files and presents it as side by side comparison in generated html file."

[...]

"I installed the missing dependance for you. Please continue." - I noticed it doesn't use uv for installing dependencies so I interrupted and did it myself.

[...]

"This project uses uv. To run python code use

uv run python test_diff.py" - I noticed it still doesn't use uv for running the code so its testing fails.

[...]

"Semantic cleanup is important, please use it." - Things started to show up but it looked like linear diff. I noticed it had a call to semantic cleanup method commented out so I thought it might help if I push it in that direction.

[...]

"also display the complete, raw diff object below the table" - the display of the diff still didn't seem good so I got curious if it's the problem with the diffing code or the display code

[...]

"I don't see the contents of the object, just text {diffs}" - it made a silly mistake by outputting template variable instead of actual object.

[...]

"While comparing larger files 1.txt and 2.txt I notice that the diff is not very granular. Text changed just slightly but the diff looks like deleting nearly all the lines of the document, and inserting completely fresh ones. Can you force diff library to be more granular?

You seem to be doing the right thing https://github.com/google/diff-match-patch/wiki/Line-or-Word... but the outcome is not good.

Maybe there's some better matching algoritm in the library?" - it seemed that while it worked decently on the small tests Antigravity made itself, it was still terrible on the texts I actually wanted to compare, although I saw glimpses of hope because some spots were diffed more granularly. I inspected the code and it seemed to be doing character-level diffing, as per the diff-match-patch example. While it processed this prompt I was searching for a solution myself by clicking around the diff-match-patch repo and demos. I found a potential solution by adjusting the cleanup, but it actually solved the problem by itself by ditching the character-level diffing (which I'm not sure I would have come up with at that point). The diffed object looked great, but when I compared the result to the https://www.diffchecker.com/ output, it seemed they did one minor formatting thing better.

[...]

"Could you use rowspan so that rows on one side that are equivalent to multiple rows on the other side would have same height as the rows on the other side they are equivalent to?" - I felt very clumsily trying to phrase it and I wasn't sure if Antigravity will understand. But it did and executed perfectly.

I didn't have to revert a single prompt and interrupted just two times at the beginning.

So I basically went from having two very similar text files and knowing very little about diffing, to knowing a bit more and having my own local tool that lets me compare texts in a satisfying manner, with beautiful highlighting and formatting, that I can extend or modify however I like, and that mirrors the interesting part of the functionality of the best tool I found online. And all of that in a time span shorter than it took me to write this comment (at least the coding part was; I followed a few wrong paths during my search for a bit).

My experience tells me that even if I could have replicated this by hand (staying motivated is an issue for me), it would most likely have been a multi-day project full of frustration, hunting small errors, and venturing down wrong paths. Python isn't even my strongest language. Instead it was a pleasant and fun evening with occasional jaw drops, feeling so blessed that I live in the SciFi times I read about as a kid (and adult).

Oh, yeah, I didn't use auto-complete once, because it mostly sucks. ;-)
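
For reference, the core of such a tool is small, assuming the standard diff-match-patch Python port (pip install diff-match-patch). This is a minimal sketch, not the code Antigravity generated, and it emits diff_prettyHtml's simple inline view rather than the side-by-side rowspan table described above:

    import sys
    from diff_match_patch import diff_match_patch

    def diff_to_html(path_a, path_b, out_path="diff.html"):
        with open(path_a, encoding="utf-8") as f:
            text_a = f.read()
        with open(path_b, encoding="utf-8") as f:
            text_b = f.read()

        dmp = diff_match_patch()
        diffs = dmp.diff_main(text_a, text_b)  # character-level diff
        dmp.diff_cleanupSemantic(diffs)        # the "semantic cleanup" step mentioned above
        with open(out_path, "w", encoding="utf-8") as f:
            f.write(dmp.diff_prettyHtml(diffs))

    if __name__ == "__main__":
        diff_to_html(sys.argv[1], sys.argv[2])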

scotty79 4 hours ago | parent | prev [-]

I'm a senior, mostly full-stack web developer, but I have also done some desktop apps and dabbled in other platforms as well. I wrote code for over 20 years in many languages. I never had any aspiration to manage anything or anyone, but I freelanced a lot, so I have a good working understanding of every role in the software development process, and I embodied each of them for a bit at one point or another. I never built a complete product of my own volition, though I have a lot of ideas; understanding deeply how much work the implementation would take, I was always discouraged. ADHD was probably also a factor.

I led a team once and wasn't particularly fond of it. For me, AI is a godsend. It's like being a tech lead and product owner but without having to deal with people and the multitude of their idiosyncrasies.

I can understand how AI can't work well for a developer whose work is limited to reading tickets in Jira and implementing them in 3-5 business days, because that's exactly whom AI replaces. I also did that during my career and I liked it, but I can see that if all you do at work is swing a shovel, you might find it hard to incorporate a power digger into your daily work process. But if you can step out a bit, it feels great. You can still keep your shovel and chisel nice corners or whatever in the places where the digger did a less-than-stellar job. But the digger just saves so much work.

Try Antigravity from Google. Not for your daily work. Just to make some stupid side projects that come to your mind, you don't care about, or process some data, make a gui for something, literally whatever, it costs nothing. I hope you'll see what I see.

13415 6 hours ago | parent | prev [-]

IMHO, AI can spot errors in an instant when I give it the files. This can be a real time-saver. I also use it as a glorified autocomplete; it's amazingly good at converting code in a mechanical fashion, like adding some lines to every function or changing some API.

The problem is that it can give very bad general advice in more complex cases, and it seems to be especially clueless about software architecture. I need to learn to force myself to ignore AI advice when my initial reaction is "Hm, I don't know." It seems that my bullshit detector is still better than any AI, even when I know the topic less well.

hintymad 6 minutes ago | parent | prev | next [-]

I'm not sure if people are playing innocent or are really this naive. AI can generate apps that are similar to what people have built many times before. But it's still a far cry for an AI to generate a brand-new system that has no similar predecessors the AI has seen. Besides, a service as sophisticated as Spotify has at least thousands of design points that may require thousands of pages to lay out in a detailed spec. Yet we expect a person to magically build the app with a few paragraphs of prompts? Of course, it's possible for someone to use AI as a helper to generate the code incrementally, but I'd assume that's not what the author means by "AI-Generated Apps", correct? If that is exactly what the author meant, the real question is why: what's the incentive behind the probably multi-person-year effort to replicate an existing system?

ghc 9 hours ago | parent | prev | next [-]

The premise is extremely flawed. If users are able to generate their own apps instead of having to buy them, it shrinks the TAM for those apps. If a meatpacker makes its own CRM, it's not going to put it on an app store or try to sell it!

Building software and publishing software are fundamentally two different activities. If AI tilts the build vs. buy equation too far into the build column, we should see a collapse in the published software market.

The canary will be a collapse in the outsourced development / consulting market, since they'd theoretically be undercut by internal teams with AI first -- they're expensive and there's no economy of scale when they're building custom software for you.

conartist6 9 hours ago | parent | next [-]

Right but now you're talking about 5 or 20 or 100 or 1000 companies building CRM software. They're basically doing the mostly the same work over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and (I would like you to know that I typed every single one of these "and over"s with my very own fingers) and over and over and over and over and over and over and over and over and over and over and I think only the AI companies really benefit from that.

I feel silly explaining this as if it's a new thing, but there's a concept in social organization called "specialization" in which societies advance because some people decide to focus on growing food while some people focus on defending against threats and other people focus on building better tools, etc. A society which has a rich social contract which facilitates a high degree of specialization is usually more virile than a subsistence economy in which every individual has all the responsibilities: food gathering, defense, toolmaking, and more.

I wonder if people are forgetting this when they herald the arrival of a new era in which everyone is the maker of their own tools...

gfdvgfffv 8 hours ago | parent | next [-]

The thing is that empowering individuals to do specialized activities by way of a tool (instead of having to specialize themselves) is also a hallmark of progress? Like, I don’t need a “professional” to wash my clothes, and I don’t need to wash my clothes myself. I use a washing machine.

I don’t need to hire a programmer. I don’t need to be a programmer. I can use a tool to program for me.

(We sure as hell aren’t there yet, but that’s a possibility).

conartist6 7 hours ago | parent | next [-]

Using an AI is still like hiring someone to do programming work for you. It's going to cost money. Why would you waste money? We have sewing machines, but you don't make all your own clothes, do you?

rtp4me 7 hours ago | parent [-]

If the cost of the raw materials and labor were less than the price tag at the store, sure, I would probably opt to make my own clothes. They would fit me perfectly, and I could get the right shade of blue instead of bluish.

In the case of AI, Claude costs $100 or $200/mo for really good coding tasks. This is much less expensive than hiring someone to do the same thing for me.

conartist6 6 hours ago | parent [-]

That sounds like a nice hobby.

rtp4me 6 hours ago | parent [-]

Which part is the hobby? Clothes making or using Claude to generate real production code?

conartist6 6 hours ago | parent [-]

Both. I would note that "real production code" is not necessarily a high bar. For example it does not rule out gross negligence. Most of the companies that outsource their thinking and working to Claude will die of it.

rtp4me 5 hours ago | parent [-]

I have a different point of view. Claude Code is extremely good at creating and maintaining solid, everyday code: Ansible playbooks (used in production), custom dev/ops scripts for managing servers (again, used in production), Grafana dashboards (again, production), comparing database performance between nodes, etc. Just because a person did not hand-write this code does not make it any less production-ready. In fact, Claude reviewed our current Ansible code base and already highlighted a few errors (in the files written by hand). Plus, we get the benefit of having Claude write and execute test plans for each version we create. Well worth the $100/mo we pay.

And to your note that real production code is not necessarily a high bar, what is "real production code"? Does it need to be 10,000 lines of complex C/rust code spread across a vast directory structure that requires human-level thinking to be production ready? What about smaller code bases that do one thing really well?

Honestly, I think many coders here on HN dismiss the smaller, more focused projects, when in reality they are just as important as the large, "real" production projects. Are these considered non-production because the code was not written by hand?

conartist6 4 hours ago | parent [-]

All it sounds like to me is that Ansible is production-ready, Grafana is production ready, the compilers and runtimes you're using are production-ready.

Each of those things is a mountain of complexity compared to the molehill of writing a single script. If you're standing on top of a molehill on top of a mountain, it's not the molehill that's got your head in the clouds.

claytongulick 7 hours ago | parent | prev | next [-]

> (We sure as hell aren’t there yet, but that’s a possibility)

What makes you think so?

Most of the stuff I've read, my personal experience with the models, and my understanding of how these things work all point to the same conclusion:

AI is great at summarization and classification, but totally unreliable with generation.

That basic unreliability seems fundamental to LLMs. I haven't seen much improvement in the big models, and a lot of the researchers I've read are theorizing that we're pretty close to maxing out what scaling training and inference will do.

Are you seeing something else?

gfdvgfffv an hour ago | parent | next [-]

I have used Claude to write a lot of code. I am, however, already a programmer, one with ~25 years of experience. I’ve also led organizations of 2-200 people.

So while I don’t think the world I described exists today — one where non-programmers, with neither programming nor programmer-management experience, use these tools to build software — I don’t a priori disbelieve its possibility.

senordevnyc 7 hours ago | parent | prev [-]

This seems really vague. What does "totally unreliable" mean?

If you mean that a completely non-technical user can't vibe code a complex app and have it be performant, secure, defect-free, etc, then I agree with you. For now. Maybe for a long time, we'll see.

But right now, today, I'm a professional software engineer with two decades of experience and I use Cursor and Opus to reliably generate code that's on par with the quality of what I can write, at least 10x faster than I can write it. I use it to build new features, explore the codebase, refactor existing features, write documentation, help with server management and devops, debug tricky bugs, etc. It's not perfect, but it's better than most engineers I've worked with in my career. It's like pair programming with a savant who knows everything, some of which is a little out of date, who has intermediate level taste. With a tiny bit of steering, we're an incredibly productive duo.

conartist6 an hour ago | parent [-]

I know the tech is here to stay, and the best parts of it are where it provides accessibility and tears down barriers to entry.

My work is to make sure that you don't need to reach for AI just because human typing speed is limited.

I love to think in terms of instruments versus assistants: an assistant is unpredictable but easy to use. It tries to guess what you want. An instrument is predictable but relatively harder to use. It has a skill curve and perhaps a skill cap. The purpose of an instrument is to directly amplify the expressive power of its user or player through predictable, delicately calibrated responses.

wizzwizz4 7 hours ago | parent | prev [-]

Your washing machine can only deal with certain classes of clothing. It will completely destroy others, and has no way to determine what clothing has been put into it. Meanwhile, the average untrained-but-conscientious human will, at worst, damage a small portion of an item of clothing before spotting the problem and acting to mitigate it. (If the clothing is "absolutely must never come into contact with water" levels of dry-clean only, they might still trash the whole item, but they aren't likely to make the same mistake twice.)

Programming is far more the latter kind of task than the former. Data-processing or system control tasks in the "solve ordinary, well-specified problem" category are solved by executing software, not programming.

singpolyma3 7 hours ago | parent | prev | next [-]

It's not only AI companies that benefit; the companies themselves benefit too, from getting software that actually meets (more of) their needs rather than whatever some dev imagined they might need.

zkmon 8 hours ago | parent | prev | next [-]

That's right. In fact, I see more outsourcing happening, due to risk delegation and complexity management. AI would only make humans lazier and more risk-averse. The complexity of regulations, government reach, and security risks would only increase. Risk can't be distributed to AI employees (agents). A supervisor of AI agent populations can't be held responsible for all the bugs and complexity in an AI-generated product.

danaris 7 hours ago | parent | prev [-]

God, yes.

I see so many people quote that damnable Heinlein quote about specialization being for insects as if it's some deep insight worth making the cornerstone of your philosophy, when in fact a) it's the opinion of a character in the book, and b) it is hopelessly wrong about how human beings actually became as advanced as we are.

We're much better off taking the Unix philosophy (many small groups of people each getting really really good at doing very niche things, all working together) to build a society. It's probably still flawed, but at least it's aimed in the right direction.

blazespin 19 minutes ago | parent | prev [-]

Nobody collapses, everything just shrinks.

And we're seeing that in the labor numbers.

Sometimes things are harder to see because the chipping away happens everywhere, at the margins.

jackfranklyn 10 hours ago | parent | prev | next [-]

davydm nails it. The gap isn't in generating code - it's in everything else that makes software actually work.

I've been building accounting tools for years. AI can generate a function to parse a bank statement CSV pretty well. But can it handle the Barclays CSV that has a random blank row on line 47? Or the HSBC format that changed last month? Or the edge case where someone exports from their mobile app vs desktop?
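
For a flavor of what "handle it" means in code, a hypothetical sketch (the blank-row and repeated-header quirks are from the examples above; the column layout is invented, not a real Barclays/HSBC format):

    import csv
    from io import StringIO

    def parse_statement(raw: str):
        rows = []
        for lineno, row in enumerate(csv.reader(StringIO(raw)), start=1):
            if not row or all(not cell.strip() for cell in row):
                continue  # the random blank row on line 47
            if row[0].lower().startswith(("date", "transaction")):
                continue  # repeated header rows from some exporters
            try:
                date, desc, amount = row[0], row[1], float(row[-1].replace(",", ""))
            except (IndexError, ValueError):
                raise ValueError(f"unrecognised row {lineno}: {row!r}")
            rows.append((date, desc, amount))
        return rows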

That's not even touching the hard stuff - OAuth token refresh failures at 3am, database migrations when you change your mind about a schema, figuring out why Xero's API returns different JSON on Tuesdays.

The real paradox: AI makes starting easier but finishing harder. You get to 80% fast, then spend longer on the last 20% than you would have building from scratch - because now you're debugging code you don't fully understand.

estimator7292 4 hours ago | parent | next [-]

When I got hired at my current job, they handed me an AI-generated app. It did a pretty reasonable job on the frontend, I think (I'm not a React guy), but the backend was a disaster. Part of it involved parsing a file, and they had somehow fed the AI a test file with the first 20 bytes truncated. I could tell the AI had tried hard to force the parser to match the file spec, and it ended up inserting checks for magic byte values that made no sense.

It took me a few days to realize what was happening. Once I got some good files it was just a couple hours to understand the problem. Then three weeks untangling the parser and making it actually match the spec.

And then three months refactoring the whole backend into something usable. It would have taken less time to redo it from scratch. If I'd known then what I know now, I would have scrapped the whole project and started over.

dinfinity 3 hours ago | parent | next [-]

Did you use AI to help you understand the code and what it was doing (incorrectly)?

vaylian 4 hours ago | parent | prev [-]

The AI can generate a lot of Chesterton's Fences. It's difficult to figure out why they are there and whether they are needed.

nunez 6 hours ago | parent | prev | next [-]

LLMs (well, most of the frontier and popular open-source models) are actually quite good at abiding by weird formats like this given that your prompt describes them clearly enough. The real problem is that you'll have to manually spot-check the results, as LLMs are also very good at adding random incorrectness. This can take just as long (or longer!) than writing the code + tests yourself.

dimitri-vs 6 hours ago | parent | prev | next [-]

As someone that's currently building accounting (and many many other) tools for myself: yes, it can.

But with a big fat asterisk: you (1) need to make it aware of all relevant business logic, (2) need to give it all the necessary tools to iterate and debug, and (3) need to have significant experience with the strengths and weaknesses of coding agents.

To be clear, I'm talking about CLI agents like Claude Code, which IMO are apples and oranges vs ChatGPT (and even Cursor).

rtp4me 7 hours ago | parent | prev | next [-]

Interesting, but isn't the real issue here how external systems can/will update their output at random? Given you are probably a domain expert in this situation, you can easily solve the issue based on past experience. But, what if a junior person encountered these errors? Do you think they have enough background to solve these issues faster than an AI tool?

samsullivan 4 hours ago | parent | prev | next [-]

All of these problems are better articulated at the level you just explained them at. The code for these issues is convoluted and is only of use when an entity (human or not) can actually manipulate the symbolic text that achieves the task. A random OAuth stub is of zero use to even the most skilled programmers without documentation of the contracts and invariants. Bits in a file are just a means.

KellyCriterion 10 hours ago | parent | prev | next [-]

No, it can't handle those perfectly - but it can help you develop the required code to do that correctly much faster :-)

garden_hermit 10 hours ago | parent [-]

This just returns us to the question — if it makes all these things so easy and fast, where are the AI-generated apps? Where is the productivity boost?

thunky 9 hours ago | parent | next [-]

How do you expect this boost will appear?

People start announcing that they're using AI to do their job for them? Devs put "AI generated" banners all over their apps? No, because people are incentivised to hide their use of AI.

Businesses, on the other hand, announce headcount reductions due to AI and of course nobody believes them.

If you're talking about normal people using AI to build apps, those apps are all over the place, but I'm not sure how you would expect to find them unless you're looking. It's not like we really need that many new apps right now, AI or not.

callc 9 hours ago | parent | next [-]

Any metric that measures the amount of software delivered.

The link at the bottom of the post (https://mikelovesrobots.substack.com/p/wheres-the-shovelware...) goes over this exactly.

> Businesses, on the other hand, announce headcount reductions due to AI and of course nobody believes them.

It’s an excuse. It’s the dream peddled by AI companies: automate intelligence so you can fire your human workers.

Look at the graphs in the post, then revisit claims about AI productivity.

The data doesn’t lie. AI peddlers do.

ogogmad 8 hours ago | parent [-]

Given the amount of progress in AI coding in the last 3 years, are you seriously confident that AI won't increase programming productivity in the next three?

This reminds me of the people who said that we shouldn't raise the alarm when only a few hundred people in this country (the UK) got Covid. What's a few hundred people? A few weeks later, everyone knew somebody who did.

rsynnott 5 hours ago | parent | next [-]

Okay, so if and when that happens, get excited about it _then_?

Re the Covid metaphor: that only works because Covid was the pandemic that did break out. It is arguably the first one in a century to do so. Most putative pandemics actually come to very little (see SARS1, various candidate pandemic flus, the mpox outbreak, various Ebola outbreaks, and so on). Not to say we shouldn’t be alarmed by them, of course, but “one thing really blew up, therefore all things will blow up” isn’t a reasonable thought process.

wizzwizz4 7 hours ago | parent | prev [-]

AI codegen isn't comparable to a highly-infectious disease: it's been a lot more than a few weeks. I don't think your analogy is apt: it reads more like rhetoric to me. (Unless I've missed the point entirely.)

anorwell 6 hours ago | parent [-]

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

From my perspective, it's not the worst analogy. In both cases, some people were forecasting an exponential trend into the future and sounding an alarm, while most people seemed to be discounting the exponential effect. Covid's doubling time was ~3 days, whereas the AI capabilities doubling time seems to be about 7 months.

I think disagreement in threads like this can often be traced back to a miscommunication about the state of things today (or historically) versus the trajectory. Skeptics are usually saying: capabilities are not good _today_ (or worse: capabilities were not good six months ago when I last tested them; see the OP, which is pre-Opus 4.5). Capabilities forecasters are saying: given the trend, what will things be like in 2026-2027?

wizzwizz4 5 hours ago | parent [-]

The "COVID-19's doubling time was ≈3 days" figure was the output of an epidemiological model, based on solid and empirically-validated theory, based on hundreds of years of observations of diseases. "AI capabilities' doubling time seems to be about 7 months" is based on meaningless benchmarks, corporate marketing copy, and subjective reports contradicted by observational evidence of the same events. There's no compelling reason to believe that any of this is real, and plenty of reason to believe it's largely fraudulent. (Models from 2, 3, 4 years ago based on the "it's fraud" concept are still showing high predictive power today, whereas the models of the "capabilities forecasters" have been repeatedly adjusted.)

bccdee 9 hours ago | parent | prev | next [-]

The article points to a couple of good signals to watch for: (1) an increase in the rate at which apps are added to app stores, and (2) reports of companies forgoing large SaaS dependencies and just building them themselves. If software is truly a commodity, why aren't people making their own Jiras and Figmas and Salesforces? If we can really vibe something production-ready in no time, why aren't industry-standard tools being replaced by custom vibe clones?

thunky 7 hours ago | parent [-]

> If we can really vibe something production-ready in no time, why aren't industry-standard tools being replaced by custom vibe clones?

That's a silly argument. Someone could have made all of those clones before, but didn't. Why didn't they? Hint: it's not because it would have taken them longer without AI.

I feel like these anti-AI arguments are intentionally being unrealistic. Just because I can use Nano Banana to create art does not mean I'm going to be the next Monet.

bccdee 6 hours ago | parent [-]

> Why didn't they? Hint: it's not because it would have taken them longer without AI.

Yes it is. "How much will this cost us to build" is a key component of the build-vs-buy decision. If you build it yourself, you get something tailored to your needs; however, it also costs money to make & maintain.

If the cost of making & maintaining software went down, we'd see people choosing more frequently to build rather than buy. Are we seeing this? If not, then the price of producing reliable, production-ready software likely has not significantly diminished.

I see a lot of posts saying, "I vibe-coded this toy prototype in one week! Software is a commodity now," but I don't see any engineers saying, "here's how we vibe-coded this piece of production-quality software in one month, when it would have taken us a year to build it before." It seems to me like the only software whose production has been significantly accelerated is toy prototypes.

I assume it's a consequence of Amdahl's law:

> the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used.

Toy prototypes proportionally contain a much higher amount of the rote greenfield scaffolding that agents are good at writing. The stickier problems of brownfield growth and robustification are absent.

garden_hermit 9 hours ago | parent | prev | next [-]

I would expect a general rise in productivity across sectors, but with the largest concentrated in the tech sector given the focus on code generation. A proliferation of new apps, new features, and new functionalities at a quicker pace than pre-AI. Given the hype, one would expect an inflection point in the productivity of this sector, but it mostly just appears linear.

I am very willing to believe that there are many obscure and low-quality apps being generated by AI. But this speaks to the fact that the mere generation of code is not productive; generating quality applications requires other forms of labor that are not presently satisfied by generative AI.

thunky 4 hours ago | parent [-]

> A proliferation of new apps, new features, and new functionalities at a quicker pace than pre-AI

IMO you're not seeing this because nobody is coming up with good ideas, because we're already saturated with apps. And apps are already releasing features faster than anyone wants them. How many app reviews have you read that say: "Was great before the last update"? Development speed and ability aren't what's holding us back from great software releases.

rsynnott 5 hours ago | parent | prev [-]

I would expect a _big_ increase in the production of amateur/hobbyist games. These aren’t demand-driven; they’re basically passion projects, generally. And that doesn’t seem to be happening; Steam releases are actually modestly _down_, say.

cheevly an hour ago | parent [-]

Asset generation is hard.

KellyCriterion 9 hours ago | parent | prev [-]

It's not productivity-boosting in the sense of "you can leave 2h earlier", but in the sense of "you get more done faster", resulting in more stuff created. That's my general assumption/approach for "using AI to code".

When it comes to "AI-generated apps" that work out of the box, I do not believe in them - I think for creating a "complete" app, the tools are not good enough (yet?). Context & co is required, esp. for larger apps and to connect the building blocks - I do not think there will be any remarkable apps coming out of such a process.

I see the AI tools as just a junior developer who will create data structures, functions, etc. when I instruct it to do so: it assists in code creation & optimization, but not in "complete app architecture" (except maybe as a sparring partner).

senordevnyc 7 hours ago | parent | prev | next [-]

> AI makes starting easier but finishing harder. You get to 80% fast, then spend longer on the last 20% than you would have building from scratch - because now you're debugging code you don't fully understand.

I run a SaaS solo, and that hasn't really been my experience, but I'm not vibe coding. I fully understand all the code that my AI writes when it writes it, and I focus on sound engineering practices, clean interfaces, good test coverage, etc.

Also, I'm probably a better debugger than AI given an infinite amount of time and an advantage in available tools, but if you give us each the same debugging tools and see who can find and fix the bug fastest, it'll run circles around me, even for code that I wrote myself by hand!

That said, as time has gone on, the codebase has grown beyond my ability to keep it all in my head. That made me nervous at first, but I've come to view it just like pretty much any job with a large codebase, where I won't be familiar with large parts of the codebase when I first jump into them, because someone else wrote it. In this respect, AI has also been really helpful to help me get back up to speed on a part of the codebase I wrote a year ago that I need to now refactor or whatever.

nostrademons 9 hours ago | parent | prev [-]

I've heard AI coding described as "It makes the first 80% fast, and the last 20% impossible."

...which makes it a great fit for executives that live by the 80/20 rule and just choose not to do the last 20%.

sunrunner 8 hours ago | parent | prev | next [-]

Perhaps because, amongst the other solid reasons already posted:

Demand is the real bottleneck: New tools expand who can ship, but they don’t expand how many problems are worth solving or audiences worth serving. Adoption tends to concentrate among "lead users" and not "everyone".

App store markets are power-law distributed (no citations sorry, it's just my belief): A tiny slice of publishers captures the most downloads/revenue. That’s discoverability, not "lack of builders".

Attention and distribution are winner-take-most: Even if creation is cheap, attention is scarce.

The hidden (recurring) cost as other commenters point out is maintenance: Tooling slashes first release cost but not lifecycle cost.

Problem-finding outweighs problem-solving: If the value of your app depends on users or data network effects, you still face the "cold start problem".

"Ease" can change the meaning of the signal: If anyone can press a button and "ship an app" the act of shipping stops signaling much. Paradoxically, effort can increase personal valuation (the IKEA effect), and a lower cost to the creator as seen from the outside kills the (Zahavi) signal.

And finally, maybe people just don't actually want to use and/or make apps that much? That's not to say that good apps aren't valuable. The ubiquity of platform app stores implies that there's some huge demand, but if most app usage is concentrated among a small number of genuine day-to-day problem-solving tools, dominated by heavy hitters that have been around for years, an influx of new things perhaps isn't that interesting.

falcor84 9 hours ago | parent | prev | next [-]

But this is how disruptive innovation works. I recall that even around 2005, after digital camera sales overtook the sales of film cameras, people were still asking "If digital is so good, why aren't the professional photographers using them?" and concluding that digital photography is just a toy that will never really replace print.

vouwfietsman 9 hours ago | parent | next [-]

This is not really the same level of argument. The post is arguing against the idea that software is incredibly cheap to make through AI right now, not that AI cannot ever make complete software products from scratch in the future.

callc 9 hours ago | parent | prev | next [-]

But what?

Give some concrete examples of why current LLMs/AI are a disruptive technology like digital cameras.

That’s the whole point of the article. Show the obvious gains.

JW_00000 9 hours ago | parent [-]

falcor's point is that we will see this in 5 to 10 years.

falcor84 8 hours ago | parent | next [-]

Exactly. I'm arguing that what we should be focused on at this relatively early stage is not the amount of output but the rate of innovation.

It's important to note that we're now arguing about the level of quality of something that was a "ha, ha, interesting" in a sidenote by Andrej Karpathy 10 years ago [0], and then became a "ha, ha, useful for weekend projects" in his tweet from a year ago. I'm looking forward to reading what he'll be saying in the next few years.

[0] https://karpathy.github.io/2015/05/21/rnn-effectiveness/

[1] https://x.com/karpathy/status/1886192184808149383?s=20

callc 9 hours ago | parent | prev | next [-]

Why so long?

If AI had such obvious gains, why not accelerate that timeline to 6 months?

Take the average time to make a simple app, divide it by the supposed productivity speedup, and that should be the timeframe in which we see a wave of AI-coded apps.

As time goes on, the only conclusion we can reach (especially looking at the data) is that the productivity gains are not substantial.

amelius 8 hours ago | parent [-]

> Why so long?

Because in the beginning of a new technology, the advantages of the technology benefit only the direct users of the technology (the programmers in this case).

However, after a while, the corporations see the benefit and will force their employees into an efficiency battle, until the benefit has shifted mostly away from the employees and towards their bosses.

After this efficiency battle, the benefits will become observable from a macro perspective.

spit2wind 8 hours ago | parent | prev | next [-]

GPT-3 was released in May 2020. It's been nearly 5 years.

falcor84 6 hours ago | parent | next [-]

Why is GPT-3 relevant? I can't recall anyone using GPT-3 directly to generate code. The closest would probably be Tabnine's autocompletion, which I think first used GPT-2, but I can't recall any robust generation of full functions (let alone programs) before late 2022 with the original GitHub Copilot.

lukeschlather 7 hours ago | parent | prev [-]

The first digital camera was released in around 1975? Digital cameras overtook film camera sales in 2005, 30 years later.

amelius 8 hours ago | parent | prev [-]

This gives me hope that we will finally see some competition to the Android/iOS duopoly.

rsynnott 4 hours ago | parent | prev | next [-]

That feels a bit different. By 2005 it was obvious that digital cameras would, at some point in the future, be good enough to replace most high-end film camera use, unless Moore’s Law went out the window entirely. So it was highly likely that digital cameras would take over. There is no inevitability to llm coding tools hitting that ‘good enough’ state.

And they’re not even really talking about the future. People are making extremely expansive claims about how amazing llm coding tools are _right now_. If these claims were correct, one would expect to see it in the market.

ori_b 8 hours ago | parent | prev | next [-]

It's asking "If digital is so good, why aren't there more photos?"

falcor84 6 hours ago | parent [-]

Exactly, let's take this analogy.

TFA is only looking at releases on app stores (rather than, e.g., the number of GitHub repos, which has been growing a lot). The analogue would be the number of photos being published around 2005, which I believe had been pretty steady. It's only with the release of smartphones and Facebook a few years afterwards that we started seeing a massive uptick in the overall number of photos out there.

exasperaited 9 hours ago | parent | prev | next [-]

It is an aside, but: I am not sure I encountered any professional photographers saying that in 2005, FWIW; only non-serious photographers were still prattling on about e.g. the mystical and conveniently malleable "theoretical" resolution of film being something that would prevent them from ever switching.

There were still valid practical and technical objections for many (indeed, there still is at least one technical objection against digital), and the philosophical objections are still as valid as they were (and if you ask me, digital has not come close to delivering on its promise to be less environmentally harmful).

But every working press photographer knew they would switch when there were full-frame sensors within budget-planning range that shot, without quality compromise, at the ISO speeds they needed, or when the organisations they worked for completed their own digital transition. Every working fashion photographer knew that viable cameras already existed.

ETA: Did it disrupt the wider industry? Obviously. Devastatingly. For photographers? It lowered the barrier to entry and the amount they could charge. But any working photographer had encountered that at least once (autofocus SLRs did the same thing, minilabs did the same thing, E6 did it, etc.), and in many ways it was a simple enabling technology, because their workflows were also shifting towards digital, so it was just the arrival of a DDD workflow at some level.

Putting aside that aside, I am really not convinced your comparison isn't a category error, but it is definitely an interesting one for a couple of reasons I need to think about for a lot longer.

Not least that digital photography triggered a wave of early retirements and career switches, that I think the same thing is coming in the IT industry, and that I think those retirements will be much more damaging. AI has so radically toxified the industry that it is beginning to drive people with experience and a decade or more of working life away. I consider my own tech retirement to have already happened (I am a freelancer and I am still working, but I have psychologically retired, and very early; I plan to live out my working life somewhere else, and help people resisting AI to continue to resist it).

newsoftheday 6 hours ago | parent [-]

> it is beginning to drive people with experience and a decade or more of working life away.

I was planning to work full-time until my mid-60s, but retired this year because of, as you put it, AI toxification.

binary132 8 hours ago | parent | prev [-]

It’s actually comical watching the AI shills trot out the same points in every argument about the utility of LLMs. Now you’re supposed to say that after 10 years of digital, the only people sticking with film were the “curmudgeons”.

I for one hail the curmudgeons. Uphold curmudgeon thought.

dunsany 9 hours ago | parent | prev | next [-]

Most of ours are internal-only because we don't need or want to release them to the public. Sometimes there isn't much of a UI - they're one-off vibe-coded apps for specialized functions within our organization, meant for a small number of people. Beginning to think of the vibe-coded apps as akin to spreadsheets with lots of macros.

jcims 9 hours ago | parent | next [-]

This is my experience.

I don’t trust the process enough to commit to it for user facing services, but I regularly find toy use cases and itches to scratch where the capability to crank out something useful in 20 minutes has been a godsend.

>Beginning to think of the vibe-coded apps as akin to spreadsheets with lots of macros.

This resonates.

SecretDreams 9 hours ago | parent | prev [-]

> Beginning to think of the vibe-coded apps as akin to spreadsheets with lots of macros.

These things normally die a sigmoidal death after the creator changes jobs.

NitpickLawyer 9 hours ago | parent [-]

Maybe not anymore, as it's pretty simple to take a "vibe-coded" repo and have a modern agent change it according to your specs. Re-vibe it from time to time, and there goes your technical debt. If the app could be vibed at a given date, chances are it will stay inside the capabilities of future models/agents/etc.

zkmon 9 hours ago | parent | prev | next [-]

>> Where is everybody?

The AI businesses are busy selling AI to each other. Non-tech businesses are busy spending their AI budgets on useless projects. Everybody is clueless, and like - let's jump in just like we did for blockchain, because we don't want to lose out or be questioned on our tech adoption.

api 9 hours ago | parent [-]

It's much more like dot-com than crypto. Like dot-com, there is a "there" there, but nobody knows what to do yet and there's a ton of speculative fluff. Crypto was more pure fluff, possibly the most vacuous bubble we've had in recent memory.

The best AI companies will be founded a year after the crash.

bityard 10 hours ago | parent | prev | next [-]

I guess the author doesn't hang out on Reddit much. A lot of the tech hobbyist subs I used to enjoy are now nothing but a flood of self-promotional marketing posts for vibe coded apps.

nunez 6 hours ago | parent | prev | next [-]

This paradox almost doesn't matter.

Non-technical business stakeholders who own requirements for line-of-business apps can generate *working* (to them) end-to-end prototypes just by typing good-enough English into a text box.

And not just apps! Anything! Spreadsheets, fancy reports, customer service, you name it --- type what you need into the box and wait for it to vend what used to take an entire team days or weeks. Add "ultra-think really hard" to the prompt to trigger the big-boy models and get the "really good" stuff. It sounds a little 1984, but whatever, it works.

Design? Engineering? QA? All of those teams are just speed bumps now. Sales, Legal, Core business functions, and a few really experienced nerds is all you need. Which has always been the dream for many business owners.

It doesn't matter that LLMs provide 60% of the solution. 60% > 0%, and that's enough to justify offshoring everything and everyone in the delivery pipeline to cheaper labor (or use that as a threat to suppress wages and claw back workers' rights), including senior engineers who are being increasingly pressured to adopt these tools.

A quick jaunt through /r/cscareerquestions on Reddit is enough to see that this train blasted off from its station with a full tank of fuel for the long haul.

There's always the possibility that several really bad things happen that makes the entire industry remember that software engineering is an actual discipline and that treating employees well is generally a good thing. For now, all of this feels permanent, and all of it sucks.

NicuCalcea 10 hours ago | parent | prev | next [-]

I stumble upon AI-generated websites and apps quite frequently. They look like crap, but they're there.

xwindowsorg 9 hours ago | parent [-]

I think the blog post also hints at the reason why the "walled garden" (loosely speaking) approach of Replit won't work. Give people the Lego pieces and the freedom to assemble them as they deem fit. Give developers fluid water + cement mix with which they can fill their puddle and solve their problems.

alangibson 7 hours ago | parent | prev | next [-]

AI code generation has something important in common with 3D printed houses. Both optimize the easy part.

3D printing can only manage the underlying structure of the house. It doesn't cover the finishing work, which is much harder on the nerves and pocketbook.

Likewise, AI code generation is only useful for the actual implementation. That's maybe 20% of the work. Coordination with stakeholders, architecture, etc are much harder on the nerves and pocketbook.

alexsmirnov 4 hours ago | parent | prev | next [-]

A lot of the discussion around vibe coding focuses on its flaws: awful architecture, performance problems, security holes, lack of maintainability, bugs, and low code quality. All correct, but none of it matters if:

- you create a small utility that covers only the features you need. Since studies suggest any individual uses less than 20% of a piece of software's functionality, your tool only has to cover the 10-20% that matters to you

- it only runs locally, on the user's computer or phone, and never has more than one customer. Performance, security, and compliance don't matter

- the code lives next to the application and is small enough to fix any bug instantly, in a single AI agent run

- as the sole user, you don't care about design, UX, or marketing. Doing the job is all that matters

It means the majority of vibe-coded applications run under the radar, used only by a few individuals. I can see it myself: I have a bunch of vibe-coded utilities that were never intended for a broad audience. And many of my friends and customers mention the same: "I vibe coded a utility that does ... for me". This has big consequences for software development: the area for commercial development shrinks, and nothing that can be replaced by a small local utility retains market value.

pico303 6 hours ago | parent | prev | next [-]

One of my coworkers (not a developer, but familiar enough with technology) asked about a new "low code" app and wondered if our engineering team used that. My response, I think, provides something of an answer to this paradox: it's not writing the code that's time-consuming; it's the requirements gathering. It's not enough to generate code or even have the LLM build or deploy the software. It's all the knowledge and experience around crafting an app that's likely preventing the shovelware. Turns out code is only a part of our job.

Also, why is this the “Gorman” Paradox? He literally links to the article that I remember as first proposing this paradox (though I don’t think the original author called it a paradox). This author just summarizes that article. It should be the Judge Paradox, after the original author.

MK2k 5 hours ago | parent | prev | next [-]

Vibe-coded CRMs do exist. They are not open to the public, though, as they are just used by the company that built them, tailored to its needs.

Same with more complex systems: entire shop systems with payment integration, ERP and all – heavily supported by LLM code tools with a 3-10x productivity boost, done by the CTO alone with no additional developers needed. They exist: the shop greets its customers, and all you see is Vue and Tailwind as the tech stack where Shopify could've been. It's now completely owned by the company selling things (they just don't sell the software).

Jordan-117 5 hours ago | parent [-]

This has been similar to my own personal experience. I've generated a number of scripts and browser extensions to do various things perfectly fitting my use cases, but they're far too niche to be worth publishing.

patapong 9 hours ago | parent | prev | next [-]

I would expect that most apps generated today contain at least some AI-generated code, whether through chat completion or agentic use. But I think such tools currently mostly support people who are already able to create apps.

As others have said, I think a lot of the difficulty in creating an app lies in making the numerous choices needed to make the app function, not necessarily in coding. You need "taste" and the ability to push through uncertainty and complexity, which are orthogonal to using AI in many cases.

chrsw 9 hours ago | parent | prev | next [-]

I work on deeply embedded software that doesn't have what you'd commonly think of as a "UI". So, unless there are bugs or we ship faster or something like that, users will never have any idea how much of our code is AI generated.

But it's happening.

pgt 7 hours ago | parent | prev | next [-]

With the help of AI, I made EACL (Enterprise Access ControL), a situated ReBAC authorization library based on SpiceDB, built in Clojure and backed by Datomic: https://github.com/theronic/eacl

EACL replaced SpiceDB at work for us, and would not have been possible without AI.
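For anyone who hasn't met ReBAC before: permissions are derived from a graph of relationship tuples rather than from static roles. A minimal sketch of the idea in Python (illustrative only - not EACL's or SpiceDB's actual API; all names are made up):

    # Minimal ReBAC sketch (illustrative; not EACL's or SpiceDB's API).
    # Relationship tuples have the shape (subject, relation, resource).
    relations = {
        ("alice", "owner", "doc:readme"),
        ("bob", "viewer", "doc:readme"),
    }

    # Assumed schema: which relations grant which permissions.
    grants = {
        "read": {"owner", "viewer"},
        "write": {"owner"},
    }

    def can(subject, permission, resource):
        # A permission is granted if any granting relation tuple exists.
        return any((subject, rel, resource) in relations
                   for rel in grants.get(permission, ()))

    assert can("alice", "write", "doc:readme")
    assert can("bob", "read", "doc:readme")
    assert not can("bob", "write", "doc:readme")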

anovick 8 hours ago | parent | prev | next [-]

Already-established top-of-the-market apps will not be dethroned overnight.

For them, it will be a slow death (e.g. similar to how Figma unseated Adobe in the digital design space).

As for new app markets, you will surely see (compared to past generations) smaller organizations able to achieve market dominance, often against much better-endowed competitors. And this will be the new normal.

II2II 9 hours ago | parent | prev | next [-]

I think the answers are fairly simple.

If you're talking about internally developed software: AI generated apps suffer from the same pitfalls.

If you're talking about third-party alternatives: AI generated apps suffer from the same pitfalls.

Bonus reasons: advertising your product as AI generated will likely be seen as a liability. It tends to be promoted as a means of developing software more rapidly or for eliminating costly developers. There is relatively little talk about the quality of the software, and most of the talk we do see is from developers who have a lot to lose from the shift to AI generated software. (I'm not saying they're wrong, just that they are the loudest because they have the most to lose.)

jmkni 8 hours ago | parent | prev | next [-]

I've been experimenting with "vibe coding" recently, and it's been interesting.

I was playing around with v0, and was able to very quickly get 'kinda sorta close-ish' to an application I've been wanting to build for a while, it was quite impressive.

But then the progress slowed right down. I experienced that familiar thing many others have, where, once you get past a certain level of complexity, it's breaking things, removing features, and re-introducing bugs, all while burning through your credits.

It was at this point I remembered I'm actually a software engineer, so pushed the code it had written to github and pulled it down.

Total mess. Massive files, duplicated code all over the place, just a shitshow. I spent a day refactoring it so I could actually work with it, and am continuing to make progress on the base it built for me.

I think you can vibe code the basis of something really quickly, but the AI starts to get confused and trip over its own shitty code. A human would take a step back and realise they need to do some refactoring, but AI just keeps adding to the pile.

It has saved me a good few months of work on this project, but to really get to a finished product it's going to be a few more months of my own work.

I think a lot of non-technical people are going to vibe-code themselves to ~60-70% of the way there and then hit a wall when the AI starts going around in circles, and they have no idea how to work with the generated code themselves.

brazukadev 7 hours ago | parent [-]

> I think you can vibe code the basis of something really quickly, but the AI starts to get confused and trip over its own shitty code

Or you can get back to vibecoding after fixing things and establishing a good base. Then it helps you go faster, until you feel like understanding and refactoring things again because it got some things wrong. It is a continuous process.

jmkni 7 hours ago | parent [-]

Yeah that's totally fair

jrm4 8 hours ago | parent | prev | next [-]

Oh, AI could hurt the concept of "apps" badly and still be good for people.

Strong chance that the push in innovation in this space doesn't get reflected in "apps sold or downloaded," and in fact hurts this metric (and perhaps people buying and selling code in general) -- but still results in "people and organizations solving their own problems with code."

nayroclade 9 hours ago | parent | prev | next [-]

I'd be wary about interpreting a simple trend of App Store / Google Play apps without other context. Both are walled gardens, with developer fees and review processes managed by gatekeepers with an incentive and an ability to artificially control the rate of new apps. I would ask: What is the trend of app store review waiting times? What is the trend of rejections? What is the trend of delistings?

slrainka 9 hours ago | parent | prev | next [-]

I think there is a different way to look at it. My personal experience is that enterprises at the forefront of adopting new ways of working are now much more comfortable taking risks with building applications and insourcing SaaS functionality. The amount of custom software being built is actually increasing, and the codebases are getting more complex. Is there a price to pay down the road? Maybe.

tylerchilds 9 hours ago | parent [-]

The definition of technical debt:

We take risks today in the hope that these decisions will make enough money to offset their labor cost.

AI promises to eliminate labor, so businesses correctly identify AI risks as free debt.

dns_snek 8 hours ago | parent [-]

> AI promises to eliminate labor, so businesses correctly identify AI risks as free debt.

These aren't promises, they're just hopes and dreams. Unless these businesses happen to be signing contracts with AI providers to replace their labor in a few years, they're incorrectly identifying AI risks as free debt.

tylerchilds 2 hours ago | parent [-]

I absolutely agree with you, and I'm hyperbolizing to highlight exactly how incorrect that "promise" is.

Realistically, the execs see it as either them or their subordinates, and the idea of a captain dying with the ship is not regarded as noble among themselves. So they'll sacrifice the crew, if only for one more day at sea.

krackers 2 hours ago | parent | prev | next [-]

Another conspicuous thing is the lack of vibe-coded PRs on mature open-source projects. Maybe it's because these projects have erected policies limiting AI contributions, but given the high scores on SWE-bench, you'd expect _something_ to come of it?

And yet in real world use you get stuff like https://github.com/scikit-learn/scikit-learn/pull/32101 (not to pick on that particular author since I pulled it completely at random. But it's notable that this also was not a fly-by-night PR by a complete newb. The author seems to have reasonable credentials and programming experience, and this is a fairly "self-contained" PR limited to just one function. Yet despite those advantages, the LLM output couldn't pass muster.)

ant512 7 hours ago | parent | prev | next [-]

Meta spends enormous amounts of money hiring AI experts instead of just using AI to improve its AI. AI just isn't there yet.

But when AI can be used to improve itself, that's when things get interesting.

timonoko 11 hours ago | parent | prev | next [-]

Gemini CLI made me a Linux G-code viewer totally on its own. It could view the result itself, so no feedback was needed. I only provided a test G-code file.

https://github.com/timonoko/Plotters/blob/main/showgcode_inc...

jordemort 9 hours ago | parent [-]

wonder who it plagiarized here

fortran77 9 hours ago | parent | prev | next [-]

Steve Jobs once came to speak at my company when he was running NeXT. Almost nobody came to the talk, which was held in the company cafeteria. The CEO of our company had to make an announcement on the PA encouraging folks to come. Finally, about 20 people (out of ~750) showed up.

He started talking about Objective-C and how it was 10x more productive than other programming languages and how easy it is to write good applications quickly with it. Someone shouted out the question: "If it's so easy and fast to write applications, where are all the NeXT killer apps?" There was no good answer....

II2II 9 hours ago | parent | next [-]

Killer apps aren't always survivors. Consider how VisiCalc fell to Lotus 1-2-3, and how Lotus 1-2-3 fell to Excel. Arguably, the killer app for NeXT was WorldWideWeb. While its successors weren't developed in Objective-C, the prototype for the World Wide Web was.

Objective-C itself didn't have much of a chance, for many reasons. One is that most APIs were based upon C or C++ at the time. The availability of Objective-C on other platforms will do little to improve productivity if the resulting program is essentially C with some Objective-C code that you developed from scratch yourself. Microsoft was, for various reasons, crushing almost everyone at the time. Even titans like DEC and Sun ended up falling. Having a 10x more productive API was not going to help if it reached less than 1/100th of the market. (Objective-C, in my opinion, was an interesting but not terribly unique language, so it was the NeXT API that offered the productivity boost.) Also keep in mind that it took a huge marketing push for Java to survive, and being platform-agnostic certainly helped it. Seeing as Java was based upon similar principles, with a more conventional syntax, Objective-C was also less appealing.

kragen 8 hours ago | parent [-]

I think the claim was not that the NS* API of NextStep was "10× more productive" but that the Objective-C programming language was. Objective-C is fantastic at calling existing C APIs. It's even easier than doing it in C# or LuaJIT, and much easier than doing it in Python, Perl, Tcl, Java, JS, etc.

You're right that there are programs that are just a thin layer of glue over existing C APIs, and the existing C API is going to largely determine how much effort that is. But there are other programs where calls to external APIs are only a small fraction of the code and effort. If OO was the huge productivity boost Jobs was claiming, you'd expect those programs to be much easier to write in Objective-C than in C. Since they made the choice to implement Objective-C as part of GCC, people could easily write them on other Unixes, too. Mostly they didn't.

My limited experience with Objective-C is that they are easier to write, just not to the extent Jobs claimed. OO makes Objective-C code more flexible and easier to test than code in C. It doesn't make it easier to design or debug. And, as you say, other languages were OO to a similar extent as Objective-C while similarly not sacrificing efficiency, such as C++ and (many years later) Java and C#.

krackers 2 hours ago | parent | prev | next [-]

If you can remember anything more about it, you should write up a blog post. Now I'm curious how Steve handled that question.

chihuahua 9 hours ago | parent | prev [-]

I think there's a good answer to that: to a first approximation, no one bought NeXT machines; therefore there was no demand for NeXT apps, and so no one produced any.

But it's unlikely that Steve Jobs of all people would want to provide that explanation.

Around 2001 my company sent me to a training class for Objective-C and as far as I can remember, it's like a small tweak of C++ with primitive smart pointers, so I doubt that it's 10x more productive than any other language. Maybe 1.01x more productive.

kragen 8 hours ago | parent [-]

That is not correct. Objective-C has a completely different OO system from C++. All they have in common is that they're both extended subsets of C. Retain/release are also not smart pointers; Objective-C doesn't have the C++ features needed to implement smart pointers.

Objective-C++ is a different matter, but it was written many years after the time we are discussing.

chihuahua 7 hours ago | parent [-]

I apologize that my memory has faded over the intervening 25 years.

What I do remember is that it's an odd language, but nothing about it suggested that it would even be 2x more productive than C or C++ or Java.

I didn't get to use it much after the week-long class; the only reason the company sent 3 of us across the country for a week was that the CTO had a bizarre obsession with Objective-C and OS X.

kragen 7 hours ago | parent [-]

I think it's universally agreed at this point that OO didn't provide the order of magnitude improvement in software development velocity that Jobs was touting. I do think ObjC is more flexible than C or C++.

pancsta 7 hours ago | parent | prev | next [-]

The most AI-generated apps I've seen were in r/selfhosted, with some people from the sub complaining about the trend (and mods deleting the complaints).

singpolyma3 7 hours ago | parent | prev | next [-]

https://tools.simonwillison.net/

smokel 9 hours ago | parent | prev | next [-]

If it is so easy to make a product, then why would you go to the trouble of marketing it? A competitor could wipe out your market in as little time as you spent yourself.

My bet is that we will see much more software, but more customized, and focused precisely on certain needs. That type of software will mostly be used privately.

Also, don't underestimate how long it will take for the masses to pick up new tools. There are still people, even here on Hacker News, proclaiming that AI coding assistants do not offer value.

xnx 9 hours ago | parent [-]

Exactly this. I've written a bunch of useful tools I would never have even attempted before. I don't even bother to share them for free because I assume someone would be able to write their own in the time it would take them to find and understand mine.

journey2s 11 hours ago | parent | prev | next [-]

They are out there. I bet many are just trials (people testing it out) and many more are fixing the bugs in the AI-generated code.

preommr 9 hours ago | parent | prev | next [-]

Not only is this wrong on multiple levels (there are lots of new AI-slop apps flooding the internet and marketplaces - e.g. Steam has ~10k games marked as using AI), but it's always cringe when someone names something after themselves like this.

callc 9 hours ago | parent | next [-]

Source?

See contradicting data here: https://mikelovesrobots.substack.com/p/wheres-the-shovelware...

preommr 7 hours ago | parent [-]

Those charts are horribly misleading. There are random articles you can find [0], or just use SteamDB [1] to see that (now that Steam requires games to disclose AI usage) there are 10k+ games that use AI. You can also see that there's a jump in the number of games released from 2023 to 2024.

Companies like Lovable are reporting millions of projects that are basically slop apps. They're just not released as real products by an independent company.

The data is misleading - it's like saying high-quality phone cameras had no impact on the video industry. Just look at how much of network TV is filmed with iPhone cameras: at best you might have some ads and some minor projects using them, but nothing big. That completely ignores that YouTube and TikTok are built off of people's phone cameras, and their revenue rivals the major networks.

I am sorry, I just don't want to have this conversation about AI and its impact for the millionth time, because it just devolves into semantics, word games, etc. It's just so tiring.

[0] https://www.gamesradar.com/platforms/pc-gaming/steams-slop-p...

[1] https://steamdb.info/stats/releases/

bccdee 9 hours ago | parent | prev [-]

Sure, but I think the argument they're making is: if AI can produce good, non-slop applications at high speed, why isn't there a glut of new, high-quality software? Slop kinda proves their point.

preommr 7 hours ago | parent [-]

Why do companies ship games with crap performance if the underlying hardware is so good? Why did the Avro Arrow get scrapped? Why are there countries with energy prices much higher than what nuclear offers?

There's a world of difference between the technical capabilities of a technology, and people actually executing it.

oulipo2 8 hours ago | parent | prev | next [-]

I generally agree. AI is far from being good enough to ship entire apps on its own. So for now it's more like a kind of "power templating assistant" for devs who already know what they're doing.

that said, the article says "Why buy a CRM solution or a ERM system when “AI” can generate one for you in hours or even minutes"

and I'd say that it's also wrong to see it that way, because the issue with "rolling your own" is not so much shipping the initial version as maintaining, developing, and securing it over time. This is what takes a lot of manpower, and that's why you usually want to rely on an external vendor who focuses exclusively on maintaining and updating that component, so you don't have to do it on your own

mellosouls 8 hours ago | parent | prev | next [-]

If somebody is going to have the conceit to name an argument after themselves - and imply some sort of equivalence with a great physicist - they could at least make some effort with the details.

How do they know there aren't apps and services with significant AI contributions? Any closed-source app is by definition a black box with regard to its composition.

We decry AI slop but this article shows no AI is needed for that.

senordevnyc 7 hours ago | parent | prev | next [-]

I built and grew a SaaS from $0 to $200k ARR in the last year. Putting aside that AI is at the core of the product (we replace some low-skill human labor in a super niche market), I never could have built this product in this timeframe without AI. Without it, when I got laid off from my big tech job six months ago, I'd have just gone looking for another job.

I also know multiple non-technical people who have built little apps for themselves or for their company to use internally that previously would have purchased some off the shelf software.

erichocean 7 hours ago | parent | prev | next [-]

I dunno, I'm working on a TUI for a Claude Code-like UI in Clojure right now, and after examining all of the available libraries and tools, I just had Opus work with me to generate a new library (with tests) that does what I want.

Works great, and it's easy to add features to (as evidenced by the dozen or so things I've added already).

Is that a "new library"? It certainly doesn't show up in any statistic the OP could point to.

insane_dreamer 7 hours ago | parent | prev | next [-]

Where we might see AI-generated code having a bigger impact is in internal tools, which usually just need to "work" but not be production-grade. CRUD-type apps that replace spreadsheets, and that sort of thing.

samyar 8 hours ago | parent | prev | next [-]

Even if we solve the coding part by replacing it entirely with AI, there are still many other factors involved in making software. AI alone simply isn't a native fit for the process we have for releasing software.

spwa4 3 hours ago | parent | prev | next [-]

Oh, perhaps the same reason we're not seeing AI's impact across the entire economy? Because everything is demand-limited, and AI can only increase supply.

The big problem for demand is wealth concentration, i.e. the problem is money. Lots of people not having money, specifically. So unless AI actually becomes a way to transfer money to a great many people, it won't move the needle much. And if it becomes an actual reason to do layoffs (as opposed to an excuse, like it is now), then it will actually have a negative impact.

freen 8 hours ago | parent | prev | next [-]

Invisible. Trust me.

bossyTeacher 10 hours ago | parent | prev | next [-]

At this point, the question we should all be worried about is what is going to happen once the biggest investors see and internalize these articles. Will the economy withstand the collapse of the AI industry and the temporary damage to adjacent tech sectors, or will this, combined with the dodgy loans taken on by Meta/Amazon/Alphabet, pull the wider economy into a recession?

gombosg 9 hours ago | parent | next [-]

I think that even if AI won't be as good for tech as initially promised, it still has penetration potential in the wider economy.

OK, I don't have numbers to back this up, but I wouldn't be surprised if most of the investment and actual AI use were not in tech (software engineering) but in other use cases.

bpt3 9 hours ago | parent | prev [-]

Software development is a tiny portion of the AI value proposition. Content summarization and generation for natural language is a much broader use case, among others.

davydm 11 hours ago | parent | prev | next [-]

Simply put, the tools are not at the level the grifters want you to believe.

You'll always find someone claiming to have made a thing with AI alone, and some of these may even work to an extent, but the reality is that the models have zero understanding, so if you're looking to replicate something that already exists (or exists in well-defined, open source parts), you're going to get further than if you're thinking truly outside the box.

Then there's library and framework churn. AI models aren't good with this (as evidenced by the hours I wasted trying to get any model to help me through a webpack 4 to webpack 5 upgrade. There was no retained context and no understanding, so it kept telling me to do webpack 4 things that don't work in 5).

So if you're going to make something that's easily replicated in a well-documented framework with lots of Stack Overflow answers, you might get somewhere. Of course, you could have gotten there yourself with those same inputs, and, as a bonus, you'd actually understand what was done and be able to fix the inevitable issues, as well as extend it with your own functionality. If you're asking for something more niche, you have to bring a lot more to the table, and you need a fantastic bullshit detector, as the model will confidently lead you down the wrong path.

Simply put, AI is not the silver bullet it's sold as, and the lack of an app explosion is just more evidence on that pile.

callc 9 hours ago | parent | next [-]

> Then there's library and framework churn. AI models aren't good with this (as evidenced by the hours I wasted trying to get any model to help me through a webpack 4 to webpack 5 upgrade. There was no retained context and no understanding, so it kept telling me to do webpack 4 things that don't work in 5).

I experienced this too, asking an LLM to help with a problem on a particular Arduino board. Even though it's a very popular microcontroller, the model is probably giving blended answers drawn from the 15 other types of Arduino boards, not the one I have.

smokel 9 hours ago | parent | prev [-]

> so if you're looking to replicate something that already exists

I find that most software development falls squarely into this category.

m0llusk 6 hours ago | parent | prev | next [-]

This is the biggest red flag for me. In contrast, as Ruby and Rust were adopted, they generated many tools, libraries, frameworks, and full applications which answered unmet needs in clever and interesting ways. What have generative LLMs produced? Apparently it is nearly all slop or minor iterations. In some cases there are significant bug fixes accumulating, which is about as good as it gets. There are still many powerful tools coming from the generative LLM wave, but it's probably worth waiting for best practices to emerge and pricing to stabilize.

Lapsa 9 hours ago | parent | prev | next [-]

every couple months I try my luck with a very sophisticated prompt akin to: "make me a web application that generates $1k of profit a month. do not hallucinate, it's ok - family friendly, ultrathink or go to jail"

nacozarina 11 hours ago | parent | prev | next [-]

they are all around us, polluting our world with as many fake videos, lies, scams, and bs as they can be buggy-whipped into generating.

The industrial age was plagued by smog. And so shall be the Information Age.

bonoboTP 10 hours ago | parent [-]

Smog and immensely useful products, yes.

WesolyKubeczek 8 hours ago | parent | prev | next [-]

I can't help but notice (yeah, I know, lots of stupid takes start with this) that many articles about vibe coding and AI enablement describe how you can write an agent which will help you vibe-code an application that wraps some model to generate even more applications that largely do more of the same.

Nobody is writing a tutorial on how to use ML to make a robot that folds laundry. Instead, you spend tokens to spend more tokens to spend even more tokens. At this point, the word "token" starts bearing unwelcome connotations of the "tokens" from NFTs and Brave's "BAT" fads.

dboreham 9 hours ago | parent | prev | next [-]

Hmm. I noted this paradox here several weeks ago.

ParanoidShroom 9 hours ago | parent | prev [-]

Can people spot them?

https://countrx.app/ is something I vibed in a month. Can people here tell? Sure, the typical gradient page is something to spot, but native apps, I think, are harder. I would love to see App Store and Google Play Store stats to see how many new apps are onboarded.

Looking at distribution channels like Google Play, they've added significantly harder thresholds for publishing an app, to reduce low-quality new apps. Presumably due to gen AI?

Edit: Jesus guys, the point I'm trying to make is that there are probably a lot more out there that are not visible... I'm not claiming I developed the holy grail of vibe coding.

icepat 9 hours ago | parent | next [-]

I can, yes: the phone sample images on your landing page all render as blank screens, and image loading is not web-optimized (they load slowly).

Example: https://imgur.com/a/Sh3DtmF

ParanoidShroom 6 hours ago | parent | next [-]

Thanks for that feedback, but my point is about native apps, not a landing page.

StilesCrisis 9 hours ago | parent | prev | next [-]

Works fine in Mobile Safari.

esseph 9 hours ago | parent | prev [-]

Works fine on Chrome and Firefox, both web and mobile.

pxx 9 hours ago | parent | prev | next [-]

no, the point is that there should be _more_ shovelware like your app. the fact that you were able to publish shovelware doesn't mean that there's a "revolution"; the number of apps published per unit of time doesn't seem to be going up.

sockopen 9 hours ago | parent | prev | next [-]

Half of the images on the iOS App Store have the Gemini watermark on them (and the Google Play Store link is busted). I would assume most people would think this was built with AI.

frisia 8 hours ago | parent | prev | next [-]

Complete with AI-generated reviews? Bold move for a medical app.

bediger4000 9 hours ago | parent | prev | next [-]

Google Play store link is broken, 2025-12-14T07:40 -0700

ParanoidShroom 8 hours ago | parent [-]

Yeah, I know, it's awaiting review as I mentioned in the post. The point I'm trying to make is that there are probably a bunch of gen AI apps already out there. I don't think there is a clear way to recognize them.

brazukadev 7 hours ago | parent | prev [-]

> Edit: Jesus guys, the point I'm trying to make is that there are probably a lot more out there that are not visible.

This is visible now and is terrible AI slop. Proved the point.