colechristensen 10 hours ago

My information to the contrary is my experience over the last few weeks building things with LLMs, including tooling to help build things with LLMs. This experience is one of ... I'm a product manager and DevSecOps engineer bullying an LLM with the psychology of a toddler into building great software, which it can do very successfully. A single instance of a model with a single rolling context window and one set of prompts absolutely can't do what you want, but that's not what I've been doing.

One-shotting applications isn't interesting to me because I do want to be involved. There are things I'll have opinions about that I won't know until we get there, and there are definitely times when I want to pivot a little or a lot in the middle of development based on experience: an actually agile development cycle.

In the same way I wouldn't want to hire a wedding planner or a house builder to plan my wedding or build my home based entirely on a single short meeting before anything started, I don't want to one-shot software.

There are all sorts of things where I want to get myself out of the loop because they're stupid problems. Some of them I've fixed; others I'd rather fix later, because doing the thing is more interesting than pausing to build the tools to make the thing.

There is, I think, an inverse relationship between the complexity of the tooling and the amount of human involvement. For me, I've reached, or am quite near, the level of human involvement where I'm much more excited about building stuff than about saving more of my attention.

I'm being a bit vague because I'm not sure I want to share all of my secrets just yet.

buu700 9 hours ago | parent [-]

Just to be clear, what I was proposing was a single tool which would, on the basis of a single ~30-minute interaction, purchase a domain name, set up a cloud environment, build a full-stack application + cross-platform native apps + useful tests with near-100% coverage, deploy a live test environment, and compile each platform's native app — all entirely autonomously. Are you saying you've used or built something similar to that? That is super interesting if so, even if you're unable to share. A major subset of that could also still be incredibly useful, but the whole solution I described is a very high bar.

I've been very successful building with custom LLM workflows and automation myself, but that's beyond the capabilities of any tooling I've seen, and I wouldn't necessarily expect great results with current models even if current tooling were fully capable of what I described. Even with such tooling, the cost of inference is high enough to deter careless usage without much more rigorous work on the initial spec and/or micromanagement of the development process.

I'm not necessarily advocating for one-shotting in any given context. I'm simply pointing out that there would be huge advantages to LLMs and tooling sufficiently advanced to be fully capable of doing so end-to-end, especially at dramatically lower cost than current models and at superhuman quality. Such an AI could conceivably one-shot any possible project idea, in the same sense that a competent human dev team with nothing but a page of vague requirements and unlimited time could at least eventually produce something functional.

The value of such an AI is that we'd use it in ways that sound ridiculous today. Maybe a chat with some guy at a bar randomly inspires a neat idea, so you quickly whip out your phone and fire off some bullet point notes; by the time you get home, you have 10 different near-production-ready variations to choose from, each with documentation on the various decisions its agent made and why, and each one only cost $5 in account credit. None is quite perfect, but through the process you've learned a lot and substantially refined the idea; you give it a second round of notes and wake up to a new testable batch. One of those has the functional requirements just right, so you make the final decisions on non-functional requirements and let it roll one last time with strict attention to detail on code quality and a bunch of cycles thrown at security review.

That evening, you check back in and find a high-quality final implementation that meets all of your requirements with a performant and scalable architecture, with all infrastructure deployed and apps submitted to all stores/repositories. You subsequently allocate a sales and marketing budget to the AI, and eventually notice that you suddenly have a new source of income. Now imagine that instead of you, this was actually your friend who's never written a line of code and barely knows how to use a computer.

I still agree with you that current models have been "good enough" for some time, in the sense that if LLMs froze today we could spend the next decade collectively building on and with them and it would totally transform the economy. But at the same time, there's definitely latent demand for more and/or better inference. If LLMs were to become radically more efficient, we wouldn't start shuttering data centers; the economy would just become that much more productive.

nl 4 hours ago | parent [-]

Have you tried Lovable, Replit, v0, etc.?

Outside of purchasing the domain and building the native apps for you, they cover a very significant amount of this.

If you insist on native apps, it's possible Google Jules could do it. With Gemini 2.5 it wasn't strong enough, but I think it runs Gemini 3 now, which can definitely handle native apps just fine.