storus 14 hours ago

An honest question - why would we need any frameworks at all for vibe coded apps? I can just tell the coding agent to use pure HTML5/Vanilla JS/CSS on the frontend and pure whatever on the backend and it would do it. No need for hundreds or thousands of dependencies. For deployment I can ask the coding agent to do the same.

debazel 7 hours ago | parent | next [-]

My experience with actually trying this is that current LLMs benefit greatly from having a framework to build on.

More code in the context window doesn't just increase the cost, it also degrades the overall performance of the LLM. It will start making more mistakes, cause more bugs, add more unnecessary abstractions, and write less efficient code overall.

You'll end up having to spend a significant amount of time guiding the AI to write a good framework to build on top of, and at that point you would have been better off picking an existing framework that was included in the training set.

Maybe future LLMs will do better here, but I wouldn't recommend doing this for anything larger than a landing page with current models.

fny 7 hours ago | parent | prev | next [-]

Why not code in assembly?

I kid, but every reason you can think of applies to app development too.

1. Good abstractions decrease verbosity and improve comprehension

2. Raw HTML/CSS/JS are out of distribution just like assembly (no one builds apps like this)

3. Humans need to understand and audit it

4. You'll waste time and tokens reinventing wheels

This intuitively makes sense. LLMs mimic human behavior and thought, so for all the reasons you'd get lost in a pile of web spaghetti or x86, so would an LLM.

monooso 4 hours ago | parent [-]

> Raw HTML/CSS/JS are out of distribution just like assembly (no one builds apps like this)

Plenty of people build apps with vanilla CSS and JS (and HTML is just HTML). It's a really nice way to work.

Here are a few links to get you started.

https://dev.37signals.com/modern-css-patterns-and-techniques...

https://simonwillison.net/2025/May/31/no-build/

https://bradfrost.com/blog/post/raw-dogging-websites/

ehnto 3 hours ago | parent | prev | next [-]

Echoing other thoughts here, but also: it's like getting your first 10,000+ lines of output code for zero token cost, with no prompting effort, no back-and-forth, no testing, etc.

Just jump straight to business logic; the scaffolding is already done for you.

I think implicit in your question is the idea that apps from now on will be bespoke, small, unique entities, but the truth is we're still mostly going to be solving already-solved problems, and enterprise software will still require the same massive codebases as before.

The real win of frameworks is that they keep your workers, AI or human, constrained to an existing, known set of tools and patterns. That still matters in long-term AI-powered projects. They also provide a battle-hardened collection of solutions covering lots of edge cases you would never think to put in your prompts.

stopachka 14 hours ago | parent | prev | next [-]

A few reasons:

1. Unlimited projects: when you spin up traditional backends, you usually use VMs. It's expensive to start many of them. With Instant, you can create unlimited projects.

2. User experience: traditional CRUD apps work, but they don't feel delightful. If you want to support features like multiplayer, offline mode, or optimistic updates, you'll have to write a lot more custom infra. Instant gives you these out of the box, and agents find it easier to write than CRUD code.

3. Richer features: sometimes you'll want to add more than just a backend. For example, maybe you want to store files, or share cursors, or stream tokens across machines. These often need more bespoke systems (S3, Redis, etc). Instant already comes with these out of the box, and the agents know how to use them.
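To make point 2 concrete, here is a minimal sketch of the optimistic-update pattern a framework like this would otherwise make you hand-roll: apply the change locally right away, then reconcile once the server responds. `saveToServer` is a hypothetical stand-in, not Instant's actual API.

```javascript
// Sketch of an optimistic update: the local list changes immediately,
// and the pending entry is confirmed or rolled back after the network call.
function makeStore() {
  let todos = [];
  return {
    get: () => todos,
    async addTodo(text, saveToServer) {
      const optimistic = { text, pending: true };
      todos = [...todos, optimistic];               // UI updates instantly
      try {
        await saveToServer(optimistic);             // background write
        todos = todos.map(t =>
          t === optimistic ? { ...t, pending: false } : t
        );
      } catch {
        todos = todos.filter(t => t !== optimistic); // roll back on failure
      }
      return todos;
    },
  };
}
```

A real sync engine also has to handle reordering, conflicts, and offline queues, which is exactly the custom infra the comment is referring to.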

There are a few demo sections in the post that show this off. For example, you can click a button and get a backend without needing to sign up. And in about 25 lines of code, you'll make a real-time todo app.

boxedemp 12 hours ago | parent [-]

>multiplayer

How does it compare to Photon networking? I've been using Photon and WebRTC mostly. I haven't had any issues, but I'm always interested in finding better solutions!

stopachka 11 hours ago | parent [-]

Photon looks interesting! I am not too familiar with it, but from what I understand Photon and WebRTC are for communicating messages between clients. Those messages can be very fast, because they aren't blocked by writes to disk. Instant has two similar services, Presence & Streams. The primary sync engine is more for storing relational data.

Scholmo 3 hours ago | parent | prev | next [-]

It's built-in guardrails.

It also potentially reduces the context the model has to hold.

It's also a very good way to scale. Build a very small, well-tested library for x; the LLM uses x for case y. It doesn't have to worry about x, its contents, or its security.
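As an illustration of that "small, well-tested library for x" idea, here is a hypothetical escaping helper (not from any particular framework): once it exists and is tested, generated code can call it without ever reasoning about injection itself.

```javascript
// Tiny, auditable library surface: escape the five HTML-significant
// characters so untrusted text can be interpolated into markup.
const ENTITIES = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;',
};

function escapeHtml(value) {
  return String(value).replace(/[&<>"']/g, (ch) => ENTITIES[ch]);
}
```

The LLM's job shrinks to "call escapeHtml() on user input," which is a much easier pattern to enforce in prompts and review than "get escaping right everywhere."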

calvinmorrison 13 hours ago | parent | prev | next [-]

Same reasons humans do: context and abstraction.

storus 11 hours ago | parent [-]

I can get rid of irrelevant abstraction bloat that way and make the code perfectly tailored to what's needed. That was traditionally expensive, which is why abstractions got packaged into frameworks.

IncreasePosts 13 hours ago | parent | prev | next [-]

You don't necessarily, but each token costs money for the AI to spit out. And probably more money when that output is used as input later. Delegating to a library makes sense financially.
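A rough back-of-the-envelope sketch of that point, using made-up per-token prices and a crude tokens-per-line estimate (all numbers here are assumptions for illustration):

```javascript
// Assumptions: ~10 tokens per line of code, $15 per million output
// tokens to generate, $3 per million input tokens to re-read as context.
const linesOfScaffolding = 10000;
const tokensPerLine = 10;
const outputCostPerMTok = 15;
const inputCostPerMTok = 3;

const tokens = linesOfScaffolding * tokensPerLine;        // 100,000 tokens
const generateOnce = (tokens / 1e6) * outputCostPerMTok;  // cost to emit it once
const rereadPerTurn = (tokens / 1e6) * inputCostPerMTok;  // cost each later turn
const fiftyTurns = generateOnce + 50 * rereadPerTurn;     // a modest session

console.log({ generateOnce, rereadPerTurn, fiftyTurns });
```

The one-time generation cost is small; the recurring cost of carrying all that hand-rolled code as input context on every subsequent turn is what adds up, and a framework in the training set avoids both.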

storus 11 hours ago | parent [-]

With local inference on the pretty decent local models we have nowadays (Qwen-3.5 and better), it's not much of a concern anymore.

walthamstow 2 hours ago | parent | next [-]

Sure, if you've got a £5k laptop

Bishonen88 7 hours ago | parent | prev [-]

What percentage of people are using local models for anything serious? I reckon single digits, if even that. And in a corporate work environment, probably close to zero.
