daxfohl a day ago

Nah, I think that's the opposite of what to do. It requires you to specify all the requirements up front, then press go and pray. Even if it worked perfectly, it would take us back to the stone age of waterfall design. And with LLMs, missing one requirement that would be obvious to a human (don't randomly delete accounts) often leads to a fun shortcut from the LLM's perspective (hey, if there's a race condition, I can fix it by deleting the account)!

The real value of LLMs is their conversational ability. Try something, iterate, try something else, iterate again. Have it patch a bug you see, ask if it has recommendations based on where you're headed, flesh things out and fine-tune them in real time. Understand its misunderstandings and help it grasp the bigger picture.

Then at the end of the session, you'll have working code AND a detailed requirements document as an output. The doc will discuss the alternatives you tried along the way, and why you ended up where you did.

It's much like this in graphics too. Yeah, you could spend a ton of time crafting the single one-shot prompt that gets you reasonably close to what you need, which is how it worked in the past. But that approach is silly now. It's much easier to work iteratively, changing one thing, then another, until you have exactly what you need, in a much faster and more creative session.

So yeah you could use LLMs as a compiler, but it's so much more engaging not to.

tamnd a day ago

Totally agree. That's why we're building Mochi (https://github.com/mochilang/mochi), a small language that treats AI, datasets, and graph queries as first-class citizens, not just targets for code generation.

It's inspired by the evolution you mentioned: early compilers generating Assembly, now AI tools generating Python or SQL. Mochi leans into that by embedding declarative data queries, AI generation, and streaming logic directly into the language. Here is how it looks:

  type Person {
    name: string
    age: int
    email: string
  }

  let p = generate Person {
    prompt: "Generate a fictional software engineer"
  }
  print(p.name)

  let vec = generate embedding {
    text: "hello world"
    normalize: true
  }
  print(len(vec))

We see this as the natural next step after traditional compilers: intent compilers. The old "compile to assembly" phase now maps to LLM prompt scaffolding, and prompt engineering is quickly becoming the new backend pass.

Would love feedback if this resonates with others building around AI + structured languages.

daxfohl a day ago

Sounds like a fun project, but I have a hard time imagining it ever really catching on. I'd compare it to workflow managers. Lots of people created DSLs for those, but nothing really caught on until Temporal, because nobody wants the cognitive overhead of maintaining a separate language just for workflows, especially when you don't know whether that language will still be around in five years.

With Temporal, you write the workflow logic in whatever language you normally use, as an ordinary async function, and as long as you follow the rules, it just works. Even though it looks like plain procedural code, it survives server reboots, can sleep for months, etc.
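
For example, here's a minimal sketch using Temporal's TypeScript SDK (sendReminder is a hypothetical activity of mine, not part of Temporal):

  // An ordinary-looking async function that Temporal replays deterministically,
  // so it survives worker restarts and the 30-day sleep is a durable timer.
  import { proxyActivities, sleep } from '@temporalio/workflow';

  const { sendReminder } = proxyActivities<{
    sendReminder(email: string): Promise<void>;
  }>({ startToCloseTimeout: '1 minute' });

  export async function reminderWorkflow(email: string): Promise<void> {
    await sleep('30 days'); // not an in-process setTimeout; resumes after reboots
    await sendReminder(email);
  }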

I'd recommend dropping the new-language approach ASAP and shifting toward more of a Temporal-like model. That said, Temporal does a lot under the hood, and on the server side, to make it worth the money. Here, I have a hard time seeing what this provides beyond a "TReturnType LLMProxy.callAI<TReturnType>(string prompt)" function that sends the prompt and expected return type, then parses the response into that type. There's not even a need for a separate server tool; it's just a function.
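
To make that concrete, a rough TypeScript sketch of what I mean (callAI, the endpoint URL, and the response shape are all hypothetical):

  // Send a prompt plus a description of the expected shape; parse JSON back.
  async function callAI<T>(prompt: string, schemaHint: string): Promise<T> {
    const res = await fetch('https://llm.example.com/complete', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        prompt: `${prompt}\nRespond only with JSON matching: ${schemaHint}`,
      }),
    });
    const { text } = await res.json();
    // A real version would validate the shape instead of trusting the cast.
    return JSON.parse(text) as T;
  }

  // Usage (assuming top-level await): the Person example from the Mochi snippet above.
  interface Person { name: string; age: number; email: string; }
  const p = await callAI<Person>(
    'Generate a fictional software engineer',
    '{ "name": string, "age": number, "email": string }',
  );
  console.log(p.name);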

So IDK if there's a product there or not. Seems like you'd need to figure out a way to add more intrinsic value than just a library function. But I think the new language idea, while fun to work on, is probably not going to get very far in the real world.