kenjackson 9 hours ago

Has anyone tried creating a language that would be good for LLMs? I feel like what would be good for LLMs might not be the same thing that is good for humans (but I have no evidence or data to support this, just a hunch).

Sheeny96 8 hours ago | parent | next [-]

The problem with this is that the reason LLMs are so good at writing Python/Java/JavaScript is that they've been trained on a metric ton of code in those languages: they've seen the good, the bad, and the ugly, and been tuned toward the good. A new language would mean training from scratch, and if we're introducing new paradigms that are 'good for LLMs but bad for humans', humans will struggle to write good code in it, making the training process harder. Even worse, say you get a year and 500 features into that repo and the LLM starts going rogue - who's gonna debug that?

reitzensteinm 7 hours ago | parent [-]

But coding is largely trained on synthetic data.

For example, Claude can fluently generate Bevy code as of the training cutoff date, and there's no way there's enough training data on the web to explain this. There's an agent somewhere in a compile test loop generating Bevy examples.

A custom LLM language could have fine grained fuzzing, mocking, concurrent calling, memoization and other features that allow LLMs to generate and debug synthetic code more effectively.

If that works, there's a pathway to a novel language having higher quality training data than even Python.
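The "agent in a compile test loop" idea above can be sketched as a toy filtering loop. This is a speculative illustration, not anyone's actual pipeline: `fake_llm` is a stand-in for the model, and the compile gate here is just Python's built-in syntax check, where a real system would compile and run code in the target language.

```python
import itertools

def compiles(source: str) -> bool:
    """Gate a candidate example on whether it compiles (here: a Python syntax check)."""
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def synthesize(generate, n_wanted: int, max_attempts: int = 100):
    """Compile-test loop: keep only generated candidates that pass the gate,
    filtering raw model output into usable synthetic training data."""
    kept = []
    attempts = 0
    while len(kept) < n_wanted and attempts < max_attempts:
        attempts += 1
        candidate = generate()
        if compiles(candidate):
            kept.append(candidate)
    return kept

# Stub generator standing in for the LLM: half its output is broken.
_counter = itertools.count()
def fake_llm():
    i = next(_counter)
    return "def f(:" if i % 2 else "def f():\n    return 1\n"

examples = synthesize(fake_llm, n_wanted=3)
```

The same skeleton extends naturally to the features mentioned above: swap the compile gate for a fuzzer, a mock harness, or a memoized test runner, and run many `generate()` calls concurrently.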

mbreese 2 hours ago | parent [-]

I recently had Codex convert a script of mine from bash to a custom, Make-inspired language for HPC work (think Nextflow, but an actual language). The bash script submitted a bunch of jobs based on some inputs. I wanted this converted to use my pipeline language instead.

I wrote this custom language. It's on Github, but the example code that would have been available would be very limited.

I gave it two inputs -- the original bash script and an example of my pipeline language (unrelated jobs).

The code it gave me was syntactically correct, and was really close to the final version. I didn't have to edit very much to get the code exactly where I wanted it.

This is to say -- if a novel language is somewhat similar to an existing syntax, the LLM will be surprisingly good at writing it.

voxleone 7 hours ago | parent | prev | next [-]

>Has anyone tried creating a language that would be good for LLMs?

I’ve thought about this and arrived at a rough sketch.

The first principle is that models like ChatGPT do not execute programs; they transform context. Because of that, a language designed specifically for LLMs would likely not be imperative (do X, then Y), state-mutating, or instruction-step driven. Instead, it would be declarative and context-transforming, with its primary operation being the propagation of semantic constraints.

The core abstraction in such a language would be the context, not the variable. In conventional programming languages, variables hold values and functions map inputs to outputs. In a ChatGPT-native language, the context itself would be the primary object, continuously reshaped by constraints. The atomic unit would therefore be a semantic constraint, not a value or instruction.

An important consequence of this is that types would be semantic rather than numeric or structural. Instead of types like number, string, bool, you might have types such as explanation, argument, analogy, counterexample, formal_definition.

These types would constrain what kind of text may follow, rather than how data is stored or laid out in memory. In other words, the language would shape meaning and allowable continuations, not execution paths. An example:

@iterate: refine explanation until clarity ≥ expert_threshold
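A slightly larger sketch in the same hypothetical syntax (everything here is invented for illustration, including the constraint names) might look like:

```
context essay:
    type: argument
    constrain: audience = non_specialist
    constrain: tone = neutral

@transform essay -> counterexample
@iterate: refine counterexample until clarity ≥ expert_threshold
```

Each line narrows what continuations are semantically allowable rather than specifying execution steps.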

koolba 9 hours ago | parent | prev | next [-]

There are two separate needs here. One is a language that can be used for computation where the code will be discarded. Only the output of the program matters. And the other is a language that will be eventually read or validated by humans.

branafter 8 hours ago | parent | prev | next [-]

Most programming languages are great for LLMs. The problem is with the natural language specification for architectures and tasks. https://brannn.github.io/simplex/

simonw 9 hours ago | parent | prev | next [-]

There was an interesting effort in that direction the other day: https://simonwillison.net/2026/Jan/19/nanolang/

conception 9 hours ago | parent | prev | next [-]

I don’t know Rust, but I use it with LLMs a lot: unlike Python, it has fewer ways to do the same thing, along with all the built-in checks at build time.

999900000999 8 hours ago | parent | prev [-]

I want to create a language that allows an LLM to dynamically decide what to do.

A non-deterministic programming language, with options to drop down into JavaScript or even C if you need to specify certain behaviors.

I'd need to be much better at this though.

branafter 7 hours ago | parent | next [-]

You're describing a multi-agent long horizon workflow that can be accomplished with any programming language we have today.

999900000999 7 hours ago | parent [-]

I'm always open to learning, are there any example projects doing this ?

branafter 6 hours ago | parent | next [-]

The most accessible way to start experimenting would be the Ralph loop: https://github.com/anthropics/claude-code/tree/main/plugins/...

You could also work backwards from this paper: https://arxiv.org/abs/2512.18470

999900000999 6 hours ago | parent [-]

Ok.

I'm imagining something like.

"Hi Ralph, I've already coded a function called GetWeather in JS, it returns weather data in JSON. Can you build a UI around it? Adjust the UI over time."

At runtime it would modify the application with improvements: say all of a sudden we're getting air quality data back from the JSON tool, the Ralph loop will notice and update the application.
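One tick of that loop can be sketched in a few lines. This is a hypothetical illustration of the schema-watching idea only: `fetch_json` and `regenerate_ui` are invented stand-ins, and in the real scenario `regenerate_ui` would be a prompt to the coding agent rather than a callback.

```python
import json

def watch_schema(fetch_json, regenerate_ui, known_keys=None):
    """One tick of a Ralph-style loop: fetch the tool's JSON output, and if
    new top-level fields appeared (e.g. air quality data), ask the model to
    regenerate the UI around them."""
    data = json.loads(fetch_json())
    keys = set(data)
    new_fields = keys - set(known_keys or [])
    if new_fields:
        regenerate_ui(sorted(new_fields))  # stand-in for an LLM call
    return keys

# Stubbed GetWeather output before and after the API starts including AQI.
calls = []
keys = watch_schema(lambda: '{"temp": 21, "wind": 5}', calls.append)
keys = watch_schema(lambda: '{"temp": 20, "wind": 7, "aqi": 42}',
                    calls.append, known_keys=keys)
```

Run on an interval, each tick only triggers a regeneration when the tool's output schema actually changes.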

The Arxiv paper is cool, but I don't think I can realistically build this solo. It's more of a project for a full team.

fwip 6 hours ago | parent | prev [-]

yes "now what?" | llm-of-choice

gregoryl 8 hours ago | parent | prev [-]

What does that even mean?