Gabriel439 4 hours ago

Author here: it's not even clear that agents can reliably permute their training data (I'm not saying that it's impossible or never happens but that it's not something we can take for granted as a reliable feature of agentic coding).

As I mentioned in one of the footnotes in the post:

> People often tell me "you would get better results if you generated code in a more mainstream language rather than Haskell" to which I reply: if the agent has difficulty generating Haskell code then that suggests agents aren't capable of reliably generalizing beyond their training data.

If an agent can't consistently apply concepts learned in one language to generate code in another language, then that calls into question how good they are at reliably permuting the training dataset in the way you just suggested.

rytis 4 hours ago | parent | next [-]

> if the agent has difficulty generating Haskell code then that suggests agents aren't capable of reliably generalizing beyond their training data.

Doesn't that apply to flesh-and-bone developers? Ask someone who's only ever worked in Python to implement their current project in Haskell and I'm not so sure you'll get very satisfying results.

Frieren 3 hours ago | parent | next [-]

> doesn't that apply to flesh-and-bone developers?

No, it does not. If you have a developer who knows C++, Java, Haskell, etc. and you ask that developer to re-implement something from one language in another, the result will be good. That is because such a developer knows how to generalize from one language (e.g. C++) and then write something concrete in the other (e.g. Haskell).

ozlikethewizard 3 hours ago | parent | prev | next [-]

The hard bit of programming has never been knowing the symbols to tell the computer what to do. Using a completely unknown language is more difficult, sure, but the paradigms and problem-solving approaches are identical, and that's the actual work, not writing the correct words.

lukevp 2 hours ago | parent [-]

Saying that the paradigms of Python and Haskell are the same makes it sound like you don’t know one or both of those languages. They are not just syntactically different; the paradigms literally are different. Python is a high-level, duck-typed, OO scripting language and Haskell is a non-OO, strongly typed, functional programming language. They’re extremely far apart.
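To make the gap concrete, here's a minimal Haskell sketch (illustrative only, not from the thread): the same "sum of squares" a Python programmer would write as a loop with a mutable accumulator becomes a fold over immutable data, with the type checked up front.

```haskell
-- Sum of squares, Haskell-style: no mutable accumulator, no loop;
-- iteration is a fold over an immutable list, and the type signature
-- is checked at compile time.
sumSquares :: [Int] -> Int
sumSquares = foldr (\x acc -> x * x + acc) 0

main :: IO ()
main = print (sumSquares [1, 2, 3])  -- 1 + 4 + 9 = 14
```

Translating that from an idiomatic Python loop isn't a token-level substitution; it requires re-expressing the iteration itself.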

cassianoleal 2 hours ago | parent | prev | next [-]

Your argument fails where it equates someone who only codes in one language to an LLM, which is usually trained on many languages.

In my experience, a software engineer knows how to program and has experience in multiple languages. Someone with that level of experience tends to pick up new languages very quickly because they can apply the same abstract concepts and algorithms.

If an LLM that has a similar (or broader) data set of languages cannot generalise to an unknown language, then it stands to reason that it is indeed only capable of reproducing what’s already in its training data.

debugnik 3 hours ago | parent | prev [-]

But the model has seen pretty much all the public Haskell code around, and possibly been trained to write it in different settings.

mike_hearn 2 hours ago | parent | prev | next [-]

Your argument is far too dependent on observations made about the model's ability with Haskell, which is a poor test case. The concepts in Haskell are totally different from those in almost any other language: you can't easily "generalize" from an imperative, strict language (which describes basically everything people actually use) to a lazy, pure FP language that uses monads for IO. The underlying concepts themselves are different, and Haskell has never been mainstream enough for models to get good at it.
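For instance (a toy sketch, not anything from the thread): laziness and monadic IO have no direct analogue in mainstream imperative languages. An infinite list is a perfectly ordinary value as long as only a finite prefix is demanded, and side effects are sequenced through the IO type.

```haskell
-- Under lazy evaluation, an infinite list is a fine value; only the
-- demanded prefix is ever computed.
naturals :: [Integer]
naturals = [0 ..]

-- A pure function: given this type, no side effects are possible.
squares :: [Integer] -> [Integer]
squares = map (^ 2)

-- Side effects (printing) are confined to and sequenced by IO.
main :: IO ()
main = print (take 5 (squares naturals))  -- prints [0,1,4,9,16]
```

Neither the infinite data structure nor the effect-tracking type system maps cleanly onto what an imperative-language corpus teaches.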

Pick a good model, let it choose its own tools and then re-evaluate.

graemep 3 hours ago | parent | prev | next [-]

I am very sceptical that mainstream languages will be better. I have seen plenty of bad Python from LLMs, even with simple CRUD apps and detailed instructions.

lukan 3 hours ago | parent | prev | next [-]

"that suggests agents aren't capable of reliably generalizing beyond their training data."

Yes? If they could, we would have strong general intelligence by now, and only a few people are claiming that.

ChrisGreenHeur 4 hours ago | parent | prev [-]

It can also mean that the other programming language is beyond the cognitive abilities of the LLM.