jerf 6 days ago

That's a really powerful approach, because LLMs are very, very strong at what is basically "style transfer", much better than they are at writing code from scratch. One of my most recent big AI wins went the other way: I had to read some Mulesoft code in its native storage format, a fairly nasty XML encoding scheme mixed with code and other weird things, but asking the AI to just "turn this into pseudocode" was quite successful. They're also very good at language-to-language transfer: not perfect, but much better than doing it by hand. It's still important to validate the transfer, since it gets a thing or two wrong every few dozen lines, but it's still way faster than doing it from scratch and good enough to work with if you've got testing.
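
For anyone who wants to try the same trick, here's a minimal sketch of that workflow, assuming an OpenAI-style chat client (the model name and prompt wording are illustrative, not what I actually used):

    # Sketch: ask an LLM to transcode vendor XML into readable
    # pseudocode, then review/test the result by hand.
    # Assumes the `openai` package and OPENAI_API_KEY in the
    # environment; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    def xml_to_pseudocode(mule_xml: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # any capable chat model works here
            messages=[
                {"role": "system",
                 "content": "You translate Mulesoft XML flows into plain, "
                            "readable pseudocode. Preserve control flow and "
                            "names exactly; do not invent behavior."},
                {"role": "user", "content": mule_xml},
            ],
        )
        return response.choices[0].message.content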

hardwaregeek 5 days ago | parent | next [-]

My mental model for LLMs is that they’re a fuzzy compiler of sorts. Any kind of specification, whether that’s BNF or a carefully written prompt, will get “translated”. But if you don’t have anything to translate, it won’t output anything good.
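
To make the analogy concrete, here's a toy "fuzzy compile": hand the model a precise spec (a tiny BNF grammar) and ask it to translate that into a parser. The grammar, prompt, and model name are purely illustrative; it assumes the openai package.

    # Toy "fuzzy compile": precise spec in, code out.
    # Assumes the `openai` package and OPENAI_API_KEY; model name illustrative.
    from openai import OpenAI

    GRAMMAR = """
    expr   ::= term (("+" | "-") term)*
    term   ::= factor (("*" | "/") factor)*
    factor ::= NUMBER | "(" expr ")"
    """

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Translate this BNF grammar into a recursive-descent "
                       "parser in Python. Implement the grammar exactly, "
                       "nothing more:\n" + GRAMMAR,
        }],
    )
    print(resp.choices[0].message.content)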

gyomu 5 days ago | parent | next [-]

> if you don’t have anything to translate, it won’t output anything good.

One of the greatest quotes in the history of computer science:

“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

danielvaughn 5 days ago | parent | prev [-]

Yep, exactly. "Garbage in, garbage out" still applies.

hangonhn 5 days ago | parent | prev [-]

I agree with that assessment, but it makes me wonder whether a T5-style (encoder-decoder) LLM would work better than a decoder-only LLM like GPT or Claude. Has anyone tried that?
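
The encoder-decoder route is easy to poke at with HuggingFace transformers and CodeT5, a T5-style model pretrained on code. A minimal sketch using its public code-summarization checkpoint; purely illustrative, and nowhere near a fair comparison with GPT or Claude:

    # Sketch: run a T5-style (encoder-decoder) code model.
    # Uses Salesforce's CodeT5 summarization checkpoint;
    # pip install transformers torch first. Illustrative only.
    from transformers import AutoTokenizer, T5ForConditionalGeneration

    ckpt = "Salesforce/codet5-base-multi-sum"
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = T5ForConditionalGeneration.from_pretrained(ckpt)

    code = "def add(a, b):\n    return a + b"
    inputs = tokenizer(code, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))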