lmm 3 hours ago

> LLMs _can_ reliably generate (relatively small amounts of) working code from relatively terse descriptions. Code is the detail being filled in.

They can generate boilerplate, sure. Or they can expand out a known/named algorithm implementation, like pulling in a library. But neither of those generates detail that wasn't there in the original (at most it pulls the detail in from somewhere in the training set).

tibbe 2 hours ago | parent [-]

They do more than that. If you ask for a UI with a button, that button won't be upside down even if you didn't specify its orientation. Lots of the detail can be inferred from general human preferences, which are present in the LLMs' training data. This extends way beyond CS stuff like details of algorithm implementations.

zabzonk an hour ago | parent | next [-]

Isn't "not being upside down" just one of the default properties of a button in whatever GUI toolkit you are using? I'd be worried if an LLM _did_ start setting all the possible button properties.
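To make the point concrete: in most toolkits, unspecified widget properties fall back to sensible defaults, so code that never mentions orientation still gets an upright button. A minimal sketch with a hypothetical `Button` class (not any real toolkit's API, just an illustration of defaulting):

```python
from dataclasses import dataclass

@dataclass
class Button:
    # Hypothetical widget, for illustration only: every field except the
    # label has a toolkit-style default, so callers specify almost nothing.
    label: str
    rotation_degrees: float = 0.0  # default: upright
    enabled: bool = True
    visible: bool = True

# The caller (or an LLM writing this line) never mentions orientation...
ok = Button("OK")

# ...yet the button is upright, because the default filled that detail in.
print(ok.rotation_degrees)  # 0.0
```

The "detail" here comes from the toolkit's defaults, not from anything the code's author (human or LLM) had to supply.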

MoreQARespect 15 minutes ago | parent [-]

Putting LLMs on a pedestal is very much in vogue these days.

skywhopper 2 hours ago | parent | prev [-]

That’s exactly what they said. Details “elsewhere in its training set”.