pron 13 hours ago

I'm not saying it's not impressive or that it doesn't show great promise, but there are clearly challenges, and we don't yet know when or how they'll be solved.

From some big LLM fans I've heard that one major problem is that of trust: Unlike tools/machines, LLMs cannot be trusted to reliably succeed or fail in an obvious way; unlike people, LLMs cannot be trusted to communicate back useful feedback, such as important insights or pitfalls. So while in some respects LLMs are superior to both humans and existing automation, in others they're inferior to both.

Maybe we'll be able to fix these problems within the current LLM technology, and maybe we'll be able to do that soon, but neither of these is obviously inevitable.

My pet issue with one form of inevitability, as I mentioned above, is this: if we get to a point where software can reliably write other software for us, then we're also at a point where we don't need any other software to actually be written, at least not in some human-readable form. There will just be one (kind of) program that does what we ask it to; why would we ask it to write programs?
