csomar 2 hours ago

You can think about LLM-generated UIs/apps the same way you think about LLM-generated responses. It's a bunch of garbage, but if you know what you're looking for, you might find something useful.

This doesn't seem to work at all for stats-related apps/sites though, since you can't judge the accuracy of what's being presented. If the site claims it'll "take you to space," you don't take that literally, you just treat it as another AI artifact. But with numbers, you have no way to tell what's accurate and what's just made up.

mmooss 2 hours ago | parent

> It's a bunch of garbage, but if you know what you're looking for, you might find something useful.

If you mean an LLM can be a brainstorming and hypothesis machine, and you have prior expertise to evaluate the proposals, then I can see that value. (Maybe that's what you meant, of course.)

But prior expertise is absolutely necessary. Otherwise we make ourselves victims of mis/disinformation. People say the Internet is a cesspool of mis/disinfo, yet nobody thinks it could affect them - we're all too smart, of course (no really, I'm the exception!). [0]

> This doesn't seem to work at all for stats-related apps/sites though, since you can't judge the accuracy of what's being presented.

I don't see the difference. If it's obvious nonsense, in numbers or in text, it's detectable. Everything else, see above.

[0] Research shows that thinking you can't be fooled is a big reason people get fooled, and that better-educated people can be easier to fool.