jdietrich 17 hours ago

This isn't an inherent property of LLMs, it's something they have been specifically trained to do. The vast majority of users want safe, bland, derivative results for the vast majority of prompts. It isn't particularly difficult to coax an LLM into giving batshit insane responses, but that wouldn't be a sensible default for a chatbot.

tomashubelbauer 16 hours ago

I think, more so than the users, it is the companies running the LLMs who want the responses to be safe, so as not to jeopardize their brand.

flir 15 hours ago

The very early results for "watercolour of X" were quite nice. Amateurish, loose, sloppy. Interesting. Today's are... well, every single one looks like it came off a chocolate box. There's definitely been a trend towards a corporate-friendly aesthetic. A narrowing.

Antibabelic 16 hours ago

Are you sure? Yes, LLMs can be irrelevant and incoherent. But people seem to produce more variable results even when staying relevant and coherent (and "uncreative").

binary132 16 hours ago

the business wants it this way, not the user.