slopinthebag | 5 hours ago
Lol. I tried some image generation with SOTA models. I explicitly asked one not to do something, and it would do the thing anyway, then straight up tell me it hadn't. Barring a cognitive impairment, that's simply not a failure mode of cooperative humans. Same with hallucinations: both humans and AI can be wrong, but a human can admit when they don't understand or know something, while AI will just make it up. I don't understand why people would ever trust anything important to something with this failure mode. It's insane.
astrange | 2 hours ago | parent
Image generation models are usually not LLMs. Only Nano Banana Pro is capable of following negative directions like that.
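For context on why prose negation fails here, a minimal sketch assuming the Hugging Face diffusers API (the checkpoint name is just an example): diffusion models condition on text embeddings rather than parsing instructions, so writing "no people" in the prompt often injects the concept "people", while the separate negative_prompt channel steers away from a concept via classifier-free guidance.

```python
# A minimal sketch, assuming the Hugging Face diffusers API.
# Diffusion models don't parse instructions; they condition on text
# embeddings. Negation written in prose is just more tokens, so it
# tends to ADD the concept rather than remove it. The negative_prompt
# argument is the supported way to steer away from a concept.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Often fails: "no people" still embeds the concept "people".
bad = pipe("an empty park bench, no people").images[0]

# Usually works: the negative embedding is pushed away from during
# classifier-free guidance at each denoising step.
good = pipe(
    "an empty park bench",
    negative_prompt="people, person, crowd",
).images[0]
good.save("bench.png")
```

This is why an instruction like "don't do X" works (imperfectly) with an LLM-based system that interprets the request, but not with a plain diffusion model that only sees a bag of conditioning tokens.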