evanelias 3 days ago [flagged]
rafabulsing 3 days ago
Those are different levels of abstraction. LLMs can say false things, but the overall structure and style are, at this point, generally correct (if repetitive or boring at times). The same goes for image gen: it gets the general structure and vibe pretty well, but inspecting the individual "facts", like the number of fingers, may reveal problems.
dahart 3 days ago
That seems like a straw man. Image generation matches style quite well. LLM hallucination conjures untrue statements while still matching the training data's style and word choices.
wpm 3 days ago
This is a bad-faith argument and you know it.