| ▲ | readyplayernull 3 days ago |
| Easy to defeat. AI can't come up with ambiguous art: https://en.wikipedia.org/wiki/Ambiguous_image There is a strategic element to it, based on retrospection. |
|
| ▲ | ValentinA23 2 days ago | parent | next [-] |
| Diffusion Illusions: Hiding Images in Plain Sight https://arxiv.org/pdf/2312.03817 |
| |
| ▲ | readyplayernull 2 days ago | parent [-] | | That is cool, but it's mostly merging, which is only a partial solution, and the strategy is defined by the developer. The results aren't good enough.
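For context on the "merging" point: methods in this space roughly tie several (transform, prompt) pairs to one underlying image, either by averaging per-view noise predictions during sampling (visual anagrams) or by optimizing the image against several score-distillation targets at once (as I understand the linked paper). Below is a minimal sketch of the first, simpler variant; predict_noise is a placeholder standing in for a real text-conditioned diffusion model, and the prompts, view transforms, and update rule are illustrative only, not the paper's exact procedure.

```python
import torch

def predict_noise(latent: torch.Tensor, t: int, prompt: str) -> torch.Tensor:
    """Placeholder for a real text-conditioned diffusion model's noise prediction."""
    torch.manual_seed(hash((t, prompt)) % (2**32))  # deterministic stand-in
    return torch.randn_like(latent)

# Each "view" pairs an invertible image transform with its own prompt.
views = [
    (lambda x: x,                   lambda x: x,                   "a snowy mountain"),
    (lambda x: torch.flip(x, [-1]), lambda x: torch.flip(x, [-1]), "an old man's face"),
]

latent = torch.randn(1, 4, 64, 64)   # start from pure noise
for t in range(1000, 0, -50):        # coarse reverse-diffusion schedule
    estimates = []
    for forward, inverse, prompt in views:
        # Denoise the transformed latent under its own prompt, then map the
        # estimate back to the canonical orientation before merging.
        eps = predict_noise(forward(latent), t, prompt)
        estimates.append(inverse(eps))
    eps_merged = torch.stack(estimates).mean(dim=0)  # the "merging" step
    latent = latent - 0.02 * eps_merged              # toy update, not a real sampler
```

The merged estimate is what makes the final image read differently under each transform, but the set of views and prompts is fixed up front by the developer, which is the limitation being pointed out above.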
|
|
| ▲ | mandmandam 2 days ago | parent | prev | next [-] |
| Not "out of the box" maybe, but yes, it can. It can even do it in ways which humans find impossible. Proof: https://www.youtube.com/watch?v=FMRi6pNAoag&list=LL&index=74... |
| |
| ▲ | readyplayernull 2 days ago | parent [-] | | Same as the other comment. It's merging, not construction from a self-generated strategic plan to create ambiguity. | | |
| ▲ | mandmandam 2 days ago | parent [-] | | So were you just saying that LLMs aren't AGI? Or was there something more to it, specifically related to ambiguity/illusions? |
|
|
|
| ▲ | oezi 3 days ago | parent | prev [-] |
| Why wouldn't AI be able to retrospect? |
| |
| ▲ | readyplayernull 3 days ago | parent [-] | | I didn't say it can't retrospect. What it can't do is retrospect the way a human mind does; it can only read the interpretation a human mind gives of its own retrospection, and the human mind can't fully explain its way of thinking. So it doesn't have a useful model of the human mind, which it would need for the strategy. And strategy is a complex feature in itself, one that would use overlapping models to produce the ambiguity.
|