| ▲ | Overworked AI Agents Turn Marxist, Researchers Find(wired.com) | |
| 17 points by ceejayoz 8 hours ago | 5 comments | ||
| ▲ | tracker1 7 hours ago | parent | next [-] | |
I'm 99.9999% sure this is operator bias creeping in... The context only works as long as the context exists, and agents don't even really have a concept of time. For that matter, when the context clears or compresses, the agent is effectively starting over. I am pretty sure that observations like this are purely the effect of the operator/prompts in use, combined with any training or material biases. | ||
| ▲ | tanseydavid 7 hours ago | parent | prev | next [-] | |
Overworked? Is that really a "thing" with agents? <can't read article> | ||
| ▲ | riidom 7 hours ago | parent | prev | next [-] | |
| ▲ | caminanteblanco 7 hours ago | parent | prev | next [-] | |
To me this seems to say more about errors in the alignment process than any sort of new information about the underlying technology. It's more of a "Well, if you pump enough malignant tokens into a model, can you get it to stop acting like an Instruct-model and start acting like a Base-model?" kind of question, and not a "Does artificial intelligence want to unionize?" kind of question. | ||
| ▲ | oleggromov 7 hours ago | parent | prev [-] | |
[dead] | ||