| ▲ | freeplay 3 hours ago |
| From a technical standpoint, this is pretty cool. From a human standpoint, this feels so unbelievably dystopian. |
|
| ▲ | bee_rider 3 hours ago | parent [-] |
| If a human were being grilled like this by an LLM, I’d call that pretty dystopian. If companies have LLMs that address each other in a somewhat adversarial manner, that seems not so bad. They don’t have feelings to protect, after all, so it is kind of nice if they can cut through each other’s bullshit. |
| ▲ | thenewwazoo 2 hours ago | parent [-] |
| Imagine if there were some kind of way to compress the interrogation down to known-valid aspects, avoiding the parts that are unnecessary for machines. You could have some kind of a programmatic interface... |
| ▲ | bee_rider an hour ago | parent [-] |
| Yeah, let’s call it the Agent Prioritized Interrogation interface. Yeah, I take your point. It seems like the idea, though, is to work with services that are specifically trying to expose some kind of special LLM-based interface. I dunno if that’s prominent or useful; I avoid that kind of thing. |