stevage 3 days ago
It seems weird to me that Copilot sometimes refuses to execute code but sometimes allows it. What exactly are they aiming for?
wizzwizz4 3 days ago
They're not aiming for anything. It's better to think of Copilot as a collaborative storytelling session with a text autocomplete system, where some other program rudely hijacks the story to insert the results of running certain commands. Sometimes the predictive text model (or rather, a completion randomly sampled from its outputs) goes "yes, and". Other times, it goes "no, because". As the article observes, if the model is autocompleting a transcript full of "yes, and"s, the story will probably continue with another "yes, and"; but if the story opens with a certain kind of demand, it will probably continue with a refusal.
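A minimal sketch of that dynamic, with entirely made-up numbers and no relation to Copilot's actual implementation: the sampled continuation is conditioned on the transcript so far, so a run of compliant turns makes another compliant turn more likely.

    import random

    # Toy model: assign probabilities to the next "move" given the
    # story so far. The numbers are hypothetical, for illustration only.
    def next_move_distribution(history):
        if history and all(move == "yes, and" for move in history):
            # A streak of compliance biases toward more compliance.
            return {"yes, and": 0.9, "no, because": 0.1}
        if not history:
            # A blunt demand with no rapport is a coin flip at best.
            return {"yes, and": 0.5, "no, because": 0.5}
        return {"yes, and": 0.6, "no, because": 0.4}

    def sample(history):
        # Randomly select one continuation, weighted by the model's
        # probabilities -- the "completion randomly sampled" above.
        dist = next_move_distribution(history)
        moves, weights = zip(*dist.items())
        return random.choices(moves, weights=weights)[0]

    story = ["yes, and", "yes, and", "yes, and"]
    print(sample(story))  # usually "yes, and": momentum from prior turns

Run it a few times: after three "yes, and"s a refusal shows up only occasionally, which is the momentum effect described in the article.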