triwats a day ago
Firstly, I think this is neat, but the dam has burst. This might be great for educational institutions, but the idea of people needing to know what every line does as output feels moot to me in the face of agentic AI.
johnsillings a day ago | parent
Sadly, this doesn't work at the line level yet. I know that wasn't the main point of your comment, but I figured I'd mention it first. Getting more to the heart of your question: the main use case for this (and the reason Span developed it) is to understand the impact of AI coding assistants in aggregate for their customers. The explosion of AI-generated code is creating some strange issues that engineering teams need to take into account, but visibility is very low right now. The main idea is that, with some resolution around which code is AI-authored and which is human-authored, engineering teams can better understand when and how to deploy AI-generated code (and when not to).