How to Improve at Sensemaking AI?(commoncog.com)
3 points by rrherr 6 hours ago | 1 comment
rrherr 6 hours ago | parent

> Software engineers began to fracture into three groups. The first group is the “never AIs”.

> The second group consists of ‘pragmatic AI adopters’. I identify the most with this second group.

> And so it is the frame of a third group that has posed a challenge for me. Some call this the ‘software dark factory folks’. This group believes that it is possible to have AI coding agents write code with little to no human intervention.

> If software dark factories are possible, then the entire practice of software engineering is going to change. But how can you take the software dark factory folks seriously when you encounter incredibly stupid coding agent behaviours in your own day-to-day work? Nothing they say lines up with your own frame; all their ‘data points’ are off-the-cuff remarks that may be rejected due to your own lived experiences.

> Avoiding frame fixation _sounds_ easy when it’s about another person’s domain. It’s less easy when a) it’s about your own domain, b) when the new frame goes against everything you believe about your own hard-won expertise, and c) _when the new frame is fundamentally uncertain._

> I want to talk about that last point. We do _not_ know if these ‘software dark factories’ are possible. I’m not saying that the folks writing field reports are lying, or that the benefits they’re already seeing are fake. I’m saying that we _can’t_ know what the tradeoffs are, or where the limits of this approach lie. Nobody can. This is a new technology with new affordances. Nobody can know what’s possible here. This is what uncertainty feels like.

> But I think it’s also true that you need to take this ‘dark factory’ frame seriously. There are now enough field reports from enough unrelated people to indicate that _something_ is going on. More importantly, the potential impact on your career — if you are a software engineer — is too large to ignore.

> Thankfully, Data-Frame theory already offers us one way out: you _don’t_ have to believe their frame. You may hold on to your current frame and elaborate a second frame in parallel. [This post explains how... ]