ghc 3 days ago

This post is a good example of why groundbreaking innovations often come from outsiders. The author's ideas are clearly colored by their particular experiences as an engineering manager or principal engineer in (I'm guessing) large organizations, and don't particularly resonate with me. If this is representative of how engineering managers think we should build AI tooling, AI tools will hit a local maximum based on a particular set of assumptions about how they can be applied to human workflows.

I've spent the last 15 years doing R&D on (non-programmer) domain-expert-augmenting ML applications and have never delivered an application that follows the principles the author outlines. The fact that I have such a different perspective indicates to me that the design space is probably massive and it's far too soon to say that any particular methodology is "backwards." I think the reality is we just don't know at this point what the future holds for AI tooling.

mentalgear 3 days ago | parent

I could, of course, say that one interpretation is that the ML systems you build have been actively deskilling (or replacing) humans for 15 years.

But I agree that the space is wide enough that different interpretations arise depending on where we stand.

However, I still find it good practice to keep humans (and their knowledge/retrieval) as much in the loop as possible.

ghc 3 days ago | parent

I'm not disagreeing that it's good to keep humans in the loop, but the systems I've worked on give domain experts new information they could not get before -- for example, non-invasive in-home elder care monitoring, tracking "mobility" and "wake ups" for doctors without invading patient privacy.
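To make that concrete, here's a toy sketch of how metrics like that can fall out of nothing more than timestamped motion events. The rooms, thresholds, and event format are invented for illustration, not from any real deployment:

    # Toy illustration only: deriving daily "mobility" and night-time
    # "wake up" counts from (timestamp, room) motion events -- no cameras,
    # no audio, so patient privacy stays intact.
    from collections import defaultdict
    from datetime import datetime

    events = [  # invented PIR-style sensor events
        (datetime(2024, 5, 1, 2, 14), "bedroom"),
        (datetime(2024, 5, 1, 2, 15), "bathroom"),  # a night-time wake up
        (datetime(2024, 5, 1, 7, 30), "bedroom"),
        (datetime(2024, 5, 1, 7, 45), "kitchen"),
        (datetime(2024, 5, 1, 12, 10), "living_room"),
    ]

    def daily_metrics(events, night_start=22, night_end=6):
        """Aggregate raw motion events into per-day summary metrics."""
        mobility = defaultdict(int)  # room-to-room transitions per day
        wake_ups = defaultdict(int)  # night-time trips out of the bedroom
        prev_room = None
        for ts, room in sorted(events):
            if prev_room is not None and room != prev_room:
                mobility[ts.date()] += 1
                is_night = ts.hour >= night_start or ts.hour < night_end
                if prev_room == "bedroom" and is_night:
                    wake_ups[ts.date()] += 1
            prev_room = room
        return mobility, wake_ups

    mobility, wake_ups = daily_metrics(events)
    for day, moves in mobility.items():
        print(day, "transitions:", moves, "wake ups:", wake_ups.get(day, 0))

The point is that the doctor gets a trend line they never had before, and no raw footage of the patient ever exists.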

I think at their best, ML models give new data-driven capabilities to decision makers (as in the example above), or make decisions that a human could not due to the latency of human decision-making -- predictive maintenance applications like detecting impending catastrophic failure from subtle fluctuations in electrical signals fall into this category.
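As a sketch of what I mean by beating human latency (again, purely illustrative numbers and a deliberately simple model; a real system would be fancier):

    # Toy sketch: calibrate on a healthy stretch of signal, then flag subtle
    # deviations within milliseconds. Sampling rate, drift shape, and
    # threshold are all made up for the example.
    import math
    import random

    def failure_precursor_alerts(stream, calibration=500, threshold=6.0):
        """Learn a baseline from the first `calibration` samples, then
        yield (index, value, z_score) for samples far outside it."""
        baseline = []
        mean = std = None
        for i, x in enumerate(stream):
            if i < calibration:
                baseline.append(x)
                continue
            if mean is None:
                mean = sum(baseline) / calibration
                var = sum((v - mean) ** 2 for v in baseline) / calibration
                std = math.sqrt(var) or 1e-9
            z = (x - mean) / std
            if abs(z) > threshold:
                yield i, x, z

    # Simulated current draw at 1 kHz: steady 1.0 A with small noise, then a
    # slow upward drift from sample 5000 -- the subtle failure precursor.
    random.seed(0)
    signal = (
        1.0 + random.gauss(0, 0.02) + 0.002 * max(0, t - 5000)
        for t in range(8000)
    )

    for i, x, z in failure_precursor_alerts(signal):
        print(f"first alert at sample {i}: {x:.3f} A, z={z:.1f}")
        break  # one alert makes the point; a real system keeps monitoring

Calibrate on a healthy window and you alert within tens of milliseconds of the drift starting; no human watching a gauge can do that.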

I don't think automation inherently "de-skills" humans, but it does change the relative value of certain skills. Coming back to agentic coding, I think we're still in the skeuomorphic phase, and the real breakthroughs will come from leveraging models to do things a human can't. But until we get there, it's all speculation as far as I'm concerned.