ghc | 3 days ago
This post is a good example of why groundbreaking innovations often come from outsiders. The author's ideas are clearly colored by their particular experience as an engineering manager or principal engineer in (I'm guessing) large organizations, and they don't particularly resonate with me. If this is representative of how engineering managers think we should build AI tooling, AI tools will hit a local maximum based on a particular set of assumptions about how they can be applied to human workflows. I've spent the last 15 years doing R&D on ML applications that augment (non-programmer) domain experts, and I have never delivered an application that follows the principles the author outlines. The fact that I have such a different perspective suggests the design space is massive, and it's far too soon to call any particular methodology "backwards." The reality is that we just don't know yet what the future holds for AI tooling.
mentalgear | 3 days ago | parent
I could of course say that one interpretation is that the ML systems you build have been actively deskilling (or replacing) humans for 15 years. But I agree the space is wide enough that different interpretations arise depending on where we stand. I still find it good practice to keep humans (and their knowledge and retrieval) in the loop as much as possible.