HillRat 3 hours ago

Design thinking, at least in its formal STS approach, is essentially applied sociology; it's about using various toolkits to build a sufficient understanding of a domain from the "inside out" (using desk and field research) so that you can design valuable experiences that build on the expertise of those actually inside the domain. In this, it's a bridge between UX/product and users/stakeholders (technical stakeholders are admittedly too often an afterthought, but that's a process problem). If anyone comes in and attempts to blindly shove workshops at you without first conducting in-depth research, interviews, and field studies in your domain, then they are (without resorting to the No True Scotsman fallacy) not doing design thinking; they're doing cargo-cult brainstorming. (It's also a process orthogonal to agile development, since by definition it's a linear process that needs to happen before the actual product features and requirements are developed.)

The books and papers the OP cites are solid (Rittel and Webber, Buchanan, etc., though TRIZ, I think, is rather oversold). But in my experience the problem with most design thinking practitioners is that they aren't trained sociologists or ethnographers, so a lot of design thinking ends up reinventing the last century of sociological middle-range theory and ethnographic practice without being strongly informed by either, likely because the field grew out of early software requirements studies.

randcraw 2 hours ago | parent

That's a great answer that offers concrete insight into what design thinkers are trying to achieve. And it seems like they have a chance to succeed if they also employ iterative experimental methods to learn whether their mental model of user experience is incorrect or incomplete. Do they?

HillRat 2 hours ago | parent

Traditionally you use a lot of paper and experiential prototypes to iterate on, which doesn't cover everything but helps refine assumptions. (I sometimes like starting by mocking downstream output like reports and report data; it's a quick way to test specific assumptions about the client's operations and strategic goals, which then feeds into the detailed project.) When I can, I also try to iterate using scenario-based wargaming, especially for complex processes with a lot of handoffs and edge cases; it lets us "chaos monkey" situations and stress-test our assumptions.
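To make the report-mocking idea concrete, here's a rough sketch of the kind of throwaway artifact I mean (the claims-triage domain, field names, and numbers are all invented for illustration): fake the report a stakeholder would actually read, then walk through it with them before any real system exists.

    # Hypothetical throwaway mock of a downstream report, used only to test
    # assumptions with stakeholders; none of these fields come from a real system.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ClaimSummaryRow:          # invented domain: insurance claims triage
        region: str
        claims_opened: int
        claims_closed: int
        avg_days_to_close: float

    def render_report(rows: list[ClaimSummaryRow], as_of: date) -> str:
        """Plain-text mock of the report a regional manager would receive."""
        lines = [f"Claims summary as of {as_of.isoformat()}"]
        for r in rows:
            lines.append(
                f"{r.region:<12} opened={r.claims_opened:>4} "
                f"closed={r.claims_closed:>4} avg_close={r.avg_days_to_close:>5.1f}d"
            )
        return "\n".join(lines)

    # Fabricated numbers: the point is to ask "is this even the report you need?"
    sample = [
        ClaimSummaryRow("Northeast", 412, 398, 11.2),
        ClaimSummaryRow("Southwest", 288, 301, 14.7),
    ]
    print(render_report(sample, date(2024, 1, 31)))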

More than once, early iterations have led me to call off a project and tell the client they'd be wasting their money with us; these were problems that could be solved more effectively internally (with process, education, or cultural changes), weren't going to be effectively addressed by the proposed project, or, quite often, turned out to be cases where what the client wanted was not what they actually needed.

Increasingly, AI technical/functional prototyping is making its way into the early design process, where traditionally we'd be doing clickable prototypes; it lets us get cheap working prototypes in front of users to test-drive and give feedback on. I like to iterate aggressively on the data schema up front, so this fits well with my bias towards getting the database and query models largely built during the design effort, based on domain research and collaboration.
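As a rough illustration of what iterating on the schema during design can look like (hypothetical tables and queries, not from any real engagement), even a tiny in-memory sqlite3 script gives users something concrete to react to before any application code exists:

    # Hypothetical schema/query sketch for up-front data-model iteration.
    # Table and column names are invented; the point is to have something
    # concrete to revise with domain experts before feature work starts.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE handoff (
            id          INTEGER PRIMARY KEY,
            case_id     TEXT NOT NULL,
            from_team   TEXT NOT NULL,
            to_team     TEXT NOT NULL,
            handed_at   TEXT NOT NULL        -- ISO timestamp
        );
        INSERT INTO handoff (case_id, from_team, to_team, handed_at) VALUES
            ('C-101', 'intake', 'triage',  '2024-01-05T09:12:00'),
            ('C-101', 'triage', 'billing', '2024-01-06T15:40:00'),
            ('C-102', 'intake', 'triage',  '2024-01-05T10:03:00');
    """)

    # One of the "query models" to pressure-test with users: does counting
    # handoffs per case actually answer the question they care about?
    for case_id, hops in conn.execute(
        "SELECT case_id, COUNT(*) FROM handoff GROUP BY case_id ORDER BY case_id"
    ):
        print(f"{case_id}: {hops} handoff(s)")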