| ▲ | drob518 8 hours ago |
| The best tech writers I have worked with don’t merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and to extract what they need from 1-1 sit down interviews with engineering SMEs. I don’t see AI doing either of those things well. |
|
| ▲ | loudmax 5 hours ago | parent | next [-] |
| AI may never be able to replace the best tech writers, or even pretty good tech writers. But today's AI might do better than the average tech writer. AI might be able to generate reasonably usable, if mediocre, technical documentation based on a halfheartedly updated wiki and the README files and comments scattered in the developers' code base. A lot of projects don't just have poor technical documentation, they have no technical documentation. |
|
| ▲ | TimByte 5 hours ago | parent | prev | next [-] |
| In my experience, great tech writers quietly function as a kind of usability radar. They're often the first people to notice that a workflow is confusing. |
|
| ▲ | seanwilson 5 hours ago | parent | prev | next [-] |
| > They act as stand-ins for actual users and will flag all sorts of usability problems. I think everyone on the team should get involved in this kind of feedback, because raw first impressions of new content (which you can only experience once, and which will be somewhat similar to those of impatient new users) are super valuable. I remember as a dev flagging some tech marketing copy aimed at non-devs as confusing and being told by a manager not to give any more feedback like that because I wasn't in marketing... If your own team, which is familiar with your product, is a little confused, you can probably 10x that confusion for outside users, and multiply it again if a dev is confused by tech content aimed at non-devs. I also find it really common that you get non-tech people writing about tech topics for marketing and landing pages, and because they have only a surface-level understanding of the tech, the text becomes really vague, with little meaning. And you'll get lots of devs and other people on the team agreeing in secret that e.g. the product homepage content isn't great, but they're scared to say anything because they feel they have to stay inside their bubble and there isn't a culture of sharing feedback like that. |
|
| ▲ | throwaw12 6 hours ago | parent | prev | next [-] |
| > They act as stand-ins for actual users and will flag all sorts of usability problems. True, but it raises another question: what were your Product Managers doing in the first place if the tech writer is the one finding usability problems? |
| |
| ▲ | drob518 an hour ago | parent | next [-] | | Yes, product managers and product owners should also be looking for usability problems. That said, the docs people are often going through procedures step by step, double-checking things, and they will often hit something that the others missed. | |
| ▲ | dxdm 5 hours ago | parent | prev | next [-] | | Realistically, PMs' incentives are often aligned elsewhere. But even if a PM cares about UX, they are often not in a good position to spot problems with designs and flows they are closely involved in and intimately familiar with. Having someone else with a special perspective can be very useful, even if their job also serves other functions. Using this "resource" is the job of the PM. | | |
| ▲ | the_other 3 hours ago | parent [-] | | I'm with the grandparent comment. > But even if a PM cares about UX, How can a PM do their job if they don't *care* about UX? I mean... I know exactly what happens, because I've seen it more than once: the product slowly goes to shit. You get a bunch of PMs at various levels of seniority all pursuing separate goals, not collaborating, not actually working together to compose a coherent product; their production teams are actively encouraged to be siloed; features collide and overlap, or, worse, conflict; every component redefines what a button looks like; bundles bloat; you have three different rendering tools (ok, I've not seen that in practice, but it seems to be encouraged by many "best practices"); etc. | | |
| ▲ | dxdm an hour ago | parent [-] | | Oh, I agree completely with you, sorry if that wasn't clear. The PM should, must, care about UX. Still, they don't always, or at least they end up not caring eventually, for various reasons. I'm just responding to this: > what were your Product Managers doing in the first place if tech writer is finding out about usability problems They might very well be doing their job of caring about UX, by using the available expertise to find problems. It's a bit like saying (forgive the imperfect analogy): what are the developers doing talking about corner cases in the business logic, isn't the PM doing their job? Yes, they are. They are using the combined expertise in the team. Let's allow the PMs to rely on the knowledge and insights of other people, shall we? Their job already isn't easy, even (or especially) if they care. |
|
| |
| ▲ | eszed 5 hours ago | parent | prev [-] | | I take your point, but a good PM will have been inside the decision-making process and carry embedded assumptions about how things should work, so they'll miss things. An outside eye - whether it's QA, user-testing, (as here) the technical writer, or even asking someone from a different team to take an informal look - is an essential part of designing anything to be used by humans. |
|
|
| ▲ | killerstorm 4 hours ago | parent | prev | next [-] |
| True. Also true that most tech writers are bad. And companies aren't going to spend >$200k/year on a tech writer until they hit tens of millions in revenue. So AI fills the gap. As a horror story, our docs team didn't understand that having correct installation links should be one of their top priorities. Obviously, if a potential customer can't install the product, they'll assume it's bs and try to find an alternative. That's so much more important than e.g. grammar in the middle of some guide. |
|
| ▲ | falcor84 8 hours ago | parent | prev [-] |
| > I don’t see AI doing either of those things well. I think I agree, at least in the current state of AI, but I can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as it goes), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer. Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or might never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful. |
| |
| ▲ | drob518 7 hours ago | parent | next [-] | | I don’t know that it can be well-defined. It might be asking something akin to “What makes something human?” For usability, one needs a sense of what defines “user pain” and what defines “reasonableness.” No product is perfect. They all have usability problems at some level. The best usability experts, and tech writers who do this well, have an intuition for user priorities and an ability to identify and differentiate large usability problems from small ones. | | |
| ▲ | falcor84 6 hours ago | parent [-] | | Thinking about this some more now, I can imagine a future in which we'll see more and more software for which AI agents are the main users. For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain points from those that we have. |
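As a rough sketch of the setup described above: the file name, section heading, and paths below are purely illustrative assumptions, not an established convention.

```markdown
<!-- Hypothetical AGENTS.md excerpt; the skills/ path and wording are illustrative -->
## Skills maintenance
- Usage skills for this tool live in `skills/` in this repo.
- When a documented step fails or turns out to be ambiguous while you are
  using the tool, open a PR against the relevant file in `skills/` that
  fixes the instructions, instead of silently working around the problem.
- In the PR description, quote the step that failed and the error you saw.
```

The interesting consequence is the feedback loop: the docs improve as a side effect of agents actually exercising them, which is roughly the agent analogue of a tech writer stepping through a procedure.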
| |
| ▲ | TimByte 5 hours ago | parent | prev | next [-] | | A good tech writer knows why something matters in context: who is using this under time pressure, what they're afraid of breaking, what happens if they get it wrong. | |
| ▲ | CuriouslyC 6 hours ago | parent | prev | next [-] | | Current AI writing is slightly incoherent. It's subtle, but the high-level flow/direction of the writing meanders, so passages will sometimes read as non sequiturs or contradict each other. | |
| ▲ | richardw 6 hours ago | parent | prev | next [-] | | It has no sense of truth or value. You need to check what it wrote and you need to tell it what’s important to a human. It’ll give you the average, but misses the insight. | |
| ▲ | SecretDreams 6 hours ago | parent | prev [-] | | > but can't quite put my finger on what exactly it's missing. We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser. |
|