falcor84 8 hours ago

> I don’t see AI doing either of those things well.

I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.

Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or might never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful.

drob518 7 hours ago | parent | next [-]

I don’t know that it can be well-defined. It might be akin to asking “What makes something human?” For usability, one needs a sense of what defines “user pain” and what defines “reasonableness.” No product is perfect; they all have usability problems at some level. The best usability experts, and the tech writers who do this well, have an intuition for user priorities and an ability to identify large usability problems and differentiate them from small ones.

falcor84 6 hours ago | parent [-]

Thinking about this some more now, I can imagine a future in which we'll see more and more software for which AI agents are the main users.

For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain-points from those that we have.
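To make the idea concrete, the setup might look something like this snippet in an AGENTS.md (the heading, paths, and wording here are hypothetical, just a sketch of the workflow I'm describing):

```markdown
## Keeping skills up to date

When a skill file under `skills/` turns out to be wrong, incomplete,
or misleading while you are using the tool it documents:

1. Note the exact step where the skill diverged from the tool's
   actual behavior, including the command you ran and the output
   you observed.
2. Open a PR against the tool's repo that updates the relevant
   skill file, citing that command and output as evidence.
3. Prefer small, focused PRs: one skill correction per PR.
```

The interesting part is that the "usability feedback loop" becomes a diff against the docs rather than a support ticket.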

TimByte 5 hours ago | parent | prev | next [-]

A good tech writer knows why something matters in context: who is using this under time pressure, what they're afraid of breaking, and what happens if they get it wrong.

CuriouslyC 6 hours ago | parent | prev | next [-]

Current AI writing is slightly incoherent. It's subtle, but the high-level flow and direction of the writing meanders, so passages sometimes come across as non sequiturs or contradict earlier points.

richardw 6 hours ago | parent | prev | next [-]

It has no sense of truth or value. You need to check what it wrote, and you need to tell it what’s important to a human. It’ll give you the average but miss the insight.

SecretDreams 6 hours ago | parent | prev [-]

> but can't quite put my finger on what exactly it's missing.

We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.