Klaster_1 2 hours ago

The article very much resonates with my experience over the past several months.

The project I work on has been steadily growing for years, but the number of engineers taking care of it has stayed the same or even declined a bit. Most features are isolated and left untouched for months unless something comes up.

So far, I managed the growing scope by relying on tests more and more. Then I switched to developing exclusively against a simulator. Checking changes against the real system became rare and more involved - when you do have to check, it's usually the gnarliest parts.

Last year, I noticed I could no longer answer questions about several features: despite working on them for a couple of months and reviewing PRs, I barely held the details in my head soon afterwards. And all this was before coding agents penetrated deep into our process.

With agents, I noticed exactly what the article talks about. Reviewing a PR feels even more detached; I have to exert deliberate effort because the tacit knowledge of the context hasn't formed yet, and you have to review more than before - the stuff goes in one ear and out the other. My teammates report a similar experience.

Currently, we are trying various approaches to deal with that, but it's still too early to tell. We now commit agent plans alongside code so as not to lose insights gained during development. Tasks with vague requirements, most of which we'd previously have understood implicitly, are now a bottleneck: typing requirements into an agent for planning immediately surfaces the kinds of issues you'd otherwise only think of during backlog grooming. Skill MDs are often dumps of tacit knowledge we previously kept distributed in less formal ways. Agents are forcing us to up our process game and discipline, and real people benefit from that too. As the article mentions, I am looking forward to tools picking up some of that slack.

One other thing that surprised me was that my eng manager was seemingly oblivious to my ongoing complaints about growing cognitive load and confusion rate. It's as if the concept was alien to them, or they couldn't comprehend that other people handle cognitive load at a different capacity than they do.

datsci_est_2015 an hour ago | parent | next [-]

> One other thing that surprised me was that my eng manager was seemingly oblivious to my ongoing complaints about growing cognitive load and confusion rate.

Engineering managers in my experience (even ones with deep technical backgrounds) often miss the trees for the forest. The best ones go to bat for you, especially once they've verified that they can do something to unblock or support you. But that's still different from being in the terminal or IDE all day.

Offloading cognitive load is pretty much their entire role.

matsemann an hour ago | parent | prev | next [-]

Learning has always meant writing things down. Just reading something seldom makes it stick.

RealityVoid 12 minutes ago | parent | next [-]

Absolutely not. Learning has always meant experimenting with things until you form an effective mental model of them. Writing things down does ab-so-lutely nothing except make you feel good in the moment, just like listening to a lecture without engaging more deeply with the subject matter.

Writing things down is important for organisational persistence of information, but that is something else.

direwolf20 6 minutes ago | parent [-]

Writing is better than reading, but doing is better than writing.

shimman 4 minutes ago | parent [-]

How does this apply to coding when the act of writing IS doing? Or do you mean like coding "on your own" versus following a tutorial for example?

0wis an hour ago | parent | prev [-]

Not sure humanity learned nothing in the many millennia before writing appeared some 8,000 years ago. It was just very slow. Maybe we will need new ways to learn.

nsvd2 2 hours ago | parent | prev | next [-]

I think that recording the dialog with the agent (the prompt, the agent's plan, and the agent's report after implementation) will become increasingly important in the future.

slashdev an hour ago | parent | next [-]

I have this at the bottom of my AGENTS.md:

You will also add a markdown file to the changelog directory, named with the current date and time (`date -u +"%Y-%m-%dT%H-%M-%SZ"`). Record the prompt and a brief summary of the changes you made; this should be the same summary you gave the developer in the chat.

From that I get the prompt and the summary for each change. It's not perfect but it at least adds some context around the commit.
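For concreteness, a sketch of what such an instruction could produce. The filename format follows the `date` command above; the directory name, prompt, and summary text are made-up stand-ins, not from the actual setup:

```shell
# Hypothetical changelog entry an agent might write per the instruction above.
ts=$(date -u +"%Y-%m-%dT%H-%M-%SZ")   # e.g. 2026-01-30T14-05-12Z
mkdir -p changelog
cat > "changelog/${ts}.md" <<'EOF'
## Prompt
Add retry logic to the upload client.

## Summary
Wrapped the upload call in an exponential-backoff loop and added tests.
EOF
```

Each change then leaves behind one timestamped file pairing the prompt with the summary.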

atmosx 28 minutes ago | parent | next [-]

Isn’t the commit message a better place to record the what and the why? You might need to feed the agent some info it doesn’t have access to (“we are developing feature X; this change will do such and such to blah blah”), but the agent will write a pretty good commit message most of the time. Why do you need a markdown file? Are you releasing new versions of the software to third parties?

Belphemur 11 minutes ago | parent [-]

Cheaper and faster retrieval: the file can be added to the context directly and is discoverable by the agent.

You need more git commands to find the right commit containing the context you want (whether it's you, the human, or the LLM burning too many tokens and too much time) than you need to just include the right MD file or grep for the proper keywords.

Moreover, you might need multiple commits to get the full context, while if you ask the LLM to keep the MD file up to date, you have everything in one place.
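An illustrative sketch of the retrieval difference (the repo, file names, and contents are all hypothetical): context spread across commits takes a history search plus a `git show` per hit, while a consolidated note is one grep away.

```shell
# Set up a throwaway repo where "retry" context is split across two commits.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email agent@example.com; git config user.name agent

echo "retry: on"    >  client.cfg; git add .; git commit -qm "add retry backoff"
echo "retry jitter" >> client.cfg; git add .; git commit -qm "add jitter to retry"

# ...versus one agent-maintained note holding the whole story.
mkdir -p docs/agent-notes
echo "retry backoff: exponential with jitter, added after flaky uploads" \
  > docs/agent-notes/retry.md

# Git route: the pickaxe search matches both commits, and each one
# still needs a separate `git show` to read.
git log -S "retry" --oneline | wc -l

# Note route: one grep lands directly on the consolidated file.
grep -rl "retry backoff" docs/agent-notes/
```

The point isn't that git can't get there, only that the markdown file is the cheaper path for an agent assembling context.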

mort96 15 minutes ago | parent | prev | next [-]

How often, in your experience, do people read those auto-generated markdown files? Do you have any empirical data on how useful people find reading the files generated by other people's agents?

addaon an hour ago | parent | prev [-]

How often is it the same summary given to the developer in the chat?

Klaster_1 2 hours ago | parent | prev [-]

Agreed, but current agents don't help with that. I use Copilot, and you can't even dump a session while preserving the complete context, including images, tool-call results, and subagent outputs. And even if you could, you'd immediately blow up the context trying to ingest it all. This needs supporting tooling, like in today's submission where the agent accesses terabytes of CI logs via ClickHouse.

gopher_space an hour ago | parent [-]

I've had some luck creating tiny skills that produce summaries. E.g. the current TASK.md is generated from a milestone in PLAN.md, and when work is checked in, STATUS.md and README.md are regenerated as needed. AGENTS.md is minimal and shrinking as I spread instructions out to the tools.

Part of my CI process when creating skills involves setting token caps and comparing usage rates with and without the skill.
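One way such a gate could look. This is a hypothetical sketch: the cap and usage figures are stand-in numbers that a real pipeline would parse from the agent's run logs, not values from the commenter's setup:

```shell
# Hypothetical CI token-cap gate for one skill; all numbers are stand-ins.
cap=2000             # budget allowed for runs using this skill
with_skill=1200      # tokens consumed with the skill loaded
without_skill=1500   # baseline tokens for the same task, skill disabled

if [ "$with_skill" -gt "$cap" ]; then
  echo "FAIL: skill exceeds its token cap"; exit 1
elif [ "$with_skill" -ge "$without_skill" ]; then
  echo "FAIL: skill does not reduce token usage vs. baseline"; exit 1
else
  echo "PASS"
fi
```

The second check is the interesting one: a skill that costs more tokens than working without it is pure overhead and gets flagged.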

bluegatty 34 minutes ago | parent | prev [-]

We don't have the right abstractions in place to support true AI-driven work. We replaced ourselves, but we don't have the tools to operate '1 layer up'.