deified a day ago

There is an argument I rarely ever see in discussions like this, which is about reducing the need for working memory in humans. I'm only in my mid-thirties, but my ability to keep things in working memory is vastly reduced compared to my twenties. It might just be me who's not cut out for programming or system architecture, but in my experience what is hard for me is often what is hard for others; they just either don't think about it, or they ignore it and push through, keeping hidden costs alive.

My argument is this: even if the system as a whole becomes more complex, it might be worth it to make it better partitioned for human reasoning. I tend to get overwhelmed quickly, and my memory is getting worse by the minute. It's a blessing for me to have smaller services that I can reason about, predict consequences from, and deeply understand; I can ignore everything else. When I have to deal with the infrastructure, I can focus on that alone. We also have better and more declarative tools for handling infrastructure than we do for code. It's a blessing when 18 services don't share the same database, and it's a blessing when 17 services aren't colocated in the same repository with dependencies that most people don't even identify as dependencies. Think of the law of leaky abstractions.

jagraff a day ago | parent

This is a good point - having your code broken up into standalone units that can fit into working memory has real benefits for the coder. I think this matters especially with the rise of coding agents (which, like it or not, are here to stay and are likely to see increasing use over time). Sections of code that fit cleanly into a context window will be much more amenable to manipulation by LLMs and will require less human oversight to modify, which may be very useful for companies that want to move faster than the speed of human programming allows.