nidnogg 5 hours ago

I've recently lazied out big time on a company project going down a similar rabbit hole. After having a burnout episode and dealing with sole caregiver woes in the family for the past year, I've had less and less energy to piece together intense, correct thought sequences at work.

As such, I've taken to delegating substantial parts of architecture and discovery to multi-agent workflows that always refer back to a wiki-like castle of markdown files I've built over time with them, fronted by Obsidian so I can peep at it efficiently often enough.

Now I'm certainly doing something wrong, but the gaps are just too many to count. If anything, this creates a weird new type of tech debt. Almost like a persistent brain gap. I miss thinking harder and I think it would get me out of this one for sure. But the wiki workflow is just too addictive to stop.

stingraycharles 3 hours ago

> I miss thinking harder

Me too, and I wonder where this will take us; I worry about losing the ability to think hard.

jareklupinski an hour ago

I'm hoping to re-use the newly garbage-collected memory available to me now to rediscover "play hard"

kubb 4 hours ago

You’re not doing anything wrong. This isn’t a bulletproof idea. It can work, and this is what a lot of people end up with to manage complexity, but there’s a critical point beyond which things collapse: the agent can’t keep the wiki up to date anymore, the developer can’t grok it anymore.

kaashif 3 hours ago

Managing complexity, modularity, and separation of concerns were already critical for ensuring humans could still hold enough of the system in their brains to do something useful.

People who do not understand that will continue to not understand that it also applies to AI right now. Maybe at some point in the future it won't, not sure. But my impression is that systems grow in complexity far past the point where the system is gummed up and no one can do anything, unless they're actively managed.

If a human can understand 10 units of complexity and their LLM can do 20, then they might just build a system that's 30 complex and not understand the failure modes until it's too late.

stingraycharles 2 hours ago

> People who do not understand that will continue to not understand that it also applies to AI right now.

I think this is mostly a matter of expectation management. AIs are being positioned as being able to develop software independently, and that’s certainly the end goal.

So then people come in with the expectation that the AI is able to manage that, and it fails. Spectacularly.

The LLM certainly can't manage any non-local complexity right now, and it succeeds in increasing technical debt and complexity faster than ever before.

loveparade 5 hours ago

That has been my experience as well. Most of the value of writing docs or a wiki is not in the final artifacts, it's that the process of writing docs updates your own mental models and knowledge so that you can make better decisions down the road.

Even if you can get an LLM to output good artifacts that don't eventually evolve into slop, which is questionable, it's really not that useful, especially not for a personal wiki.

kilroy123 4 hours ago

Makes me think of all these tools that use AI to make fancy flashcards for you to study.

It seems rather silly to me, as _creating_ those flashcards is what helps you learn, with the studying afterward cementing that knowledge in your brain.

kaashif 3 hours ago

The ultimate connecting up of the dots would be brain implants that just give you knowledge with zero effort.

I don't know if I'd ever be comfortable with that, hopefully I'll just be retired or dead when that takes off.

nidnogg 5 hours ago

And what happens when the bucket of knowledge gets too big and starts to overflow? I feel as if, by delegating that process of building knowledge too much, I end up accruing knowledge gaps of my own. Funnily enough, it mirrors the LLM/agent's performance.

Maybe my recent prompts reflect how badly up to speed I am at a given time? I don't know. On a slightly related note - I recently heard the term "AI de-skilling"; this treads close to it, imo.

nidnogg 4 hours ago

The worst part to me, by far, is having nothing more than a bunch of "smart" markdown files to show as my deliverables for the day. Sometimes this stacks for many days on end. Usually the bigger the knowledge gaps are, the more I procrastinate on real work.

Talk about back to school feelings (!)

mikkupikku an hour ago

Just w.r.t. having time to think harder: have you considered getting a hobby that forces you to go offline and do something repetitive so your mind can wander? I do this with walks (phone left at home) and sometimes swimming laps. Physical exercise may not seem appealing if you're in burnout territory, but I think it's worth trying because, for me at least, it's a different, mostly orthogonal, kind of fatigue.