Ask HN: Is AI 'context switching' exhausting?
16 points by interstice 4 days ago | 16 comments

I've always had this distinct struggle when switching in and out of being 'in charge'. The best example I can think of is the difference between a driver's and a passenger's awareness of the road.

Using AI for code has reminded me of this sensation: switching in and out of 'driving' feels more exhausting than being 100% one or the other. I have a theory that enforcing reduced engagement has all sorts of side effects in any format.

Wondering if anyone else has run into this feeling, and if so have you tried anything successfully to address it?

karmakaze 3 days ago | parent | next [-]

I don't really perceive it like that. For me it's more like I'm driving and the AI passenger keeps interjecting with insidiously upbeat back-seat-driver instructions. What I find tiresome are the pauses waiting for responses, which break my flow state. Using faster models is tiring because I have to extensively correct their understanding of the prompts and output. I don't vibe code; I use the AI to solve specific design or implementation problems, and I'll recognize a suitable solution when it presents one, or get it to critique one I'm proposing.

ativzzz 2 days ago | parent | prev | next [-]

While the AI is running, go work on something else. Write a doc, write a test, answer an email, work on another part of your feature that won't interfere with the AI, etc.

I know some people have trouble with the context switching, but I've been full stack at small companies my whole career, so I context switch constantly every day and I'm used to it.

PaulHoule 3 days ago | parent | prev | next [-]

Personally I like the older kind of chatbots where I can ask it to write me something little (a function, a SQL query, ...) and I have it in 10-30 seconds and can think about it, try it, look in the manual to confirm it, or give it feedback or ask for something else. This can be a lot more efficient than looking in incomplete or badly organized manuals (MUI, react-router, ...) or filtering out the wrong answers on Stack Overflow that Stack Overflow doesn't filter out.

I can't stand the more complex "agents" like Junie that will go off on a chain of thought and give an update every 30 seconds or so and then 10 minutes later I get something that's occasionally useful but often somewhere between horribly wrong and not even wrong.

interstice 3 days ago | parent [-]

This resonates. Even though copy-pasting from Claude et al. seems like it should be inefficient, somehow it feels less prone to getting completely off track than leaving something like Cursor or Aider chat running.

codingdave 3 days ago | parent | prev | next [-]

I do not find it exhausting at all, but I'm a technical product manager, not a full-time coder. So even before LLMs came out, I was in the mode of delegating some work, doing other work, and validating the work when it came back from delegation. That decision of "do vs. delegate" is just part of that job, at least in the gigs I've done. AI simply speeds up the task completion and feedback cycle when delegating.

rabbittail 3 days ago | parent | prev | next [-]

Absolutely. I made a WebSocket-based persistent memory system that stores conversation context in DynamoDB and automatically injects it into subsequent AI interactions. Instead of context switching, you get a consistent collaborative relationship where the AI maintains full project awareness across sessions. I use the WebSocket so Claude Code makes separate calls to the API and autonomously fills the DB with knowledge.
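
Roughly the shape of the store-and-inject part, as a minimal sketch in Python with boto3 (the table name, key schema, and prompt-injection helper are illustrative assumptions, not the actual code):

    # Sketch only: persist project notes in DynamoDB and prepend them to the
    # next prompt. Table name and key schema are made up for illustration.
    import time
    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("ai_project_memory")  # hypothetical table

    def save_context(project_id, note):
        # One item per piece of knowledge, sorted by timestamp.
        table.put_item(Item={"project_id": project_id,
                             "ts": int(time.time() * 1000),
                             "note": note})

    def load_context(project_id, limit=20):
        # Pull the most recent notes and flatten them into prompt text.
        resp = table.query(KeyConditionExpression=Key("project_id").eq(project_id),
                           ScanIndexForward=False, Limit=limit)
        return "\n".join(item["note"] for item in reversed(resp["Items"]))

    def build_prompt(project_id, user_message):
        # Inject stored context ahead of the new request.
        return "Project context:\n" + load_context(project_id) + "\n\nTask:\n" + user_message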

andy99 4 days ago | parent | prev | next [-]

I had a vibe-coding phase that I think largely followed the popular arc and timeline from optimism through to disappointment.

Definitely felt some burnout or dumbness after it, trying to get back into thinking for myself and actually writing code.

I think it's like gambling: you're sort of chasing an ideal result that feels close but never happens. That's where the exhaustion comes from imo, much more than if you were switching from manager to IC, which I don't find tiring. I think it's more a dopamine withdrawal than context switching.

interstice 3 days ago | parent [-]

Dopamine makes sense, since it's kind of switching between 'sources' of dopamine: one is a sugar rush and the other is slow release, like reading a book.

At the moment I have a bit of a tick-tock where I'll vibe code to a point, then get frustrated when it gets stuck on something I can fix myself in a minute or two. Then I switch off using AI entirely for a while until I get bored of boilerplate, and repeat the cycle.

mjrbrennan 3 days ago | parent | prev | next [-]

Yes, I’ve only just started trying out Claude Code and I do not mesh well with this method of asking AI to do something, then having to wait a few minutes and come back and check its work.

I find this leads so easily to distraction and I find this workflow very boring. If I’m going to use AI I want to use it in a more integrated way, or in a more limited way like just querying ChatGPT.

Will still try Claude more but I’m really not a fan so far.

paulcole 3 days ago | parent | prev | next [-]

> I have a theory that enforcing reduced engagement has all sorts of side effects in any format

This isn’t a particularly novel theory because you are basically saying “Doing different things makes different things happen.” Shocker.

Do you find AI immensely valuable for coding? Would you be happy to be a 100% passenger in your coding analogy?

interstice 3 days ago | parent [-]

As stated it's too vague to be interpretable, I agree. The theory isn't novel either; it's well known that easier to use = less engaged. What's less clear, but still somewhat well discussed, is that semi-automated = lower quality and more mistakes.

In future I wouldn't mind doing both, since I could be driving and a passenger in multiple contexts with less overhead. Currently attempting to work this way is painful due to the aforementioned switching.

joegibbs 3 days ago | parent | prev | next [-]

I think I’ve mostly gotten used to it. At the start, definitely, but now my method is to have 3 or 4 agent tasks running o3 to perform smaller actions than I was previously trying to do. There is a second where I have to remember what each one was doing but it’s still much faster than manually doing it.

v5v3 2 days ago | parent | prev | next [-]

Can you resolve this by using AI coding in ask mode only, where it won't change the code directly but only tell you what to do?

maxcomperatore 3 days ago | parent | prev | next [-]

for me it's less about dopamine crash and more about clashing headspaces. coding is pure flow, all in. ai pulls you out to play editor, and that shift kills momentum. here's what i've tried:

* batch ai work: toss it small stuff like snippets or lookups while i stay deep in my zone.

* hard limits: ai handles grunt work, not the big logic. if i'm fixing its bugs faster than writing it, i skip it.

* async vibes: treat ai like a junior coder. send a prompt, do something else, check later. no staring at the spinner.

1750horse 2 days ago | parent [-]

This feels AI-written to me.

alicekim 3 days ago | parent | prev [-]

I'm curious about this question too.