| ▲ | nu11ptr 4 days ago |
| I've been running Claude Code in my Cursor IDE for a while now via the extension. I like the setup: I direct Claude on one task at a time while still having full access to my code (and nice completions via Cursor). I still spend time tweaking, etc. before committing. I have zero interest in these new "swarms of agents" being pushed on us from every direction. I can barely keep my code straight working on one feature at a time. AI has greatly sped that up, but working serially has produced the best quality for me. I'll likely drop Cursor for good now and switch back to vanilla VS Code with CC. |
|
| ▲ | wazHFsRy 3 days ago | parent | next [-] |
| I just wish Claude Code would also offer fast inline autocomplete. Sometimes I just want a function definition or some boilerplate spelled out without waiting for a slow Claude response or actively switching models. Maybe I can set up a shortcut for that? |
| |
| ▲ | Gagarin1917 3 days ago | parent | next [-] | | Is there a significant difference between Claude Code in VSCode and Copilot in VSCode? I’ve been using Copilot with the Claude models (including Sonnet/Opus 4.6) and it seems to work spectacularly. My subscription is only $10 a month, and it has unlimited inline suggestions. I just wonder if I’m missing anything. | | |
| ▲ | thefounder 3 days ago | parent | next [-] | | Copilot just sucks. I’ve tried them both, and I stick with CC and Codex MCP. | |
| ▲ | wazHFsRy 3 days ago | parent | prev | next [-] | | I tried Copilot for a bit in VS Code as well, with Opus, and felt something was off, as if Copilot's harness around it just wasn't as good. But I can't give solid proof. | | |
| ▲ | Mashimo 3 days ago | parent [-] | | Can't you use the official claude code vs plugin? AFAIK it uses the same binary as the cli in the background. | | |
| ▲ | wazHFsRy 3 days ago | parent [-] | | Yes and I do. My above point was just that I’d like to have fast inline auto complete. |
|
| |
| ▲ | ValentineC 3 days ago | parent | prev | next [-] | | > Is there a significant difference between Claude Code in VSCode and Copilot in VSCode? I’ve been using Copilot with the Claude models (including Sonnet/Opus 4.6) and it seems to work spectacularly. Most models are limited to 200k context in GitHub Copilot. The Claude models are now 1M context elsewhere. | |
| ▲ | mmplxx 3 days ago | parent | prev [-] | | The $10/month plan offers quite a limited number of tokens for advanced models, and if you are not careful and leave the model set to Auto, it will quickly deplete them. |
| |
| ▲ | merlindru 3 days ago | parent | prev [-] | | Not a real solution but you could try using AquaVoice for dictation. It can gather screen context so you just say the function name out loud and it capitalizes and spells everything correctly. (Even hard cases!) |
|
|
| ▲ | dirtbag__dad 3 days ago | parent | prev | next [-] |
| This. I have effectively used multiple agents to do large refactors. I have not used them for greenfield development. How are folks leveraging the agentic swarm, and how are you managing code quality and governance? Does anyone know of a site that highlights code, features, or products produced by this type of development? |
| |
| ▲ | justindz 3 days ago | parent | next [-] | | I think it would be fantastic to have a reference site for significant, complex projects either developed or substantially extended primarily via agent(s). Every time I look at someone's incredible example of a workflow for handling big context projects, it ends up being a greenfield static microblog example with vague, arm-wavey assertions that it will definitely scale. | |
| ▲ | neuzhou 3 days ago | parent | prev [-] | | [dead] |
|
|
| ▲ | nevir 3 days ago | parent | prev | next [-] |
Same here. And I just recently made the switch back to VS Code with CC. It also means you don't have to deal with Cursor's busted VS Code plugins due to licensing or forking drift (e.g. Python IntelliSense, etc.). |
|
| ▲ | archon810 2 days ago | parent | prev | next [-] |
| You don't have to use swarms if you don't need them though, and in fact you can continue using the editor view with the side chat like before. Why switch away now just because this optional UI was announced? |
|
| ▲ | mtrifonov 3 days ago | parent | prev | next [-] |
| Same setup here. Claude Code in the terminal, one task at a time. The swarm thing never clicked for me. When I'm building I need to hold the full context in my head, and watching the agent work is actually part of that. I catch things I missed in my own prompt while it's thinking. Parallelizing that would just mean reviewing code I have no mental model for. Serial is slower on paper but the code actually works at the end.
I think these products are trying to capture no-coders, which is a recipe for disaster. They're trying to create architectures so people can say "build me X" and the agents perform magic end-to-end, outputting a hot pile of garbage. The actual value here is taking the finger-to-keyboard burden off the user and abstracting up to architect level. That means you still need to be able to review the goddamn code and offer an opinion on it to end up with something good. AI slop comes from people who don't have the skills and context to offer any valuable opinion or pushback to the AI.
Vanilla CC is the best IMO. |
|
| ▲ | Rover222 3 days ago | parent | prev | next [-] |
This flow feels so slow after switching to Conductor and running X number of tasks concurrently in separate git workspaces. |
| |
| ▲ | treetopia 3 days ago | parent [-] | | Have you tried Devswarm.ai yet? It's similar but can use VS Code workflows. | | |
|
|
| ▲ | linsomniac 3 days ago | parent | prev | next [-] |
>I have zero interest in these new "swarms of agents" I think you misunderstand "swarms of agents," based on what you say above. An agent swarm, in my understanding (and checked via a Google search), does not imply working on multiple features at one time. It means multiple agents taking different roles on a single task: maybe a Python expert, a code simplifier, a UI/UX expert, a QA tester, and a devil's advocate working together to implement one feature. |
| |
| ▲ | vips7L 3 days ago | parent | next [-] | | They’re not experts. | | |
| ▲ | signatoremo 3 days ago | parent | next [-] | | Sensitive but uninformed. "Expert" is a common AI concept going back decades; it wasn't invented with LLMs. https://en.wikipedia.org/wiki/Expert_system | | |
| ▲ | grey-area 3 days ago | parent | prev [-] | | What do you mean? My prompts specifically ask for a PhD-level expert in every field. \s | | |
| ▲ | vidimitrov 3 days ago | parent [-] | | "Expertise" is a completely different beast from "knowledge". Expecting to gain it from a model only through prompting is similar to expecting to become capable of something only because you bought a book on the topic. | | |
|
| |
| ▲ | noodletheworld 3 days ago | parent | prev [-] | | > does not imply working on multiple features at one time. How can multiple parallel agents, some local and some in the cloud, be working on a single task? How can: > All local and cloud agents appear in the sidebar, including the ones you kick off from mobile, web, desktop, Slack, GitHub, and Linear. (From the announcement, under “Run many agents in parallel”) …be working on the same task? Subagents are different, but the OP is not confused about what Cursor is pushing, and it is not what you describe. | |
| ▲ | victorbjorklund 3 days ago | parent | next [-] | | The same way a developer and a designer can work on the same feature during the same week? Or two developers working on the same feature during the same week: they can agree on a common API contract, and then one builds the frontend while the other works on the backend. | |
| ▲ | cruffle_duffle 3 days ago | parent | prev | next [-] | | Subagents are isolated context windows, which means they cannot get polluted as easily with garbage from the main thread. You can have multiple of them running in parallel, each doing its own separate thing in service of your own “brain thread”. It's handy because one might be exploring some aspect of what you are working on while another looks at it from a different perspective. I think the people running multiple brain threads at once do so because the damn tools are so fucking slow. Give it a little while and I'm sure these things will take significantly less time to generate tokens; so much so that brand new bottlenecks will open up… | |
| ▲ | linsomniac 3 days ago | parent | prev [-] | | They are confused about the word they use: the article on what Cursor is pushing does not, according to Ctrl-F, mention "swarm" at all. Since we have a word for multiple agents working on one task, it is probably best not to use that word when referring to multiple agents working on multiple tasks, right? I bring it up not to be pedantic, but because if you think it implies multi-tasking and dismiss it, you are missing out on its ability to help with single-tasking. | |
| ▲ | jiggunjer 3 days ago | parent | next [-] | | I think Cursor doesn't make a distinction between single or multiple logical tasks for swarm-like workloads. "Subagents" is the word they use for the swarm workers. FWIW, when I select multiple models for a prompt, it just feeds the same prompt to them in parallel (isolated worktrees); this isn't the same as the swarm pattern in 2.4+ (default no worktrees). | |
| ▲ | noodletheworld 3 days ago | parent | prev [-] | | > I bring it up not to be pedantic The OP is fundamentally expressing the opinion that single task threads are easier to keep track of. Agree / disagree? Sure. …but dipping into pedantry about terms (swarm, subagent, vibe coding, agentic engineering) really doesn't add anything to the conversation, does it? You said: > I think you misunderstand "swarms of agents", based on what you say above. …but from reading the entire post I am pretty skeptical anyone was confused as to what they meant. Wrong term? Don't care. If someone calls it a hallucination? Also don't care. That Cursor is focusing on “do stuff in parallel guys!”? Yeah, I care about that. > it is probably best not to use that word if you are referring to multiple agents working on multiple tasks, right? Not relevant to the thread. Also, I work with people who casually swap between using these exact words to mean both things. I donnnt caarrrrre what people call it. …when the meaning is obvious from the context, it doesn't matter. |
|
|
|
|
| ▲ | agilek 3 days ago | parent | prev | next [-] |
| Try Zed instead of VSC. Thank me later. |
| |
| ▲ | nu11ptr 3 days ago | parent [-] | | I did, but having the buttons on the bottom vs. the side is a deal breaker for me, especially since they are VERY tiny on my 4K screen. I can barely even get my mouse over them, and it seems they can't be moved to the left side like in VSC? Am I missing something? Hard to believe this shipped; it is unusable for me. |
|
|
| ▲ | ifightcrime 3 days ago | parent | prev | next [-] |
You are falling behind if you're not pushing yourself to learn and get better at orchestrating multiple agents. |
| |
| ▲ | breakpointalpha 3 days ago | parent [-] | | Why is it that every legitimate concern or downside pointed out about AI is met with the same tired, low-signal rebuttal of FOMO? It's become the "no u r" argument of the AI age... :/ | |
| ▲ | kdicjsjvjsjxh 3 days ago | parent [-] | | Because the AI apologists cannot deal with the much-studied and proven placebo effect of perceived increased productivity, so they have to make themselves feel better by claiming that others are lagging behind in a race no one else is really interested in running. A snake oil scheme if I ever saw one. |
|
|
|
| ▲ | kaizenb 3 days ago | parent | prev | next [-] |
| Same here. Tried agent system but no. One feature. One conversation. |
|
| ▲ | fragmede 4 days ago | parent | prev [-] |
> have zero interest in these new "swarms of agents" they are trying to force on us from every direction. Good for you! Personally, waiting for one agent to do something while I shove my thumb up my butt, just waiting around for it to generate code that I'll have to fix anyway, is the peak opposite of flow state. So I've eagerly adopted agents (how much free will I had in that decision is for philosophers to decide) so that there's just more going on and I don't get bored. (Cue the inevitable accusations of me astroturfing or that this was written by AI. Ima delve into that one and tell you there was not. Not unless you count me having stonks in the US stock market as being paid off by Big AI.) |
| |
| ▲ | wilkystyle 4 days ago | parent | next [-] | | I have personally found that I cannot switch between thinking deeply about two separate problems and workstreams without a significant cognitive context-switching cost. If it's context-switching between things that don't require super-deep thought, it's definitely doable, but I'm still way more mentally burnt out after an hour or two of essentially speed-running review of small PRs from a bunch of different sources. Curious to know more about your work: Are your agents working on tangential problems? If so, how do you ensure you're still thinking at a sufficient level of depth and capacity about each problem each agent is working on? Or are they working on different threads of the same problem? If so, how do you keep them from stepping on each other's toes? People mention git worktrees, but that doesn't solve the conflict problem for multiple agents touching the same areas of functionality (i.e. you just move the conflict problem to the PR merge stage). | |
| ▲ | simplyluke 4 days ago | parent | next [-] | | This is a struggle I've also been having. It's easier when I have 10 simple problems as part of one larger initiative/project. Think "we had these 10 minor bugs/tweaks we wanted to make after a demo review". I can keep that straight. A bunch of agents working in parallel makes me notably faster there, though actually reviewing all the output is still the bottleneck. It's basically impossible when I'm working on multiple separate tasks that each require a lot of mental context. Two separate projects/products my team owns, two really hard technical problems, etc. This has been true before and after AI: big mental context switches are really expensive, and people can't multitask despite how good we are at convincing ourselves we can. I expect a lot of folks' experience here depends heavily on how much of their work is the former vs. the latter. I also expect that there's a lot of feeling busy while not actually moving much faster. | |
| ▲ | girvo 4 days ago | parent | next [-] | | > I also expect that there's a lot of feeling busy while not actually moving much faster. Hey don’t say that too loudly, you’ll spook people. With less snark, this is absolutely true for a lot of the use I’m seeing. It’s notably faster if you’re doing greenfield from scratch work though. | |
| ▲ | jwpapi 4 days ago | parent | prev [-] | | Once I started using agents and Claude Code hid more and more of its changes from me, it all went downhill. |
| |
| ▲ | skippyboxedhero 3 days ago | parent | prev | next [-] | | Yes, it also doesn't work for me. If the changes are simple it is fine, but if the changes are complex and there isn't a clear guideline, then there is no AI that is good enough, or even close to it. It gives you a few days of feeling productive and then weeks of trying to tidy up the mess. Also, I have noticed, strangely, that Claude is noticeably less compliant than GPT. If you ask a question, it will answer and then try to immediately make changes (which may not be related). If you say something isn't working, it will challenge you and insist it was tested (it wasn't). For a company that seems to focus so much on ethics, they have produced an LLM that displays a clear disregard for users (perhaps that isn't a surprise). Either way, it is a very bad model for "agent swarm" style coding. I have been through this extensively: it will write bad code that doesn't work in a subtle way, it will tell you that it works and that the issues relate to the way you are using the program, and then it will do the same thing five minutes later. The tooling in this area is very good. The problem is that the AI cannot be trusted to write complex code. Imo, the future is something like Cerebras Code, which offers a speed-up for single-threaded work. In most cases, I am just being lazy: I know what I want to write, I don't need the AI to do it, and I am finding that I am faster if I just single-thread it. The only counterpoint is that swarms are good for long-running admin, housekeeping, etc. Nowhere near what has been promised, but not terrible. | |
| ▲ | vel0city 3 days ago | parent | prev | next [-] | | How does one work with a team of developers to solve larger problems? You break down the problems into digestible chunks and have each teammate tackle a stack of those tasks. It's far closer to being a project manager than to being a solo developer. | |
| ▲ | jwpapi 4 days ago | parent | prev | next [-] | | I tried swarms as well, but I came back too. It's not worth it: even with small tasks, the extra description-writing, double-checking, and fine-tuning isn't worth the effort, and the worse code will cost me in the future, especially when I don't know about it. | |
| ▲ | nprateem 4 days ago | parent | prev | next [-] | | It's not that difficult. You get one to work on one deep problem, while another does more trivial bug fixes/optimizations, etc. Maybe in another you're architecting the next complex feature, another fixes tests, etc. | |
| ▲ | fragmede 2 days ago | parent | prev [-] | | > Are your agents working on tangential problems? Unrelated problems, simultaneously, in the same git tree. Worktrees are unnecessary overhead if the areas they're working in are disjoint. My Agents.md has instructions to commit early and often instead of one giant commit at the end; otherwise it wouldn't work. > how do you ensure you're still thinking at a sufficient level of depth and capacity about each problem each agent is working on? The context switching is hell, and I have to force myself to dig deep into the MD file and understand things, not just rubber-stamp the LLM output. It would be dishonest of me to say that I'm always 100% successful at that, though. |
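For readers weighing the trade-off discussed in this subthread, the worktree-per-agent pattern being skipped here can be sketched with plain git. This is a hedged illustration only; the repository layout, branch names, and directory names are invented:

```shell
# Hypothetical sketch: give each agent its own worktree on its own
# branch, so parallel edits never share a working directory.
set -e
root=$(mktemp -d)
git init -q "$root/main"
cd "$root/main"
git -c user.name=agent -c user.email=agent@example.com \
    commit -q --allow-empty -m "init"
# One worktree (and branch) per agent, side by side with the main checkout
git worktree add -q -b agent-frontend "$root/agent-frontend"
git worktree add -q -b agent-backend "$root/agent-backend"
git worktree list   # shows the main checkout plus the two agent worktrees
```

Each agent then commits on its own branch, and any conflicts between agents touching the same functionality only surface at merge time, which is exactly the objection wilkystyle raises above.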
| |
| ▲ | imiric 4 days ago | parent | prev | next [-] | | I find it puzzling whenever someone claims to reach "flow" or "zen state" when using these tools. Reviewing and testing code, constantly switching contexts, juggling model contexts, coming up with prompt incantations to coax the model into the right direction, etc., is so mentally taxing and full of interruptions and micromanagement that it's practically impossible to achieve any sort of "flow" or "zen state". This is in no way comparable to the "flow" state that programmers sometimes achieve, which is reached when the person has a clear mental model of the program, understands all relevant context and APIs, and is able to easily translate their thoughts and program requirements into functional code. The reason why interrupting someone in this state is so disruptive is because it can take quite a while to reach it again. Working with LLMs is the complete opposite of this. | | |
| ▲ | jwpapi 4 days ago | parent | next [-] | | Thank you so much. These comments let me believe in my sanity in an over-hyped world. I see how people think it's more productive, but honestly I iterate on my code like 10-15 times before it goes into production, to make sure it logs the right things, communicates intent clearly, and has the types shared and defined where they should be. It's stored in the right folder, and so on. While the laziness to just pass it to CC is there, I feel more productive writing it on my own, because I go in small iterations, especially when I need to test stuff. Say I have to build an automated workflow: for step 1 alone I need to test error handling, max concurrency, idempotency, and proper logging, plus proper intent communication to my future self. Once I'm done I never have to worry about this specific code again (OK, some errors can be tricky, to be fair), and the function is practically my thought, available whenever I need it. This only works with good variable naming and good sizing of a function. Nobody really talks about it, but if a very unimportant part takes a lot of space in a service, it should probably be refactored into a smaller service. The goal is a function I probably never have to look at again, and if I do, it answers as fast as possible all the questions my future self would ask when he's forgotten what decisions were made or how the external parts work. When it breaks I know what went wrong, and when I run it in an orchestration I get the right amount of feedback. As others have said, I could go on about this at length, and I'm aware of the other side of the coin, over-engineering, but I just feel that having solid composable units is what actually enables you to later build features and functionality that might be moat. Slow, flaky units are far less likely to become an asset.
And even if I let AI draft the initial flow, honestly my review will never be as good as the step-by-step stuff I built. I have to say AI is great for improving you as a developer, double-checking you, and answering broad questions before things get too detailed and you need to experiment or read docs. It helps cover all the basics. | |
| ▲ | fragmede 3 days ago | parent [-] | | So don't write slow, flaky unit tests? Or better yet, have the AI make them not slow and not flaky? Or, if you wanna be old school, figure out why they're flaky yourself and then fix it. If it's a timing thing, fix that; if it's a database thing, mock the hell out of it and write integration tests. At this point, if your tests suck, you only have yourself to blame. | |
| ▲ | jwpapi 3 days ago | parent [-] | | Sorry, I don't get your point, and you didn't seem to get mine. I'm saying I'd guess I'm faster building manually than letting AI write it; arguably it won't even reach the level I feel best with, i.e. the one with the best business impact for my project. Also, the way I semantically define unit tests is that they are instant and non-flaky, since they are deterministic; otherwise it would be a service for me. |
|
| |
| ▲ | sefrost 3 days ago | parent | prev | next [-] | | I switched to using LLMs exclusively around March last year and haven't written a line of code directly since then. I have followed the usual progression: autocomplete > VS Code sidebar Copilot > Cursor > Claude Code > some orchestrator of multiple Codex/Claude Code instances. I haven't experienced the flow state once in this new world of LLMs. To be honest, it's been so long that I can't even remember what it felt like. | |
| ▲ | fragmede 3 days ago | parent | prev | next [-] | | "My flow state is better than yours"? Point is, I get engaged with the thing and lose track of time. | | |
| ▲ | Thanemate 3 days ago | parent [-] | | I can lose track of time watching a movie or playing a video game, but it's not what Mihály Csíkszentmihályi would call "flow state", but just immersion. |
| |
| ▲ | slashdave 3 days ago | parent | prev [-] | | LLMs deal with implementation details that get in the way of "flow" |
| |
| ▲ | nu11ptr 4 days ago | parent | prev | next [-] | | > Personally waiting for one agent to do something while I shove my thumb up my butt just waiting around for it to generate code that I'll have to fix anyway I spend that time watching it think and then contemplating the problem further since often, as deep and elaborate as my prompts are, I've forgotten something. I suspect it might be different if you are building something like a CRUD app, but if you are building a very complicated piece of software, context switching to a new topic while it is working is pretty tough. It is pretty fast anyway and can write the amount of code I would normally write in half a day in like 15 minutes. | | |
| ▲ | ryandrake 4 days ago | parent [-] | | In my workflow, it's totally interactive: Give the LLM some instructions, wait very briefly, look at code diff #1, correct/fix it before approving it, look at code diff #2, correct/fix it before approving it, sometimes hitting ESC and stopping the show because the agent needs to be course corrected... It's an active fight. No way I'm going to just "pre-approve all" and walk away to get coffee. The LLMs are not ready for that yet. I don't know how you'd manage a "swarm" of agents without pre-approving them all. When one has a diff, do you review it, and then another one comes in with an unrelated diff, and you context switch and approve that, then a third one comes in with a tool use it wants to do... That sounds absolutely exhausting. | | |
| ▲ | jiggunjer 3 days ago | parent [-] | | It sounds like diff #2 depends on approval of diff #1? With Cursor, it's a set of diffs that will be retroactively approved or rejected one by one. So you can get coffee during the thinking and still have interactive checks. Swarm changes nothing about this, except affecting the thinking time. |
|
| |
| ▲ | Aurornis 4 days ago | parent | prev | next [-] | | For my work, I've never found myself sitting around with nothing to do, because there's always so much review of the generated code that needs to be done. The only way I can imagine needing to run multiple agents in parallel for code gen is if I'm just not reviewing the output. I've done some throwaway projects where I can work like that, but I've reviewed so much LLM-generated code that there is no way I'm going to have LLMs generate code and just merge it after a quick review on projects that matter. I treat it like pair programming, where my pair programmer doesn't care when I throw away their work. | |
| ▲ | whackernews 3 days ago | parent | prev [-] | | Why is this comment so pale I can't read it? What's the contrast on this? Is this accessible to anyone? I'm guessing it was downvoted by the masses, but at the same time I'd like the choice to be able to read it; I'm not that into what the general public thinks about something. I'm getting into downmaxxing at this point. I love that you have to earn being negative on this site. Give it to me. | |
| ▲ | zargon 3 days ago | parent [-] | | Click on the timestamp link to go to the comment's own page where it will be rendered black instead of gray. | | |
| ▲ | bornfreddy 3 days ago | parent [-] | | Except it isn't (anymore)? Timestamp + reader mode worked, though. Edit: it is black if logged in, gray if logged out. Weird. |
|
|
|