| ▲ | logicprog 6 hours ago |
| OpenCode was the first open source agent I used, and it became my main workhorse after I experimented briefly with Claude Code and realized the potential of agentic coding. Because of that, and because it's a popular open source alternative, I want to be able to recommend it and be enthusiastic about it. The problem for me is that the development practices of the people that are working on it are suboptimal at best; they're constantly releasing at an extremely high cadence, where they don't even spend the time to test or fix things (or even build a proper list of changes for each release), and they add, remove, refine, change, fix, and break features constantly at that accelerated pace. More than that, it's an extremely large and complex TypeScript code base (probably larger and more complex than it needs to be), and partly as a result it's fairly resource inefficient: it often uses 1 GB of RAM or more, for a TUI. On top of that, I personally find the TUI overbearing and a little bit buggy, and the agent so full of features I don't really need (also mildly buggy) that it becomes hard to use and to remember how everything is supposed to work and interact. |
|
| ▲ | rbehrends 5 hours ago | parent | next [-] |
| I am more concerned about their, umm, cavalier approach to security. Not only is OpenCode permissive by default in what it is allowed to do, but it apparently tries to pull its config from the web (a provider-based URL) by default [1]. There is also this open GitHub issue [2], which I find quite concerning (worst case, it's an RCE vulnerability). [1] https://opencode.ai/docs/config/#precedence-order [2] https://github.com/anomalyco/opencode/issues/10939 |
| |
| ▲ | heavyset_go an hour ago | parent | next [-] | | It also sends all of your prompts to Grok's free tier by default, and the free tier trains on your submitted information; xAI can do whatever they want with that, including building ad profiles, etc. You need to set an explicit "small model" in OpenCode to disable that. | | |
| ▲ | integralid 43 minutes ago | parent [-] | | This. I work on projects that warrant a self-hosted model to ensure nothing is leaked to the cloud. Imagine my surprise when I discovered that even though the only configured model is local, all my prompts are sent to the cloud to... generate a session title. Fortunately I caught it during the testing phase. |
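The "small model" workaround heavyset_go describes can be sketched as a config fragment (this assumes OpenCode's JSON config file and its `small_model` key; the model names are illustrative, and pointing `small_model` at the same local model should keep session-title generation local as well):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "ollama/qwen3-coder",
  "small_model": "ollama/qwen3-coder"
}
```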
| |
| ▲ | ct520 4 hours ago | parent | prev | next [-] | | I second that. Have fun on Windows - an automatic no from me. https://github.com/anomalyco/opencode/issues?q=is%3Aissue%20... | | | |
| ▲ | woctordho 4 hours ago | parent | prev | next [-] | | RCE is exactly the point of coding agents. I'm happy that I don't need to launch OpenCode with --dangerously-skip every time. | |
| ▲ | TZubiri 3 hours ago | parent | prev | next [-] | | I assign a specific user to it, one which doesn't have much access to my files. Because what I want is complete autonomy. | |
| ▲ | iam_circuit 2 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | westoque 6 hours ago | parent | prev | next [-] |
| > The problem for me is that the development practices of the people that are working on it are suboptimal at best; they're constantly releasing at an extremely high cadence, where they don't even spend the time to test or fix things (or even build a proper list of changes for each release), and they add, remove, refine, change, fix, and break features constantly at that accelerated pace. This is what I notice with openclaw as well: there have been releases where they break production features. Unfortunately this is what happens when code becomes a commodity. Everyone thinks that shipping fast is the moat, at the expense of quality, because they know a fix can be implemented quickly in the next release. |
| |
| ▲ | siddboots 5 hours ago | parent | next [-] | | Openclaw has 20k commits, almost 700k lines of code, and it is only four months old. I feel confident that that sort of code base would have no coherent architecture at all, and also that no human has a good mental model of how the various subsystems interact. I’m sure we’ll all learn a lot from these early days of agentic coding. | |
| ▲ | girvo 2 hours ago | parent [-] | | > I’m sure we’ll all learn a lot from these early days of agentic coding. So far what I am learning (from watching all of this) is that our constant claims that quality and security matter seem to not be true on average. Depressingly. |
| |
| ▲ | heavyset_go an hour ago | parent | prev | next [-] | | We're still in the very early days of generative AI, and people and markets are already prioritizing quantity over quality. Quantity is irrelevant when it comes to value. All code is not fungible: "irrelevant code that kinda looks okay at first glance" might be a commodity, but well-tested, well-designed and well-understood code is what's valuable. | |
| ▲ | bredren an hour ago | parent | prev [-] | | Claude Code breaks production features and doesn't say anything about it. The product has just shifted gears with little to no ceremony. I expect that from something guiding the market, but there have been times where stuff changes, and it isn't even clear if it is a bug or a permanent decision. I suspect they don't even know. |
|
|
| ▲ | cpeterso 6 hours ago | parent | prev | next [-] |
| OpenCode's creator acknowledged that the ease of shipping has let them ship prototype features that probably weren't worth shipping and that they need to invest more time cleaning up and fixing things. https://x.com/thdxr/status/2031377117007454421 |
| |
| ▲ | rdedev 3 hours ago | parent | next [-] | | Uff. This is exactly what Casey Muratori and his friend were talking about in one of their more recent podcasts. Features that would never have been implemented because of time constraints now do get built thanks to LLMs, and now they have a huge codebase to maintain. | |
| ▲ | logicprog 5 hours ago | parent | prev | next [-] | | Well that's good to hear, maybe they'll improve moving forward on the release aspect at least. | |
| ▲ | j45 3 hours ago | parent | prev [-] | | What to release > What to build > Build anything faster |
|
|
| ▲ | arcanemachiner 5 hours ago | parent | prev | next [-] |
| I'm still trying to figure out how "open" it really is. There are reports that it phones home a lot [0], and there is even a fork that claims to remove this behavior [1]: [0] https://www.reddit.com/r/LocalLLaMA/comments/1rv690j/opencod... [1] https://github.com/standardnguyen/rolandcode |
| |
| ▲ | nikcub 5 hours ago | parent | next [-] | | The fact that somebody was able to fork it and remove behaviour they didn't want suggests that it is very open. That #12446 PR hasn't even been resolved to won't-merge, and the last change was a week ago (in a repo with 1.8k+ open PRs). | | |
| ▲ | drdaeman 3 hours ago | parent [-] | | I think there’s a conflict between “open” as in “open source” and “open” as in “open about the practices”, paired with the fact that we usually don’t review software’s source scrupulously enough to spot unwanted behaviors. Must be a karmic response from “Free” /s |
| |
| ▲ | nsonha 4 hours ago | parent | prev [-] | | so how is telemetry not open? If you don't like telemetry for dogmatic reasons then don't use it. Find the alternative magical product whose dev team is able to improve the software blindfolded | | |
| ▲ | heavyset_go an hour ago | parent | next [-] | | > Find the alternative magical product whose dev team is able to improve the software blindfolded The choice isn't "telemetry or you're blindfolded"; the other options include actually interacting with your userbase. Surveys exist, interviews exist, focus groups exist, fostering communities that you can engage with is a thing, etc. For example, I was recruited and paid $500 to spend an hour on a panel discussing what developers want out of platforms like DigitalOcean, what we don't like, and where our pain points are. I put the dollar amount there only to emphasize how valuable such information is from one user. You don't get that kind of information from telemetry. | |
| ▲ | ipaddr 3 hours ago | parent | prev [-] | | Or by testing it themselves. |
|
|
|
| ▲ | paustint 6 hours ago | parent | prev | next [-] |
| I recently listened to this episode from the Claude Code creator (here is the video version: https://www.youtube.com/watch?v=PQU9o_5rHC4) and it sounded like their development process was somewhat similar - he said something like their entire codebase has 100% churn every 6 months. But I would assume they have a more professional software delivery process. I would (incorrectly) assume that a product like this would be heavily tested via AI - why not? AI should be writing all the code, so why would the humans not invest in and require extreme levels of testing since AI is really good at that? |
| |
| ▲ | causal 3 hours ago | parent | next [-] | | I've gotta say, it shows. Claude Code has a lot of stupid regressions on a regular basis, shit that the most basic test harness should catch. | |
| ▲ | logicprog 6 hours ago | parent | prev [-] | | I mean, I'm slowly trying to learn lightweight formal methods (i.e. what stuff like Alloy or Quint does), behavior-driven development, more advanced testing systems for UIs, red-green TDD, etc., which I never bothered to learn as much before, precisely because agents can handle the boilerplate aspects of these things. That lets me focus on specifying the core features or properties I need for the system, or thinking through its behavior, information flow, and architecture, and the agent can translate that into machine-verifiable stuff, so that my code is more reliable! I'm very early on that path, though. It's hard! |
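As a concrete, framework-free illustration of the kind of property-based check being described here (a random-input generator plus invariants; no particular library is assumed, and all names are illustrative):

```typescript
// Generate a random integer array of random length (the "generator" half
// of a property-based test).
function randomArray(rng: () => number, maxLen = 20): number[] {
  const len = Math.floor(rng() * maxLen);
  return Array.from({ length: len }, () => Math.floor(rng() * 1000) - 500);
}

// Check two properties of sorting over many random inputs:
// it preserves length, and applying it twice equals applying it once.
function checkSortProperties(trials = 200): void {
  for (let i = 0; i < trials; i++) {
    const input = randomArray(Math.random);
    const once = [...input].sort((a, b) => a - b);
    const twice = [...once].sort((a, b) => a - b);
    if (once.length !== input.length) throw new Error("sort changed the length");
    if (JSON.stringify(once) !== JSON.stringify(twice)) throw new Error("sort is not idempotent");
  }
}

checkSortProperties();
```

Libraries like fast-check automate the generator and shrinking parts, but the core idea is just this loop: you state the property, and the machine hammers it with inputs you'd never write by hand.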
|
|
| ▲ | blks 6 hours ago | parent | prev | next [-] |
| Probably all the described problems stem from the developers using agentic coding, including the use of TypeScript, since these tools are usually more familiar with JS and JS-adjacent web development languages. |
| |
| ▲ | logicprog 6 hours ago | parent [-] | | Perhaps the use of coding agents encouraged this behavior, but it's perfectly possible to do the opposite with agents as well: use them to make it easier to set up and maintain a good testing scaffold for TUI stuff and a comprehensive test suite top to bottom, in a way maintainers may not have had the time/energy/interest to do before, or to rewrite in a faster and more resource-efficient language that you may find more verbose, be less familiar with, or find annoying to write. And nothing is forcing them to release as often as they do, instead of just having a high commit velocity. I've personally found AIs to be just as good at Go or Rust as at TypeScript, perhaps better, so I don't think anything forced them to go with TypeScript. I think they're just somewhat irresponsible devs. |
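One common shape for the kind of TUI testing scaffold mentioned above is rendering components to plain strings so CI can assert on them instead of driving a live terminal. A minimal sketch (all names here are hypothetical, not from OpenCode's code base):

```typescript
// A toy TUI component: a one-row status bar with left- and right-aligned text.
interface StatusBar {
  branch: string;
  tokensUsed: number;
}

// Render to a plain string of exactly `width` columns, so the output
// can be snapshot-tested without a terminal emulator.
function renderStatusBar(s: StatusBar, width: number): string {
  const left = ` ${s.branch}`;
  const right = `${s.tokensUsed} tok `;
  const pad = Math.max(1, width - left.length - right.length);
  return left + " ".repeat(pad) + right;
}

// The "test": assert on the rendered string directly.
const line = renderStatusBar({ branch: "main", tokensUsed: 1234 }, 40);
console.assert(line.length === 40, "status bar fills the row exactly");
console.assert(line.includes("main") && line.includes("1234 tok"), "content rendered");
```

Real TUI frameworks vary, but the design choice is the same: keep layout logic as pure string-producing functions, and the hard-to-test terminal I/O shrinks to a thin shell around them.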
|
|
| ▲ | thatmf 5 hours ago | parent | prev | next [-] |
| The value of having (and executing) a coherent product vision is extremely undervalued in FOSS, and IMO it's the difference between a project that succeeds long-term and the kind of sploogeware that just snowballs with low-value features. |
| |
| ▲ | rounce 5 hours ago | parent | next [-] | | > The value of having (and executing) a coherent product vision is extremely undervalued in FOSS Interesting you say this, because I'd say the opposite is true historically, especially in the systems software community and among older folks. "Do one thing and do it well" seems to be the prevailing mindset behind many foundational tools. I think this is why so many are/were irked by systemd. On the other hand, newer tools that are more heavily marketed and often have some commercial angle seem to be in a perpetual state of tacking on new features in lieu of refining their raison d'etre. | | | |
| ▲ | Aperocky 5 hours ago | parent | prev [-] | | negative values even. |
|
|
| ▲ | tshaddox 5 hours ago | parent | prev | next [-] |
| I’m a little surprised by your description of constant releases and instability. That matches how I would describe Claude Code, and has been one of the main reasons I tend to use OpenCode more than Claude Code. OpenCode has been much more stable for me in the 6 months or so that I’ve been comparing the two in earnest. |
| |
| ▲ | hboon 3 hours ago | parent [-] | | I use Droid specifically because Claude Code breaks too often for me. And then Droid broke too (but rarely), and I just stuck to not upgrading (like I don't upgrade WebStorm. Dev tools are so fragile) |
|
|
| ▲ | nico 2 hours ago | parent | prev | next [-] |
| > they're constantly releasing at an extremely high cadence, where they don't even spend the time to test or fix things Tbf, this seems exactly like Claude Code, they are releasing about one new version per day, sometimes even multiple per day. It’s a bit annoying constantly getting those messages saying to upgrade cc to the latest version |
| |
| ▲ | ctxc an hour ago | parent [-] | | Oh wow. I got multiple messages in a day and just assumed it was a cache bug. It's annoying how I always get that "claude code has a native installer xyz please upgrade" message |
|
|
| ▲ | zackify 5 hours ago | parent | prev | next [-] |
| Yeah, every time I want to like it, scrolling is glitched vs Codex and Claude. And there are various other things, like: why is this giant model list hard-coded for Ollama and other local methods, instead of loading what I actually have? On top of that, Open code go was a complete scam. It was not advertised as having lower-quality models when I paid, and GLM 5 was broken vs another provider, returning gibberish and being very dumb on the same prompt. |
| |
| ▲ | tmatsuzaki 4 hours ago | parent [-] | | I agree. Since tools like Codex let you use SOTA models more cheaply and with looser weekly limits, I think they’re the smarter choice. |
|
|
| ▲ | scuff3d 4 hours ago | parent | prev | next [-] |
| Drives me nuts that we have TUIs written in friggin TS now. That being said, I do prefer OpenCode to Codex and Claude Code. |
| |
| ▲ | cies an hour ago | parent [-] | | Why do you prefer it? I have a different experience, and want to learn. (I'm also hating on TS/JS: but some day some AI will port it to Rust, right?) |
|
|
| ▲ | grapheneposter 4 hours ago | parent | prev | next [-] |
| Yeah, I tried using it when oh-my-opencode (now oh-my-openagent) started popping off and found it highly unstable. I just stick with internal tooling now. |
|
| ▲ | alienbaby 3 hours ago | parent | prev | next [-] |
| It's hard not to wonder if they are taking their own medicine, but not quite properly. |
|
| ▲ | foobarqux 5 hours ago | parent | prev | next [-] |
| What is a better option? |
| |
| ▲ | logicprog 5 hours ago | parent | next [-] | | For serious coding work I use the Zed Agent; for everything else I use Pi with a few skills. Overall, though, I'd very highly recommend Pi plus a few extensions for any features you miss. It's also TypeScript, but it doesn't suffer from the other problems OC has IME. It's a beautiful little program. | | |
| ▲ | mmcclure 5 hours ago | parent [-] | | Big +1 to Pi[1]. The simplicity makes it really easy to extend yourself too, so at this point I have a pretty nice little setup that's very specific to my personal workflows. The monorepo for the project also has other nice utilities like a solid agent SDK. I also use other tools like Claude Code for "serious" work, but I do find myself reaching for Pi more consistently as I've gotten more confident with my setup. [1] https://github.com/badlogic/pi-mono/tree/main/packages/codin... |
| |
| ▲ | vinhnx 3 hours ago | parent | prev | next [-] | | I've been building VT Code (https://github.com/vinhnx/vtcode), a Rust-based semantic coding agent. Just landed Codex OAuth with PKCE exchange; credentials go into the system keyring. I build VT Code with Tree-sitter for semantic understanding and OS-native sandboxing. It's still early, but I'm confident it's usable. I hope you'll give it a try. | |
| ▲ | andreynering 5 hours ago | parent | prev [-] | | https://charm.land/crush | | |
| ▲ | rao-v 5 hours ago | parent [-] | | I tried crush when it first came out - the vibes were fun but it didn’t seem to be particularly good even vs aider. Is it better now? | | |
| ▲ | andreynering 5 hours ago | parent [-] | | Disclaimer: I work for Charm, so my opinion may be biased. But we did a lot of work on improving the experience: the UX, performance, and the actual reliability of the agent itself. I'd suggest you give it a try. | | |
| ▲ | rao-v 2 hours ago | parent [-] | | Will do thanks - any standout features or clever things for me to look out for? |
|
|
|
|
|
| ▲ | bakugo 5 hours ago | parent | prev [-] |
| Isn't this pretty much the standard across projects that make heavy use of AI code generation? Using AI to generate all your code only really makes sense if you prioritize shipping features as fast as possible over the quality, stability and efficiency of the code, because that's the only case in which the actual act of writing code is the bottleneck. |
| |
| ▲ | logicprog 5 hours ago | parent [-] | | I don't think that's true at all. As I said in a response to another person blaming this on agentic coding above, there are a very large number of ways to use coding agents to make your programs faster, more efficient, more reliable, and more refined, which also benefit from agents making the code-writing, research, data-piping, and refactoring process quicker and less exhausting. For instance: helping you set up testing scaffolding, handling the boilerplate around tests while you specify example features or properties you want tested, rewriting into a more efficient language, doing large-scale refactors to use better data structures or architectures, or letting you use a more efficient or reliable language that you don't know as well, or that you find to have too much boilerplate or compiler annoyance to deal with yourself. Then there are higher-level, more subjective benefits, such as helping you focus on the system architecture and data flow, and only zoom in on the particular algorithms or areas of the code base that are specifically relevant, instead of forever getting lost in the weeds of specific syntax and compiler errors, or looking up a bunch of API documentation that isn't important to the core of what you're trying to do. Personally, I find this idea that "coding isn't the bottleneck" completely preposterous. Getting the API documentation, the syntax, organizing and typing out all of the text, finding the correct places in the code base and understanding the code base in general, dealing with silly compiler and type errors, writing a ton of error handling, dealing with the inevitable and ineradicable boilerplate of programming (unless you're one of those people who believe macros are actually a good idea and would meaningfully solve this): all of these are a regular and substantial cost, even if you aren't writing thousands of lines of code a day. And you need to write code to get a sense of the limitations of the technology you're using and the shape of the problem you're dealing with, in order to come up with and iterate on a better architecture or approach. And you need to see your program running to evaluate whether its functionality and design are satisfactory and then iterate on that. So coding is the upfront cost you need to pay to even start properly thinking about a problem, and being able to get a prototype out quickly is very important. Also, I find it hard to believe you've never been in a situation where a simple change or refactor required updating 15 different call sites, each just variable or complex enough that editor macros or IDE refactoring capabilities couldn't handle it. That's not to mention that if agentic coding can make deploying faster, it can also make deploying the same amount at the same cadence easier and more relaxing. | | |
| ▲ | adithyassekhar an hour ago | parent [-] | | You're both right. AI can be used either for fast releases or for well-designed code. Don't say both: you're not making time, you're moving time between those two. Which one do you think companies prefer? Or if you're a consulting business, which one do you think your clients prefer? |
|
|