AntonyGarand 8 hours ago

I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.

Now, even though their parent company engages in some shitty practices with their other software (Claude Code), it's a stretch to assume this will also translate into making Bun worse. Being worried makes sense, but I remain optimistic about Bun.

Especially given how different the two contexts are: Claude Code is Anthropic's crown jewel, experiencing extreme growth, where any change can result in billing issues.

Bun is a JS runtime and, regardless of its growth, can focus on being the best runtime possible. It doesn't impact Anthropic's billing or bottom line, so they don't have to rush out patches in response to abuse the way they do with Claude Code.

It's unclear how it will pan out over the next few years, and it's still very early in the acquisition to tell whether anything will change, but I'm not concerned just yet.

stkdump 6 hours ago | parent | next [-]

It's interesting how quickly people buy the "abuse" line of thinking. We understood (and have known for a long time) that the large AI labs are not monetarily profiting from subscription users who make heavy use of their subscription. That is independent of which agent/harness is used. The fair/real price for profitable use is the pay-per-use token pricing.

These labs play the game of trying to kill competition in the harness space (because third-party harnesses risk commoditizing the underlying LLMs once they are all good enough), while playing a game of chicken with each other over how long they can burn money that way before they have to give up.

At some point they have to price their product fairly, and their only hope is to have killed all competition by then, which is of course a game that they seem to be losing. Useful models are getting smaller and cheaper to run every year, and it has hit a threshold at which we will see continued development of third-party harnesses even without the userbase of subscription users.

Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed. The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail. They will have to compete on merit alone, and that is much less profitable.

mediaman 4 hours ago | parent | next [-]

It's a big leap to go from "some users may be using large quantities of tokens" to "the labs are burning money on subs in an attempt to kill the competition."

Lots of businesses have subscription programs in which a small number of users are money losers, but which in aggregate make money.

It's not even obvious that the labs are losing a lot of money on even a minority of users. The usage caps are fairly aggressive for Anthropic, and a cursory analysis of the likely actual cost of serving tokens shows these are high-margin products at the API level and unlikely to be unprofitable within the usage constraints provided to subscribers.
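
To make that cursory analysis concrete, here's a rough sketch; every number below is an assumption for illustration, not a measured figure:

    // Hypothetical serving-cost arithmetic; all numbers are assumptions.
    const gpuHourCost = 2.0;        // assumed $/GPU-hour rental price
    const tokensPerSecond = 1_000;  // assumed aggregate throughput per GPU
    const costPerMillionTokens =
      (gpuHourCost / (tokensPerSecond * 3_600)) * 1_000_000;
    console.log(`~$${costPerMillionTokens.toFixed(2)} per million tokens`);
    // ~= $0.56 per million tokens under these assumptions; API list
    // prices of several dollars per million would imply wide margins.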

I do think subscription models make commercial sense because users want predictable costs, and it's a club good in which marginal token cost is zero, which helps consolidate their customers' purchasing volume with one provider. But that's a different claim than them serving it unprofitably to kill competition.

Also, they (Anthropic) are transitioning many of their enterprise customers to API consumption billing anyway.

echelon 2 hours ago | parent [-]

I work in the video AI world.

We gave up on subscriptions long ago. They're rinky-dink and get you a paltry amount of utilization before they run out.

The per day per seat costs can exceed $1000. This is already normal for studios, and it's already producing positive ROI.

There's simply no way to price video any other way than by usage. I suspect the same will come for everything.

ashdksnndck 5 hours ago | parent | prev | next [-]

> Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.

I thought the prime bet was that the winning lab, the one that reaches takeoff through recursive self-improvement, will create a galactic superintelligence. Not saying I believe this, but the people running the labs do. Under this scenario, if you are a few months behind at the pivotal time, you might as well not exist at all.

pocksuppet 6 minutes ago | parent | next [-]

That's just what they told the gullible investors to get money.

zem 5 hours ago | parent | prev | next [-]

only if said galactic superintelligence takes immediate steps to kill all its potential competitors, or hoover up all the world's resources, or some other aggressively zero-sum thing. otherwise I don't see what difference it makes down the line if you have the second superintelligence rather than the first.

and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establishing and following its own goals.

dullcrisp 44 minutes ago | parent | next [-]

Well, no, because no one is going to be coming in to work to build the next AI model after the Singularity.

We’ll all be bblbrvkxn46?/4!gfbxf’mgv5fhxtgcsgjcucz to buvtcibycuvinovrYdyvuctYcrzuvhxh gcuch7…:!

ethin 4 hours ago | parent | prev | next [-]

This is also assuming that AGI is even possible. So far there is no evidence that this is actually doable over anything but billions of years (and even then we have no idea how nature really managed it).

Edit: Meant to say AGI (superintelligence didn't make sense). Superintelligence is undefinable at the moment, so even considering whether it's possible is more of a philosophical exercise/sci-fi thought experiment than anything else.

zem 4 hours ago | parent | next [-]

oh absolutely, no argument there, the case for AGI is pretty weak. I was just saying that I am even more sceptical that any of this is a "first or nothing" scenario - that is one of my biggest pet peeves about the entire tech sector.

josephg 3 hours ago | parent | prev [-]

ASI is the acronym you’re looking for. It stands for Artificial Superintelligence.

Arguably it’s already here. ChatGPT knows more than any human who has ever lived. It can carry out millions of conversations at once. And it has better working memory (“context”) than humans. And it can speak and write code much faster than humans.

Humans still have some advantages: Specialists are smarter than chatgpt in most domains. We’re better at using imagination. We understand the physical world better. But it seems like we’re watching the gap close in real time. A few years ago chatgpt could barely program. Now you can give it complex prompts and it can write large, complex programs which mostly work. If you extrapolate forward, is there any good reason to think humans will retain a lead?

marcus_holmes 7 minutes ago | parent [-]

ChatGPT can only respond to a prompt, and in the context of that prompt. It has no continuous awareness of anything. That isn't superintelligence. We are easily fooled because we have stupid monkey brains.

fwipsy 4 hours ago | parent | prev | next [-]

Anthropic/OpenAI aren't planning to have their superintelligence take over the world, but they're still afraid that someone else will do it.

sroussey 5 hours ago | parent | prev | next [-]

One could argue that AI has already started to hoover up all the world’s resources. AI buildout as a percent of GDP is already high and still rising.

munk-a 5 hours ago | parent [-]

Don't blame machines for our folly. This is just standard bubble behavior.

pocksuppet an hour ago | parent [-]

What if that's just the mechanism by which the machines take over the world?

Natural selection doesn't care why something replicated a lot.

zozbot234 4 hours ago | parent | prev [-]

If OpenAI has the second superintelligence they have to merge with the first and cooperate. It's a provision in their charter.

airstrike 4 hours ago | parent [-]

I'm not sure anyone thinks their charter carries much weight at this point.

stkdump 5 hours ago | parent | prev | next [-]

I don't think this race-to-superintelligence idea should be taken too seriously. It is great for headlines and gets people's imaginations going. It is mostly a marketing gimmick.

I look at superintelligence this way: software engineering used to be considered among the most mentally demanding jobs one could have. And in this field, more and more people are giving up large parts of their job and becoming approximately product managers, letting the machine do the engineering part. So we are about there. Who cares that there are some puzzles in some "synthetic" benchmark in which humans outsmart AIs?

ahepp 5 hours ago | parent | prev [-]

One thing I don’t understand about this viewpoint (which I understand isn’t your own): why does one benefit so tremendously from getting there a month before competitors? I’m sure having a month of superintelligence with no competition would be lucrative, but do they think achieving superintelligence first will impede competitors from also achieving it a month later?

Cpoll 5 hours ago | parent | next [-]

A week of superintelligence should be enough to take over the world, or at least sabotage your competitors. And even if someone else gets there a week later, they'll be permanently one week behind the curve (until the AI hits some physical limit, I suppose).

But that's all just sci-fi worldbuilding.

charcircuit 4 hours ago | parent [-]

>they'll be permanently one week behind the curve

What if the competitor's architecture is able to produce tokens twice as fast? What if the competitor secures a one-month exclusivity deal on Nvidia's next generation?

greycol 4 hours ago | parent | prev | next [-]

That's assuming it can't super-hack all computer systems and cripple competing SI incubation to at least extend its lead time indefinitely.

The assumption would be that, in its lead time, the superintelligence at least takes a small lead and undermines any paths a later-arriving superintelligence could take to interfere with its goals, which naturally includes stopping competing SIs from becoming more powerful in a way that could undermine it.

So, assuming the superintelligence has goals and works towards them, it will initially be trying to solidify its own power; iterating on that small lead, assuming it's the smartest superintelligence [1], should be enough to win. The scary part is that, assuming no guardrails [2], it's going to be as ruthless as possible in achieving those goals. That does not necessarily mean it will appear ruthless in achieving those goals, just as ruthless as it judges optimal.

1. Being so smart, one of its chores would have been reinvesting in making itself smarter than the competition, and being smarter than its makers, it has a good chance of actually enacting those self-improvements.

2. In the internal balancing of goals sense not the don't feed the mogwai after midnight sense.

Philpax 5 hours ago | parent | prev [-]

A month with a superintelligence at your disposal could be quite impactful, especially if you're willing to break the law / normal operating decorum in pursuit of protecting what you have. A superintelligence, if wielded so, could destroy your competitors in a great many ways, ranging from the relatively benign approach of outcompeting them to exploiting them and tearing them apart from the inside.

A genuine superintelligence is a very, very scary thing to have under the control of one person or organisation.

remexre 4 hours ago | parent [-]

If I interpret "a machine superintelligence" as "a classroom of 300-IQ humans," I'm not really sure how this is true. You still have material and energy constraints; you can't think your way out of those.

Anon1096 5 hours ago | parent | prev | next [-]

> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.

I don't think this is "understood" or "known" to anyone except Ed Zitron. Subscription plans like Claude Code also have rolling usage limits, so they could be profitable. Inference is very cheap, and unless you're using OpenClaw, no one is actually maxing out the usage window at all times. I'm sure that in aggregate the subs are not money furnaces.
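
For illustration, a rolling window is a simple way to bound worst-case cost per subscriber. A minimal sketch of such a cap, assuming a token-based budget (names and numbers here are hypothetical, not Anthropic's actual implementation):

    // Hypothetical rolling-window usage cap (illustrative sketch only).
    class RollingUsageCap {
      private events: { at: number; tokens: number }[] = [];
      constructor(private windowMs: number, private maxTokens: number) {}

      tryConsume(tokens: number, now = Date.now()): boolean {
        // Forget usage that has aged out of the rolling window.
        this.events = this.events.filter(e => now - e.at < this.windowMs);
        const used = this.events.reduce((sum, e) => sum + e.tokens, 0);
        if (used + tokens > this.maxTokens) return false; // cap reached
        this.events.push({ at: now, tokens });
        return true;
      }
    }

    // e.g. at most 5M tokens per rolling 5-hour window:
    const cap = new RollingUsageCap(5 * 3600 * 1000, 5_000_000);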

stkdump 4 hours ago | parent [-]

Then explain why they started banning all third-party harnesses, including those that work through Claude Code, if it still makes them money. Are they cutting off profit for no good reason?

I think there were reasons to doubt that heavy subscription users are unprofitable before they did that. OpenClaw was just the tip of the iceberg.

Why don't they make token pricing dynamic if that were the case? It would then allow heavy users to get even more for their money than with the current subscription model, where they can't adjust to current infra availability.

It may be that "in aggregate" sub users are not (yet) a losing business. But in all fairness, the more useful AI gets, the more it will be used. And the more it is used, the harder it becomes to make subs cheaper than token pricing. The only counterweight is new light users, but those will also become heavy users over time, the more useful it becomes for them. And at some point it will be hard to onboard light users in the first place, because the laggards will require even more intelligence and value to win them over.

nofriend 4 hours ago | parent | prev | next [-]

> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.

"profit" is a weird concept in the software business. it might be true that there is an opportunity cost to these users, either because they displace other potential users by using up capacity, or because they would be willing to pay more if forced. but I don't believe that anyone is losing money on inference costs on any of their plans.

> At some point they have to price their product fairly

they are competing in a market. if most of their costs were inference then this would be a good thing, because everyone would have roughly the same prices, so as long as they had the best model they would win. in fact model development costs eclipse the cost of inference, and development is something that non-frontier labs get much cheaper by distilling from the frontier companies.

> They will have to compete on merit alone, and that is much less profitable.

that's not really true. google won search on merit alone, and were massively successful as a result. the trick is that everyone from the poorest shmuck to the richest businessman uses google, so they win through scale. in ai, google and openai are making a bet that they can do the same thing. there's only really room for one winner at this game, even two is stretching it, so anthropic has to win by being the smartest model that only high end businesses use. that's a very risky bet.

solenoid0937 an hour ago | parent | prev | next [-]

If you were right, Anthropic's ARR would be going down, but it's not. They just surpassed $30B, up from $14B two months ago.

AussieWog93 3 hours ago | parent | prev | next [-]

>Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.

Honestly, I don't think it's that cut and dried. Their bet is that the marginal utility of having a smarter model more than makes up for the cost of the additional high-end hardware.

And honestly, if you look at their frankly insane revenue growth since Opus 4.5 was released, they were right.

>The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail.

I think we're already past this point, honestly. They lowered usage limits, blocked OpenClaw, then tried to remove Claude Code from the $20/mo plan. They have always had low market share in the consumer chatbot market and don't seem to care about catching up to OpenAI there.

zozbot234 4 hours ago | parent | prev | next [-]

> These labs play the game of trying to kill competition in the harness game

Anthropic and Google are arguably playing that game. OpenAI's Codex CLI is open source and entirely optional for use of the GPT Codex models.

stkdump 4 hours ago | parent [-]

OpenAI just has more runway and has convinced its investors that it is as much about hardware (Stargate) as it is about anything else. So they think they can afford (and have to afford) keeping the software side more open, so as not to make themselves look stupid. Google is more of a down-to-earth company with other business to lose and isn't as bought into it.

mannanj 4 hours ago | parent | prev | next [-]

What about the data they are accumulating, for non-training purposes? That data isn't of negligible value; the "subscription cost" is really a data-harvesting opportunity. Don't be naive enough to think our data isn't incredibly valuable.

cyanydeez 4 hours ago | parent | prev [-]

The thing is, the harness _is_ the model at the end of the day:

https://en.wikipedia.org/wiki/Turtles_all_the_way_down

antonvs 3 hours ago | parent [-]

The source code of Claude Code and Gemini CLI contradicts that.

smcl 7 hours ago | parent | prev | next [-]

> Before the acquisition, Bun had to figure out how to monetize at some point.

I think it is insane that people got into a situation where they had committed to a JavaScript runtime that had to "figure out how to monetize at some point". It is also bizarre that some people are still hopeful despite it being acquired by one of the most enormously unprofitable companies in one of the most enormously unprofitable sectors of our industry.

ahepp 5 hours ago | parent | next [-]

Are there any situations you would compare this to historically?

To me, the obvious comparison seems to be Docker. Their tooling revolutionized software development and made cgroups and containerization accessible to the masses. Yet they generally seem to have failed to extract payment from users, even with managed service opportunities.

It seems to me that there are substantial obstacles to monetizing a project licensed under even a permissive OSS license like MIT. I think this is especially true for projects that don't have managed-service / "open core" potential.

Any gratis project you rely on runs the risk that it will no longer be provided gratis. That alone is not a strong basis for making decisions.

switz 5 hours ago | parent | next [-]

It's a shame that VCs have corrupted a $200MM/year business into being perceived as a failure. Who cares if the VCs didn't get a large return, or if the outsized impact of the software didn't quite fully capture the value created? $200MM/yr without aggressive R&D or operational costs could be an incredibly healthy business.

Maybe we should stop trying to build so many billion dollar/year businesses and work on more sustainable models.

antonvs 3 hours ago | parent [-]

I haven’t followed Docker’s case in particular, but how much investment was required to get it to that point? If it’s a case of “How do you become a millionaire? Start as a billionaire and invest in Docker”, then the perception may have some basis.

pjmlp 5 hours ago | parent | prev [-]

The audio and 3D card pioneers in the PC world.

The ones that were first to market all went bankrupt, or were acquired by others that came later to the scene.

marshray 4 hours ago | parent [-]

1. At least 99% of all species that ever lived on Earth are now extinct. I.e., that's life.

2. "But for a beautiful moment in time we created a lot of value for shareholders."

pocksuppet 5 minutes ago | parent [-]

Failure for those species though.

atonse 6 hours ago | parent | prev | next [-]

> I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point".

Why? What's the risk? It's open source. Also, speaking of open source, we are happy to commit to open source projects that have no monetization, nor any plans to ever monetize.

enedil 3 hours ago | parent [-]

I think the parent commenter meant that what's insane is that a JS runtime is not treated as a utility that should never be monetized. It's as if the GCC developers hadn't figured out how to monetize but were willing to at some point.

spankalee 5 hours ago | parent | prev | next [-]

I partially agree with you, but I also think that it's good that people can make something they want, that seems to have no monetization path, and have some hope of being bailed out.

It's not great that the search for profit will usually corrupt projects, but the other most common option is that the projects don't exist at all. It's very rare (or it used to be before this year) that someone can do something like this on their own with no compensation. So now at least Bun exists.

tracker1 3 hours ago | parent [-]

I'm with you... I think it's helped Node.js a lot to have Bun and Deno implementing new features that help push Node forward. I think it's been a bit of a miss not integrating npm into Node along the way... mostly in that npm is a separate org from Node, which is its own issue... I kind of like JSR a lot myself, so I hope it continues to pick up traction.

animuchan 4 hours ago | parent | prev | next [-]

It's a bit insane, but the cost of switching to regular Node.js is low (for all but the most Bun-specific projects).
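
For most codebases the delta is a handful of entry points. A rough sketch of the kind of change involved, assuming a project that uses Bun's built-in HTTP server (the Node equivalent shown is just one option):

    // Bun-specific entry point (e.g. server.bun.ts):
    Bun.serve({
      port: 3000,
      fetch(req) {
        return new Response("hello");
      },
    });

    // Roughly equivalent plain-Node entry point (e.g. server.node.ts):
    import { createServer } from "node:http";
    createServer((req, res) => res.end("hello")).listen(3000);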

All valid points though. I'm pessimistic about Anthropic continuing to divert resources to these side quests when tough times hit (which might be in a week for all we know).

motbus3 7 hours ago | parent | prev [-]

I know people say it is unprofitable, but I wonder if there is a way to verify whether it truly is. I won't give any details, but I worked for a giant company which was barely making money YoY, yet somehow the bonuses for the heads got bigger and bigger, based on a proxy metric related to profit.

There are way too many ways companies arrange to pay themselves and never be profitable to avoid taxes.

bombcar 6 hours ago | parent [-]

"Profitable" is the wrong metric, really, it's whether it is sustainable - can development continue indefinitely given the current financial situation?

motbus3 5 hours ago | parent [-]

I've been thinking about your comment... It set a lot of wheels spinning...

Tl;dr: I think they don't care about what will happen to the company in the medium or long term.

---

Are any of those companies looking for stability or sustainability?

I have the impression they are completely aware of the diminishing-returns effect, and they will exploit the moment to the fullest of their capabilities, promising ever more absurd things as the results get ever smaller.

I do agree there is a considerable improvement compared to a year ago, but it's definitely not as ground-shaking as the jump from the year before that.

Many of the promises turn out to be empty, or at least come with a huge number of asterisks attached.

I think there are flags everywhere, from minor things such as everyone using different benchmarks, to plotting performance differences with weird choices of axes and ordering.

Other mild things, such as promoting that the "system" created a compiler from scratch, when said compiler can't even run a hello world, alongside claims of output binaries running 300x faster than the counterparts.

(I am aware the agentic benchmark was misused to build a compiler, but there was an active choice in how to tell the story. Given other moves, I am not quite sure I believe it was an accident.)

There are other red flags such as people rolling back to previous versions of models because they can't get the new one to work properly.

Other situations, such as the claims of having such a "dangerous" model, which seem to be more of a benchmark trick than real results, with <100B models able to replicate the benchmark results just by changing the methodology.

I don't think we are yet at the turning point where everything collapses, but my feeling is that we are going in that direction unless something comes along that makes these models much more intelligent AND efficient.

It makes sense not to hire a person when you can have a machine do the same job for the same price. But AI prices are rising faster than the returns, so the margins that would make it a sensible choice are getting smaller.

That all said, I repeat that I think they are completely aware of this effect. Not because they understand the technology, but because this happens more often than not. Because of this, I don't think they care about being sustainable. All of them smell that they will take the money and leave the ship to sink.

CharlieDigital 6 hours ago | parent | prev | next [-]

You might be underestimating the effect that corporate policies and culture have on the product.

Some teams now have a push to go all-in on AI; don't even look at the code. I've seen this in action, and the results are probably what you'd expect: works great at some level, but as complexity accumulates (especially across a team with different "technical vocabularies"), the end result is compounding complexity and mistakes, with no person or team knowing how the software actually works.

No human testing of software or QA; just unit + integration tests, plus giving AI control over the browser/tool. Yes, this is how some teams are moving forward now. So some of this may be that Anthropic's culture will end up causing shifts in how the Bun team operates and thinks.

If this type of culture and mindset becomes the norm, I think either the models have to get a lot better or the software quality is going to decline.

Matt Pocock has a great talk here: https://youtu.be/v4F1gFy-hqg

    "Code is not cheap. Bad code is the most expensive it's ever been. Because if you have a codebase that's hard to change, you're not able to take advantage of all of the bounty that AI can offer.  Because AI in a good codebase actually does really, really well."

Once bad code starts to compound on itself, it's going to be really hard to break out of it.
shimman 6 hours ago | parent [-]

I don't disagree with the notion, but what is up with the dev community championing influencers who hold no real jobs and just sell courses where they reread the docs to you at $500 a pop (this gent, $1k a pop)?

jazzypants 6 hours ago | parent | next [-]

I'm not the biggest fan of the influencer community, but I think that it mostly boils down to many learners preferring video content over written material. I've gotten used to reading documentation now, but I remember it being extremely intimidating when I was first learning. It was nice to have someone break stuff down into simple terms for me.

To be fair to Matt Pocock, I know he worked for Vercel and Stately for a while before doing content full time. I can't say anything about his AI content, but I did some of his free lessons when I was learning TypeScript. They included interactive editor lessons and such, so it wasn't just empty videos and fluff like some of the influencers.

shimman 5 hours ago | parent | next [-]

No, look into his actual work history (sorry, being a paid marketer isn't working as a dev). He was only a dev consultant for about two years before pivoting into full-time influencer. Trust me, I know more about these types than any normal human should.

epolanski 5 hours ago | parent | prev [-]

> but I think that it mostly boils down to many learners preferring video content over written material

99% of the time that's not learning, but productivity porn.

bdangubic 6 hours ago | parent | prev [-]

I have followed a simple rule in my career: if you offer training/courses, I don't listen to anything you say.

I consider this a hard rule, like ad-blocking (it is exactly that: blocking ads, since each talk is an ad, or an ad in disguise).

pton_xd 6 hours ago | parent | prev | next [-]

> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.

Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code. Not for the benefit of the JavaScript community at large. That sounds obvious, but I guess it has to be pointed out. Outcomes will follow incentives in the long run.

aylmao 5 hours ago | parent | next [-]

Bun is not a "product" at Anthropic, though; it's a tool for its developers to build products. IMO, as long as it remains that way, the incentives for its developers will remain fairly aligned with the incentives of people who use it outside the company.

A good example is React. Facebook's interest is that React be performant (website performance is correlated with time spent on said website), reliable (also correlated to time spent), quick to build on (features ship faster) and popular (helps new recruits hit the ground running). That's fairly well aligned with what developers outside of Facebook want too.

Sure, since Facebook's server is written in Hack, it means we'll never get a truly full-stack React, and instead we'll need third parties for the back-end (Next.js, TanStack Start, etc.). But Facebook building React also means it will always be someone's job to make sure the framework works well in codebases with millions of modules.

This is all independent of any shitty practices with their other software. And this has been the case for decades at this point.

crote 4 hours ago | parent [-]

> Bun is not a "product" at Anthropic though, it's a tool for its developers to build products.

Doesn't that just make it even worse? If Anthropic can't even afford to spend the engineering effort to make sure their core product functions properly, why should we assume that they'll invest serious resources into what is essentially some upper manager's loss-leader pet project?

If Anthropic is financially hurting, why shouldn't they put Bun on the bare minimum of life support?

aylmao 3 hours ago | parent | next [-]

Because they need it to work, so that everything built on it works too.

Property developers sell you the apartment, not the elevator room, the electrical room, the mechanical room, etc. They will make all sorts of controversial decisions with the apartments: odd layouts, ugly flooring, weird pricing, tacky finishes, etc. The "core product" is the money-maker; that's where the egos clash, priorities change, and where they try to charge as much as possible while cutting costs as much as they can.

No one is buying the electrical room, though. It just has to work. Yes, you'll make it as cheaply as possible: no flooring, no paint on the walls, no interior-designer meetings to argue over the right tone of beige. But it'll do what it needs to do. It'll keep the lights on. Otherwise you can't sell any of the apartments.

Same thing with Facebook: there's an active incentive to introduce all sorts of dark patterns in their app, to ignore certain bugs, to change things unnecessarily, etc. But none of those incentives are present with React. The incentive is to keep React reliable and performant, and to keep the team lean. I'm sure it's similar with Bun at Anthropic.

And to be clear, Anthropic definitely spends most of its engineering effort making sure their core product "functions properly". That "functions properly" is just different for us as clients vs. them as a corporation. There is high overlap, since they need to keep us clients happy. But a well-functioning product at a company is one that leads to money. I'm sure very capable engineers are pushing the OKRs they care about.

tracker1 3 hours ago | parent | prev [-]

I think they're doing too much vibe coding and not enough QC... I don't think it's a matter of not having the resources so much as running while juggling multiple sets of scissors.

antonvs 2 hours ago | parent | prev [-]

> Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code.

I’m unclear about this. What’s the business case? I use Gemini CLI a lot, which runs on Node, and I can’t see anything that would be improved by using a different JS runtime. It’s not something you notice as a user. Node is mature, stable, and perfectly fit for the purpose.

If Anthropic were public and if these decisions were comprehensible to the average investor, an acquisition like this ought to cause the stock to plummet. Luckily for the people involved, there are no constraints like that in the current market.

semiquaver 4 hours ago | parent | prev | next [-]

I disagree with the overall premise: Before the acquisition, GitHub had to figure out how to monetize at some point. Now, even though their parent company engages in some shitty practices with their other software (Embrace, Extend, Extinguish; MS Windows), it's a stretch to assume this will also translate into making GitHub worse. Being worried makes sense, but I remain optimistic about GitHub.

htrp 6 minutes ago | parent | next [-]

One 9 of uptime later

oasisaimlessly 2 hours ago | parent | prev | next [-]

You dropped this: </sarcasm>

raincole 2 hours ago | parent | prev [-]

I think you have some nostalgia about GitHub's stability before the acquisition.

tabbott 2 hours ago | parent | prev | next [-]

Funding to pay the core team (via revenue/grants/VC) requires a lot of leadership attention for any independent company that is developing an open-source project as its main activity. Yet more leadership attention goes into other administration (taxes/hiring/legal/policies/etc.).

I don't have any direct context, though I have run an open-source business (Zulip) for the last decade wearing both the CEO and technical lead hats.

But my guess is that the Bun leadership team may well be spending twice as much of their time working on the technology as they reasonably could have as an independent venture-funded company, just because they don't have to do all that other stuff anymore. (There's of course probably a significant bias in that focus towards whatever Anthropic needs from Bun, only some of which other users may care about.)

So I agree. Personally, I would not be concerned unless you see the tell-tale signs of the team being reassigned to other priorities at the buyer, which tends to be obvious because, say, the GitHub project activity falls off a cliff.

remote-dev 7 hours ago | parent | prev | next [-]

This is a good take, and I hope you're right.

One favorable way to phrase it for Anthropic is that they acquired Bun because CC and other internal tooling depended on it so heavily, and they questioned its future as a purely OSS project.

It remains to be seen how things will actually unfold.

htrp 4 minutes ago | parent | next [-]

you can own your upstream supply chain while simultaneously being less responsive to user pain points

sroussey 5 hours ago | parent | prev | next [-]

Own your supply chain. Reduces risk.

troupo 4 hours ago | parent | prev [-]

Anthropic bought actual engineers to undo the slop their vibe-coders produce with reckless abandon: https://x.com/jarredsumner/status/2026497606575398987

However, these engineers, too, are now starting to vibe-code with reckless abandon: https://x.com/jarredsumner/status/2048434628248359284 and https://x.com/jarredsumner/status/2049780223311548729

dandellion 7 hours ago | parent | prev | next [-]

> it's a stretch to assume this will also translate into making Bun worse

For me it's far from a stretch, in fact it matches closely a pattern that I've seen repeated many times over at this point.

saghm 6 hours ago | parent | prev | next [-]

> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.

Can you point to any examples of a company with shitty practices buying one without shitty practices that didn't end up with the shitty practices diffusing through the newly-acquired company within a couple of years?

atonse 6 hours ago | parent [-]

I'm not the parent poster, but this is why I still stick to looking at the people...

If you start seeing the people that created bun leaving Anthropic, then I'd probably start to worry. And I haven't seen any sign of that yet.

overgard 4 hours ago | parent | prev | next [-]

> I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.

Incidentally, Anthropic needs to figure out how to monetize at some point too.

antonvs 2 hours ago | parent [-]

It’s organizations figuring out how to monetize all the way up.

andai 5 hours ago | parent | prev | next [-]

What came to my mind is Windows.

Regardless of what else is going on, the kernel is a separate team and has very strong incentives to remain competent and sane.

jmspring 7 hours ago | parent | prev [-]

Nope. The need to monetize, and the fact that an acquihire costs money, is exactly why people should be concerned about relying on a specific runtime.