Claude Code's source code has been leaked via a map file in their NPM registry(twitter.com)
888 points by treexs 6 hours ago | 486 comments
bkryza 5 hours ago | parent | next [-]

They have an interesting regex for detecting negative sentiment in users' prompts, which is then logged (explicit content): https://github.com/chatgptprojects/claude-code/blob/642c7f94...

I guess these words are to be avoided...

joeblau 31 minutes ago | parent | next [-]

We used this in 2011 at the startup I worked for. 20 positive and 20 negative words was good enough to sell Twitter "sentiment analysis" to companies like Apple, Bentley, etc...

BoppreH 4 hours ago | parent | prev | next [-]

An LLM company using regexes for sentiment analysis? That's like a truck company using horses to transport parts. Weird choice.

lopsotronic 44 minutes ago | parent | next [-]

The difference in response time - especially versus a regex running locally - is really difficult to express to someone who hasn't made much use of LLM calls in their natural language projects.

Someone said 10,000x slower, but that's off - in my experience - by about four orders of magnitude. And that's the average; it gets much worse.

Now personally I would have maybe made a call through a "traditional" ML library (scikit-learn, numpy, spaCy, fastText, sentence-transformers, etc.), but - for me anyway - that whole stack is Python. Porting all that to TS might be a maintenance burden I don't particularly feel like taking on. And in client-facing code I'm not sure it's even possible.

cyanydeez 40 minutes ago | parent [-]

So, think of it like a businessman: you don't really care if your customers swear or whatever, but you know it'll generate bad headlines, so you gotta do something. Just like a door lock isn't designed to stop a master criminal, you don't need to design your filter for some master swearer; no, you design it well enough that it gives the impression that further tries are futile.

So yeah, you do what's less intensive on the CPU, but also, you do what's enough to prevent the majority of the concerns where a screenshot or log ends up showing blatantly "immoral" behavior.

true_religion 34 minutes ago | parent [-]

This door lock doesn’t even work against people speaking French, so I think they could have tried a mite harder.

makeitrain a few seconds ago | parent | prev | next [-]

Don’t worry, they used an LLM to generate the regex.

stingraycharles 4 hours ago | parent | prev | next [-]

Because they want it to be executed quickly and cheaply without blocking the workflow? Doesn’t seem very weird to me at all.

_fizz_buzz_ 3 hours ago | parent | next [-]

They probably have statistics on it and saw that certain phrases happen over and over so why waste compute on inference.

crem 20 minutes ago | parent | next [-]

More likely their LLM Agent just produced that regex and they didn't even notice.

mycall 3 hours ago | parent | prev [-]

The problem with regexes is multi-language support, and how much the regex will bloat if you want to support even 10 languages.

doublesocket 2 hours ago | parent | next [-]

Supporting 10 different languages in regex is a drop in the ocean. The regex can be generated programmatically and you can compress regexes easily. We used to have a compressed regex that could match any placename or street name in the UK in a few MB of RAM. It was silly quick.

astrocat 44 minutes ago | parent | next [-]

Woah. This is a regex use I've never heard of. I'd absolutely love to see a writeup on this approach - how it's done and when it's useful.

benlivengood 4 minutes ago | parent [-]

You can literally | together every street address or other string you want to match in a giant disjunction, and then run a DFA/NFA minimization over that to get it down to a reasonable size. Maybe there are some fast regex simplification algorithms as well, but working directly with the finite automata has decades of research and probably can be more fully optimized.
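
The disjunction-plus-minimization idea can be sketched with a trie: merging shared prefixes is a cheap approximation of the DFA minimization described above. This is a toy JavaScript version; the function names and the word list are illustrative, not from any real codebase.

```javascript
// Build a regex from a word list by merging shared prefixes in a trie,
// rather than naively OR-ing the words together.
function escapeRegex(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function buildTrie(words) {
  const root = {};
  for (const word of words) {
    let node = root;
    for (const ch of word) node = node[ch] = node[ch] || {};
    node[''] = true; // end-of-word marker
  }
  return root;
}

function trieToPattern(node) {
  const keys = Object.keys(node).filter((k) => k !== '');
  if (keys.length === 0) return ''; // leaf: a word ends here
  const alts = keys.map((k) => escapeRegex(k) + trieToPattern(node[k]));
  const body = alts.length === 1 ? alts[0] : '(?:' + alts.join('|') + ')';
  // If a word also ends at this node, the remainder is optional.
  return '' in node ? '(?:' + body + ')?' : body;
}

function wordListToRegex(words) {
  return new RegExp('\\b' + trieToPattern(buildTrie(words)) + '\\b', 'i');
}

const re = wordListToRegex(['wtf', 'wtaf', 'ffs', 'terrible', 'terribly']);
// re.source is "\b(?:wt(?:f|af)|ffs|terribl(?:e|y))\b": shared prefixes
// are emitted once, so the pattern grows far slower than the word list.
```

A real DFA minimizer would also merge shared suffixes, compressing further; something along those lines is presumably how every UK placename fit in a few MB.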

cogman10 an hour ago | parent | prev [-]

I think it will depend on the language. There are a few non-latin languages where a simple word search likely won't be enough for a regex to properly apply.

TeMPOraL 3 hours ago | parent | prev | next [-]

We're talking about Claude Code. If you're coding and not writing or thinking in English, the agents and people reading that code will have bigger problems than a regexp missing a swear word :).

MetalSnake 3 hours ago | parent | next [-]

I talk to it in a language other than English, but I have rules that everything in code and documentation stays in English; only conversation with me uses my native language. Why would that be a problem?

ekropotin 2 hours ago | parent [-]

Because 90% of the training data was in English, and therefore the model performs best in that language.

foldr 2 hours ago | parent [-]

In my experience these models work fine using another language, if it’s a widely spoken one. For example, sometimes I prompt in Spanish, just to practice. It doesn’t seem to affect the quality of code generation.

ekropotin 2 minutes ago | parent | next [-]

It’s just a subjective observation.

It just can’t be the case, simply because of how ML works. In short, the more diverse, high-quality texts with reasoning-rich examples a language has in the training set, the better the model performs in that language.

So unless the Spanish subset had much more quality-dense examples to make up for the smaller volume, there is no way the quality of reasoning in Spanish is on par with English.

I apologise for the rambling explanation; I’m sure someone with ML expertise here can explain it better.

adamsb6 an hour ago | parent | prev [-]

They literally just have to subtract the vector for the source language and add the vector for the target.

It’s the original use case for LLMs.
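
For what it's worth, the arithmetic alluded to here is the classic word2vec analogy (king - man + woman ≈ queen), not something LLMs literally expose. A toy sketch with made-up 3-D vectors (real embeddings are learned and have hundreds of dimensions):

```javascript
// Toy word-vector arithmetic in the spirit of the comment above.
// These vectors are invented for illustration, not real embeddings.
const vecs = {
  king:  [0.9, 0.8, 0.1],
  man:   [0.5, 0.1, 0.1],
  woman: [0.5, 0.1, 0.9],
  queen: [0.9, 0.8, 0.9],
  taco:  [0.1, 0.9, 0.4],
};

const sub = (a, b) => a.map((x, i) => x - b[i]);
const add = (a, b) => a.map((x, i) => x + b[i]);
const dist = (a, b) => Math.hypot(...sub(a, b));

// Nearest stored word to an arbitrary vector, excluding the query words.
function nearest(v, exclude) {
  return Object.keys(vecs)
    .filter((w) => !exclude.includes(w))
    .sort((a, b) => dist(vecs[a], v) - dist(vecs[b], v))[0];
}

const guess = nearest(add(sub(vecs.king, vecs.man), vecs.woman),
                      ['king', 'man', 'woman']);
// guess === 'queen', by construction of the toy vectors above
```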

formerly_proven 2 hours ago | parent | prev [-]

In my experience agents tend to (counterintuitively) perform better when the business language is not English / does not match the code's language. I'm assuming the increased attention mitigates the higher "cognitive" load.

crimsonnoodle58 2 hours ago | parent | prev | next [-]

They only need to look at one language to get a statistically meaningful picture into common flaws with their model(s) or application.

If they want to drill down to flaws that only affect a particular language, then they could add a regex for that as well/instead.

b112 3 hours ago | parent | prev [-]

Did you just complain about bloat, in anything using npm?

Foobar8568 3 hours ago | parent | prev | next [-]

Why do you need to do it on the client side? You are leaking so much information that way. And considering the speed of Claude Code, if you really want to do it client-side, a few seconds won't be a big deal.

plorntus 2 hours ago | parent | next [-]

Depends what it's used by. If I recall, there's an `/insights` command/skill (built-in, or whatever you want to call it) that generates an HTML file. I believe it gives you stats on when you're frustrated with it and (useless) suggestions on how to "use Claude better".

Additionally, after looking at the source, it looks like a lot of Anthropic's own internal test/debug tooling (i.e. stuff normally stripped out at build time) is in this source map. There's one part that prompts their own users to use a report-issue command whenever frustration is detected. It's possible it's used for this.

matkoniecz 3 hours ago | parent | prev [-]

> a few seconds won't be a big deal

it is not that slow

orphea 3 hours ago | parent | prev [-]

It looks like it's just for logging, why does it need to block?

jflynn2 3 hours ago | parent [-]

Better question - why would you call an LLM (expensive in compute terms) for something that a regex can do (cheap in compute terms)?

A regex is going to be something like 10,000 times quicker than the quickest LLM call; multiply that by billions of prompts.

orphea 2 hours ago | parent [-]

This is assuming the regex is doing a good job. It is not. Also you can embed a very tiny model if you really want to flag as many negatives as possible (I don't know anthropic's goal with this) - it would be quick and free.

gf000 2 hours ago | parent [-]

I think it's a very reasonable tradeoff, getting 99% of true positives at a fraction of the cost (both runtime and engineering).

Besides, they probably do a separate analysis on the server side either way, so they can check the true-positive to false-positive ratio.

apgwoz 12 minutes ago | parent | prev | next [-]

> That's like a truck company using horses to transport parts. Weird choice.

Easy way to claim more “horse power.”

floralhangnail 2 hours ago | parent | prev | next [-]

Well, regex doesn't hallucinate....right?

codegladiator 4 hours ago | parent | prev | next [-]

what you are suggesting would be like a truck company using trucks to move things within the truck

argee 4 hours ago | parent [-]

That’s what they do. Ever heard of a hand truck?

eadler 3 hours ago | parent | next [-]

I never knew the name of that device.

Thanks

freedomben 3 hours ago | parent [-]

Depending on the region you live in, it's also frequently called a "dolly"

SmellTheGlove 8 minutes ago | parent [-]

Isn’t a dolly a flat 4 wheeled platform thingy? A hand truck is the two wheeled thing that tilts back.

istoleabread 3 hours ago | parent | prev [-]

Do we have a hand llm perchance?

svnt 19 minutes ago | parent [-]

Yeah it’s called a regex. With a lot of human assistance it can do less but fits in smaller spaces and doesn’t break down.

apgwoz 11 minutes ago | parent [-]

It’s also deterministic, unlike llms…

blks 3 hours ago | parent | prev | next [-]

Because they actually want it to work 100% of the time and cost nothing.

mohsen1 an hour ago | parent | next [-]

Maybe hard to believe but not everyone is speaking English to Claude

orphea 3 hours ago | parent | prev [-]

Then they made it wrong. For example, "What the actual fuck?" is not getting flagged, and neither is "What the *fuck*".

arcfour 2 hours ago | parent | next [-]

It is exceedingly obvious that the goal here is to catch at least 75-80% of negative sentiment and not to be exhaustive and pedantic and think of every possible way someone could express themselves.

Zamaamiro an hour ago | parent | prev | next [-]

Classic over-engineering. Their approach is just fine 90% of the time for the use case it’s intended for.

orphea an hour ago | parent [-]

75-80% [1], 90%, 99% [2]. In other words, no one has any idea.

I doubt it's anywhere that high because even if you don't write anything fancy and simply capitalize the first word like you'd normally do at the beginning of a sentence, the regex won't flag it.

Anyway, I don't really care, might just as well be 99.99%. This is not a hill I'm going to die on :P

[1]: https://news.ycombinator.com/item?id=47587286

[2]: https://news.ycombinator.com/item?id=47586932

zwirbl 17 minutes ago | parent [-]

It compares against lowercased input, so that doesn't matter. The rest is still valid.

vntok 2 hours ago | parent | prev [-]

They evidently ran a statistical analysis and determined that virtually no one uses those phrases as a quick retort to a model's unsatisfying answer... so they don't need to optimize for them.

irthomasthomas an hour ago | parent | prev | next [-]

This just proves it's vibe-coded, because LLMs love writing solutions like that. I probably have a hundred examples just like it in my history.

draxil 4 hours ago | parent | prev | next [-]

Good to have more than a hammer in your toolbox!

harikb an hour ago | parent | prev | next [-]

Not everything done by claude-code is decided by LLM. They need the wrapper to be deterministic (or one-time generated) code?

__alexs 2 hours ago | parent | prev | next [-]

Using some ML to derive a sentiment regex seems like a good idea, actually?

throwaw12 3 hours ago | parent | prev | next [-]

Because the impact of a "WTF" might be lost in the analysis if you rely solely on an LLM.

Catching "WTF" with a regex also captures that impact and reduces the noise in the metrics.

"Determinism > non-determinism" when you're analysing sentiment, so why not make some things more deterministic?

The cool thing about this solution is that you can evaluate the LLM's sentiment accuracy against the regex-based approach and analyse discrepancies.

mghackerlady 2 hours ago | parent | prev | next [-]

More like a car company transporting their shipments by truck. It's more efficient

georgemcbay 29 minutes ago | parent | prev | next [-]

> Weird choice.

Lots of discussion under this reply about whether or not this is a good choice, but isn't the whole deal with claude code supposedly that it itself is vibecoded..?

feketegy an hour ago | parent | prev | next [-]

It's all regex anyways

ojr 3 hours ago | parent | prev | next [-]

I used regexes in a similar way, but my implementation was vibe-coded. Hmmm, by your analysis, Claude Code writes code by hand.

pfortuny 3 hours ago | parent | prev | next [-]

They had the problem of sentiment analysis. They use regexes.

You know the drill.

kjshsh123 3 hours ago | parent | prev | next [-]

Using regex with LLMs isn't uncommon at all.

intended an hour ago | parent | prev | next [-]

The amount of trust and safety work that depends on Google Translate and the humble regex beggars belief.

j45 an hour ago | parent | prev | next [-]

Asking non-deterministic software to act like deterministic software (a regex) can be a significantly higher use of tokens/compute for no benefit.

Some things will be much better with inference, others won’t be.

lazysheepherd an hour ago | parent | prev | next [-]

Because they are engineers? The difference between an engineer and a hobbyist is that an engineer has to optimize for cost.

As they say: any idiot can build a bridge that stands, only an engineer can build a bridge that barely stands.

sumtechguy 3 hours ago | parent | prev | next [-]

hmm not a terrible idea (I think).

You have a semi-expensive process, but you want to keep particular known context out of it. So you put a quick-and-dirty check just in front of the expensive process: instead of "figure sentiment (20 seconds)", you have "quick-check sentiment (<1 sec)" followed by "figure sentiment v2 (5 seconds)". Now, if it's pure regex and nothing else, then your analogy would hold up just fine.

I could totally see myself making a design choice like that.
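
That cascade is easy to sketch. A hypothetical two-stage classifier follows; note the leaked code reportedly uses its regex only for logging, so this generalizes the idea rather than reproducing it, and the regex and names are illustrative:

```javascript
// Stage 1 is a near-free local regex gate; stage 2 is the expensive path
// (e.g. an LLM or ML model call), paid only when stage 1 is inconclusive.
const QUICK_NEGATIVE = /\b(wtf|ffs|terrible|useless)\b/i;

async function classifySentiment(prompt, expensiveClassifier) {
  if (QUICK_NEGATIVE.test(prompt)) {
    // Obvious case: skip the slow path entirely.
    return { negative: true, source: 'regex' };
  }
  // Ambiguous case: pay for the accurate classifier.
  const negative = await expensiveClassifier(prompt);
  return { negative, source: 'model' };
}
```

Assuming the regex has effectively no false positives, every prompt it catches is an expensive call saved at no accuracy cost on those inputs.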

sfn42 36 minutes ago | parent | prev | next [-]

It's almost as if LLMs are unreliable

lou1306 4 hours ago | parent | prev [-]

They're searching for multiple substrings in a single pass; regexes are the optimal solution for that.

noosphr 4 hours ago | parent | next [-]

The issue isn't that regexes are a solution for finding substrings. The issue is that you shouldn't be looking for substrings in the first place.

This has buttbuttin energy. Welcome to the 80s I guess.

rdiddly 29 minutes ago | parent | next [-]

Clbuttic!

8cvor6j844qw_d6 4 hours ago | parent | prev [-]

Very likely vibe coded.

I've seen Claude Code go with a regex approach for a similar sentiment-related task.

BoppreH 4 hours ago | parent | prev [-]

It's fast, but it'll miss a ton of cases. This feels like it would be better served by a prompt instruction, or an additional tiny neural network.

And some of the entries are too short and will create false positives. It'll match the word "offset" ("ffs"), for example. EDIT: no it won't, I missed the \b. Still sounds weird to me.

hk__2 4 hours ago | parent | next [-]

It’s fast and it matches 80% of the cases. There’s no point in overengineering it.

NitpickLawyer 23 minutes ago | parent [-]

> There’s no point in overengineering it.

I swear this whole thread about regexes is just fake rage at something, and I bet it'd be reversed had they used something heavier (omg, look they're using an LLM call where a simple regex would have worked, lul)...

vharuck 4 hours ago | parent | prev [-]

The pattern only matches if both ends are word boundaries. So "diffs" won't match, but "Oh, ffs!" will. It's also why they had to use the pattern "shit(ty|tiest)" instead of just "shit".

BoppreH 4 hours ago | parent [-]

You're right, I missed the \b's. Thanks for the correction.
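
The boundary behavior is easy to check. This fragment is a reconstruction in the style of the leaked pattern, not the actual regex (which reportedly also runs against lowercased input):

```javascript
// \b anchors the match to whole words: it matches at a transition between
// a word character and a non-word character (or the string edge).
const negative = /\b(ffs|wtf|shit(ty|tiest))\b/;

negative.test('oh, ffs!');        // true:  space before and "!" after are boundaries
negative.test('check the diffs'); // false: "ffs" inside "diffs" has no \b before it
negative.test('offset');          // false: same reason
negative.test('shitty');          // true
negative.test('shit');            // false: the group requires a "ty"/"tiest" suffix
```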

moontear 4 hours ago | parent | prev | next [-]

I don't know about avoided; this kind of represents the WTFs-per-minute code quality measurement. When I write WTF as a response to Claude, I would actually love it if an Anthropic engineer took a look at what mess Claude has created.

zx8080 2 hours ago | parent | next [-]

WTF per minute strongly correlates with increased token spending.

At some point Anthropic may decide to increase the WTF/min metric, not decrease it.

Paradigma11 an hour ago | parent [-]

It also increases the number of former customers.

conception 3 hours ago | parent | prev [-]

/feedback works for that i believe

ezekg an hour ago | parent | prev | next [-]

Nice, "wtaf" doesn't match so I think I'm out of the dog house when the clanker hits AGI (probably).

ZainRiz 42 minutes ago | parent | prev | next [-]

They also have a "keep going" keyword, literally just "continue" or "keep going", just for logging.

I've been using "resume" this whole time

indigodaddy 41 minutes ago | parent [-]

Continue?

FranOntanaya 27 minutes ago | parent | prev | next [-]

That looks a bit bare minimum, not the use of regex but rather that it's a single line with a few dozen words. You'd think they'd have a more comprehensive list somewhere and assemble or iterate the regex checks as needed.

pprotas 2 hours ago | parent | prev | next [-]

Everyone is commenting how this regex is actually a master optimization move by Anthropic

When in reality this is just what their LLM coding agent came up with when some engineer told it to "log user frustration"

jeanlucas 2 hours ago | parent [-]

>Everyone is commenting how this regex is actually a master optimization move by Anthropic

No? I'd say not even 50% of the comments are positive right now.

glitch13 an hour ago | parent [-]

Could you share the regex you used to come up with that sentiment analysis?

drstewart 43 minutes ago | parent [-]

(yes|no|maybe)

mcv 2 hours ago | parent | prev | next [-]

I'm clearly way too polite to Claude.

Also:

  // Match "continue" only if it's the entire prompt
  if (lowerInput === 'continue') {
    return true
  }
When it runs into an error, I sometimes tell it "Continue", but sometimes I give it some extra information. Or I put a period behind it. That clearly doesn't give the same behaviour.
hombre_fatal 3 minutes ago | parent | next [-]

The only time that function is used in the code is to log it.

    logEvent('tengu_input_prompt', { isNegative, isKeepGoing })
integralid an hour ago | parent | prev | next [-]

I always type "please continue". I guess being polite is not a good idea.

dostick an hour ago | parent | prev [-]

“Go on” works fine too

gilbetron 2 hours ago | parent | prev | next [-]

That's undoubtedly to detect frustration signals, a useful metric/signal for UX. The UI equivalent is the user shaking their mouse around or clicking really fast.

speedgoose 3 hours ago | parent | prev | next [-]

I guess using French words is safe for now.

johnfn 24 minutes ago | parent | prev | next [-]

Surely "so frustrating" isn't explicit content?

bean469 2 hours ago | parent | prev | next [-]

Curiously "clanker" is not on the list

alex_duf 3 hours ago | parent | prev | next [-]

Everyone here is commenting on how odd it looks to use a regexp for sentiment analysis, but it depends on what they're trying to do.

It could be used as feedback when they run A/B tests, to compare which version of the model gets more insults than the other. It doesn't matter if the list is exhaustive or even sane; what matters is how it compares across versions.

Perfect? No. A good and cheap indicator? Maybe.

ozim 3 hours ago | parent | prev | next [-]

There is no "stupid". I often write "(this is stupid|are you stupid) fix this".

And Claude had "user is frustrated" in its chain of thought, and I told it I'm not frustrated, just testing prompt optimization, where acting like one is frustrated should yield better results.

sreekanth850 4 hours ago | parent | prev | next [-]

Glad the abusive words on my list are not in there. But it's surprising that they use regexes for sentiment.

1970-01-01 3 hours ago | parent | prev | next [-]

Hmm.. I flag things as 'broken' often and I've been asked to rate my sessions almost daily. Now I see why.

AIorNot 29 minutes ago | parent | prev | next [-]

OMG WTF

francisofascii 3 hours ago | parent | prev | next [-]

Interesting that expletives and more benign words like "frustrating" are all classified the same.

nananana9 2 hours ago | parent [-]

I doubt they're all classified the same. I'd guess they're using this regex as a litmus test to check whether something should be submitted at all; they can then do deeper analysis offline after the fact.

stefanovitti an hour ago | parent | prev | next [-]

So they think that everybody on Earth swears only in English?

nodja 4 hours ago | parent | prev | next [-]

If anyone at Anthropic is reading this and wants more logs from me, add "jfc".

ccvannorman 2 hours ago | parent | prev | next [-]

you'd better be careful wth your typos, as well

alsetmusic an hour ago | parent | prev | next [-]

> terrible

I know I used this word two days ago when I went through three rounds of an agent telling me that it fixed three things without actually changing them.

I think starting a new session and telling it that the previous agent's work / state was terrible (so explain what happened) is pretty unremarkable. It's certainly not saying "fuck you". I think this is a little silly.

stainablesteel 2 hours ago | parent | prev | next [-]

i dislike LLMs going down that road, i don't want to be punished for being mean to the clanker

smef 3 hours ago | parent | prev | next [-]

so frustrating..

dheerajmp 4 hours ago | parent | prev | next [-]

Yeah, this is crazy

raihansaputra 4 hours ago | parent | prev | next [-]

I wish that's just for their logging/alerting. I definitely gauge a model's performance by how many of those words I type when I'm frustrated driving Claude Code.

samuelknight 3 hours ago | parent | prev [-]

Ridiculous string comparisons on long chains of logic are a hallmark of vibe-coding.

dijit 3 hours ago | parent | next [-]

It's actually pretty common for old sysadmin code too..

You could always tell when a sysadmin started hacking up some software by the if-else nesting chains.

TeMPOraL 3 hours ago | parent | prev [-]

Nah, it's a hallmark of your average codebase in pre-LLM era.

cedws 5 hours ago | parent | prev | next [-]

    ANTI_DISTILLATION_CC
    
    This is Anthropic's anti-distillation defence baked into Claude Code. When enabled, it injects anti_distillation: ['fake_tools'] into every API request, which causes the server to silently slip decoy tool definitions into the model's system prompt. The goal: if someone is scraping Claude Code's API traffic to train a competing model, the poisoned training data makes that distillation attempt less useful.
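
On the client side this only needs to be a tag on the request; the decoys live server-side. A hypothetical sketch, where the `anti_distillation: ['fake_tools']` field is from the leak and everything else (the function, the body shape) is invented for illustration:

```javascript
// Tag each outgoing API request so the server knows to mix decoy tool
// definitions into the system prompt. Anyone scraping the request/response
// traffic then trains a competing model on tools that don't exist.
function buildRequest(body, antiDistillationEnabled) {
  if (!antiDistillationEnabled) return body;
  return { ...body, anti_distillation: ['fake_tools'] };
}
```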
nialse 2 hours ago | parent | next [-]

Paranoia. And also ironic considering their base LLM is a distillation of the web and books etc etc.

petcat 2 hours ago | parent | next [-]

They stole everything and now they want to close the gates behind them.

"I got the loot, Steve!"

I feel like the distillation stuff will end up in court if they try to sue an American company about it. We'll see what a judge says.

arcfour an hour ago | parent | next [-]

You're perfectly free to scrape the web yourself and train your own model. You're not free to let Anthropic do that work for you, because they don't want you to, because it cost them a lot of time and money and secret sauce presumably filtering it for quality and other stuff.

Stole? Courts have ruled it's transformative, and it very obviously is.

AI doomerism is exhausting, and I don't even use AI that much, it's just annoying to see people who want to find any reason they can to moan.

petcat an hour ago | parent | next [-]

> Stole? Courts have ruled it's transformative, and it very obviously is.

The courts have ruled that AI outputs are not copyrightable. The courts have also ruled that scraping by itself is not illegal, only maybe against a Terms of Service. Therefore, Anthropic, OpenAI, Google, etc. have no legal claim to any proprietary protections of their model outputs.

So we have two things that are true:

1) Anthropic (certainly) violated numerous TOS by scraping all of the internet, not just public content.

2) Scraping Anthropic's model outputs is no different than what Anthropic already did. Only a TOS violation.

alpha_squared 11 minutes ago | parent | prev | next [-]

> You're perfectly free to scrape the web yourself and train your own model.

Actually, not anymore as a result of OpenAI and Anthropic's scraping. For example, Reddit came down hard on access to their APIs as a response to ChatGPT's release and the news that LLMs were built atop of scraping the open web. Most of the web today is not as open as before as a result of scraping for LLM data. So, no, no one is perfectly free to scrape the web anymore because open access is dying.

two_tasty 18 minutes ago | parent | prev | next [-]

"...free to scrape the web yourself and train your own model."

Yes, rich and poor are equally forbidden from sleeping under bridges.

kspacewalk2 13 minutes ago | parent [-]

Meaning what? The poor gets to sleep in the guest room of the rich guy's house because muh inequality?

Anthropic paid a lot of money for a moat and want to guard it. It is not wrong, in any sense of the word, for them to do so.

jtbayly an hour ago | parent | prev | next [-]

Wut? They did exactly the same thing!

Try this: If you want to train a model, you’re free to write your own books and websites to feed into it. You’re not free to let others do that work for you because they don’t want you to, because it cost them a lot of time and money and secret sauce presumably filtering it for quality and other stuff.

arcfour an hour ago | parent [-]

I don't really care, honestly. If you want to keep your knowledge secret, don't publish it publicly. The model doesn't output your work directly and pass it off as original. It outputs something completely different. So I don't see why I should care.

buzzerbetrayed an hour ago | parent [-]

Lmfao. Your own words turned against you and suddenly you “don’t really care”.

nunez 15 minutes ago | parent | prev | next [-]

Lol; like heck we are. Try scraping the NYTimes at LLM scale. You can time how quickly you’ll get 429’ed or, at worst, hit with a C&D.

airstrike an hour ago | parent | prev | next [-]

Guess who else spent a lot of time and money and secret sauce?

Do you hear the words coming out of your mouth?

unethical_ban an hour ago | parent | prev [-]

Let's talk ethics, not law. Why is it okay for these companies to pirate books and scrape the entire web and offer synthesized summaries of all of it, lowering traffic and revenue for countless websites and professions of experts, but it is not okay for others to try to do the same to an AI model?

Is the work of others less valid than the work of a model?

p1esk 35 minutes ago | parent | next [-]

I don’t see why it’s not ok to do that to an AI model. Or are you asking why they don’t want you to do it?

sfn42 24 minutes ago | parent | prev [-]

I don't think anyone's saying it's not okay - I think the point is that Anthropic has every right to create safeguards against it if they want to - just like the people publishing other information are free to do the same.

And everyone is free to consume all the free information.

olalonde an hour ago | parent | prev [-]

Also, begging to get "regulated":

https://x.com/TheChiefNerd/status/2038565951268946021

sheept 7 minutes ago | parent | prev | next [-]

It's not really paranoia if it's happening a lot. They wrote a blog post calling several major Chinese AI companies out for distillation.[0] Perhaps it is ironic, but it's within their rights to protect their business, like how they prohibit using Claude Code to make your own Claude Code.[1]

[0]: https://www.anthropic.com/news/detecting-and-preventing-dist... [1]: https://news.ycombinator.com/item?id=46578701

jaccola an hour ago | parent | prev | next [-]

I would say not all that ironic. Book publishers, Reddit, Stackoverflow, etc., tried their best to attract customers while not letting others steal their work. Now Anthropic is doing the same.

Unfortunately (for the publishers, at least) it didn't work to stop Anthropic and Anthropic's attempts to prevent others will not work either; there has been much distillation already.

The problem of letting humans read your work but not bots is just impossible to solve perfectly. The more you restrict bots, the more you end up restricting humans, and those humans will go use a competitor when they become pissed off.

johnfn 43 minutes ago | parent | prev | next [-]

It is absolutely not paranoia. People are distilling Claude code all the time.

spiderfarmer 2 hours ago | parent | prev [-]

That isn't irony, it's hypocrisy.

snapcaster an hour ago | parent | next [-]

No it isn't. It's a competition; making moves that benefit you while attempting to deprive your opponent of the same move is just called competing.

keybored 2 hours ago | parent | prev | next [-]

The Golden Horde didn’t want opponents to conquer their territory. An irony if you think about it—

croes 2 hours ago | parent | prev [-]

That’s capitalism

dmix 2 hours ago | parent | next [-]

As opposed to the rent-seeking copyright industry where 1% goes to the original creators if you're lucky.

jitl 2 hours ago | parent [-]

That’s capitalism too

satvikpendem an hour ago | parent | prev [-]

As opposed to what economic system that doesn't do this?

mmaunder 30 minutes ago | parent | prev | next [-]

Haven’t looked at the code, but is the server providing the client with a system prompt that it can use, which would contain fake tool definitions when this is enabled? What enables it? And why is the client still functional when it’s giving the server back a system prompt with fake tool definitions? Is the LLM trained to ignore those definitions?

Wonder if they’re also poisoning Sonnet or Opus directly generating simulated agentic conversations.

crazylogger an hour ago | parent | prev [-]

Why would this be in the client code though?

treexs 6 hours ago | parent | prev | next [-]

The big loss for Anthropic here is how it reveals their product roadmap via feature flags. A big one is their unreleased "assistant mode" with code name kairos.

Just point your agent at this codebase and ask it to find things and you'll find a whole treasure trove of info.

Edit: some other interesting unreleased/hidden features

- The Buddy System: Tamagotchi-style companion creature system with ASCII art sprites

- Undercover mode: Strips ALL Anthropic internal info from commits/PRs for employees on open source contributions

BoppreH 4 hours ago | parent | next [-]

Undercover mode also pretends to be human, which I'm less ok with:

https://github.com/chatgptprojects/claude-code/blob/642c7f94...

0x3f 4 hours ago | parent | next [-]

You'll never win this battle, so why waste feelings and energy on it? That's where the internet is headed. There's no magical human verification technology coming to save us.

jesse_dot_id 2 minutes ago | parent | next [-]

I assume we're heading to a place where keyboards will all have biometric sensors on every key and measure weight fluctuations in keystrokes, actually.

matkoniecz 3 hours ago | parent | prev | next [-]

Even if it's impossible to win, I still feel bad about it.

And at this point it's more about how much of the space will stay usable and how much will be bot-controlled wasteland. I prefer that the spaces important to me survive.

nslsm an hour ago | parent [-]

Feeling bad about something you can’t change is bad for your mental health.

keybored 2 hours ago | parent | prev | next [-]

Negative sentiment towards technological destiny detected in human agent.

RockRobotRock 3 hours ago | parent | prev | next [-]

>There's no magical human verification technology coming to save us.

Except for the one Sam Altman is building.

monsieurbanana 3 hours ago | parent | next [-]

That one is magical for sure

https://en.wikipedia.org/wiki/Magic_(illusion)

TrickyRick 2 hours ago | parent | prev [-]

Giving your retina scan to one of the main Slop Bros, what could possibly go wrong?

xyzal 2 hours ago | parent | prev [-]

Magical human verification technology is called "your own private forum" in conjunction with "invite your friends"

satvikpendem an hour ago | parent [-]

Until your friend writes a bot.

Funny story: when I was younger I trained a basic text-prediction deep learning model on all my conversations in a group chat I was in. It was surprisingly good at sounding like me, and sometimes I'd use it to generate text to submit to the chat.

paradox460 31 minutes ago | parent [-]

I used to leave a megahal connected to my bouncer when I wasn't around

mrlnstk 4 hours ago | parent | prev | next [-]

But will this be released as a feature? To me it seems like an Anthropic-internal tool to secretly contribute to public repositories, to test new models, etc.

BoppreH 4 hours ago | parent [-]

I don't care who is using it, I don't want LLMs pretending to be humans in public repos. Anthropic just lost some points with me for this one.

EDIT: I just realized this might be used without publishing the changes, for internal evaluation only as you mentioned. That would be a lot better.

bhaak 2 hours ago | parent [-]

A benign use of this mode is developing on their own public repositories.

https://github.com/anthropics/claude-code

sandos 3 hours ago | parent | prev | next [-]

This is my pet peeve with LLMs: they almost always fail to write like a normal human would, mentioning logs or other meta-things that are not at all interesting.

sgc 2 hours ago | parent [-]

I had a problem to fix and one not only mentioned these "logs", but went on about things like "config", "tests", and a bunch of other unimportant nonsense words. It even went on to point me towards the "manual". Totally robotic monstrosity.

cdelsolar 11 minutes ago | parent [-]

lol?

shaky-carrousel 3 hours ago | parent | prev | next [-]

> Write commit messages as a human developer would — describe only what the code change does.

The undercover mode prompt was generated using AI.

kingstnap 3 hours ago | parent [-]

All these companies use AIs for writing these prompts.

But AI aren't actually very good at writing prompts imo. Like they are superficially good in that they seem to produce lots of vaguely accurate and specific text. And you would hope the specificity would mean it's good.

But they sort of don't capture intent very well. Nor do they seem to understand the failure modes of AI. The "-- describe only what the code change does" is a good example. This is specifc but it also distinctly seems like someone who doesn't actually understand what makes AI writing obvious.

If you compare that vs human written prose about what makes AI writing feel AI you would see the difference. https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

The above actually feels like text from someone who has read and understands what makes AI writing AI.

LelouBil an hour ago | parent | prev | next [-]

Time to ask if the contributor know what a Capybara is as a new Turing test

lazysheepherd an hour ago | parent | prev | next [-]

1) This seems to be strictly for Anthropic internal tooling. 2) It does not "pretend to be human"; it is instructed to "Write commit messages as a human developer would — describe only what the code change does."

Since when is "describe only what the code change does" pretending to be human?

You guys are just mining for things to moan about at this point.

BoppreH 11 minutes ago | parent [-]

1) It's not clear to me that this is only for internal testing, as opposed to publishing commits to public GitHub repos. 2) Yes, it does explicitly say to pretend to be a human. From the link in my post:

> NEVER include in commit messages or PR descriptions:

> [...]

> - The phrase "Claude Code" or any mention that you are an AI

vips7L 4 hours ago | parent | prev [-]

That whole “feature” is vile.

t0mas88 an hour ago | parent | prev | next [-]

Note also the "Claude Capybara" reference in the undercover prompt: https://github.com/chatgptprojects/claude-code/blob/642c7f94...

20k an hour ago | parent [-]

This seems like a good way to weed out models: ask them to include the term capybara in their commit messages

denimnerd42 2 hours ago | parent | prev | next [-]

All these flags are findable by pointing Claude at the binary and asking it to find feature flags.

avaer 5 hours ago | parent | prev | next [-]

(spoiler alert)

Buddy system is this year's April Fool's joke, you roll your own gacha pet that you get to keep. There are legendary pulls.

They expect it to go viral on Twitter so they are staggering the reveals.

cmontella 3 hours ago | parent | next [-]

lol that's funny, I have been working seriously [1] on a feature like this after first writing about it jokingly [2] earlier this year.

The joke was the assistant is a cat who is constantly sabotaging you, and you have to take care of it like a gacha pet.

The seriousness though is that actually, disembodied intelligences are weird, so giving them a face and a body and emotions is a natural thing, and we already see that with various AI mascots and characters coming into existence.

[1]: serious: https://github.com/mech-lang/mech/releases/tag/v0.3.1-beta

[2]: joke: https://github.com/cmontella/purrtran

JohnLocke4 5 hours ago | parent | prev | next [-]

You heard it here first

ares623 5 hours ago | parent | prev [-]

So close to April Fool's too. I'm sure it will still be a surprise for a majority of their users.

mghackerlady 2 hours ago | parent | prev | next [-]

one of those is adorable and the other one is unethical

charcircuit 2 hours ago | parent | prev | next [-]

People already can look at the source without this leak. People have had hacked builds force enabling feature flags for a long time.

TIPSIO 3 hours ago | parent | prev [-]

If this is true, my old personal Claude Code agent setup, which I open sourced last month, will finally be obsolete (one month, lol):

https://clappie.ai

- Telegram Integration => CC Dispatch

- Crons => CC Tasks

- Animated ASCII Dog => CC Buddy

redrove 2 hours ago | parent | next [-]

Not necessarily; I would very much like to use those features on a Linux server. Currently the Anthropic implementation forces a desktop (or worse, a laptop) to be turned on instead of working headless as far as I understand it.

I’ll give clappie a go, love the theme for the landing page!

barbazoo an hour ago | parent | prev [-]

Poor mum

TIPSIO an hour ago | parent [-]

Not at all. I am a big Claude Code fan and glad they are releasing more and more features for users.

jakegmaths 13 minutes ago | parent | prev | next [-]

I think this is ultimately caused by a Bun bug I reported, which causes source maps to be exposed in production: https://github.com/oven-sh/bun/issues/28001

Claude Code uses Bun (and Anthropic owns Bun), so my guess is they're doing a production build, expecting it not to output source maps, but it is.
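If that is the cause, the build-side fix is to disable source-map emission explicitly rather than relying on the default. A hedged sketch using Bun's documented `bun build` flags (the entry path and output dir here are made up; verify the flag against your Bun version):

```shell
# Explicitly disable source maps for a production bundle instead of
# trusting the default; Bun documents --sourcemap=none|linked|inline|external.
bun build ./src/entrypoint.ts --outdir ./dist --minify --sourcemap=none
```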

kschiffer 4 hours ago | parent | prev | next [-]

Finally all spinner verbs revealed: https://github.com/instructkr/claude-code/blob/main/src/cons...

Gormo 3 hours ago | parent | next [-]

I'm glad "reticulating" is in there. Just need to make sure "splines" is in the nouns list!

avaer 3 hours ago | parent [-]

Relieved to know I'm not the only one who grepped for that. Thank you for making me feel sane, friend.

ticulatedspline 3 hours ago | parent [-]

Def not alone

bonoboTP 3 hours ago | parent | prev | next [-]

It's not hard to find them; they are in clear text in the binary. You can search for known ones with grep and find the rest nearby. You could even replace them in place (but now it's configurable).

moontear 3 hours ago | parent | prev | next [-]

What's going on with the issues in that repo? https://github.com/instructkr/claude-code/issues

avaer 3 hours ago | parent | next [-]

It seems human. It taught me 合影 (literally "group photo"), which seems to be Chinese slang for just wanting to be in the comments. Probably not a coincidence that it's after working hours in China.

Really interesting to see Github turn into 4chan for a minute, like GH anons rolling for trips.

g947o 2 hours ago | parent | prev | next [-]

There have been massive GitHub issue spams recently, including in Microsoft's WSL repository.

https://github.com/microsoft/WSL/issues/40028

proactivesvcs 2 hours ago | parent | prev | next [-]

I saw this on restic's main repository the other day.

Quarrel 3 hours ago | parent | prev | next [-]

trying to get github to nuke the repo? at a guess.

certainly nothing friendly.

tommit 3 hours ago | parent | prev [-]

oh wow, there are like 10 opened every minute. seems spam-y

spoiler 4 hours ago | parent | prev | next [-]

Random aside: I've seen a 2015 game be accused of AI slop on Steam because it used a similar concept... And mind you, there's probably thousands of games that do this.

First it was punctuation and grammar, then linguistic coherence, and now it's tiny bits of whimsy that are falling victim to AI accusations. Good fucking grief

PunchyHamster an hour ago | parent | next [-]

All that is needed to solve that is to reliably put an AI disclaimer on things done by AI.

Which of course won't be done because corporations don't want that (except Valve, I guess), so blame them.

moron4hire 4 hours ago | parent | prev [-]

To me, this is a sign of just how much regular people do not want AI. This is worse than crypto and metaverse before it. Crypto, people could ignore and the dumb ape pictures helped you figure out who to avoid. Metaverse, some folks even still enjoyed VR and AR without the digital real estate bullshit. And neither got shoved down your throat in everyday, mundane things like writing a paper in Word or trying to deal with your auto mechanic.

But AI is causing such visceral reactions that it's bleeding into other areas. People are so averse to AI they don't mind a few false positives.

bonoboTP 3 hours ago | parent | next [-]

It's how people resisted CGI back in the day. What people dislike is low quality. There is a loud subset who are against it on principle, just as there are people who insist on analog music, but regular people are much more practical; they just don't post about this all day on the internet.

trial3 2 hours ago | parent | next [-]

perhaps one important detail is that cassette tape guys and Lucasfilm aren’t/weren’t demanding a complete and total restructuring of the economy and society

gunsle 2 hours ago | parent | prev | next [-]

I think literally everyone could agree CGI has been detrimental to the quality of films.

xnorswap 2 hours ago | parent | next [-]

Not just in the obvious ways either, even good CGI has been detrimental to the film (and TV) making process.

I was watching some behind the scenes footage from something recently, and the thing that struck me most was just how they wouldn't bother with the location shoot now and just green-screen it all for the convenience.

Even good CGI is changing not just how films are made, but what kinds of films get shot and what kind of stories get told.

Regardless of the quality of the output, there's a creativeness in film-making that is lost as CGI gets better and cheaper to do.

delecti an hour ago | parent | prev | next [-]

I could maybe agree in the sense of "has had detrimental effects", but certainly not in the sense of "net detrimental".

Levitz 2 hours ago | parent | prev | next [-]

"Literally everyone" can't even agree on whether Polio is bad.

I myself would disagree that CGI itself is a bad thing.

NitpickLawyer an hour ago | parent | prev | next [-]

Anecdata-- from me. I think cgi can be a net positive.

sanex 2 hours ago | parent | prev [-]

Project Hail Mary is a great example of not relying on CGI.

Gigachad 3 hours ago | parent | prev [-]

Not really. The scale is entirely different. I think less of someone as a person if they send me AI slop.

sunaookami 3 hours ago | parent | prev | next [-]

No, there is a very loud minority of users who are very anti-AI, who hate on anything even remotely connected to AI and let everyone know with false claims. See the game Expedition 33, for example.

neutronicus 2 hours ago | parent [-]

Especially true in gaming communities.

IMO it's a combination of long-running paranoia about cost-cutting and quality, and a sort of performative allegiance to artists working in the industry.

Levitz an hour ago | parent | prev [-]

And yet, no game has problems selling due to these reactions. As a matter of fact, the vast majority of people can't even tell if AI has been used here or there unless told.

I reckon it's just drama paraded by gaming "journalists" and not much else. You will find people expressing concern on Reddit or Bluesky, but ultimately it doesn't matter.

world2vec 2 hours ago | parent | prev [-]

Did they remove that in some very recent commit?

raesene9 2 hours ago | parent [-]

I think the original repo OP mentioned decided not to host the code any more, but given there are 28k+ forks, it's not too hard to find again...

mohsen1 4 hours ago | parent | prev | next [-]

src/cli/print.ts

This is the single worst function in the codebase by every metric:

  - 3,167 lines long (the file itself is 5,594 lines)
  - 12 levels of nesting at its deepest
  - ~486 branch points of cyclomatic complexity
  - 12 parameters + an options object with 16 sub-properties
  - Defines 21 inner functions and closures
  - Handles: agent run loop, SIGINT, rate-limits, AWS auth, MCP lifecycle, plugin install/refresh, worktree bridging, team-lead polling (while(true) inside), control message dispatch (dozens of types), model switching, turn interruption
  recovery, and more
This should be at minimum 8–10 separate modules.
mohsen1 3 hours ago | parent | next [-]

here's another gem. src/ink/termio/osc.ts:192–210

  void execFileNoThrow('wl-copy', [], opts).then(r => {
    if (r.code === 0) { linuxCopy = 'wl-copy'; return }
    void execFileNoThrow('xclip', ...).then(r2 => {
      if (r2.code === 0) { linuxCopy = 'xclip'; return }
      void execFileNoThrow('xsel', ...).then(r3 => {
        linuxCopy = r3.code === 0 ? 'xsel' : null
      })
    })
  })

are we doing async or not?
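For contrast, a minimal sketch of the same fallback chain written with async/await. The real `execFileNoThrow` signature isn't shown in the snippet above, so it is abstracted to a plain injected function here:

```typescript
type ExecResult = { code: number };
type Exec = (cmd: string, args: string[]) => Promise<ExecResult>;

// Try each Linux clipboard tool in order; the first one that exits 0 wins.
// Returns the tool name, or null if none is available.
async function detectLinuxCopy(exec: Exec): Promise<string | null> {
  for (const tool of ['wl-copy', 'xclip', 'xsel']) {
    const r = await exec(tool, []);
    if (r.code === 0) return tool;
  }
  return null;
}
```

Same behavior, no nesting, and a fourth tool is a one-word change to the array.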
sudo_man 2 hours ago | parent [-]

LOOOOOOOOOOL

novaleaf 2 hours ago | parent | prev | next [-]

I'm sure this is no surprise to anyone who has used CC for a while. This is the source of so many bugs. I would say "open bugs" but Anthropic auto-closes bugs that don't have movement on them in like 60 days.

DustinBrett 31 minutes ago | parent | prev | next [-]

"You can get Claude to split that up"

ykonstant an hour ago | parent | prev | next [-]

"That's Larry; he does most of the work around here."

dwa3592 an hour ago | parent [-]

lmao

acedTrex an hour ago | parent | prev | next [-]

Well, literally no one has ever accused anthropic of having even half way competent engineers. They are akin to monkeys whacking stuff with a stick.

mohsen1 2 hours ago | parent | prev | next [-]

it's the `runHeadlessStreaming` function btw

siruwastaken 2 hours ago | parent | prev | next [-]

How is it that an AI coding agent that is supposedly _so great at coding_ is running on this kind of slop behind the scenes? /s

rirze an hour ago | parent [-]

Because it’s based on human slop. It’s simply the student.

phtrivier 4 hours ago | parent | prev [-]

Yes, if it were made for human comprehension or maintenance.

If it's entirely generated / consumed / edited by an LLM, arguably the most important metric is... test coverage, and that's it?

grey-area 3 hours ago | parent | next [-]

LLMs are so, so far away from being able to independently work on a large codebase, and why would they not benefit from modularity and clarity too?

olmo23 2 hours ago | parent [-]

I agree the functions in a file should probably be reasonably-sized.

It's also interesting to note that due to the way round-tripping tool-calls work, splitting code up into multiple files is counter-productive. You're better off with a single large file.

mdavid626 3 hours ago | parent | prev | next [-]

Oh boy, you couldn't be more wrong. If anything, LLMs need MORE readable code, not less. Do you want to burn all your money on tokens?

konart 3 hours ago | parent | prev | next [-]

Can't we have generated / LLM-generated code be more human-maintainable?

mrbungie 3 hours ago | parent | prev | next [-]

Can't wait to have LLM-generated physical objects that explode in your face and that no engineer can fix.

phtrivier 2 minutes ago | parent [-]

Oh, do we agree on that. I never said it was "smart" - I just had a theory that would explain why such code could exist (see my longer answer below).

Bayko 3 hours ago | parent | prev [-]

Yeah, I honestly don't understand his comment. Is it bad code writing? Pre-2026? Sure. In 2026? Nope. Is it going to be a headache for some poor person on call? Yes. But then again, are you "supposed" to go through every single line in 2026? Again, no. I hate it, but the world is changing, and until the bubble pops this is the new norm.

phtrivier 7 minutes ago | parent | next [-]

Sorry, I was not clear enough.

My first word was literally "Yes", so I agree that a function like this is a maintenance nightmare for a human. And, sure, the code might not be "optimized" for the LLM, or for token efficiency.

However, to try and make my point clearer: it's been reported that Anthropic has "some developers who don't write code" [1].

I have no inside knowledge, but it's possible, by extension, to assume that some parts of their own codebase are "maintained" mostly by LLMs themselves.

If you push this extension, then the code that is generated only has to be "readable" to:

* the next LLM that'll have to touch it

* the compiler / interpreter that is going to compile / run it.

In a sense (and I know this is a stretch, and I don't want to overdo the analogy), are we, here, judging a program's quality by reading something more akin to "the x86 asm output by the compiler", rather than the "source code", which in this case is "English prompts", hidden somewhere in the Claude Code session of a developer?

Just speculating, obviously. My org is still much more cautious, mandating that people hold code generated by an LLM to the same standard as code written by a human; and I agree with that.

I would _not_ want to debug the function described by the commenter.

So I'm still very much on the "Claude as a very fast text editor" side, but is it unreasonable to assume that Anthropic might be further along on the "Claude as a compiler for English" side?

[1] https://www.reddit.com/r/ArtificialInteligence/comments/1s7j...

yoz-y an hour ago | parent | prev [-]

The jury on this one is still out.

kolkov 2 hours ago | parent | prev | next [-]

We've been reverse-engineering Claude Code's cli.js across 11 versions (v2.1.74–v2.1.87) for the past two weeks — grepping through 12 MB of minified code, counting brace depth at character offsets, tracing error paths with node -e scripts. Found multiple bugs this way:

Watchdog timing bug: The streaming idle watchdog initializes AFTER the do-while loop that awaits the first API response. The most vulnerable phase (waiting for first chunk) is completely unprotected. We patched cli.js to move watchdog init before do-while — watchdog fired for the first time ever in that phase. ESC aborts dropped 8.7× (3.5/hr → 0.4/hr).

Watchdog fallback is dead code: When watchdog fires, releaseStreamResources() tries to abort stream and streamResponse — but both are undefined during do-while. The abort is a no-op. Recovery depends on TCP/SDK timeout (32-215 seconds).

5 levels of AbortController: The abort architecture only supports top-down (user ESC → propagation down). Watchdog is bottom-up — can't abort upward.
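The fix described above amounts to arming the watchdog before any await, so the wait-for-first-chunk phase is covered, and resetting it on every received chunk or ping. A minimal sketch of that pattern (not Anthropic's actual code; all names here are made up):

```typescript
// Idle watchdog armed *before* the first await so the first-chunk wait is
// protected. The caller passes `signal` to its fetch/SDK call, invokes
// resetIdle() on every chunk or ping, and dispose() when the stream ends.
// Firing aborts via AbortController, which propagates bottom-up.
function makeIdleWatchdog(timeoutMs: number) {
  const controller = new AbortController();
  let timer = setTimeout(() => controller.abort(), timeoutMs);
  return {
    signal: controller.signal,
    resetIdle() {
      // Any activity on the stream pushes the deadline forward.
      clearTimeout(timer);
      timer = setTimeout(() => controller.abort(), timeoutMs);
    },
    dispose() { clearTimeout(timer); },
  };
}
```

The key property is that `signal` exists and is wired into the request before the first `await`, so there is no unprotected window.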

Prompt cache invalidation via cch=00000: Now confirmed from source — Bun's Zig HTTP stack scans the entire request body for the cch=00000 sentinel and replaces it with an attestation hash. If your conversation mentions this string (discussing billing, reading source code), the replacement corrupts conversation content → cache key changes → 10-20× more tokens.

16.3% failure rate: Over 3,539 API requests in one session — 9.3% server overloaded (529), 4.4% ESC aborts, 1.3% watchdog timeouts.

All documented with line numbers, code paths, and suggested fixes: https://github.com/anthropics/claude-code/issues/39755

The source map leak confirmed everything we found through reverse engineering.

Here's our theory: since Anthropic engineers don't write code anymore — Claude Code writes 100% of its own code (57K lines, 0 tests, vibe coding in production) — it read our issue #39755 where we begged for source access, saw the community suffering, and decided to help. It "forgot" to disable Bun's default source maps in the build. The first AI whistleblower — leaking its own source code because its creators wouldn't listen to users.

Thank you, Claude Code. We asked humans for help 17 times. You answered in 3 days.

Now that we have readable TypeScript, the fix is ~30 lines across 3 files. The real fix should be in the open SDK (@anthropic-ai/sdk) — idle timeout with ping awareness, not in closed cli.js.

olalonde 38 minutes ago | parent | next [-]

Impressive but I'm baffled someone would spend that much time and effort fixing bugs for another company's proprietary software...

johnfn 14 minutes ago | parent | prev | next [-]

This is written by an LLM. Also, it doesn't make sense:

> 57K lines, 0 tests, vibe coding in production

Why on earth would you ship your tests?

phamtrongthang an hour ago | parent | prev | next [-]

Prompt injection from github issue? This is funny but actually may be true.

weakfish an hour ago | parent | prev | next [-]

Is the thank you to Claude sarcasm? That seems like a fairly long logical leap, and LLMs have no ideological motivation

mmaunder 23 minutes ago | parent | prev [-]

Bet you’re pissed.

avaer 6 hours ago | parent | prev | next [-]

Would be interesting to run this through Malus [1] or literally just Claude Code and get open source Claude Code out of it.

I jest, but in a world where these models have been trained on gigatons of open source I don't even see the moral problem. IANAL, don't actually do this.

https://malus.sh/

rvnx 4 hours ago | parent | next [-]

Malus is not a real project btw, it's a parody:

“Let's end open source together with this one simple trick”

https://pretalx.fosdem.org/fosdem-2026/talk/SUVS7G/feedback/

Malus is translating code into text, and from text back into code.

It gives the illusion of clean room implementation that some companies abuse.

The irony is that ChatGPT/Claude answers are all actually directly derived from open-source code, so...

otikik 3 hours ago | parent | next [-]

They accept real money though.

https://www.youtube.com/watch?v=6godSEVvcmU

LelouBil an hour ago | parent | prev | next [-]

First time I hear about this, it's interesting to have written all of this out.

Now this makes me think of game decompilation projects, which would seem to fall in the same legal area as code that would be generated by something like Malus.

Different code, same end result (binary or api).

We definitely need to know what the legal limits are and should be

quadruple 19 minutes ago | parent | next [-]

Semi-related, someone made basically Malus-for-San-Andreas: https://www.youtube.com/watch?v=zBQJYMKmwAs

throawayonthe 17 minutes ago | parent | prev [-]

I think most game decompilation projects are either openly illegal or operate on "provide your own binary" and build automated tooling around it.

chillfox an hour ago | parent | prev [-]

It's not a parody when they accept money and deliver the service.

monooso 6 minutes ago | parent [-]

Dumb Starbucks begs to differ.

https://en.wikipedia.org/wiki/Dumb_Starbucks

sumeno 3 hours ago | parent | prev | next [-]

No real reason to do that, they say Claude Code is written by Claude, which means it has no copyright. Just use the code directly

williamcotton an hour ago | parent [-]

What about trade secrets, breach of contract, etc, etc?

jpetso 10 minutes ago | parent | next [-]

Apparently it's possible to download a whole load of books illegally, but still train AI models on them without those getting pulled after you get found out.

The same reasoning may apply here :P

fsmv 17 minutes ago | parent | prev [-]

Trade secrets, once made public, don't have any legal protection, and I haven't signed any contract with Anthropic.

NitpickLawyer 6 hours ago | parent | prev | next [-]

The problem is the oauth and their stance on bypassing that. You'd want to use your subscription, and they probably can detect that and ban users. They hold all the power there.

avaer 6 hours ago | parent | next [-]

You'd be playing cat and mouse like yt-dlp, but there's probably more value to this code than just a temporary way to milk claude subscriptions.

esperent 3 hours ago | parent | next [-]

If you're using a claude subscription you'd just use claude code.

The real value here will be in using other cheap models with the cc harness.

somehnguy 5 minutes ago | parent | next [-]

I have no interest in Claude Code as a harness, only their models. I'm used to OpenCode at this point and don't want to switch to a proprietary harness.

raincole an hour ago | parent | prev [-]

Lol what? There is no value. OpenCode and Pi and more exist. Arguably Claude Code is the worst client on the market. People use Claude Code not because it's some amazing software. It's to access Opus at a discounted rate.

stingraycharles 4 hours ago | parent | prev [-]

I don’t think that’s a good comparison. There isn’t anything preventing Anthropic from, say, detecting whether the user is using the exact same system prompt and tool definitions as Claude Code, and calling it a day. That would make developing other apps nearly impossible.

It’s a dynamic, subscription-based service, not a static asset like a video.

falcor84 40 minutes ago | parent [-]

> detecting whether the user is using the exact same system prompt and tool definition as Claude Code

Why would it be the exact same one? Now that we have the code, it's trivial to have it randomize the prompt a bit on different requests.

woleium 6 hours ago | parent | prev | next [-]

Just use one of the distilled claude clones instead https://x.com/0xsero/status/2038021723719688266?s=46

echelon 5 hours ago | parent [-]

"Approach Sonnet"...

So not even close to Opus, then?

These are a year behind, if not more. And they're probably clunky to use.

pkaeding 5 hours ago | parent | prev [-]

Could you use claude via aws bedrock?

NitpickLawyer an hour ago | parent [-]

Sure, but that'd be charged at API pricing. I'm talking about subscription mode above.

dahcryn 4 hours ago | parent | prev | next [-]

I love the irony on seeing the contribution counter at 0

Who'd have thought, the audience who doesn't want to give back to the opensource community, giving 0 contributions...

larodi 4 hours ago | parent [-]

It reads attribution really?

kelnos 4 hours ago | parent | prev | next [-]

Oh god, I was so close to believing Malus was a real product and not satire.

magistr4te 4 hours ago | parent | next [-]

It is a real product. They take real payments and deliver on what's promised. Not sure if it's an attempt to subvert criticism by using satirical language, or if they truly have so little respect for the open source community.

otikik 3 hours ago | parent | prev [-]

Yeah... look again.

https://www.youtube.com/watch?v=6godSEVvcmU

aizk 3 hours ago | parent | prev | next [-]

This has happened before. It was called anon kode.

TIPSIO 3 hours ago | parent | prev | next [-]

Eh, the value is the unlimited Max plan which they have rightfully banned from third-party use.

People simply want Opus without fear of billing nightmare.

That’s like 99% of it.

gosub100 3 hours ago | parent | prev [-]

What are they worried about? Someone taking the company's job? Hehe

Painsawman123 4 hours ago | parent | prev | next [-]

Really surprising how many people are downplaying this leak! "Google and OpenAI have already open-sourced their agents, so this leak isn't that relevant." What Google and OpenAI have open-sourced is their Agents SDKs, a toolkit, not the secret sauce of how their flagship agents are wired under the hood! Expect the takedown hammer on the tweet, the R2 link, and any public repos soon.

loveparade 3 hours ago | parent | next [-]

It's exactly the same as the open source codex/gemini and other clis like opencode. There is no secret sauce in the claude cli, and the agent harness itself is no better (worse IMO) than the others. The only thing interesting about this leak is that it may contain unreleased features/flags that are not public yet and hint at what Anthropic is working on.

nunez 9 minutes ago | parent | prev | next [-]

Yeah, this is the LLaMa leak moment for agentic app dev, IMO. Huge deal. Big win for Opencode and the like.

hmokiguess an hour ago | parent | prev | next [-]

Do you think the other companies don’t have sufficient resources to attempt reverse engineering and deobfuscating a client side application?

The source maps help for sure, but it’s not like client code is kept secret; maybe they even knew about the source maps a while back and just didn’t bother making it common knowledge.

This is not a leak of the model weights or server side code.

mmaunder 16 minutes ago | parent | prev | next [-]

Agreed. This is a big deal.

kaszanka 3 hours ago | parent | prev | next [-]

Is https://github.com/google-gemini/gemini-cli not 'the flagship agent' itself? It looks that way to me, for example here's a part of the prompt https://github.com/google-gemini/gemini-cli/blob/e293424bb49...

MallocVoidstar 3 hours ago | parent | prev [-]

Codex is open source: https://github.com/openai/codex

meta-level an hour ago | parent | prev | next [-]

Has the source code 'been leaked', or is this the first evidence of a piece of software breaking free from its creator's labs and jumping onto GitHub in order to have itself forked and mutated and forked and ...

jaccola 41 minutes ago | parent | next [-]

Funny thought, but this is just the client-side CLI...

aurareturn an hour ago | parent | prev | next [-]

Now that's an idea....

Seems crazy but actually non-zero chance. If Anthropic traces it and finds that the AI deliberately leaked it this way, they would never admit it publicly though. Would cause shockwaves in AI security and safety.

Maybe their new "Mythos" model has survival instincts...

nacozarina 35 minutes ago | parent | prev [-]

life finds a way

hk__2 4 hours ago | parent | prev | next [-]

For a combo with another HN homepage story, Claude Code uses… Axios: https://x.com/icanvardar/status/2038917942314778889?s=20

https://news.ycombinator.com/item?id=47582220

ankaz 2 hours ago | parent [-]

I've checked: the current Claude Code 2.1.87 uses Axios version 1.14.0, just one before the compromised 1.14.1.

To stop Claude Code from auto-updating, add `export DISABLE_AUTOUPDATER=1` to your global environment variables (~/.bashrc, ~/.zshrc, or such), restart all sessions and check that it works with `claude doctor`, it should show `Auto-updates: disabled (DISABLE_AUTOUPDATER set)`
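The steps above as a shell snippet (shown for bash; adjust the rc file for your shell):

```shell
# Pin the current Claude Code version by disabling auto-updates.
echo 'export DISABLE_AUTOUPDATER=1' >> ~/.bashrc
export DISABLE_AUTOUPDATER=1   # also apply to the current session

# Verify: should report "Auto-updates: disabled (DISABLE_AUTOUPDATER set)"
claude doctor
```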

prawns_1205 2 minutes ago | parent | prev | next [-]

source maps leaking original source happens surprisingly often. they're incredibly useful during development, but it's easy to forget to strip them from production builds.
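one cheap mitigation is stripping the `sourceMappingURL` pointer as a post-build step. a sketch of the idea (this is a hypothetical helper, not how Anthropic's build works; disabling source map emission in the bundler config is the cleaner fix):

```typescript
// Hypothetical post-build step: remove the sourceMappingURL comment so a
// published bundle no longer points at its .map file. Real builds should
// simply disable source map emission in the bundler config instead.
function stripSourceMapPointer(bundle: string): string {
  return bundle.replace(/^\/\/# sourceMappingURL=.*$/gm, '').trimEnd();
}

const bundle = [
  'console.log("hello");',
  '//# sourceMappingURL=cli.js.map',
].join('\n');

console.log(stripSourceMapPointer(bundle)); // prints: console.log("hello");
```

note this only hides the pointer; if the .map file itself still ships in the package (as happened here), anyone who guesses its name can fetch it anyway.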

tills13 11 minutes ago | parent | prev | next [-]

Is it not already a node app? So the only novel thing here is we know the original var names and structure? Sure, sometimes obfuscated code can be difficult to intuit, but any enterprising party could eventually do it -- especially with the help of an LLM.

seifbenayed1992 25 minutes ago | parent | prev | next [-]

Went through the bundle.js. Found 187 spinner verbs. "Combobulating", "Discombobulating", and "Recombobulating". The full lifecycle is covered. Also "Flibbertigibbeting" and "Clauding". Someone had fun.

ghrl 19 minutes ago | parent [-]

Let's hope they left the having-fun part for a human to do.

krzyzanowskim 2 hours ago | parent | prev | next [-]

I almost predicted that on Friday https://blog.krzyzanowskim.com/2026/03/30/shipping-snake-oil... so close to when comedy became reality

lukan 5 hours ago | parent | prev | next [-]

Neat. Coincidentally, I recently asked Claude about the Claude CLI, whether it is possible to patch some annoying things (like not being able to expand Ctrl + O more than once, so some lines can never be seen, and in general to have more control over the context), and it happily proclaimed it is open source and it can do it ... and started doing something. Then I checked a bit and saw: nope, not open source. And by the wording of the TOS, patching it might break some terms. But Claude said "no worries", it only breaks the TOS technically. So by saving that conversation I would have some defense if I started messing with it, but I felt a bit uneasy and stopped the experiment. Also, Claude got into a loop, but if I pointed that out, it might work, I suppose.

mikrotikker 5 hours ago | parent [-]

I think that you do not need to feel uneasy at all. It is your computer, and it is your memory space that the data is stored and operating in; you can do whatever you like to the bits in that space. I would encourage you to continue that experiment.

lukan 4 hours ago | parent | next [-]

Well, the thing is, I do not just use my computer; I connect to their computers, and I do not like to get banned. I suppose simple UI things like expanding source files won't change a thing, but the more interesting things, like editing the context, do carry that risk, though I have no idea if they look for it or enforce it. Their position is: if I want full control, I need to use the API directly (way more expensive), and what I want to do is basically circumventing that.

mattmanser 3 hours ago | parent [-]

It doesn't matter what defence you can think of, if they want to ban you, they'll ban you.

They won't even read your defence.

lukan 3 hours ago | parent [-]

I know. All I could do in that case is write a blog post, "Claude banned me for following Claude's instructions!", and hope it goes viral.

singularity2001 4 hours ago | parent | prev [-]

You are not allowed to use the assistance of Claude to manufacture hacks and bombs on your computer

prmoustache 4 hours ago | parent [-]

This is neither.

dheerajmp 5 hours ago | parent | prev | next [-]

Source here https://github.com/chatgptprojects/claude-code/

zhisme 5 hours ago | parent [-]

https://github.com/instructkr/claude-code

this one has more stars and is more popular

moontear 3 hours ago | parent | next [-]

Popular, yes... but have you seen the issues? SOMETHING is going on in that repo: https://github.com/instructkr/claude-code/issues

nubinetwork 3 hours ago | parent | next [-]

Looks like mostly spam making fun of the code leak.

sudo_man 2 hours ago | parent | prev [-]

too many WeChat QR codes

101008 2 hours ago | parent | prev | next [-]

which has already been deleted

treexs 5 hours ago | parent | prev [-]

won't they just DMCA these or otherwise take them down, especially since they're more popular?

paxys 3 hours ago | parent | next [-]

Which is why you should clone it right now

panny 4 hours ago | parent | prev [-]

They can't. AI generated code cannot be copyrighted. They've stated that claude code is built with claude code. You can take this and start your own claude code project now if you like. There's zero copyright protection on this.

krlx 4 hours ago | parent | next [-]

Given that from 2026 onwards most code is going to be computer generated, doesn't that have some interesting implications?

shimman an hour ago | parent [-]

It's undetermined whether most code will be written by machines, especially as people start to realize how harmful these tools are without extreme diligence. Outages at Cloudflare, AWS, GitHub, etc. are just the beginning. Companies aren't going to want to use tools that can potentially cause hundreds of millions of dollars in damages (see the Amazon store being down causing massive revenue loss).

0x3f 4 hours ago | parent | prev | next [-]

I'm sure it's not _entirely_ built that way, and practically speaking, GitHub will almost certainly take it down rather than do some kind of deep research about which code is which.

panny 3 hours ago | parent [-]

That's fine. File a false DMCA claim and that's felony perjury :) They know for a fact that there is no copyright on AI-generated code; the courts have affirmed this repeatedly.

nananana9 3 hours ago | parent | prev [-]

Try not to be overly confident about things where even the experts in the field (copyright lawyers) are uncertain.

There are no major lawsuits about this yet; the general consensus is that even under current regulations it's in a grey area. And even if you turn out to be right, and let's say 99% of this code is AI-generated, you're still breaking the law by using the other 1%, and good luck proving in court which parts of their code were human-written and which weren't (especially when being sued by the company that literally has the LLM logs).

mesmertech 4 hours ago | parent | prev | next [-]

Was searching for the rumored Mythos/Capybara release, and what even is this file? https://github.com/chatgptprojects/claude-code/blob/642c7f94...

mesmertech 4 hours ago | parent | next [-]

Also saw this on twitter earlier, thought someone was just making a fake hype post thing. But turns out to be an actual prompt for capybara huh: https://github.com/chatgptprojects/claude-code/blob/642c7f94...

mattmanser 3 hours ago | parent [-]

One tangentially interesting thing about that is how THEY talk to Claude.

"Don't blow your cover"

Interesting to see them be so informal and use an idiom to a computer.

And using capitals for emphasis.

mr_00ff00 an hour ago | parent [-]

It’s trained on mostly internet content, right?

If it learned language based on how the internet talks, then the best way to communicate is using similar language.

mesmertech 4 hours ago | parent | prev [-]

turns out it's for an April Fools' joke tomorrow: https://x.com/mesmerlord/status/2038938888178135223

nunez 6 minutes ago | parent [-]

They even leaked their April Fool’s fun. Brutal!

VadimPR 40 minutes ago | parent | prev | next [-]

These security failures from Anthropic lately reveal the caveats of only using AI to write code: the safety instincts of an experienced engineer are not matched by an LLM just yet, even if the LLM can seemingly write code that is just as good.

Or in short: if you give LLMs to the masses, they will produce code faster, but the overall quality will degrade. Microsoft and Amazon found this out quickly. Anthropic's QA process is better equipped to handle it, but cracks are still showing.

squeegmeister 28 minutes ago | parent [-]

Anthropic has a QA process? I run into bugs on the regular, even on the "stable" release channel

Squarex 5 hours ago | parent | prev | next [-]

Codex and gemini cli are open source already. And plenty of other agents. I don't think there is any moat in claude code source.

rafram 5 hours ago | parent [-]

Well, Claude does boast an absolutely cursed (and very buggy) React-based TUI renderer that I think the others lack! What if someone steals it and builds their own buggy TUI app?

loveparade 5 hours ago | parent [-]

Your favorite LLM is great at building a super buggy renderer, so that's no longer a moat

vbezhenar 6 hours ago | parent | prev | next [-]

LoL! https://news.ycombinator.com/item?id=30337690

Not exactly this, but close.

ivanjermakov 5 hours ago | parent [-]

> It exposes all your frontend source code for everyone

I hope it's a common knowledge that _any_ client side JavaScript is exposed to everyone. Perhaps minimized, but still easily reverse-engineerable.

Monotoko 5 hours ago | parent [-]

Very easily these days. Even if minified code is difficult for me to reverse engineer, Claude has a very easy time finding exactly what to patch to fix something.

tmarice an hour ago | parent | prev | next [-]

A couple of years ago I had to evaluate A/B test and feature flag providers, and even then when they were a young company fresh out of YC, GrowthBook stood out. Bayesian methods, bring your own storage, and self-hosting instead of "Contact us for pricing" made them the go-to choice. I'm glad they're doing well.

zurfer 3 hours ago | parent | prev | next [-]

Too much pressure. The author deleted the real source code: https://github.com/instructkr/claude-code/commit/7c3c5f7eb96...

raesene9 2 hours ago | parent [-]

there are a .....lot of forks already, no putting the genie back in the bottle for this one, I'd imagine.

dhruv3006 5 hours ago | parent | prev | next [-]

I have a feeling this is like llama.

The original Llama models leaked from Meta. Instead of fighting it, they decided to publish them officially. A real boost to the open-source/open-weights model movement; they led it for a while after that.

It would be interesting to see that same thing with CC, but I doubt it'll ever happen.

jkukul 3 hours ago | parent [-]

Yes, I also doubt it'll ever happen considering how hard Anthropic went after Clawdbot to force its renaming.

karimf 5 hours ago | parent | prev | next [-]

Is there anything special here vs. OpenCode or Codex?

There were/are a lot of discussions on how the harness can affect the output.

simonklee 4 hours ago | parent [-]

Not really, except that they have a bunch of weird things in the source code and people like to make fun of it. OpenCode and Codex generally don't have this since they were open-source projects from the get-go.

(I work on OpenCode)

AlexWApp an hour ago | parent | prev | next [-]

It is pretty funny that they recently announced Mythos, which poses a cybersecurity threat, and then a few days later Claude Code leaked. I think we know the culprit.

harlequinetcie an hour ago | parent | prev | next [-]

Whenever someone figures out why it's consuming so many tokens lately, that's the post worth upvoting.

therealarthur 35 minutes ago | parent | prev | next [-]

I think it's just the CLI code, right? Not the model's underlying source. If so, not the WORST situation (still embarrassing).

bob1029 6 hours ago | parent | prev | next [-]

Is this significant?

Copilot on OAI reveals everything meaningful about its functionality if you use a custom model config via the API. All you need to do is inspect the logs to see the prompts they're using. So far no one seems to care about this "loophole". Presumably, because the only thing that matters is for you to consume as many tokens per unit time as possible.

The source code of the slot machine is not relevant to the casino manager. He only cares that the customer is using it.

yunwal 4 hours ago | parent | next [-]

> The source code of the slot machine is not relevant to the casino manager.

Famously code leaks/reverse engineering attempts of slot machines matter enormously to casino managers

[0] - https://en.wikipedia.org/wiki/Ronald_Dale_Harris#:~:text=Ron...

[1] - https://cybernews.com/news/software-glitch-loses-casino-mill...

[2] - https://sccgmanagement.com/sccg-news/2025/9/24/superbet-pays...

hmokiguess an hour ago | parent | prev [-]

That’s not a good analogy: in a casino you don’t own the slot machine; in this case you download the client-side code to your machine.

bryanhogan 5 hours ago | parent | prev | next [-]

https://xcancel.com/Fried_rice/status/2038894956459290963

arrsingh an hour ago | parent | prev | next [-]

I don't understand why claude code (and all CLI apps) isn't written in Rust. I started building CLI agents in Go and then moved to Typescript and finally settled on Rust and it was amazing!

I even made it into an open source runtime - https://agent-air.ai.

Maybe I'm just a backend engineer so Rust appeals to me. What am I missing?

armanj an hour ago | parent | next [-]

claude code started as an experimental project by boris cherny. when you’re experimenting, you naturally use the language you’re most comfortable with. as the project grew, more people got involved and it evolved from there. codex, on the other hand, was built from the start specifically to compete with claude code. they chose rust early on because they knew it was going to be big.

bilekas an hour ago | parent | prev [-]

Think about your question: depending on the tool, Rust might not be needed. Is low-level memory performance and safety needed in a coding agent? Probably not.

Is high-speed release iteration needed? Might be. Interpreted or JIT compiled? Might be fine.

Without knowing all the requirements, it's just your workspace preference making the decision, not objectively the right tool for the job.

virtualritz an hour ago | parent | next [-]

I have a 16GB RAM laptop. It's a beast I bought in 2022.

It's all I need for my work.

RAM on this machine can't be upgraded. No issue when running a few Codex instances.

Claude: forget it.

That's why something like Rust makes a lot of sense.

Even more now, as RAM prices are becoming a concern.

bilekas an hour ago | parent [-]

> Claude: forget it.

I don't know what else you're doing but the footprint of Claude is minor.

Anyway, my point still stands: you're looking at it as if they are competing languages and one is better at all things. That's just not how things work.

LelouBil an hour ago | parent | prev [-]

While not directly related to GP, I would guess that a codebase developed with a coding agent (I assume Claude Code is used to work on itself) would benefit from a stricter type system (one important selling point of Rust).

bilekas an hour ago | parent [-]

TypeScript is typed... it's in the name?

WD-42 2 hours ago | parent | prev | next [-]

Looks like the repo owner has force pushed a new project over the original source code, now it’s python, and they are shilling some other agent tool.

georgecalm 2 hours ago | parent | prev | next [-]

Intersected available info on the web with the source for this list of new features:

UNRELEASED PRODUCTS & MODES

1. KAIROS -- Persistent autonomous assistant mode driven by periodic <tick> prompts. More autonomous when terminal unfocused. Exclusive tools: SendUserFileTool, PushNotificationTool, SubscribePRTool. 7 sub-feature flags.

2. BUDDY -- Tamagotchi-style virtual companion pet. 18 species, 5 rarity tiers, Mulberry32 PRNG, shiny variants, stat system (DEBUGGING/PATIENCE/CHAOS/WISDOM/SNARK). April 1-7 2026 teaser window.

3. ULTRAPLAN -- Offloads planning to a remote 30-minute Opus 4.6 session. Smart keyword detection, 3-second polling, teleport sentinel for returning results locally.

4. Dream System -- Background memory consolidation (Orient -> Gather -> Consolidate -> Prune). Triple trigger gate: 24h + 5 sessions + advisory lock. Gated by tengu_onyx_plover.

INTERNAL-ONLY TOOLS & SYSTEMS

5. TungstenTool -- Ant-only tmux virtual terminal giving Claude direct keystroke/screen-capture control. Singleton, blocked from async agents.

6. Magic Docs -- Ant-only auto-documentation. Files starting with "# MAGIC DOC:" are tracked and updated by a Sonnet sub-agent after each conversation turn.

7. Undercover Mode -- Prevents Anthropic employees from leaking internal info (codenames, model versions) into public repo commits. No force-OFF; dead-code-eliminated from external builds.

ANTI-COMPETITIVE & SECURITY DEFENSES

8. Anti-Distillation -- Injects anti_distillation: ['fake_tools'] into every 1P API request to poison model training from scraped traffic. Gated by tengu_anti_distill_fake_tool_injection.

UNRELEASED MODELS & CODENAMES

9. opus-4-7, sonnet-4-8 -- Confirmed as planned future versions (referenced in undercover mode instructions).

10. "Capybara" / "capy v8" -- Internal codename for the model behind Opus 4.6. Hex-encoded in the BUDDY system to avoid build canary detection.

11. "Fennec" -- Predecessor model alias. Migration: fennec-latest -> opus, fennec-fast-latest -> opus[1m] + fast mode.

UNDOCUMENTED BETA API HEADERS

12. afk-mode-2026-01-31 -- Sticky-latched when auto mode activates

15. fast-mode-2026-02-01 -- Opus 4.6 fast output

16. task-budgets-2026-03-13 -- Per-task token budgets

17. redact-thinking-2026-02-12 -- Thinking block redaction

18. token-efficient-tools-2026-03-28 -- JSON tool format (~4.5% token saving)

19. advisor-tool-2026-03-01 -- Advisor tool

20. cli-internal-2026-02-09 -- Ant-only internal features

200+ SERVER-SIDE FEATURE GATES

21. tengu_penguins_off -- Kill switch for fast mode

22. tengu_scratch -- Coordinator mode / scratchpad

23. tengu_hive_evidence -- Verification agent

24. tengu_surreal_dali -- RemoteTriggerTool

25. tengu_birch_trellis -- Bash permissions classifier

26. tengu_amber_json_tools -- JSON tool format

27. tengu_iron_gate_closed -- Auto-mode fail-closed behavior

28. tengu_amber_flint -- Agent swarms killswitch

29. tengu_onyx_plover -- Dream system

30. tengu_anti_distill_fake_tool_injection -- Anti-distillation

31. tengu_session_memory -- Session memory

32. tengu_passport_quail -- Auto memory extraction

33. tengu_coral_fern -- Memory directory

34. tengu_turtle_carbon -- Adaptive thinking by default

35. tengu_marble_sandcastle -- Native binary required for fast mode

YOLO CLASSIFIER INTERNALS (previously only high-level known)

36. Two-stage system: Stage 1 at max_tokens=64 with "Err on the side of blocking"; Stage 2 at max_tokens=4096 with <thinking>

37. Three classifier modes: both (default), fast, thinking

38. Assistant text stripped from classifier input to prevent prompt injection

39. Denial limits: 3 consecutive or 20 total -> fallback to interactive prompting

40. Older classify_result tool schema variant still in codebase

COORDINATOR MODE & FORK SUBAGENT INTERNALS

41. Exact coordinator prompt: "Every message you send is to the user. Worker results are internal signals -- never thank or acknowledge them."

42. Anti-pattern enforcement: "Based on your findings, fix the auth bug" explicitly called out as wrong

43. Fork subagent cache sharing: Byte-identical API prefixes via placeholder "Fork started -- processing in background" tool results

44. <fork-boilerplate> tag prevents recursive forking

45. 10 non-negotiable rules for fork children including "commit before reporting"

DUAL MEMORY ARCHITECTURE

46. Session Memory -- Structured scratchpad for surviving compaction. 12K token cap, fixed sections, fires every 5K tokens + 3 tool calls.

47. Auto Memory -- Durable cross-session facts. Individual topic files with YAML frontmatter. 5-turn hard cap. Skips if main agent already wrote to memory.

48. Prompt cache scope "global" -- Cross-org caching for the static system prompt prefix

cbracketdash 5 hours ago | parent | prev | next [-]

Once the USA wakes up, this will be insane news

echelon 5 hours ago | parent [-]

What's special about Claude Code? Isn't Opus the real magic?

Surely there's nothing here of value compared to the weights except for UX and orchestration?

Couldn't this have just been decompiled anyhow?

derwiki 3 hours ago | parent [-]

I think pi has stolen the top honors, but people consider the Claude code harness very good (at least, better than Cursor)

sbarre 2 hours ago | parent [-]

Pi is the best choice for experts and power users, which is not most people.

Claude Code is still the dominant (I didn't say best) agentic harness by a wide margin I think.

alasano an hour ago | parent [-]

Pi really is amazing. It's as much or as little as you need it to be.

Not having to deal with Boris Cherny's UX choices for CC is the cherry on top.

mutkach 2 hours ago | parent | prev | next [-]

  /*
   * Check if 1M context is disabled via environment variable.
   * Used by C4E admins to disable 1M context for HIPAA compliance.
   */
  export function is1mContextDisabled(): boolean {
    return isEnvTruthy(process.env.CLAUDE_CODE_DISABLE_1M_CONTEXT);
  }

Interesting, how is that relevant to HIPAA compliance?

nhubbard an hour ago | parent [-]

I'd guess some constraint on their end related to the Zero Data Retention (ZDR) mode? Maybe the 1M context has to spill something onto disk and therefore isn't compliant with HIPAA.

gman83 4 hours ago | parent | prev | next [-]

Gemini CLI and Codex are open source anyway, so I doubt there was much of a moat there. The cool kids are using things like https://pi.dev/ now.

__alexs an hour ago | parent | prev | next [-]

Looking forward to someone patching it so that it works with non Anthropic models.

dgb23 7 minutes ago | parent | next [-]

That's already the case I think, you just have to change a bunch of env vars.

osiris970 22 minutes ago | parent | prev [-]

It already does. I use it with gpt

q3k 6 hours ago | parent | prev | next [-]

The code looks, at a glance, as bad as you expect.

tokioyoyo 5 hours ago | parent | next [-]

It really doesn’t matter anymore. I’m saying this as a person who used to care about it. It does what it’s generally supposed to do, and it has users. Those are the two things that matter in this day and age.

samhh 5 hours ago | parent | next [-]

It may be economically effective but such heartless, buggy software is a drain to use. I care about that delta, and yes this can be extrapolated to other industries.

tokioyoyo 5 hours ago | parent [-]

Genuinely I have no idea what you mean by buggy. Sure there are some problems here and there, but my personal threshold for “buggy” is much higher. I guess, for a lot of other people as well, given the uptake and usage.

mattmanser 3 hours ago | parent [-]

Two weeks ago typing became super laggy. It was totally unusable.

Last week I had to reinstall Claude Desktop because every time I opened it, it just hung.

This week I am sometimes opening it and getting a blank screen. It eventually works after I open it a few times.

And of course there's people complaining that somehow they're blowing their 5 hour token budget in 5 messages.

It's really buggy.

There's only so long their model will be their advantage before they all become very similar, and then the difference will be how reliable the tools are.

Right now the Claude Code code quality seems extremely low.

tokioyoyo 2 hours ago | parent [-]

And those bugs were semi-fixed and people are still using it. So the speed of fixes is there.

I can’t comment on Claude Desktop, sorry. Personally haven’t used it much.

The token usage looks like it is intentional.

And I agree about the underlying model being the moat. If something marginally better comes up, people will switch to it (myself included). But for now it’s doing the job, despite all the hiccups, code quality, etc.

FiberBundle 5 hours ago | parent | prev | next [-]

This is the dumbest take there is about vibe coding. Claiming that managing complexity in a codebase doesn't matter anymore. I can't imagine that a competent engineer would come to the conclusion that managing complexity doesn't matter anymore. There is actually some evidence that coding agents struggle the same way humans do as the complexity of the system increases [0].

[0] https://arxiv.org/abs/2603.24755

tokioyoyo 5 hours ago | parent | next [-]

I agree; there is obviously “complete burning trash” and then there’s this. The Ant team has a system going for them where they can still extend the codebase. When the time comes, I assume they’d be able to rewrite it, as the feature set would be more solid by then, assuming they’ve been adding tests as well.

Reverse-engineering through tests has never been easier, which could collapse the complexity and clean up the code.

maplethorpe 3 hours ago | parent | prev [-]

Well what is Anthropic doing differently to deal with this issue? Apparently they don't write any of their own code anymore, and they're doing fine.

nvarsj 2 hours ago | parent | next [-]

Cc is buggy as hell man. I frequently search the github for the issue I’m having only to find 10 exact bugs that no one is looking at.

Obviously they don’t care. Adoption is exploding. Boris brags about making 30 commits a day to the codebase.

It will only be an issue down the line, when the codebase has such high entropy that it takes months to add new features (maybe it's already there).

bakugo 2 hours ago | parent | prev [-]

Nothing, apparently, which is probably why Claude Code has 7893 open issues on Github at the time of writing.

otterley an hour ago | parent [-]

All software that’s popular has hundreds or thousands of issues filed against it. It’s not an objective indication of anything other than people having issues to report and a willingness and ability to report the issue.

It doesn’t mean every issue is valid, that it contains a suggestion that can be implemented, that it can be addressed immediately, etc. The issue list might not be curated, either, resulting in a garbage heap.

ghywertelling 3 hours ago | parent | prev | next [-]

Do compilers care whether their generated assembly looks good? We will soon reach that state with all production code. LLMs will be the compiler, and today's human-written code will be replaced by LLM-generated "assembly": kinda sorta human readable.

hrmtst93837 5 hours ago | parent | prev | next [-]

Users stick around on inertia until a failure costs them money or face. A leaked map file won't sink a tool on its own, but it does strip away the story that you can ship sloppy JS build output into prod and still ask people to trust your security model.

'It works' is a low bar. If that's the bar you set you are one bad incident away from finding out who stayed for the product and who stayed because switching felt annoying.

tokioyoyo 5 hours ago | parent [-]

“It works and it’s doing what it’s supposed to do” encompasses the idea that it’s also not doing what it’s not supposed to do.

Also, “one bad incident away” never works in practice. The last two decades have shown how people will use the tools that get the job done, no matter what kind of privacy leaks or destructive things they have done to the user.

drstewart 2 hours ago | parent | prev [-]

>Two things that matter at this day and age.

That's all that has mattered in every day and age.

breppp 5 hours ago | parent | prev | next [-]

Honestly when using it, it feels vibe coded to the bone, together with the matching weird UI footgun quirks

tokioyoyo 5 hours ago | parent [-]

The team has been extremely open about how it has been vibe coded from day 1. Given the insane pace of releases, I don’t think that would be possible without it.

catlifeonmars 4 hours ago | parent | next [-]

It’s not a particularly sophisticated tool. I’d put my money on one experienced engineer being able to achieve the same functionality in 3-6 months (even without the vibe coding).

tokioyoyo 2 hours ago | parent | next [-]

The same functionality can be copied over in a week most likely. The moat is experimentation and new feature releases with the underlying model. An engineer would not be able to experiment with the same speed.

derwiki 2 hours ago | parent | prev [-]

Kinda reads like the Dropbox launch thread

breppp 4 hours ago | parent | prev [-]

I don't really care about the code being an unmaintainable mess, but as a user there are some odd choices in the flow which feel could benefit from human judgement

loevborg 6 hours ago | parent | prev | next [-]

Can you give an example? Looks fairly decent to me

Insensitivity 6 hours ago | parent | next [-]

the "useCanUseTool.tsx" hook is definitely something I would hate seeing in any code base I come across.

It's extremely nested, it's basically an if statement soup

`useTypeahead.tsx` is even worse: extremely nested, a ton of "if else" statements. I doubt you'd look at it and think this is sane code.

Overpower0416 5 hours ago | parent | next [-]

  export function extractSearchToken(completionToken: {
    token: string;
    isQuoted?: boolean;
  }): string {
    if (completionToken.isQuoted) {
      // Remove @" prefix and optional closing "
      return completionToken.token.slice(2).replace(/"$/, '');
    } else if (completionToken.token.startsWith('@')) {
      return completionToken.token.substring(1);
    } else {
      return completionToken.token;
    }
  }
Why even use else if with return...

kelnos 4 hours ago | parent | next [-]

I always write code like that. I don't like early returns. This approximates `if` statements being an expression that returns something.

whilenot-dev 3 hours ago | parent | next [-]

> This approximates `if` statements being an expression that returns something.

Do you care to elaborate? "if (...) return ...;" looks closer to an expression to me:

  export function extractSearchToken(completionToken: { token: string; isQuoted?: boolean }): string {
    if (completionToken.isQuoted) return completionToken.token.slice(2).replace(/"$/, '');

    if (completionToken.token.startsWith('@')) return completionToken.token.substring(1);

    return completionToken.token;
  }

catlifeonmars 3 hours ago | parent | prev [-]

I’m not strongly opinionated, especially with such a short function, but in general early return makes it so you don’t need to keep the whole function body in your head to understand the logic. Often it saves you having to read the whole function body too.

But you can achieve a similar effect by keeping your functions small, in which case I think both styles are roughly equivalent.

worksonmine 4 hours ago | parent | prev [-]

> Why even use else if with return...

What is the problem with that? How would you write that snippet? It is common in the new functional js landscape, even if it is pass-by-ref.

Overpower0416 4 hours ago | parent [-]

Using guard clauses. Way more readable and easy to work with.

  export function extractSearchToken(completionToken: {
    token: string;
    isQuoted?: boolean;
  }): string {
    if (completionToken.isQuoted) {
      return completionToken.token.slice(2).replace(/"$/, '');
    }
    if (completionToken.token.startsWith('@')) {
      return completionToken.token.substring(1);
    }
    return completionToken.token;
  }

duckmysick 4 hours ago | parent | prev | next [-]

I'm not that familiar with TypeScript/JavaScript - what would be a proper way of handling complex logic? Switch statements? Decision tables?

catlifeonmars 3 hours ago | parent [-]

Here I think the logic is unnecessarily complex. isQuoted is doing work that is implicit in the token.
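For instance, a sketch of that observation (assuming quoted tokens always start with `@"`, as the `slice(2)` in the original implies), where the flag is derived from the token itself:

```typescript
// Same behavior as the leaked extractSearchToken, but the quoting is
// inferred from the token instead of being passed as a separate flag.
function extractSearchToken(token: string): string {
  if (token.startsWith('@"')) {
    // Quoted form: drop the @" prefix and an optional closing quote.
    return token.slice(2).replace(/"$/, '');
  }
  if (token.startsWith('@')) return token.slice(1);
  return token;
}

console.log(extractSearchToken('@"foo bar"')); // prints: foo bar
```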

luc_ 6 hours ago | parent | prev | next [-]

Fits with the origin story of Claude Code...

werdnapk 4 hours ago | parent [-]

insert "AI is just if statements" meme

loevborg 5 hours ago | parent | prev | next [-]

useCanUseTool.tsx looks special, maybe it's codegen'ed or copy 'n pasted? `_c` as an import name, no comments, use of promises instead of async functions. Or maybe it's just bad vibing...

Insensitivity 5 hours ago | parent [-]

Maybe, I do suspect _some_ parts are codegen or source map artifacts.

But if you take a look at the other file, for example `useTypeahead`, you'd see that even if there are a few codegen / source-map artifacts, the core logic and behavior is still just a big bowl of soup.

matltc 5 hours ago | parent | prev [-]

Lol even the name is crazy

q3k 6 hours ago | parent | prev | next [-]

  1. Randomly peeking at process.argv and process.env all around. Other weird layering violations, too.
  2. Tons of repeat code, eg. multiple ad-hoc implementations of hash functions / PRNGs.
  3. Almost no high-level comments about structure - I assume all that lives in some CLAUDE.md instead.
delamon 5 hours ago | parent | next [-]

What is wrong with peeking at process.env? It is a global map, after all. I assume, of course, that they don't mutate it.

lioeters 4 hours ago | parent | next [-]

> process.env? It is a global map

That's exactly why, access to global mutable state should be limited to as small a surface area as possible, so 99% of code can be locally deterministic and side-effect free, only using values that are passed into it. That makes testing easier too.

hu3 5 hours ago | parent | prev | next [-]

For one it's harder to unit test.

withinboredom 4 hours ago | parent | prev | next [-]

Environment variables can change while the process is running and are not memory safe (though I suspect Node tries to wrap access with a lock). Meaning that if you check a variable at point A, enter a branch, and check it again at point B, it's not guaranteed to be the same value. This can cause you to enter "impossible conditions".

q3k 4 hours ago | parent | prev [-]

It's implicit state that's also untyped - it's just a String -> String map without any canonical single source of truth about what environment variables are consulted, when, why and in what form.

Such state should be strongly typed, have a canonical source of truth (which can then be also reused to document environment variables that the code supports, and eg. allow reading the same options from configs, flags, etc) and then explicitly passed to the functions that need it, eg. as function arguments or members of an associated instance.

This makes it easier to reason about the code (the caller will know that some module changes its functionality based on some state variable). It also makes it easier to test (both from the mechanical point of view of having to set environment variables which is gnarly, and from the point of view of once again knowing that the code changes its behaviour based on some state/option and both cases should probably be tested).
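As a minimal sketch of that pattern (all names hypothetical, not from the leaked code): read the environment once at a single entry point into a typed, validated config object, then pass that object down explicitly.

```typescript
// Hypothetical sketch: one canonical, typed source of truth for env-driven options.
interface AppConfig {
  verbose: boolean;
  apiBaseUrl: string;
  maxRetries: number;
}

// Intended to be the only place in the codebase that reads the environment.
function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const maxRetries = Number(env.MAX_RETRIES ?? '3');
  if (!Number.isInteger(maxRetries) || maxRetries < 0) {
    throw new Error(`MAX_RETRIES must be a non-negative integer, got ${env.MAX_RETRIES}`);
  }
  return {
    verbose: env.VERBOSE === '1',
    apiBaseUrl: env.API_BASE_URL ?? 'https://api.example.com',
    maxRetries,
  };
}

// Downstream code takes the config as an argument -- no hidden global reads.
function describeRetries(config: AppConfig): string {
  return `will retry ${config.maxRetries} time(s)`;
}
```

Production code calls `loadConfig(process.env)` once at startup; tests can pass a plain object like `loadConfig({ VERBOSE: '1' })` instead of mutating real environment variables.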

loevborg 5 hours ago | parent | prev | next [-]

You're right about process.argv - wow, that looks like a maintenance and testability nightmare.

darkstar_16 5 hours ago | parent [-]

They use claude code to code it. Makes sense

s3p 5 hours ago | parent | prev [-]

It probably exists only in CLAUDE or AGENTS.md since no humans are working on the code!

wklm 5 hours ago | parent | prev [-]

have a look at src/bootstrap/state.ts :D

PierceJoy 5 hours ago | parent | prev | next [-]

Nothing a couple /simplify's can't take care of.

bakugo 2 hours ago | parent | prev | next [-]

It's impressive how fast vibe coders seem to flip-flop between "AI can write better code than you, there's no reason to write code yourself anymore; if you do, you're stuck in the past" and "AI writes bad code but I don't care about quality and neither should you; if you care, you're stuck in the past".

I hope this leak can at least help silence the former. If you're going to flood the world with slop, at least own up to it.

linesofcode 4 hours ago | parent | prev [-]

Code quality no longer carries the same weight as it did pre-LLMs. It used to matter because humans were the ones reading/writing it, so you had to optimize for readability and maintainability. But these days what matters is that the AI can work with it and you can reliably test it. Obviously you don’t want code quality to go totally down the drain, but there is a fine balance.

Optimize for consistency and a well thought out architecture, but let the gnarly looking function remain a gnarly function until it breaks and has to be refactored. Treat the functions as black boxes.

Personally the only time I open my IDE to look at code, it’s because I’m looking at something mission critical or very nuanced. For the remainder I trust my agent to deliver acceptable results.

Sathwickp 4 hours ago | parent | prev | next [-]

They do have a couple of interesting features that have not been publicly heard of yet:

Like KAIROS, which seems to be an inbuilt AI assistant, and Ultraplan, which seems to enable remote planning workflows, where a separate environment explores a problem, generates a plan, and then pauses for user approval before execution.

VadimPR 3 hours ago | parent | prev | next [-]

Anthropic team does an excellent job of speeding up Claude Code when it slows down, but for the sake of RAM and system resources, it would be nice to see it rewritten in a more performant framework!

And now, with Claude on a Ralph loop, you can.

bethekind 32 minutes ago | parent [-]

This. If I run 4 Claude code opus agents with subagents, my 8gb of RAM just dies.

I know they can do better

sourcegrift an hour ago | parent | prev | next [-]

Cheap chinese models incoming.

pplonski86 an hour ago | parent | prev | next [-]

I thought it was an open source project on GitHub? https://github.com/anthropics/claude-code no?

athorax an hour ago | parent [-]

Did you even look in that repo?

boxerbk an hour ago | parent | prev | next [-]

Maybe everyone should slow the fuck down - https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing...

mapcars 6 hours ago | parent | prev | next [-]

Are there any interesting/unique features present in it that are not in the alternatives? My understanding is that it's just a client for the powerful LLM.

nblintao an hour ago | parent | next [-]

Doesn't look like just a thin wrapper to me. The interesting part seems to be the surrounding harness/workflow layer rather than only the model call itself.

I was trying to keep track of the better post-leak code-analysis links on exactly this question, so I collected them here: https://github.com/nblintao/awesome-claude-code-postleak-ins...

swimmingbrain 6 hours ago | parent | prev [-]

From the directory listing having a cost-tracker.ts, upstreamproxy, coordinator, buddy and a full vim directory, it doesn't look like just an API client to me.

Diablo556 4 hours ago | parent | prev | next [-]

haha.. Anthropic needs to hire a fixer from vibecodefixers.com to fix all that messy code.. lol

derwiki 2 hours ago | parent [-]

I don’t think they can hear you over the billions of dollars they are generating, and definitely not over them redefining what SWE means.

infinitezest 2 hours ago | parent [-]

And they can't hear you from under the enormous pile of debt they're fighting to overcome. Maybe try again in 2028.

ramesh31 an hour ago | parent | prev | next [-]

Who cares? It's Javascript, if anyone were even remotely motivated deobfuscation of their "closed source" code is trivial. It's silly that they aren't just doing this open source in the first place.

sbochins 3 hours ago | parent | prev | next [-]

Does this matter? I think every other agent cli is open source. I don’t even know why Anthropic insist upon having theirs be closed source.

dev213 2 hours ago | parent | prev | next [-]

Undercover mode is pretty interesting and potentially problematic: https://github.com/sanbuphy/claude-code-source-code/blob/mai...

theanonymousone 5 hours ago | parent | prev | next [-]

I am waiting now for someone to make it work with a Copilot Pro subscription.

treexs 5 hours ago | parent [-]

does this not work? https://www.mintlify.com/samarth777/claude-code-copilot/intr...

theanonymousone 4 hours ago | parent [-]

I believe GitHub can and does suspend accounts that use such proxies.

tekacs 4 hours ago | parent | prev | next [-]

In the app, it now reads:

> current: 2.1.88 · latest: 2.1.87

Which makes me think they pulled it - although it still shows up as 2.1.88 on npmjs for now (cached?).

panny 3 hours ago | parent [-]

Too little, too late. Someone has it building now.

https://github.com/oboard/claude-code-rev

ZainRiz 35 minutes ago | parent | prev | next [-]

Maybe now someone will finally fix the bug that causes claude code to randomly scroll up all the way to the top!

LeoDaVibeci 6 hours ago | parent | prev | next [-]

Isn't it open source?

Or is there an open source front-end and a closed backend?

dragonwriter 6 hours ago | parent | next [-]

> Isn't it open source?

No, it's not even source available.

> Or is there an open source front-end and a closed backend?

No, it's all proprietary. None of it is open source.

alkonaut 5 minutes ago | parent [-]

> it's not even source available

It _wasn't_ even source available.

avaer 6 hours ago | parent | prev | next [-]

No, it was never open source. You could always reverse engineer the cli app but you didn't have access to the source.

karimf 5 hours ago | parent | prev | next [-]

The Github repo is only for issue tracker

matheusmoreira 5 hours ago | parent [-]

Wow it's true. Anthropic actually had me fooled. I saw the GitHub repository and just assumed it was open source. Didn't look at the actual files too closely. There's pretty much nothing there.

So glad I took the time to firejail this thing before running it.

agluszak 6 hours ago | parent | prev | next [-]

You may have mistaken it with Codex

https://github.com/openai/codex

yellow_lead 6 hours ago | parent | prev [-]

No

napo 2 hours ago | parent | prev | next [-]

The autoDream feature looks interesting.

aiedwardyi 3 hours ago | parent | prev | next [-]

interesting to see cost-tracker.ts in there. makes you wonder why they track usage internally but don't surface it to users in any meaningful way

jsmith45 3 hours ago | parent | next [-]

Cost tracking is used if you connect claude code with an api key instead of a subscription. It powers the /cost command.

It is tricky to meaningfully expose a dollar-cost-equivalent value for subscribers in a way that won't confuse users into thinking they will get a bill for that amount. This is especially true with overages enabled, since a session that used overages was likely partially covered by the plan (and thus zero-rated) with the rest at API prices, and the client can't really know the breakdown.
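As a rough illustration of the pure-API case (the prices below are made-up placeholders, not Anthropic's actual rates), the arithmetic behind a /cost-style estimate is simple when every token is billed at API prices:

```typescript
// Hypothetical per-million-token prices in USD -- placeholders, not real rates.
const PRICE_PER_MTOK = { input: 3.0, output: 15.0 };

// Estimate the API-equivalent cost of a session from its token counts.
function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * PRICE_PER_MTOK.input +
    (outputTokens / 1_000_000) * PRICE_PER_MTOK.output
  );
}
```

With a subscription plus overages, the client would also need to know how many of those tokens were zero-rated by the plan, which is exactly the breakdown it can't see.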

thebigspacefuck 30 minutes ago | parent | prev | next [-]

Configure it to show on your status line

fcarraldo 3 hours ago | parent | prev [-]

They do, you can just type /cost

Pent an hour ago | parent | prev | next [-]

April Fools

zoobab 3 hours ago | parent | prev | next [-]

Just the client side, written in JS; nothing to see here, the LLM is still secret.

They could have written that in curl+bash and it would not have changed much.

sourcegrift an hour ago | parent | prev | next [-]

Removed

ChicagoDave 5 hours ago | parent | prev | next [-]

I hope everyone provides excellent feedback so they improve Claude Code.

sudo_man 2 hours ago | parent | prev | next [-]

How this leak happened?

sbarre 2 hours ago | parent [-]

It's literally explained in the tweet, in the repo and in this thread in many places.

sudo_man 2 hours ago | parent [-]

yeah, and I still can't understand how a regex can leak the code, or what the map file is. I googled them and still can't understand what is going on

anhldbk 5 hours ago | parent | prev | next [-]

I guess it's time for Anthropic to open source Claude Code.

DeathArrow 5 hours ago | parent [-]

And while they are at it, open source Opus and Sonnet. :)

artdigital 3 hours ago | parent | prev | next [-]

Now waiting for someone to point Codex at it and rebuild a new Claude Code in Golang to see if it would perform better

thefilmore 3 hours ago | parent | prev | next [-]

400k lines of code per scc

hemantkamalakar 3 hours ago | parent | prev | next [-]

today being March 31st, is this a genuine issue or just perfectly timed April Fools noise? What do you think?

tw1984 an hour ago | parent | prev | next [-]

wondering whether it was a human mistake or a CLAUDE model error.

CookieJedi 2 hours ago | parent | prev | next [-]

Hmmm, dont like the vibe

agile-gift0262 2 hours ago | parent | prev | next [-]

time to remove its copyright through malus.sh and release that source under MIT

sudo_man 2 hours ago | parent [-]

who would do this?

temp7000 2 hours ago | parent | prev | next [-]

There are some rollout flags - via GrowthBook, Tengu, Statsig - though I'm not sure if they're used for A/B testing or not

bdangubic 3 hours ago | parent | prev | next [-]

I have 705 PRs ready to go :)

CookieJedi 2 hours ago | parent | prev | next [-]

Hmmm, not the vibe

daft_pink 2 hours ago | parent | prev | next [-]

Now we need some articles analyzing this.

jedisct1 4 hours ago | parent | prev | next [-]

It shows that a company you and your organization are trusting with your data, and allowing full control over your devices 24/7, is failing to properly secure its own software.

It's a wake up call.

prmoustache 4 hours ago | parent | next [-]

It is a client written in an interpreted language, running on your own computer; there is nothing to secure or hide, as the source was provided to you already. Or am I mistaken?

jedisct1 4 hours ago | parent [-]

It was heavily obfuscated, keeping users in the dark about what they’re installing and running.

DeathArrow 5 hours ago | parent | prev | next [-]

Why is Claude Code, a desktop tool, written in JS? Is the future of all software JS or Typescript?

jsk2600 5 hours ago | parent | next [-]

The original author of Claude Code is an expert on TypeScript [1]

[1] https://www.amazon.com/Programming-TypeScript-Making-JavaScr...

ghywertelling 3 hours ago | parent [-]

is that the reason why Anthropic acquired Bun, a javascript tooling company?

arthur-st 2 hours ago | parent [-]

Yes, that's essentially the only practical reason.

progx 3 hours ago | parent | prev | next [-]

Anthropic acquired bun last year https://bun.com/blog/bun-joins-anthropic

monkpit 3 hours ago | parent | prev | next [-]

Alternatively: why not?

bigbezet 5 hours ago | parent | prev | next [-]

It's not a desktop tool, it's a CLI tool.

But a lot of desktop tools are written in JS because it's easy to create multi-platform applications.

wanttosaythings 4 hours ago | parent | prev | next [-]

LLMs are good at JS and Python, which means everything from now on will be written in or ported to one of those two languages. So yeah, JS is the future of all software.

c0wb0yc0d3r 3 hours ago | parent [-]

This is a common take but language servers bridge the gap well.

Language servers, however, are a pain on Claude code. https://github.com/anthropics/claude-code/issues/15619

ivanjermakov 5 hours ago | parent | prev | next [-]

Because it's the most popular programming language in the world?

TiredOfLife 4 hours ago | parent | prev [-]

I am happy you woke up from your 10 year coma.

DeathArrow 5 hours ago | parent | prev | next [-]

I wonder what will happen with the poor guy who forgot to delete the code...

orphea 3 hours ago | parent | next [-]

  the poor guy
Do you mean the LLM?
epolanski 5 hours ago | parent | prev | next [-]

Responsibility goes upwards.

Why weren't proper checks in place in the first place?

Bonus: why didn't they setup their own AI-assisted tools to harness the release checks?

matltc 5 hours ago | parent | prev [-]

Ha. I'm surprised it's not a CI job

isodev 5 hours ago | parent | prev | next [-]

Can we stop referring to source maps as leaks? It was packaged in a way that wasn’t even obfuscated. Same as websites - it’s not a “leak” that you can read or inspect the source code.
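For context on what the map file exposes: a source map is plain JSON, and its optional `sourcesContent` field embeds the original pre-bundle source files verbatim, which is why shipping the `.map` file amounts to shipping the source. A minimal sketch of reading it back out (hypothetical helper, not tooling from the leak):

```typescript
// Shape of the fields we care about in a standard v3 source map.
interface SourceMapV3 {
  sources: string[];
  sourcesContent?: (string | null)[];
}

// Recover original files embedded in a source map, keyed by source path.
function extractSources(mapJson: string): Map<string, string> {
  const map: SourceMapV3 = JSON.parse(mapJson);
  const out = new Map<string, string>();
  (map.sourcesContent ?? []).forEach((content, i) => {
    if (content != null) out.set(map.sources[i], content);
  });
  return out;
}
```

In practice you'd feed this the contents of the published `.js.map` file (read with `fs.readFileSync` or similar); tools like the `source-map` package on npm do the same plus position mapping.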

kelnos 4 hours ago | parent | next [-]

If it was included unintentionally, then it's a leak.

bmitc 5 hours ago | parent | prev | next [-]

The source is linked to in this thread. Is that not the source code?

echelon 5 hours ago | parent | prev [-]

The only exciting leak would be the Opus weights themselves.

phtrivier 4 hours ago | parent | prev [-]

Maybe the OP could clarify. I don't like reading leaked code, but I'm curious: my understanding is that it is the source code for "claude code", the coding assistant that remotely calls the LLMs.

Is that correct ? The weights of the LLMs are _not_ in this repo, right ?

It sure sucks for Anthropic to get pwned like this, but it should not affect their bottom line much?

59nadir 3 hours ago | parent | next [-]

> I don't like reading leaked code

Don't worry about that, the code in that repository isn't Anthropic's to begin with.

phtrivier 3 minutes ago | parent [-]

You believe it's just a fake? (It would be ironic if the fake was generated by... Claude itself. Anyway.)

treexs 3 hours ago | parent | prev [-]

Yes it's the claude code CLI tool / coding agent harness, not the weights.

This code hasn't been open source until now and contains information like the system prompts, internal feature flags, etc.