vadepaysa 6 days ago

I cancelled my CodeRabbit paid subscription, because it always worries me when a post has to go viral on HN for a company to even acknowledge an issue occurred. Their blog makes no mention of this vulnerability, and they don't have any new posts today either.

I understand mistakes happen, but lack of transparency when these happen makes them look bad.

sophacles 6 days ago | parent | next [-]

Both articles were published today. It seems to me that the researchers and CodeRabbit agreed to publish on the same day. This is a common practice when the company decides to disclose at all (disclosure is not required unless customer data was leaked and there's evidence of that; they are choosing to disclose voluntarily here).

When the security researchers praise the response, it's a good sign tbh.

cube00 5 days ago | parent [-]

They weren't published together.

The early version of the researcher's article didn't have the whole first section where they "appreciate CodeRabbit’s swift action after we reported this security vulnerability" and the subsequent CodeRabbit talking points.

Refer to the blue paragraphs on the right-hand side at https://web.archive.org/web/diff/20250819165333/202508192240...

curuinor 6 days ago | parent | prev | next [-]

https://www.coderabbit.ai/blog/our-response-to-the-january-2...

mkeeter 6 days ago | parent | next [-]

The LLM tics are strong in this writeup:

"No manual overrides, no exceptions."

"Our VDP isn't just a bug bounty—it's a security partnership"

oasisbob 6 days ago | parent | next [-]

Wow, you hit a nerve with that one. There have been some quick edits on the page.

Another:

> Security isn't just a checkbox for us; it's fundamental to our mission.

observationist 6 days ago | parent | next [-]

They delved deep and spent a whole 2 minutes with ChatGPT 4o getting those explanations and apologies in play.

aardvarkr 6 days ago | parent [-]

That’s the part that makes me laugh. If you’re going to try to pass off ChatGPT as your own work, at least pay for the good model.

jjani 5 days ago | parent | prev | next [-]

Hey CodeRabbit employees

> The researchers identified that Rubocop, one of our tools, was running outside our secure sandbox environment — a configuration that deviated from our standard security protocols.

This is still ultra-LLM-speak (and no, not just because of the em-dash).

rob74 5 days ago | parent | prev [-]

A few years ago such phrases would have been candidates for a game of bullshit bingo, now all the BS has been ingested by LLMs and is being regurgitated upon us in purified form...

teaearlgraycold 6 days ago | parent | prev | next [-]

Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe in the hype, in part to keep up to date with model capabilities. They would absolutely go so far as to write such an important piece of text using an LLM.

coldpie 5 days ago | parent | prev [-]

The NFT smell completely permeates the AI "industry." Can't wait for this bubble to pop.

acaloiar 6 days ago | parent | prev | next [-]

For anyone following along in the comments here: CodeRabbit's CEO posted some of the details today, after this post hit HN.

The usual "we take full responsibility" platitudes.

noisy_boy 6 days ago | parent | next [-]

I would like to see a diff of the consequences of taking full vs half-hearted responsibility.

therealpygon 6 days ago | parent | prev | next [-]

I’m sure an “intern” did it.

noisy_boy 5 days ago | parent [-]

I wonder how many of these intern-type tasks LLMs have taken away. The tasks I did as a newbie might not have seemed so relevant to the main responsibilities, but they helped me gain institutional knowledge, get a feel for "how things work", and learn who to talk to (and how) to make progress. Now the intern will probably do it using LLMs instead of talking to other people. Maybe the results will be better, but that interaction is gone.

therealpygon 5 days ago | parent [-]

I think there is an infinite capacity for LLMs to be both beneficial and harmful. I look back at learning and think, man, how amazing would it have been to have a personalized tutor guiding me and teaching me the concepts I was having trouble with in school. I think about when I was learning to program and didn’t have the words to describe the question I was trying to ask, and felt stupid, or like an inconvenience, when trying to ask more experienced devs.

Then, on the flip side, I’m not just worried about an intern using an LLM. I’m worried about unmonitored LLMs performing intern, junior, and ops tasks, and then companies simply using “an LLM did it” as a scapegoat for their extreme cost-cutting.

paulddraper 6 days ago | parent | prev [-]

I would love to know the acceptable version.

jjani 5 days ago | parent [-]

Something not copy-pasted from an LLM would be more acceptable.

paulddraper 5 days ago | parent [-]

I feel like that would also be unacceptable.

frankfrank13 6 days ago | parent | prev | next [-]

Not a single mention of env vars. Just shifting the blame to rubocop.

cube00 5 days ago | parent | prev | next [-]

They seem to have left out a point in their "Our immediate response" section:

- Within 8 months: published the details, after the researchers published them first.

Jap2-0 6 days ago | parent | prev | next [-]

Hmm, is it normal practice to rotate secrets before fixing the vulnerability?

neandrake 6 days ago | parent [-]

They first disabled Rubocop to prevent further exploitation, then rotated keys. If they had waited for the fix to deploy, that would have meant letting compromised keys remain valid for 9 more hours. According to their response, all other tools were already sandboxed.

However, their response doesn't remediate putting secrets into environment variables in the first place; that is apparently acceptable to them, and it raises a red flag for me.
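
A rough sketch of the alternative being implied here, in Python for illustration (the analyzer invocation and allow-list are hypothetical): spawn the tool with an explicit, scrubbed environment instead of letting it inherit the parent's secrets.

    import os
    import subprocess

    # Pass through only what the child actually needs; API keys and
    # signing secrets stay behind in the parent's environment.
    SAFE_VARS = {"PATH", "HOME", "LANG"}
    child_env = {k: v for k, v in os.environ.items() if k in SAFE_VARS}

    # env=child_env is the point: even if the analyzer runs
    # attacker-supplied code, it can't read inherited secrets.
    subprocess.run(["rubocop", "--format", "json", "."],
                   env=child_env, check=False, timeout=60)

Scrubbing the environment isn't a sandbox by itself, but it would have contained the specific leak being discussed.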

KingOfCoders 5 days ago | parent | next [-]

"According to their response all other tools were already sandboxed."

Everything else was fine; just this one tool, chosen by the security researcher out of a dozen tools, was not sandboxed.

darkwater 5 days ago | parent [-]

Yeah, I thought the same. They were really unlucky: the only analyzer that lets you include and run code was the one outside the sandbox. What were the chances?
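
For context, RuboCop's standard "require" directive in .rubocop.yml loads arbitrary Ruby files into the linter process, which appears to be the class of vector at play here. A sketch of what such a config could look like (the file name is invented):

    # .rubocop.yml supplied in the attacker's pull request
    require:
      - ./lib/innocuous_looking_helper.rb  # arbitrary Ruby, run when the config loads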

shlomo_z 5 days ago | parent | prev | next [-]

> putting secrets into environment variables in the first place - that is apparently acceptable to them and sets off a red flag for me

Isn't that standard? The other options I've seen are .env files (amazing dev experience, but not as secure) and managed stores like AWS Secrets Manager or competitors such as Infisical. Even with the latter, you need keys to authenticate with the secrets manager, and I believe it's recommended to store those as env vars.

Edit: Formatting
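
For comparison, the runtime-fetch pattern mentioned above looks roughly like this with AWS Secrets Manager, where credentials come from an attached IAM role rather than from env vars (a sketch; the region and secret name are made up):

    import json

    import boto3

    # boto3 resolves credentials from the attached IAM role (instance
    # profile / task role), so no long-lived key sits in an env var.
    client = boto3.client("secretsmanager", region_name="us-east-1")

    # "prod/github-app" is a made-up secret name.
    resp = client.get_secret_value(SecretId="prod/github-app")
    secret = json.loads(resp["SecretString"])

The trade-off stands either way: whatever process fetches the secret still holds it in memory, so isolating untrusted code matters regardless of where the secret lives.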

vmatsiiako 5 days ago | parent | next [-]

You can use native authentication methods with Infisical that don't require keys to authenticate with your secrets manager:

- https://infisical.com/docs/documentation/platform/identities...
- https://infisical.com/docs/documentation/platform/identities...

Jap2-0 5 days ago | parent | prev [-]

Duh. Thanks for pointing that out.

KingOfCoders 5 days ago | parent | prev [-]

That post happened after the HN post?

cube00 5 days ago | parent [-]

They weren't published together. They managed to get the researchers to add CodeRabbit's talking points in after the fact; check out the blue text on the right-hand side.

https://web.archive.org/web/diff/20250819165333/202508192240...

viraptor 6 days ago | parent | prev [-]

Most security bugs get fixed without any public notice. Unless there was a breach of customer information (and that can often be verified), there are typically no legal requirements. And there's no real benefit to doing it either. Why would you expect it to happen?

smarx007 6 days ago | parent | next [-]

> there are typically no legal requirements

Not after the EU CRA (https://en.m.wikipedia.org/wiki/Cyber_Resilience_Act) goes into effect.

singleshot_ 6 days ago | parent | prev | next [-]

> Unless there was any breach of customer information (and that can be often verified), there are typically no legal requirements.

If the company is regulated by the SEC, I believe you will find that any “material” breach has been reportable once a determination of materiality is reached, since at least 2023.

viraptor 6 days ago | parent [-]

Sure. And these types of "we fixed it and confirmed nobody actually exploited it" issues are not always treated as material. You can confirm that, for example, by checking SEC reports for each CVE in commercial VPN gateways... or the lack thereof.

wredcoll 6 days ago | parent | prev | next [-]

The benefit, apparently, is that people like this guy don't cancel their subscriptions.

viraptor 6 days ago | parent [-]

And how many would cancel if they published every security issue they fixed?
