| ▲ | biohazard2 7 hours ago |
| The developer just "cleaned up the code comments", i.e. they removed all TODOs from the code: https://github.com/nkuntz1934/matrix-workers/commit/2d3969dd... Professionalism at its finest! |
|
| ▲ | InsideOutSanta 6 hours ago | parent | next [-] |
| LLMs made them twice as efficient: with just one release, they're burning tokens and their reputation. It's kinda mindblowing. What even is the purpose of this? It's not like this is some post on the vibecoding subreddit, this is fricken Cloudflare. Like... What the hell is going on in there? |
|
| ▲ | oefrha 7 hours ago | parent | prev | next [-] |
| Oh wow I'm at a loss for words. To the author: see my comment at https://news.ycombinator.com/item?id=46782174, please also clean up that misaligned ASCII diagram at the top of the README, it's a dead tell. |
| |
| ▲ | corvad 7 hours ago | parent [-] | | Yeah deleting the TODOs like that is honestly a worse look. |
|
|
| ▲ | jtbaker 7 hours ago | parent | prev | next [-] |
Incoming force push to rewrite the history. Git doesn't lie!
| |
|
| ▲ | bob1029 7 hours ago | parent | prev | next [-] |
I also use this as a simple heuristic: https://github.com/nkuntz1934/matrix-workers/commits/main/ There are only two commits. I've never seen a "real" project that looks like this.
| |
▲ | victorbjorklund 6 hours ago | parent | next [-] | | To be honest, sometimes on my hobby projects I don't commit anything in the beginning (I know, not a great strategy) and then just dump everything in one large commit. | | |
▲ | masklinn 6 hours ago | parent [-] | | I’ve also been guilty of plugging away at something, and squashing it all before publishing for the first time because I look at the log and I go “no way I can release this, or untangle it into any sort of usefulness”. |
| |
| ▲ | InsideOutSanta 6 hours ago | parent | prev | next [-] | | I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean. | | |
| ▲ | ectospheno 6 hours ago | parent [-] | | I have a similar process. Internal repo where work gets done. External repo that only gets each release. |
| |
| ▲ | biohazard2 7 hours ago | parent | prev | next [-] | | The repository is less than one week old though; having only the initial commit wouldn't shock me right away. | | |
| ▲ | cortesoft 6 hours ago | parent | next [-] | | That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't actually spent time (more than a week!) in actual production. | | | |
| ▲ | jstanley 7 hours ago | parent | prev [-] | | But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden. | | |
| ▲ | btown 6 hours ago | parent [-] | | It was/is quite common for corporate projects that become open-source to be born as part of an internal repository/monorepo, and when the decision is made to make them open-source, the initial open source commit is just a dump of the files in a snapshotted public-ready state, rather than tracking the internal-repo history (which, even with tooling to rebase partial history, would be immensely harder to audit that internal information wasn't improperly released). So I wouldn't use the single-commit as a signal indicating AI-generated code. In this case, there are plenty of other signals that this was AI-generated code :) |
|
| |
| ▲ | Hamuko 3 hours ago | parent | prev [-] | | I might just make dummy commits ("asdadasdassadas") in the prototyping phase and then just squash everything to an "Initial commit" afterwards. |
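The squash-before-publishing workflow described in the comments above can be done without an interactive rebase. A minimal sketch in a hypothetical throwaway repo (empty commits stand in for real prototype work):

```shell
# Squash a pile of junk prototype commits into a single "Initial commit"
# before publishing. Demo in a hypothetical throwaway repository.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "asdadasdassadas"
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "wip"
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "more wip"
# Create a fresh root commit holding the current tree, then point the
# branch at it, discarding all prior history:
new=$(git -c user.email=a@b -c user.name=t commit-tree "$(git write-tree)" -m "Initial commit")
git reset -q --hard "$new"
git log --oneline   # a single "Initial commit" remains
```

An equivalent result comes from `git rebase -i --root` and marking every commit after the first as `squash`; the `commit-tree` route just skips the editor.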
|
|
| ▲ | godelski 6 hours ago | parent | prev | next [-] |
| Here's the post on LinkedIn https://www.linkedin.com/posts/nick-kuntz-61551869_building-... |
| |
▲ | tamnd 6 hours ago | parent | next [-] | | https://www.linkedin.com/in/nick-kuntz-61551869/ "DevSecOps Engineer, United States Army Special Operations Command · Full-time, Jun 2022 - Jul 2025 · 3 yrs 2 mos." Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work. | | |
| ▲ | alex_sf 6 hours ago | parent [-] | | Tbf, there is no one with a ‘serious DevSecOps background’. It’s an incredibly strong hint that the person is largely a goof. | | |
| ▲ | esseph 5 hours ago | parent [-] | | Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs. This person was in communications of the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And it looks like a very unusual connection to Delta Force. |
|
| |
▲ | BoredPositron 5 hours ago | parent | prev [-] | | I don't know what's more embarrassing: the deed itself, not recognizing the bullshit produced, or the hasty attempt at a cover-up. Not a good look for Cloudflare. Does nobody read the content they put out? You can just pretend to have done something and they will release it on their blog, yikes. |
|
|
| ▲ | corvad 7 hours ago | parent | prev | next [-] |
| Wow this is definitely not a software engineer. Hmm I wonder if Git stores history... |
|
| ▲ | usefulposter 7 hours ago | parent | prev | next [-] |
| Reminds me of Cloudflare's OAuth library for Workers.
> Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security
> To emphasize, this is not "vibe coded".
> Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.
...Some time later... https://github.com/advisories/GHSA-4pc9-x2fx-p7vj |
| |
| ▲ | PUSH_AX 6 hours ago | parent [-] | | What is the learning here? There were humans involved in every step. Things built with security in mind are not invulnerable, human written or otherwise. | | |
▲ | btown 6 hours ago | parent | next [-] | | Taking a best-faith approach here, I think it's indicative of a broader issue, which is that code reviewers can easily get "tunnel vision" where the focus shifts to reviewing each line of code, rather than necessarily cross-referencing against both small details and highly-salient "gotchas" of the specification/story/RFC, and ensuring that those details are not missing from the code. This applies whether the code is written by a human or AI, and also whether the code is reviewed by a human or AI. Is a GitHub Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR that's being reviewed? Or read relevant RFCs? (And does it even have permission to do all this?) And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture? This all leads to a conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will. | |
| ▲ | kvdveer 6 hours ago | parent | prev | next [-] | | This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing. If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim it can start to mean something. | |
▲ | blibble 6 hours ago | parent | prev | next [-] | | the problem with "AI" is that, by the very way it was trained, it produces plausible-looking code. So the "reviewing" process is looking for needles in a haystack. When you have no understanding or mental model of how it works (because there isn't one), it's a recipe for disaster for anything other than trivial projects | |
| ▲ | parliament32 6 hours ago | parent | prev [-] | | The learning is "they lied". After all, apart from marketing materials making a claim, where is the evidence? | | |
| ▲ | PUSH_AX 6 hours ago | parent [-] | | Wait, we think they’re lying because an advisory was eventually found? We think that should be impossible with people involved? | | |
| ▲ | usefulposter 5 hours ago | parent | next [-] | | Reading the necessary RFC is table stakes. Instead we got this: >"NOOOOOOOO!!!! You can't just use an LLM to write an auth library!" >"haha gpus go brrr" (Those lines remain in the readme, even now: https://github.com/cloudflare/workers-oauth-provider?tab=rea...) | |
| ▲ | huimang 6 hours ago | parent | prev | next [-] | | To me it's likely, given the extremely rudimentary nature of that issue. | |
| ▲ | parliament32 5 hours ago | parent | prev [-] | | If you're asking in good faith, > Every line was thoroughly reviewed and cross-referenced with relevant RFCs The issue in the CVE comes from direct contradiction of the RFC. The RFC says you MUST check redirect uris (and, as anyone who's ever worked with oauth knows, all the functionality around redirect uris is a staple of how oauth works in the first place -- this isn't some obscure edge case). They didn't make a mistake, they simply did not implement this part of the spec. When they said every line was "thoroughly reviewed" and "cross referenced", yes, they lied. | | |
| ▲ | sally_glance 4 hours ago | parent [-] | | I mean, you can't review or cross reference something that isn't there... So interpreting in good faith, technically, maybe they just forgot to also check for completeness? /s |
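For context on the redirect_uri point raised in this subthread: RFC 6749 requires the authorization server to reject an authorization request whose redirect_uri does not match a URI registered for the client. A minimal illustrative sketch, with hypothetical names and not the library's actual code:

```python
# Illustrative sketch, not Cloudflare's actual implementation. RFC 6749
# requires exact matching of redirect_uri against the client's registered
# URIs; prefix or substring matching invites open-redirect token theft.
REGISTERED_REDIRECT_URIS = {
    "client-123": {"https://app.example.com/callback"},  # hypothetical client
}

def redirect_uri_is_valid(client_id: str, redirect_uri: str) -> bool:
    """Exact string comparison against the client's registered set."""
    return redirect_uri in REGISTERED_REDIRECT_URIS.get(client_id, set())
```

An authorization endpoint would run this check before issuing a code, and a request that fails it must be rejected outright rather than redirected anywhere.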
|
|
|
|
|
|
| ▲ | esnard 7 hours ago | parent | prev | next [-] |
| No more vulnerabilities then I guess! |
|
| ▲ | rideontime 7 hours ago | parent | prev | next [-] |
| Hilarious. Judging by the username, it's the same person who wrote the slop blog post, too. |
|
| ▲ | guluarte 6 hours ago | parent | prev [-] |
they should have at least rebased it and removed it from the git history
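What that rebase would look like, sketched in a hypothetical throwaway repo (empty commits stand in for the real ones):

```shell
# Hypothetical demo: drop the latest commit (e.g. a "clean up code comments"
# commit) from a branch's history, as suggested above.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "Initial commit"
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "Clean up code comments"
git reset -q --hard HEAD~1       # rewrite local history to drop the commit
# git push --force origin main   # would rewrite the published branch (no remote here)
git log --oneline                # only "Initial commit" remains
```

Though even after a force push, the old commit typically remains fetchable by its SHA on GitHub, so direct links to it keep working: rewriting history hides the commit from the log, it doesn't erase it.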