| ▲ | candiddevmike 12 hours ago |
| None of these tools perform particularly well and all lack context to actually provide a meaningful review beyond what a linter would find, IMO. The SOTA isn't capable of using a code diff as a jumping off point. Also the system prompts for some of them are kinda funny in a hopelessly naive aspirational way. We should all aspire to live and breathe the code review system prompt on a daily basis. |
|
| ▲ | dakshgupta 12 hours ago | parent | next [-] |
| I agree that none perform _super_ well. I would argue they now go far beyond linters, which was perhaps not true even nine months ago. To the degree you consider this to be evidence: in the last 7 days, PR authors have replied to a Greptile comment with "great catch", "good catch", etc. 9,078 times. |
| |
| ▲ | onedognight 12 hours ago | parent | next [-] | | I fully agree. Claude's review comments have been 50% useful, which is great. For comparison, I have almost never found a useful TeamScale comment (a classic static analyzer). Even more importantly, half of Claude's good finds are orthogonal to those found by the other human reviewers on our team, i.e. it consistently points out things human reviewers miss, and vice versa. | | |
| ▲ | Sharlin 11 hours ago | parent [-] | | TBH that sounds like TeamScale just has too verbose default settings. On the other hand, people generally find almost all of the lints in Clippy's [1] default set useful, but if you enable "pedantic" lints, the signal-to-noise ratio starts getting worse – those generally require a more fine-grained setup, disabling and enabling individual lints to suit your needs. [1] https://doc.rust-lang.org/stable/clippy/ |
| |
| ▲ | blibble 11 hours ago | parent | prev | next [-] | | > To the degree you consider this to be evidence: in the last 7 days, PR authors have replied to a Greptile comment with "great catch", "good catch", etc. 9,078 times.

do you have a bot to do this too? | |
| ▲ | johnsillings 9 hours ago | parent | prev | next [-] | | I like the number of "great catches" as a measure of AI code review effectiveness | |
| ▲ | mulmboy 11 hours ago | parent | prev | next [-] | | People more often say that to save face by implying the issue you identified would be reasonable for the author to miss because it's subtle or tricky or whatever. It's often a proxy for embarrassment | | |
| ▲ | estimator7292 9 hours ago | parent [-] | | When mature, functional adults say it, the read is "wow, I would have missed that, good job, you did better than me". Reading embarrassment into that is extremely childish and disrespectful. | | |
| ▲ | mulmboy 7 hours ago | parent [-] | | What I'm saying is that a corporate or professional environment can make people communicate in weird ways due to various incentives. Reading into people's communication is an important skill in these kinds of environments, and looking superficially at their words can be misleading. |
|
| |
| ▲ | 12 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | written-beyond 12 hours ago | parent | prev | next [-] | | I mean, how far Rust's own Clippy lints got before any LLMs existed was actually insane. Clippy + Rust's type system would basically ensure my software worked as close to my spec as possible before the first run. LLMs have greatly lowered the bar for bringing Clippy-quality linting to every language, but at the cost of determinism. | |
| ▲ | boredtofears 12 hours ago | parent | prev | next [-] | | That sounds more like confirmation that Greptile is being included in a lot of agentic coding loops than anything else | |
| ▲ | tadfisher 12 hours ago | parent | prev [-] | | Not trying to sidetrack, but a figure like that is data, not evidence. At the very minimum you need context which allows for interpretation; 9,078 positive author comments would be less impressive if Greptile made 1,000,000 comments in that time period, for example. | | |
| ▲ | fragmede 11 hours ago | parent [-] | | over 7 days does contextualize it some, though. 9,078 comments / 7 (days) / 8 (hours) = 162.107, so if a human were doing this, that person would be making 162 comments an hour, 8 hours a day, 7 days a week? |
|
|
|
| ▲ | shagie 12 hours ago | parent | prev | next [-] |
| In some code that I was working on, I had

    // stuff
    obj.setSomeData(something);
    // fifteen lines of other code
    obj.setSomeData(something);
    // more stuff

The 'something' was a little bit more complex, but it was the same something with slightly different formatting. My linter didn't catch the repeat call. When I asked the AI chat for a review of the code changes, it correctly flagged that there was a repeat call. It also caught a repeat call in

    List<Objs> objs = someList.stream().filter(o -> o.field.isPresent()).toList();
    // ...
    var something = someFunc(objs);

    Thingy someFunc(List<Objs> param) {
        return param.stream().filter(o -> o.field.isPresent()). ...

where one of the filter calls is unnecessary... and it caught that across a call boundary.

So, I'd say that AI code reviews are better than a linter. There are still things that it fusses about because it doesn't know the full context of the application, the tables that make certain guarantees about the data, or the team's code conventions (in particular the use of internal terms within naming conventions). |
| |
| ▲ | realusername 11 hours ago | parent | next [-] | | I had a similar review by AI, except my equivalent of setSomeData was stateful and needed to be there in both places; the AI just didn't understand any of it. | | |
| ▲ | james_marks 11 hours ago | parent | next [-] | | When this happens to me, it makes me question my design. If the AI doesn't understand it, chances are it's counter-intuitive. Of course not all LLMs are equal, etc, etc. | |
| ▲ | wolletd 11 hours ago | parent | next [-] | | Then again, I have a rough idea of how I could implement this check with some (language-dependent) accuracy in a linter; a rough sketch follows below. With LLMs I... just hope and pray? | |
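Purely to illustrate the idea, a deliberately naive, string-based sketch in Java (a real linter would work on the AST and normalize the argument expressions; every name here is made up):

    import java.util.HashSet;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Naive "duplicate setter call" check: flags a second identical
    // receiver.setX(args) call within the same chunk of source text.
    public class DuplicateSetterCheck {
        private static final Pattern SETTER_CALL =
            Pattern.compile("(\\w+)\\.set(\\w+)\\((.*)\\);");

        public static void main(String[] args) {
            String source = String.join("\n",
                "obj.setSomeData(something);",
                "doOtherWork();",
                "obj.setSomeData(something);");

            Set<String> seen = new HashSet<>();
            Matcher m = SETTER_CALL.matcher(source);
            while (m.find()) {
                // receiver + setter name + raw argument text as a crude key
                String key = m.group(1) + ".set" + m.group(2) + "(" + m.group(3) + ")";
                if (!seen.add(key)) {
                    System.out.println("possible duplicate call: " + key);
                }
            }
        }
    }

This would obviously miss the "same something with slightly different formatting" case from the parent comment, which is exactly where the language-dependent AST work (or an LLM) comes in.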
| ▲ | realusername 7 hours ago | parent | prev [-] | | I'd agree with that but in the JS world, there's a lot of questionable library designs that are outside of my control. |
| |
| ▲ | frde_me 8 hours ago | parent | prev [-] | | My reaction in that case is that most other readers of the codebase would probably also assume this, and so it should be either made clearer that it's stateful, or it should be refactored to not be stateful |
| |
| ▲ | uoaei 11 hours ago | parent | prev | next [-] | | I'd say I see one anecdote, nothing to draw conclusions from. | |
| ▲ | fcarraldo 10 hours ago | parent | prev | next [-] | | Why isn’t `obj` immutable? | | |
| ▲ | shagie 8 hours ago | parent [-] | | Because 'obj' is an object that was generated from a JSON schema and pulled in as a dependency. The POJO generator was not set up to create immutable objects. |
| |
| ▲ | tayo42 10 hours ago | parent | prev [-] | | Unit tests catch that kind of stuff | | |
| ▲ | shagie 8 hours ago | parent | next [-] | | The code works perfectly - there is no issue that a unit test could catch... unless you are spying on objects created internally within a method and verifying that certain functions are called some number of times for given data. | |
| ▲ | tayo42 6 hours ago | parent [-] | | Sure and you can do that | | |
| ▲ | shagie 5 hours ago | parent [-] | | Trying to write the easiest code that I could test... I don't think I can without writing an excessively brittle test that would break at the slightest implementation change. So you've got this Java:

    public List<Integer> someCall() {
        return IntStream.range(1, 10).boxed().toList();
    }

    public List<Integer> filterEvens(List<Integer> ints) {
        return ints.stream()
            .filter(i -> i % 2 == 0)
            .toList();
    }

    int aMethod() {
        List<Integer> data = someCall();
        return filterEvens(data.stream().filter(i -> i % 2 == 0).toList()).size();
    }

And I can mock the class and return a spied List. But now I've got to have that spied List return a spied Stream that checks whether .filter(i -> i % 2 == 0) was called. But then someone comes along later and writes it as .filter(i -> i % 2 != 1) and the test breaks. Or someone adds another call to sort the list first, and the test breaks.

To that end, I'd be very curious to see the test code that verifies that when aMethod() is called, the List returned by someCall() is not filtered twice. What's more, it's not a useful test - "not filtered twice" isn't something that is observable. It's an implementation detail that could change with a refactoring.

Writing a test that verifies that filterEvens returns a list that only contains even numbers? That's a useful test. Writing a test that verifies that aMethod returns the count of the even numbers that someCall produced? That's a useful test. Writing a test that tries to enforce a particular implementation between the {} of aMethod? That's not useful and incredibly brittle (assuming it can be written at all). | | |
| ▲ | nl 3 hours ago | parent | next [-] | | You are correct and the objection is just completely invalid. There's no way anyone would or should write tests like this at the client level. I think they are just arguing for the sake of arguing. | |
| ▲ | tayo42 2 hours ago | parent | prev [-] | | You mention the tools you can use to make it happen. I think we're at the point where you need concrete examples to talk about whether it's worth it or not. If you have functions that can't be called twice, then you have no option other than testing implementation details like that. Yeah, there's a tradeoff between torturing your code to make everything about it testable and enforce certain behavior, or keeping it simpler. I have worked in multiple code bases where every function call had asserts on how many times it was called and what the args were. |
|
|
| |
| ▲ | anthonypasq96 10 hours ago | parent | prev [-] | | you're verifying std lib function call counts in unit tests? lmao. | |
| ▲ | tayo42 10 hours ago | parent | next [-] | | You can do that with mocks if it's important that something is only called once, or if there's likely some unintended side effect of calling it twice, and tests would catch the bug (a sketch of what that looks like follows below) | |
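A minimal sketch of what that looks like with Mockito and JUnit 5 (Repo and Service are hypothetical stand-ins; it assumes the collaborator is injectable):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.times;
    import static org.mockito.Mockito.verify;

    import org.junit.jupiter.api.Test;

    class CallCountTest {
        // Hypothetical collaborator and service under test.
        interface Repo { void save(String value); }

        static class Service {
            private final Repo repo;
            Service(Repo repo) { this.repo = repo; }
            void persist(String value) { repo.save(value); }
        }

        @Test
        void savesExactlyOnce() {
            Repo repo = mock(Repo.class);
            Service service = new Service(repo);

            service.persist("x");

            // Fails if save("x") was called zero times or more than once.
            verify(repo, times(1)).save("x");
        }
    }

Whether that's worth doing for something like stream.filter() is the question being argued below.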
| ▲ | anthonypasq96 8 hours ago | parent [-] | | I know you could do it; I'm asking why on earth you would feel it's vital to verify stream.filter() was called twice in a function | |
| |
| ▲ | noitpmeder 10 hours ago | parent | prev [-] | | You're not verifying the observable behavior of your application? lmao | |
| ▲ | shagie 7 hours ago | parent | next [-] | | How would you suggest tests around:

    void func() {
        printEvens(someCall().stream().filter(n -> n % 2 == 0).toList());
    }

    void printEvens(List<Integer> nums) {
        nums.stream().filter(n -> n % 2 == 0).forEach(n -> System.out.println(n));
    }

The first filter is redundant in this example. Duplicate code checkers look for exactly matching lines; I am unaware of any linter or static analyzer that would flag this. What's more, the unit test for printEvens (there is one) passes because it's working properly... and the unit test for the calling function passes because it is working properly too.

Alternatively, write the failing test for this code. | |
| ▲ | tayo42 6 hours ago | parent [-] | | Idk exactly how to do it in C++ because I'm not familiar with the tooling. You could write a test that makes sure the output of someCall is passed directly to printEvens without being modified. The example as you wrote it is hard to test in general. It's probably not something you would write if you're serious about testing. | |
| ▲ | shagie 6 hours ago | parent [-] | | In C++, the code would look like:

    #include <vector>
    #include <iostream>
    #include <algorithm>
    #include <iterator>

    std::vector<int> someCall()
    {
        return {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    }

    void printEvens(const std::vector<int>& nums)
    {
        std::ranges::for_each(nums, [](int n)
        {
            if (n % 2 == 0)
            {
                std::cout << n << '\n';
            }
        });
    }

    int main()
    {
        std::vector<int> data = someCall();
        std::vector<int> tmp;
        std::ranges::copy_if(data,
                             std::back_inserter(tmp),
                             [](int n) { return n % 2 == 0; }
        );
        printEvens(tmp);
        return 0;
    }

Nothing in there is wrong. There is no test that would fail short of going through the hassle of creating a new type that does some sort of introspection of its call stack to verify which function it's being called in. Likewise, identify a linter or other static analysis tool that could catch this issue.

Yes, this is a contrived example and it likely isn't idiomatic C++ (C++ isn't my 'native' language). The actual code in Java was more complex and had a lot more going on in other parts of the files. However, it should serve to show that there isn't a test for printEvens or someCall that would fail because the data was filtered twice. Additionally, it should show that a linter or other static analysis wouldn't catch the problem (I would be rather impressed with one that did).

A code review of the code from ChatGPT: https://chatgpt.com/share/69780ce6-03e0-8011-a488-e9f3f8173f...
|
| |
| ▲ | quietbritishjim 8 hours ago | parent | prev | next [-] | | A redundant filter() isn't observable (except in execution time). You could pick it up if you explicitly tracked whether it's being called redundantly (e.g. by counting predicate invocations, as sketched below), but it'd be very hard, and by the time you'd thought of doing that you'd certainly have already manually checked the code for it. | |
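For what it's worth, the explicit tracking could look something like this (illustrative only; a counting Predicate wrapper, all names invented):

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.function.Predicate;

    // Wraps a predicate so a test can observe how many times it actually ran.
    final class CountingPredicate<T> implements Predicate<T> {
        final AtomicInteger calls = new AtomicInteger();
        private final Predicate<T> inner;

        CountingPredicate(Predicate<T> inner) { this.inner = inner; }

        @Override
        public boolean test(T t) {
            calls.incrementAndGet();
            return inner.test(t);
        }
    }

    public class RedundantFilterDemo {
        public static void main(String[] args) {
            CountingPredicate<Integer> isEven = new CountingPredicate<>(n -> n % 2 == 0);
            List<Integer> data = List.of(1, 2, 3, 4);

            // Redundant double filter: the predicate runs 4 + 2 = 6 times
            // instead of the 4 you'd expect from a single pass.
            List<Integer> result = data.stream().filter(isEven).toList()
                                       .stream().filter(isEven).toList();

            System.out.println(result);             // [2, 4]
            System.out.println(isEven.calls.get()); // 6
        }
    }

Which rather proves the point: you only write this after you already suspect the redundant filter is there.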
| ▲ | anthonypasq96 8 hours ago | parent | prev [-] | | what happened to not testing implementation details? |
|
|
|
|
|
| ▲ | gherkinnn 11 hours ago | parent | prev | next [-] |
| Opus 4.5 catches all sorts of things a linter would not, and with little manual prompting at that: missing DB indexes, forgotten migration scenarios, inconsistencies with similar services, an overlooked edge case. Now I'm getting a robot to review the branch at regular intervals and poke holes in my thinking. The trick is not to use an LLM as a confirmation machine. It doesn't replace a human reviewer. I don't see the point of paying for yet another CI integration doing LLM code review. |
| |
| ▲ | roncesvalles 4 hours ago | parent | next [-] | | Exactly. This is like buying a smoothie blender when you already have an all-purpose mixer-blender. This whole space is at best an open-source project, not a whole company (let alone multiple of them!). It's very unlikely that any of these tools get better results than simply prompting "review these code changes" verbatim in your branch with the SOTA model du jour. | |
| ▲ | storystarling 9 hours ago | parent | prev | next [-] | | I came to the same conclusion and ended up wiring a custom pipeline with LangGraph and Celery. The markup on the SaaS options is hard to justify given the raw API costs. The main benefit of rolling it yourself seems to be the control over context retrieval—I can force it to look at specific Postgres schemas or related service definitions that a generic CI integration usually misses. | | |
| ▲ | philipwhiuk 6 hours ago | parent [-] | | Personally I'm hoping that once the bubble bursts and hardware improvement catches up, we start seeing reasonable prices for reasonable models on SaaS platforms that are not scary for SecOps. Not guaranteed though of course. |
| |
| ▲ | guelo 3 hours ago | parent | prev | next [-] | | All those llm wrapper companies make no sense. | |
| ▲ | philipwhiuk 6 hours ago | parent | prev | next [-] | | Currently attempting to get GitLab Duo's review feature enabled as a 'second pair of eyes'. I agree 100% that it's not replacing a human review. I would on the whole prefer 'lint-style' tools to catch most stuff because they don't hallucinate. But obviously they don't catch everything, so an LLM-based review seems like an additional useful tool. | |
| ▲ | ohyoutravel 11 hours ago | parent | prev [-] | | You’ve found the smoking gun! |
|
|
| ▲ | justapassenger 10 hours ago | parent | prev | next [-] |
| AI code review, to me, is similar to AI code itself. It's good (and constantly getting better) at dealing with mundane things: is the list reversed correctly? Are you dealing with pointers correctly? Do you have off-by-one issues? Where it sucks is high-level problems: is the code actually solving the business problem? Is it using the right dependencies? Does it fit into the broader design? Which is expected, and a great help for me. I'm happier as a human to spend less time checking whether you're managing a pointer's lifecycle correctly and to focus on ensuring that the code does what it needs to do. |
|
| ▲ | cbovis 11 hours ago | parent | prev | next [-] |
| GH Copilot is definitely far better than just a linter. I don't have examples to hand but one thing that's stood out to me is its use of context outside the changes in the diff. It'll pull in context that typically isn't visible in the PR itself, the sort of things that only someone experienced in the code base with good recall would connect the dots on (e.g. this doesn't conform to typical patterns, or a version of this is already encapsulated in reusable code, or there's an existing constant that could be used here instead of the hardcoded value you have). |
|
| ▲ | bartread 11 hours ago | parent | prev | next [-] |
| I don't know that I fully agree with that. I use Copilot for AI code review - just because it's built into GitHub and it's easy - and I'd say results are variable, but overall decent.

Like anything else AI, you need to understand what you're doing, so you need to understand your code and the structure of your application or service or whatever, because there are times it will say something that's just completely wide of the mark, or even the polar opposite of what's actually the case. And so you just ignore the crap and close the conversation in those situations.

At the same time, it does catch a lot of bugs and problems that fall into classes where more traditional linters really miss the mark. It can help fill holes in automated testing, spot security issues, etc., and it'll raise PRs for fixes that are generally decent. Sometimes they're not, but again, in those cases you just close them and move on. I'd certainly say that an AI code review is better than no code review at all, so it's good for a startup where you might be the only developer, or where there are only one or two of you and you don't cross over that much.

But the point I actually wanted to get to is this: I use Copilot because it's available as part of my GitHub subscription. Is it the best? I don't know. Does it add value with zero integration cost to me? Yes. And that, I suspect, is going to make it the default AI code review option for many GitHub subscribers. That does leave me wondering how much of a future there is for AI code review as a product or service outside of hosting platforms like GitHub and GitLab, and I have to imagine that an absolutely savage consolidation is coming. |
|
| ▲ | The_Fox 10 hours ago | parent | prev | next [-] |
| I installed CodeRabbit for our reviews in GitLab and am pretty happy with the results, especially considering the low price ($15/user/mo, I think). It regularly finds problems, including subtle but important problems that human reviewers struggle to find. And it can make pretty good suggestions for fixes. It also regularly complains about things that are possible in theory but impossible in practice, so we've gotten used to just resolving those comments without any action. Maybe if we used types more effectively it would do that less. We pay a lot more attention to what CodeRabbit says than to what DeepSource said when we used it. |
|
| ▲ | athrowaway3z 12 hours ago | parent | prev | next [-] |
| > The SOTA isn't capable of using a code diff as a jumping off point. Not a jumping off point, but I'm having pretty great results on a complicated fork on a big project with a `git diff main..fork > main.diff`, then load in the specs I keep, and tell it to review the diff in chunks while updating a ./review.md It's solving a problem I created myself by not reviewing some commits well enough, but it's surprisingly effective at picking up interactions spread out over multiple commits that might have slipped through regardless. |
|
| ▲ | victorbjorklund 9 hours ago | parent | prev | next [-] |
| They 100% catch bugs in code I work on. Is it replacing human review fully? No, not yet. But it is a useful tool. Just like most of us wouldn’t do a code review without having tests, linters etc run first. |
|
| ▲ | RetpolineDrama 8 hours ago | parent | prev | next [-] |
| >The SOTA isn't capable of using a code diff as a jumping off point. The low quality of HN comments has been blowing my mind. I have quite literally been doing what you describe every working day for the last 6+ months. |
|
| ▲ | vimda 12 hours ago | parent | prev [-] |
| Anecdotally, Claude Bug Bot has actually been super impressive in understanding non-trivial changes. Like, today, it noted a race condition in a ~1000-line Go change that go test -race didn't pick up. There are definitely issues, though. For one, it's non-deterministic, so you end up with half a dozen commits, with each run noting different issues. For another, it tends to be quite in favour of premature optimisation. But overall, well worth it in my experience |