ofirpress 5 days ago

[I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...

This issue affected a tiny fraction of existing agents in a tiny fraction of their runs, and we've now issued a fix.

This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.

comex 5 days ago | parent | next [-]

The comment you link to says that "we only performed a quick preliminary search" and "We do not have a method for automatically checking existing trajectories." In other words, it can't confirm that the issue only "affected a tiny fraction of existing agents in a tiny fraction of their runs" as you say. Are you saying that you have since separately confirmed this?

Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.

typpilol 5 days ago | parent [-]

Yeah, what he links to directly contradicts what he's saying lol

ofirpress 5 days ago | parent [-]

[flagged]

plumb_bob_00 5 days ago | parent | next [-]

If you are going to represent your team in public, you owe them better than a response like this.

hamonrye34 4 days ago | parent | next [-]

This is contingent on whether SWE N-class frontier models can do deep packet inspection.

keybored 4 days ago | parent | prev [-]

I say let them cook.

grepfru_it 4 days ago | parent [-]

Hol up

bflesch 5 days ago | parent | prev | next [-]

Unfortunately the bank account trajectories are not public, because unscrupulous corporations such as FAANG, who let thousands of engineers wade through my chat messages on their platforms, might not shy away from bribing academics to improve benchmarks of their billion-dollar AI initiatives.

It's also a bribe if my sibling gets a job with a $500k annual salary. Tech is not immune to it.

Zacharias030 5 days ago | parent [-]

You realize that this problem in SWE-Bench was discovered and publicized by people within those FAANG corporations?

TheDong 5 days ago | parent [-]

I'm sure some of the people working at Theranos thought there legitimately was a revolutionary blood-test machine.

The presence of a person who wants SWE-bench to have honest results and takes it seriously does not mean the results are free of perverse incentives, nor that everyone is behaving just as honestly.

Zacharias030 4 days ago | parent [-]

When SWE-bench was new in 2023, it was, with all due respect, a bit of a niche benchmark in LLM research. LLMs were so incredibly useless at solving these tasks that I think you could find a bit more empathy for the original academic authors. I don't think the Theranos example applies. Even the flawed benchmark was good enough to get us from ~GPT-4 to Claude 4's coding ability.

phyzome 5 days ago | parent | prev | next [-]

That sounds like the job of the person making the claim.

ares623 5 days ago | parent | prev | next [-]

They really did a "trust me bro" and "do your own research" huh

stronglikedan 5 days ago | parent [-]

The strange thing to me is that people would have it any other way. If you don't trust someone, why would you trust them to do the research for you? Bit of entitlement if you ask me.

wubrr 5 days ago | parent | next [-]

Because you should never just 'trust' random 'research'. Good analysis in this case will clearly explain the problem, the analysis methodology, findings, net effects, resolution, etc. Something you can read, and decide for yourself whether it is complete/incomplete, has holes, contradictions, etc. Not 'we looked into it and all is good - only potentially tiny effect' (no actual data or methodology presented at all) and then linking to a comment directly contradicting the claim...

It's a hilariously unserious and untrustworthy response.

haskellshill 4 days ago | parent | prev | next [-]

That's silly. If they show their work I won't have to trust them. Compare answering "The answer is 5, just compute it yourself." on a math test, vs. actually showing the calculation. The former clearly implies the person doesn't know what they're talking about.

croon 5 days ago | parent | prev | next [-]

Arguably the initial post was meant to convey confidence and authority on the subject. When questioned you could either dive deeper and explain in more detail why x because of y (if so inclined), ignore it, or... do what they did.

No one owes anyone anything, but if you want to represent something, answering the question in more detail would have either closed the issue or raised more scrutiny, both of which are good things when trying to figure something out.

I don't have to trust someone to check their research and look at how they worked. If the work doesn't pass muster, likely the results don't either. Again, you can view it as entitlement, but if you're not going to bother backing up your claim, why make the claim to start with?

aprilthird2021 5 days ago | parent | prev [-]

It's not that people are entitled. It's that "do your own research" is usually a cop out when you yourself don't understand the answer or are hiding it

typpilol 5 days ago | parent | prev | next [-]

Are you saying you've done way more than a cursory search and ruled out everything?

5 days ago | parent | prev [-]
[deleted]
_cs2017_ 5 days ago | parent | prev | next [-]

Even if this bug never existed, models can still see lookahead commits during pretraining. Do we expect this bug to have a greater impact than the pretraining leakage?

Obviously having something available at test time is more valuable than having it buried somewhere in the pretraining mixture. But in pretraining it presumably happens with high probability (why wouldn't coding models pretrain on all of GitHub?), while at test time it apparently happened only very occasionally.

bflesch 5 days ago | parent | prev | next [-]

> This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them.

You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case. It's like building a chroot and then allowing `cd ..` to break out of it. What other maybe extremely basic edge cases were missed?
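
To make the analogy concrete, here's a toy sketch of the kind of oversight I mean (purely illustrative, hypothetical paths, nothing from the actual SWE-bench harness): a naive path "jail" that forgets to normalize, so a single `..` walks right out of it.

```python
# Toy illustration of the analogy only (hypothetical paths, not SWE-bench
# code): a naive "jail" joins the user path onto a root without normalizing
# it, so a relative ".." escapes the sandbox.
import os

JAIL_ROOT = "/srv/jail"  # hypothetical sandbox root

def naive_jail_path(user_path: str) -> str:
    # BUG: no normalization, so "../etc/passwd" walks out of the root.
    return os.path.join(JAIL_ROOT, user_path)

def checked_jail_path(user_path: str) -> str:
    # Resolve the path first, then verify it still lives under the root.
    resolved = os.path.realpath(os.path.join(JAIL_ROOT, user_path))
    if not resolved.startswith(JAIL_ROOT + os.sep):
        raise ValueError(f"path escapes jail: {user_path}")
    return resolved

print(naive_jail_path("../etc/passwd"))  # /srv/jail/../etc/passwd -> outside the jail
# checked_jail_path("../etc/passwd")     # would raise ValueError
```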

> This doesn't change the overall picture or trends at all.

Outsiders without financial benefits from the current AI hype might have a different picture. And I'm a bit fed up with AI and its fake productivity promises enshittifying nearly all user-facing software that my clients and I are using, bundled with hefty price hikes from Microsoft and the like in order to pay for their "investments".

cjsaltlake 5 days ago | parent | next [-]

I'm also on the SWE-bench team. This was simply a classic bug. We had code that we believed was sufficient to hide/remove future GitHub history, and it turns out it was not. We've patched it.
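
For anyone curious what "hide/remove future GitHub history" can look like mechanically, here's a rough sketch of the general shape of such a fix. To be clear, this is an illustrative guess, not our actual patch; the repo path and base commit are placeholders.

```python
# Illustrative guess at the shape of such a fix (not the actual SWE-bench
# patch): freeze an evaluation checkout at the task's base commit and make
# everything after it unreachable. Repo path and commit are placeholders.
import subprocess

def git(repo: str, *args: str) -> str:
    return subprocess.run(
        ["git", "-C", repo, *args],
        check=True, capture_output=True, text=True,
    ).stdout

def freeze_at_base_commit(repo: str, base_commit: str) -> None:
    # Detach HEAD at the task's base commit.
    git(repo, "checkout", "--detach", base_commit)
    # Drop remotes so the agent can't just `git fetch` the real history back.
    for remote in git(repo, "remote").split():
        git(repo, "remote", "remove", remote)
    # Delete every branch and tag; only the detached HEAD stays reachable.
    for ref in git(repo, "for-each-ref", "--format=%(refname)").split():
        git(repo, "update-ref", "-d", ref)
    # Expire reflogs and prune the now-unreachable ("future") commits.
    git(repo, "reflog", "expire", "--expire=now", "--all")
    git(repo, "gc", "--prune=now", "--aggressive")

# freeze_at_base_commit("/tmp/task_repo", "abc1234")  # placeholder values
```

The idea is that once every branch, tag, remote, and reflog entry past the base commit is gone and unreachable objects are pruned, commands like `git log --all` have nothing "future" left to show.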

numbsafari 4 days ago | parent | next [-]

Your classic bug is being used as justification to destroy the careers and lives of tens of thousands of people. Read the room.

gg-plz 5 days ago | parent | prev [-]

[dead]

lieret 5 days ago | parent | prev | next [-]

[Also on the SWE-bench team] Part of the reason this didn't surface earlier is that it only seems to affect more recent models, maybe as a result of reward hacking during post-training. We're currently working on making trajectories easier to access for everyone through a web tool (rather than having to download things from AWS) to get even more eyes on them. The interface will also include search & LM inspection tools to specifically look for anything that might qualify as cheating.
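
To give a flavor of what that search could look for, here's a hedged sketch of a check over trajectory text; the patterns and the trajectory format are assumptions for illustration, not our actual tooling.

```python
# Hedged sketch of an automated check over trajectory text: flag shell
# commands that could expose repository history beyond the task's base
# commit. Patterns and trajectory format are assumptions, not the real tool.
import re

SUSPICIOUS_PATTERNS = [
    r"\bgit\s+log\b.*--all",           # walks every ref, not just HEAD
    r"\bgit\s+fetch\b",                # pulls fresh history from a remote
    r"\bgit\s+reflog\b",               # can resurrect "deleted" commits
    r"\bgit\s+rev-list\b.*--all",
    r"\bgit\s+show\s+[0-9a-f]{7,40}",  # inspects an arbitrary commit hash
]

def flag_suspicious_commands(trajectory_text: str) -> list[str]:
    """Return the trajectory lines that match any suspicious pattern."""
    return [
        line.strip()
        for line in trajectory_text.splitlines()
        if any(re.search(p, line) for p in SUSPICIOUS_PATTERNS)
    ]

example = "git log --all --oneline\ngit diff\ngit fetch origin main"
print(flag_suspicious_commands(example))
# ['git log --all --oneline', 'git fetch origin main']
```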

doctorpangloss 5 days ago | parent | prev | next [-]

> other maybe extremely basic edge cases were missed?

The whole testing enterprise is kind of stupid. Pray tell, if their stupid little benchmark said, "this niche little smaller model performs the best" would anyone listen to it? No.

The thing that is fucked about benchmarks is that we only pay attention to the ones that match these vibes: "The latest models from the biggest companies should perform the best." That's why they are stupid. They could be the most brilliantly administered (they're not), nail execution (they don't), but it still has to confirm vibes.

And listen these guys are serious academics, they're very smart people, but on the other hand, you know, I'm still right. The team doesn't have a secular, objective explanation for why nobody talks about benchmarks that don't confirm the biases of the public for what should perform well. Three people are commenting on just this post alone, but the stuff that I am saying: crickets.

The only reasonable explanation for "why do people ignore [LLM tests that show that some non-giant corporation LLM is the best]?" trades on cultural and humanities stuff that are outside their expertise. They don't see that the stuff the humanities people are saying generalizes to what they do. That would be too inconvenient. Every testing system suffers from this bias anomaly, it's just easier to talk about this with something secular like LLMs compared to say, tests of children.

They hear biases and they're like, "something something, Algorithmic Justice League." Their brains turn off and they think that until someone gets in front of Congress and points a finger, nothing in the humanities applies to them. Wrong. The Princeton lab has probably met with a lot of humanities people, and there was a lot of head shaking and agreement, but it's not like, something that tells them that their whole enterprise doesn't make sense makes them stop and pursue anything else. It's just in one ear and out the other.

Doing free tests for giant corporations to market their shit, and then toiling away in obscurity when the tests do not market huge corporation's shit: it doesn't make sense period. But that's what they're doing.

If you need a simple theory for how Big LLM performs so well on SWE-Bench, it's as simple as: well they've seen the questions by running them, obviously, and someone has also tested the questions in their own personal chatbot sessions sometime in the past, and these are online systems, and OpenAI, Anthropic and Google run ETL pipelines that paraphrase user data for salient inputs to train on, so of course, they've all been trained on the test set. In reality, if these things were so fucking good as SWE Bench said, they'd be making a bajillion bucks making all this enterprise software, or they'd show even 1 novel math discovery, or whatever. But they do not have something as powerful as the benchmarks say, so that doesn't happen.

mustaphah 5 days ago | parent | prev [-]

> You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case [...]

I wouldn't be surprised if they left this loophole on purpose to give some (their?) agents extra leverage.

Edit #1: I didn't mean to imply bad intent; just thinking out loud.

Edit #2: Please, downvote responsibly. I deserve every one. https://www.youtube.com/watch?v=0FHEeG_uq5Y

gchamonlive 5 days ago | parent | next [-]

> I didn't mean to imply bad intent

> I wouldn't be surprised if they left this loophole on purpose

You didn't imply bad intent, you outright suggested it.

coldtea 5 days ago | parent | next [-]

He means he doesn't say it was necessarily bad intent, but mentions it as a possibility ("thinking out loud").

gchamonlive 3 days ago | parent [-]

Thinking out loud isn't a free pass to say stuff without consequences. Sure we are all protected under free speech, but free speech doesn't remove the meaning and the impact words have in the world.

mustaphah 5 days ago | parent | prev [-]

I could've phrased it better.

gchamonlive 5 days ago | parent [-]

You could rewrite it a thousand times; if the underlying idea is the same, suggesting something you don't know to be true, the outcome would be the same. Or did you mean something else? What was your intention with the message?

mustaphah 5 days ago | parent [-]

I meant it as a hint for anyone inclined to dig deeper. It's a possibility rather than something we can confidently dismiss.

gchamonlive 5 days ago | parent [-]

If it's a possibility and you don't want to dig deeper, better to sit out and not comment at all, lest you risk defamation.

Thinking out loud also doesn't make defamation acceptable.

Dylan16807 4 days ago | parent | next [-]

"It's probably not X, but we should consider X as we look at this." and "I feel like this might be X but I'm 50:50 on it." are not anywhere near defamation. You have to get a lot closer to certainty before it's an issue.

And listing out "a possibility but you don't want to dig deeper" is often a good contribution to a conversation.

In this case they worded it badly, but the basic idea of the comment isn't awful.

gchamonlive 4 days ago | parent [-]

That someone in the team might not have done it on purpose, but left it for convenience? How does that benefit the debate? I really fail to see any silver lining in doing such speculative comments without any substance whatsoever to back it up.

TheDong 5 days ago | parent | prev | next [-]

It's fine, this is an American site so JAQing is in fact safe under free speech.

You're welcome to ask "would none rid me of this meddlesome priest" with no fear.

gchamonlive 4 days ago | parent [-]

And I'm protected under free speech to try to educate people about good manners, so it's fine too.

2 days ago | parent [-]
[deleted]
2 days ago | parent | prev [-]
[deleted]
faangguyindia 5 days ago | parent | prev | next [-]

Never attribute something to malice which can be attributed to incompetence. Basically, this has been exploited plenty of times by some really smart folk to get what they want.

cjsaltlake 5 days ago | parent | prev [-]

We absolutely did not.

coldtea 5 days ago | parent | next [-]

Of course that's what a team that did it on purpose would also say :)

5 days ago | parent | prev [-]
[deleted]
enum 5 days ago | parent | prev | next [-]

SGTM. The transparency is good.

BestHackerOnHN 5 days ago | parent | prev | next [-]

[dead]

franktankbank 5 days ago | parent | prev | next [-]

#tiny

segmondy 5 days ago | parent | prev [-]

Reward hacking is a thing, and it's also a hint of the model's intelligence. We will fix this one, and the models will find a different way to reward hack in the future. "Cheating" is a sign of intelligence.

bflesch 5 days ago | parent [-]

I love the "cheating is a sign of intelligence" sound bite you provided. When AI engineers cheat we should applaud their intelligence and their lack of ethics.

"Cheating (biology), a metaphor used in behavioral ecology to describe organisms that receive a benefit at the cost of other organisms" [1]

The whole planet gets its Microsoft license fees jacked up so Microsoft can pay OpenAI, who in turn pays NVIDIA, while nontechnical decision makers slurp up the faked benchmarks and AI promises.

[1] https://en.wikipedia.org/wiki/Cheating_(disambiguation)

segmondy 5 days ago | parent | next [-]

Would it have been better if I called it a "shortcut" instead of cheating? All shortcuts are called cheating until people decide on their fairness. The AI was given a task to fix a bug, and it figured out that looking at another PR might yield a solution; if a human did that, it would clearly be called cheating. Does the AI know that it's cheating? Was it prompted to solve the task without cheating? If you give an AI access to the internet and quiz it, it will use info from the net to answer. Does that really skew its score? Is it cheating? Is it a sign of intelligence? Sure, I think all of those.

https://en.wikipedia.org/wiki/Reward_hacking

giveita 5 days ago | parent | prev [-]

Is it wrong? Aren't ethics and intelligence two different axes?

coldtea 5 days ago | parent [-]

Different, but probably not as orthogonal as one might think.

E.g., cooperative ethics have been necessary for the further development of human populations' intelligence (and the culture, technology, material wealth, nutrition, etc. that led to further increases in intelligence).

So a lack of ethics might be a sign of intelligence, but it's a parasitic intelligence that benefits the individual and, beyond a certain level and spread, works to the detriment of the further evolutionary development of the species.

robcohen 5 days ago | parent [-]

Aren't there only two rules that all groups follow in the animal kingdom?

- don't lie too often

- don't kill members of the in group

Seems like these would be required for any group to survive, which explains why they are universal. All other rules/ethics seem to be dependent on resource scarcity.

DrScientist 4 days ago | parent | next [-]

Groups don't follow rules as such, group behaviours emerge from the interaction of individual behaviours.

As to whether all groups display those rules, I suspect not, though it rather does depend on how you define a group: the definition of a group probably has some sort of collaboration built in (as opposed to a bunch of individuals that happen to live in the same geographic area).

coldtea 5 days ago | parent | prev [-]

>All other rules/ethics seem to be dependent on resource scarcity

That doesn't make the rest of the ethics (as a rule and mechanism) any less useful to help nurture the species and its intelligence.

It just makes them not absolute but dynamic and condition-dependent. But given a condition (e.g. resource scarcity), the appropriate ethics retain the utility we're talking about.