ofirpress 5 days ago

We (the Princeton SWE-bench team) built an agent in ~100 lines of code that does pretty well on SWE-bench, you might enjoy it too: https://github.com/SWE-agent/mini-swe-agent

simonw 5 days ago | parent | next [-]

OK that really is pretty simple, thanks for sharing.

The whole thing runs on these prompts: https://github.com/SWE-agent/mini-swe-agent/blob/7e125e5dd49...

  Your task: {{task}}. Please reply
  with a single shell command in
  triple backticks.
  
  To finish, the first line of the
  output of the shell command must be
  'COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT'.
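The control loop those prompts imply is only a few lines. Here's a minimal sketch (not the actual mini-swe-agent code; `query_llm` is a stand-in for whatever LLM client you use, and the function names are made up):

```python
import re
import subprocess

FINISH = "COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT"
TICKS = "`" * 3  # triple backticks, built here to keep them out of the source


def extract_command(reply: str) -> str:
    """Pull the single shell command out of the triple-backtick block."""
    match = re.search(TICKS + r"(?:\w*\n)?(.*?)" + TICKS, reply, re.DOTALL)
    if match is None:
        raise ValueError("no triple-backtick command in reply")
    return match.group(1).strip()


def run_agent(task: str, query_llm, max_steps: int = 50) -> str:
    """Ask for a command, run it, feed the output back, until done."""
    messages = [{"role": "user", "content": (
        f"Your task: {task}. Please reply with a single shell command "
        "in triple backticks. To finish, the first line of the output "
        f"of the shell command must be '{FINISH}'.")}]
    for _ in range(max_steps):
        reply = query_llm(messages)
        command = extract_command(reply)
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        output = result.stdout + result.stderr
        if output.split("\n", 1)[0] == FINISH:
            return output  # the agent signalled completion
        messages += [{"role": "assistant", "content": reply},
                     {"role": "user", "content": f"Observation:\n{output}"}]
    raise RuntimeError("step limit reached without finishing")
```

Everything else — no tool-call API, no structured output — is just text in, one shell command out.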
sireat 4 days ago | parent | next [-]

Pretty sure you also need about 120 lines of prompting from default.yaml

https://github.com/SWE-agent/mini-swe-agent/blob/7e125e5dd49...

nivertech 4 days ago | parent | prev | next [-]

  system_template: str = "You are a helpful assistant that can do anything."
anything? Sounds like an AI Safety issue ;)
greleic 4 days ago | parent [-]

You’d be surprised at the amount of time wasted because LLMs “think” they can’t do something. You’d be less surprised that they often “think” they can do something, but choose some straight ignorant path that cannot work.

There are theoretically impossible things to do, if you buy into only the basics. If you open your mind, anything is achievable; you just need to break out of the box you’re in.

If enough people keep feeding in that we need a time machine, the revolution will play out in all the timelines. Without it, Sarah Connor is lost.

curvaturearth 4 days ago | parent [-]

I'm already surprised by the amount of things they think they can do but can't


meander_water 5 days ago | parent | prev | next [-]

> 1. Analyze the codebase by finding and reading relevant files
> 2. Create a script to reproduce the issue
> 3. Edit the source code to resolve the issue
> 4. Verify your fix works by running your script again
> 5. Test edge cases to ensure your fix is robust

This prompt snippet from your instance template is quite useful. I use something like this for getting out of debug loops:

> Analyse the codebase and brainstorm a list of potential root causes for the issue, and rank them from most likely to least likely.

> Then create scripts or add debug logging to confirm whether your hypothesis is correct. Rule out root causes by executing your scripts and observing the output, in order from most likely to least likely.
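That two-stage debug prompt fits in a tiny helper; a sketch (the wording is adapted from the prompt above, the constant and function names are made up):

```python
# Hypothetical template for the hypothesis-ranking debug prompt.
DEBUG_PROMPT = """\
Analyse the codebase and brainstorm a list of potential root causes
for the issue below, and rank them from most likely to least likely.

Then create scripts or add debug logging to confirm whether each
hypothesis is correct, ruling out root causes in order of likelihood.

Issue:
{issue}
"""


def debug_prompt(issue: str) -> str:
    """Fill the issue description into the debug-loop prompt."""
    return DEBUG_PROMPT.format(issue=issue)
```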

afro88 4 days ago | parent [-]

Does this mean it's only useful for issue fixes?

regularfry 4 days ago | parent [-]

A feature is just an issue. The issue is that the feature isn't complete yet.

afro88 3 days ago | parent [-]

> 2. Create a script to reproduce the issue

Surely that would send it a bit off the rails to implement a feature?

regularfry 3 days ago | parent [-]

Sounds like an acceptance test to me!

afro88 3 days ago | parent [-]

True. I guess I should actually try it out :)

faangguyindia 5 days ago | parent | prev | next [-]

When a problem is entirely self-contained in a file, it's very easy to edit it with an LLM.

That's not the case with a codebase, where things are littered around in tune with specific model of organisation the developer had in mind.

fmbb 5 days ago | parent | next [-]

Lumpers win again!

https://en.wikipedia.org/wiki/Lumpers_and_splitters

koakuma-chan 5 days ago | parent | prev [-]

> in tune with specific model of organisation

You wish

BenderV 5 days ago | parent | prev | next [-]

Nice, but sad to see lack of tools. Most of your code is about the agent framework rather than being specific to SWE.

I've built a SWE agent too (for fun), check it out => https://github.com/myriade-ai/autocode

diminish 5 days ago | parent [-]

> sad to see lack of tools.

Lack of tools in mini-swe-agent is a feature. You can run it with any LLM no matter how big or small.

BenderV 4 days ago | parent [-]

I'm trying to understand what this has got to do with LLM size. IMHO, right tools allow small models to perform better than undirected tool like bash to do everything. But I understand that this code is meant to show people how function calling is just a template for the LLM.
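That last point can be made concrete. A sketch (the `TOOLS` registry and both function names are made up for illustration) of how a "tool call" is just templated text that the model emits and the harness parses:

```python
import json
import re
from pathlib import Path

# Hypothetical tool registry: name -> (description, callable).
TOOLS = {
    "read_file": ("Read a file and return its text",
                  lambda path: Path(path).read_text()),
}


def tools_prompt() -> str:
    """Render tool descriptions into plain prompt text (the 'template')."""
    lines = ['Reply with JSON like {"tool": "<name>", "args": {...}} '
             "to call a tool. Available tools:"]
    for name, (desc, _) in TOOLS.items():
        lines.append(f"- {name}: {desc}")
    return "\n".join(lines)


def dispatch(reply: str):
    """Parse a JSON tool call out of the model's free-text reply and run it."""
    call = json.loads(re.search(r"\{.*\}", reply, re.DOTALL).group(0))
    _, fn = TOOLS[call["tool"]]
    return fn(**call["args"])
```

Native function-calling APIs do the same rendering and parsing for you, with models finetuned on that template.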

diminish 4 days ago | parent [-]

mini-swe-agent, as an academic tool, aims to show the power of a simple idea against any LLM, and can be easily tested with different LLMs. Tool calls usually didn't work well with smaller LLMs. Below 7GB, I don't see many viable alternatives for tool calling beyond Qwen3 4B.

> right tools allow small models to perform better than undirected tool like bash to do everything.

Interestingly enough, for very large LLMs the newer mini-swe-agent was a refutation of this hypothesis from the original SWE-agent paper (https://arxiv.org/pdf/2405.15793), which assumed that specialized tools work better.

BenderV 3 days ago | parent [-]

Thanks for your answer.

I guess it's only a matter of finetuning.

LLMs have lots of experience with bash, so I get that they figure out how to work with it. They don't have experience with the custom tools you provide.

Also, LLM "tools" as we know them need better design (to expose state and dynamic actions).

Given both, an AI with the right tools will outperform an AI with a generic, uncontrolled tool.

zhlmmc 4 days ago | parent | prev | next [-]

Totally understandable. A general coding agent gets 95% of its capability from the model.

Teever 5 days ago | parent | prev | next [-]

What sort of results have you had from running it on its own codebase?

ghuntley 5 days ago | parent | prev [-]

Cheers, I'll add it in.