nielstron 6 hours ago

Hey, paper author here. We did try to get an even sample: we include both SWE-bench repos (which are large, popular, and mostly human-written) and a sample of smaller, more recent repositories with existing AGENTS.md files (these tend to contain LLM-written code, of course). Our findings generalize across both samples. What is arguably missing are small repositories of completely human-written code, but those are quite difficult to obtain nowadays.
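For readers unfamiliar with context files: AGENTS.md is a plain-Markdown file at the repository root that gives coding agents repo-specific instructions (setup, testing, conventions). A minimal hypothetical sketch, not taken from any repository in our study:

    # AGENTS.md

    ## Setup
    - Install dev dependencies: pip install -e ".[dev]"

    ## Testing
    - Run the test suite: pytest tests/ -x

    ## Conventions
    - Follow PEP 8; format with black before committing.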

menaerus 5 hours ago

Why stick to Python-only repositories, though?

troupo 5 hours ago

To reduce the number of variables to account for. To be able to finish the paper this year, and not next century. To work with a familiar language and environment. To use a language heavily represented in the training data.

I mean, it's not that hard to understand why.

menaerus 5 hours ago

[flagged]

troupo 3 hours ago

All research is conducted under constraints. It's not hard to work out what those constraints are by simply thinking about them.

Besides, one could actually open the paper and scroll to Section 5, where the authors acknowledge the need to expand beyond Python:

--- start quote ---

5. Limitations and Future Work

While our work addresses important shortcomings in the literature, exciting opportunities for future research remain.

# Niche programming languages

The current evaluation is focused heavily on Python. Since this is a language that is widely represented in the training data, much detailed knowledge about tooling, dependencies, and other repository specifics might be present in the models’ parametric knowledge, nullifying the effect of context files. Future work may investigate the effect of context files on more niche programming languages and toolchains that are less represented in the training data, and known to be more difficult for LLMs.

--- end quote ---