zwnow 8 hours ago

> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

And yet I expect the whole leaderboard to be full of AI submissions...

Edit: No leaderboard this year, nice!

chongli 8 hours ago | parent | next [-]

I am so glad there is no leaderboard this year. Making it a competition really is against the spirit of advent calendars in general. It’s also not a fair competition by default simply due to the issue of time zones and people’s life schedules not revolving around it.

There are plenty of programming competitions and hackathons out there. Let this one simply be a celebration of learning and the enjoyment of problem solving.

amitav1 4 hours ago | parent | next [-]

I agree with the first point but the second point feels irrelevant. Yeah, people's life schedules don't revolve around it, but that doesn't mean it shouldn't be a competition. Most people who play on chess.com don't have lives that revolve around it, but that doesn't mean that chess.com should abolish Elo rankings.

poulpy123 an hour ago | parent | next [-]

AFAIK your Elo score doesn't depend on your timezone.

acedTrex 4 hours ago | parent | prev [-]

The global leaderboard encouraged bad behavior against the entire project, including criminal things like attempting to DDoS the site.

zwnow 7 hours ago | parent | prev [-]

Yea fully agree. The leaderboards always made me feel bad.

retsibsi 8 hours ago | parent | prev | next [-]

Not this time:

> The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard.

losvedir 5 hours ago | parent | prev | next [-]

Depends how you look at it. Some of my colleagues rave about Claude Code, so I was thinking about trying it out on these puzzles. In that sense it is "going to the gym", just for a different thing. Since I do AoC every year, I feel like it'll give me a good feel for Claude Code compared to my baseline. And it's not just "prompting", but figuring out a workflow with tests and brainstorming and iteration and all that. I guess if the LLM can just one-shot every puzzle that's less interesting, but I suppose it would be good to know it can do that...

zwnow 4 hours ago | parent [-]

It 100% can do that. LLMs are trained on an unfathomable amount of data. Every AoC puzzle can be solved by identifying the algorithm behind it. It's Leetcode in a friendlier and more festive spirit.
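To illustrate the point about puzzles reducing to a known algorithm, here's a sketch using an AoC-style "two sum" puzzle (the sample numbers are made up for illustration): once you recognize it as the classic two-sum problem, a hash set gives an O(n) solution instead of a naive O(n^2) double loop.

```python
def find_pair(nums, target):
    """Return a pair of numbers from nums summing to target, or None.

    Once the puzzle is recognized as "two sum", the standard trick is
    to track numbers already seen in a set and check for the complement.
    """
    seen = set()
    for n in nums:
        if target - n in seen:
            return (target - n, n)
        seen.add(n)
    return None

print(find_pair([1721, 979, 366, 299, 675, 1456], 2020))  # (1721, 299)
```

This is exactly the kind of pattern matching that both experienced contestants and LLMs do quickly: map the puzzle text to a standard algorithm, then implement it.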

8 hours ago | parent | prev | next [-]
[deleted]
8 hours ago | parent | prev | next [-]
[deleted]
Cthulhu_ 8 hours ago | parent | prev | next [-]

I mean they're great programming tests, for both people and AI I'd argue - like, it'd be impressive if an AI can come up with a solution in short order, especially with minimal help / prompting / steering. But it wouldn't be a personal achievement, and if it was a competition I'd label it as cheating.

8 hours ago | parent | prev | next [-]
[deleted]
KolmogorovComp 8 hours ago | parent | prev | next [-]

> And yet I expect the whole leaderboard to be full of AI submissions...

There will be no global leaderboard this year.

stOneskull 8 hours ago | parent | prev [-]

i don't think there is a global leaderboard this year. just private ones.