the_duke 5 hours ago:
LLMs, and especially coding-focused models, have come a very long way in the past year. The difference when working on larger tasks that require reasoning is night and day. In theory it would be very interesting to go back and retry the 2024 tasks, but those will likely have ended up in the training data by now...
crystal_revenge 39 minutes ago:
> LLMs, and especially coding-focused models, have come a very long way in the past year.

I see people assert this all over the place, but personally I have decreased my usage of LLMs in the last year. During this change I've also increasingly developed the reputation of "the guy who can get things shipped" in my company. I still use LLMs, and likely always will, but I no longer let them do the bulk of the work, and I have benefited from it.
mbac32768 5 hours ago:
Last April I asked Claude Sonnet 3.7 to solve AoC 2024 day 3 in x86-64 assembler and it one-shotted solutions for parts 1 and 2(!) It's true this was 4 months after AoC 2024 came out, so it may have been trained on the answer, but I think that's way too soon. Day 3 in 2024 isn't a Math Olympiad-tier problem or anything, but it seems novel enough, and my prior experience with LLMs was that they were absolutely atrocious at assembler.
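(For reference: AoC 2024 day 3, "Mull It Over", asks you to scan corrupted memory for valid `mul(X,Y)` instructions and sum the products, with part 2 adding `do()`/`don't()` toggles that enable or disable the muls. In a higher-level language it's a few lines; the impressive part was getting it right in assembler. A rough Python sketch, with a made-up sample string rather than real puzzle input:)

```python
import re

def solve(memory: str) -> tuple[int, int]:
    # Part 1: sum the products of every valid mul(X,Y) in the corrupted input.
    # Part 2: only count muls while enabled; don't() disables, do() re-enables.
    part1 = part2 = 0
    enabled = True
    for m in re.finditer(r"mul\((\d{1,3}),(\d{1,3})\)|do\(\)|don't\(\)", memory):
        tok = m.group(0)
        if tok == "do()":
            enabled = True
        elif tok == "don't()":
            enabled = False
        else:
            product = int(m.group(1)) * int(m.group(2))
            part1 += product
            if enabled:
                part2 += product
    return part1, part2

# Hypothetical sample, not puzzle input; mul(4,5] is malformed and ignored.
print(solve("mul(2,3)xx mul(4,5] don't() mul(10,10) do() mul(6,7)"))  # (148, 48)
```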