kenjackson | 14 hours ago

I've hit this in little bursts, but one thing I've found is that LLMs are really good at reasoning about their own code and helping me understand how to diagnose and make fixes. I recently found some assembly source for some old C64 games and used an LLM to walk me through it (purely recreational). It was so good at it. If I were teaching a software engineering class, I'd have students use LLMs to analyze large code bases. One of the things we did in grad school was go through gcc and contribute something to it. Man, that code was so complex, and compilers were one of my specialties at the time. I think having an LLM with me would have made the task 100x easier.
devin | 14 hours ago

Does that mean you don't think you learned anything valuable through the experience of working through this complexity yourself? I'm not advocating for everyone to do all of their math on paper or something, but when I look back on the times I learned the most, it involved a level of focus and dedication that LLMs simply do not require. In fact, I think their default settings may unfortunately lead you toward shallow patterns of thought.
kenjackson | 13 hours ago

I wouldn't say there is no value to it, but I do feel like I learned more using LLMs as a companion than trying to figure everything out myself. And note, using an LLM doesn't mean that I don't think. It helps provide context and information that would often be time-consuming to figure out, and I'm not sure the time spent is proportional to the learning I'd get from it. Seeing how certain memory locations map to sprites, which in turn get drawn to the video display, is the kind of thing that might take a while of exploring to learn, but the LLM can tell me instantly. So a combination of both is useful.
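To make that concrete, here's roughly the mapping I mean, as a from-memory sketch in 6502 assembly rather than anything from the actual game (it assumes the default VIC bank, screen RAM at $0400, and a sprite bitmap already copied to $0340, i.e. block 13):

    ; point sprite 0 at block 13 and turn it on (illustrative values)
    lda #13          ; 13 * 64 = $0340, where the 63-byte sprite bitmap sits
    sta $07f8        ; sprite 0 pointer, at the end of the screen RAM block
    lda #%00000001
    sta $d015        ; VIC-II sprite enable register, bit 0 = sprite 0
    lda #100
    sta $d000        ; sprite 0 X position
    sta $d001        ; sprite 0 Y position
    lda #1
    sta $d027        ; sprite 0 color (1 = white)
    rts

The VIC-II multiplies the pointer byte by 64 to find the bitmap and draws those bytes onto the display every frame. None of this is hard, but an LLM hands you the whole chain in one answer instead of a trip through the memory map.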
devin | 13 hours ago

Hard to argue with such a pragmatic conclusion! I think the difficulty I have is that I don't think it's all that straightforward to assess how exactly I came not just to _learn_ things, but to _understand_ them. As a result, I have low confidence in knowing which parts of my understanding were the result of different kinds of learning.
kolinko | 13 hours ago

I'd say this is similar to working with assembly vs. C++ vs. Python. Programming in Python you learn less about low-level architecture trivia than in assembly, but you learn way more in terms of a high-level understanding of issues. When I had to deal with or patch complex C/C++ code, I rarely ever got a deep understanding of what the code did exactly - just barely enough to patch what was needed and move on. With the help of LLMs it's easier to understand what the whole codebase is about.