raddan · 3 hours ago
None of this is fresh in my mind, so my recollections might be a little hazy. The only issue I personally had when writing a decoder was keeping the alignment of the various fields right. I wrote the decoder in C#, and if I remember correctly I first tried to get fancy with some modern-ish deserialization code, then eventually resorted to a rather ugly but simple low-level byte reader. Nevertheless, I found it a relatively straightforward program to write, and I got most of what I wanted done in under a day.

The vibe-coded version was a different story. For simplicity, I wanted to stick to an early version of BMP (I don't remember which version off the top of my head). This was a simplified implementation for students to use and modify in a class setting, and sticking to early-version BMPs also made it harder for students to go off-piste, since random BMPs found on the internet probably would not work.

The main problem was that the LLM struggled to stick to a specific version of BMP. Some of the newer features (compression, color table, etc., if I recall correctly) have to be used in a coordinated way, and the LLM made a real mess here, mixing and matching newer features with older ones. But I did not understand that this was the problem until I gave up and started writing things myself.
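For what it's worth, the alignment pitfall is concrete: the BMP file header is a packed 14-byte little-endian structure followed by a 40-byte BITMAPINFOHEADER, and a naive struct mapping in a managed language can silently pad or misalign the fields. A minimal sketch of the "ugly but simple" byte-reader approach (in Python here rather than C#, with explicit offsets so nothing depends on compiler layout; the function name is just for illustration):

```python
import struct

def parse_bmp_headers(data: bytes) -> dict:
    # BITMAPFILEHEADER: 14 bytes, little-endian, packed (no padding).
    # "<2sIHHI" = magic(2) + file size(4) + two reserved(2+2) + pixel offset(4)
    magic, _file_size, _res1, _res2, pixel_offset = struct.unpack_from("<2sIHHI", data, 0)
    if magic != b"BM":
        raise ValueError("not a BMP file")
    # BITMAPINFOHEADER: the next 40 bytes, starting at offset 14.
    # Note width/height are signed; planes and bits-per-pixel are 16-bit.
    (hdr_size, width, height, _planes, bpp,
     compression, _img_size, _xppm, _yppm,
     _colors_used, _colors_important) = struct.unpack_from("<IiiHHIIiiII", data, 14)
    if hdr_size != 40:
        raise ValueError("not a BITMAPINFOHEADER-style BMP")
    return {"width": width, "height": height, "bpp": bpp,
            "compression": compression, "pixel_offset": pixel_offset}
```

The coordination problem shows up right here: `compression` must agree with `bpp` and with whether a color table follows the header, which is exactly the kind of cross-field constraint the LLM reportedly mixed up.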
jason_oster · 4 minutes ago · parent
You should try a similar task with a recent model (Opus 4.5, GPT 5.2, etc.) and see if there are any improvements since your last attempt. I also encourage using a coding agent. It can use test files that you provide, and perform compile-test loops to work out any mistakes. It sounds like you used an older model, and perhaps copy-pasted code from a chat session. (Just guessing, based on what you described.)