keepamovin 2 hours ago
Thank you. You and I can both be proud. Yes we can! :)

I posted yesterday about how I'd invented a new compression algorithm and used an AI to code it. The top comment was like "You or Claude? ... also ... maybe consider more than just 1-shotting some random idea." This was apparently based on one signal: I had incorrectly added ZIP to the list of tools that use LZW. (LZW is a tweak of LZ78, the dictionary-based sibling of LZ77 - the back-reference variant from the same Lempel-Ziv team - and LZ77 is what ZIP actually uses.) That mistake was apparently proof that I had no idea what I was doing, that I was a script kiddie who had one-shotted some crap idea and ended up with slop. This was despite the code working and the results table being accurate. Admittedly the readme was hyped, and that probably set this person off too.

But they were so far off in their belief that this was Claude's idea, Claude's solution, and just a one-off, that they misrepresented not only me and my work but the whole process it actually takes to make something like this. I suspect someone making such comments does not have much familiarity with automatic programming.

Because here's what actually happened: the path from my idea (intuited in 2013, but beyond my skills to do easily until I was using AI) to a working implementation was about as far from a "one-shot" as you can get. The first iteration (basic LZW + unbounded edit scripts + Huffman) was roughly 100x slower. I spent hours guiding the implementation through specific optimization attempts:

- BK-trees for lookups (eventually discarded as too slow).
- Moving to arithmetic coding - first coding codes + scripts together, later splitting them.
- Various strategies for pruning/resetting unbounded dictionaries.
- Finally landing on a fixed dictionary size with a Gray-code-style nearest neighbor search to cap the exploration.

The AI suggested some tactical fixes (like capping the Levenshtein table, or splitting edits/codes in the arithmetic coder), but the architectural pivots came from me. I had to find the winning path. (If you want a feel for the core idea, there are rough sketches further down.)

I stopped when the speed hit "sit-there-and-watch-it-able" (approx. 15s for 2MB) and the ratio consistently beat LZW - interestingly, for smaller dictionaries, which makes sense, since the edit scripts make each word more expressive. That was my bar: Is it real? Does it work? Can it beat LZW? Once it did, I shared it.

I was focused on the benchmark accuracy, not the marketing copy. I let the AI write the hype readme - I didn't really think it mattered. Yes, this person fixated on a small mistake there, and completely misrepresented - or had the wrong model of - what it actually took to produce this. I believe that kind of misperception comes from a lack of familiarity with using these tools in practice.

I consider this kind of "disdain from the unserious & inexperienced" to be a low-quality, low-effort comment that essentially equates AI with clueless engineers and slop. As antirez lays out: the same LLM performs very differently depending on the human guiding the process with their intuition, design, continuous steering, and idea of the software.

Maybe some people are just pissed off - maybe their dev skills sucked before AI, maybe they still suck with AI, and now they're mad at everything good people are doing with AI, and at AI itself? Idk, man. I just reckon this is the age where you can really make things happen that you couldn't make before, and you should be into it and positive about it - if you're serious about making stuff. And making stuff is never easy. And it's always about you.
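Anyway, for anyone actually curious about the thing itself rather than the meta-argument: here's roughly the shape of the encoder, as a toy Python sketch. The names are made up, the real code is structured differently and feeds its output through an arithmetic coder (codes and scripts as separate streams), and I'm hand-waving the tokenisation - but it shows the core idea of "LZW, except a near-miss plus an edit script also counts as a hit".

    import difflib

    def capped_levenshtein(a, b, cap):
        # Standard Levenshtein DP, but give up as soon as the best value in a
        # row already exceeds the cap (the "capping the table" trick).
        if abs(len(a) - len(b)) > cap:
            return cap + 1
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # delete
                               cur[j - 1] + 1,              # insert
                               prev[j - 1] + (ca != cb)))   # substitute
            if min(cur) > cap:
                return cap + 1
            prev = cur
        return prev[-1]

    def edit_script(src, dst):
        # Toy "edit script": (op, src_start, src_end, replacement) tuples that
        # turn src into dst. The real format is much tighter than difflib output.
        sm = difflib.SequenceMatcher(None, src, dst)
        return [(tag, i1, i2, dst[j1:j2])
                for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != 'equal']

    def encode(words, dict_size=4096, cap=3):
        dictionary = {}      # word -> code
        pairs = []           # (code, payload): payload is None, a script, or a literal
        for w in words:
            if w in dictionary:
                pairs.append((dictionary[w], None))            # exact dictionary hit
            else:
                # Nearest-neighbour search over the dictionary, bounded by `cap`.
                # (The real thing uses the Gray-code-style search to cap this
                # exploration; a linear scan is just the simplest stand-in.)
                best_code, best_word, best_dist = None, None, cap + 1
                for d, code in dictionary.items():
                    dist = capped_levenshtein(w, d, cap)
                    if dist < best_dist:
                        best_code, best_word, best_dist = code, d, dist
                if best_code is not None:
                    pairs.append((best_code, edit_script(best_word, w)))  # near hit + script
                else:
                    pairs.append((None, w))                               # literal fallback
                if len(dictionary) < dict_size:
                    dictionary[w] = len(dictionary)                       # grow until fixed size
        return pairs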
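And the decoder side, which is where you see why the (code, script) pairs are enough to get everything back: it grows its dictionary in lockstep with the encoder, so every code points at the same word on both ends. With the same dict_size on both sides, decode(encode(words)) round-trips.

    def apply_script(src, script):
        # Replay the (op, src_start, src_end, replacement) tuples from edit_script.
        out, last = [], 0
        for tag, i1, i2, repl in script:
            out.append(src[last:i1])   # copy the untouched stretch before this edit
            out.append(repl)           # inserts/replaces carry text, deletes carry ""
            last = i2
        out.append(src[last:])
        return ''.join(out)

    def decode(pairs, dict_size=4096):
        dictionary = []      # code -> word, grows exactly like the encoder's
        words = []
        for code, payload in pairs:
            if code is None:
                w = payload                                   # literal
            elif payload is None:
                w = dictionary[code]                          # exact dictionary hit
            else:
                w = apply_script(dictionary[code], payload)   # near hit: replay the script
            words.append(w)
            exact_hit = code is not None and payload is None
            if not exact_hit and len(dictionary) < dict_size:
                dictionary.append(w)                          # mirror the encoder's growth
        return words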
A master doesn't blame his tools.