amarcheschi 20 hours ago
I'm honestly slightly appalled by what we might miss by not reading the docs and just letting AI code. I'm attending a course where we have to analyze medical datasets using up to ~200 GB of RAM. Calculations can take some time. A simple skim through the library docs (or even asking the chatbot) tells you that one of the longest calls can be approximated, and the approximate solver takes about a third of the time of the standard one. Yet none of my colleagues thought to either look at the docs or ask the chatbot, because the code was working. And of course the chatbot used the "standard" solver, which you probably don't need for prototyping.

Another example: we had parts of one of the 3 datasets split across ~40 files, and we had to manipulate and save them before doing anything else. A colleague asked ChatGPT to write the code for it, and the result was single-threaded and not feasible. I pulled up htop, and upon seeing it was using only one core, I suggested she ask ChatGPT to make the conversion process different files in different threads, and we went from absolutely slow to quite fast.

But that presupposes the person running the code knows what's going on, why, what is not going on, and when it's possible to do something differently. Using it without asking yourself more about the context is a terrible use imho, but it's absolutely the direction I see we're headed towards, and I'm not a fan of it.
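The fix in the second example is the classic one-worker-per-file pattern. A minimal sketch of what the reworked script might look like, using Python's `concurrent.futures` with threads as described above (`convert_one` and the file layout are hypothetical placeholders, since the original code isn't shown; for pure-CPU per-file work, `ProcessPoolExecutor` would sidestep the GIL):

```python
# Per-file parallelization sketch: instead of converting ~40 files one
# after another on a single core, each file becomes an independent task
# that a pool of workers pulls from until the batch is done.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def convert_one(path: Path) -> Path:
    # Hypothetical stand-in for the real manipulate-and-save step;
    # file I/O like this releases the GIL, so threads overlap.
    out = path.with_suffix(".out")
    out.write_text(path.read_text().upper())
    return out


def convert_all(paths, max_workers=None):
    # map() preserves input order; workers grab the next file as soon
    # as they finish one, keeping all of them busy.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(convert_one, paths))
```

The point of the anecdote stands either way: the pattern is trivial once you know to ask for it, but nothing in a working single-threaded script prompts you to.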