| ▲ | netdevphoenix 2 hours ago |
> Or I fixed a bug in a linux scanner driver. None of these I could have done properly (within an acceptable time frame) without AI. But also none of these I could have done properly without my knowledge and experience, even with AI

There are some things that folks making statements like yours often omit, and it makes me very sus about your (over)confidence. These statements mostly talk in a short-term, business-results-oriented mode without mentioning any introspective gains (i.e. empirically supported understanding) or long-term gains (do you feel confident making further changes _without_ the AI now that you have gained new knowledge?).

1. Are you 100% sure your code changes didn't introduce unexpected bugs?

1a. If they did, would you be able to tell, without the AI, whether they were behaviour bugs (i.e. no crashes or exceptions thrown)?

2. Did you understand why the bug was happening without the AI giving you an explanation?

2a. If you didn't, did you empirically test the AI's explanation before applying the code change?

3. Has fixing the bug improved your understanding of the driver's behaviour beyond what the AI told you?

3a. Have you independently verified your gained understanding, or did you assume that your new views on its behaviour are axiomatically true?

Ultimately, there are two things here: understanding the code change (why it is needed, why that particular implementation is better than the alternatives, what future improvements could be made to it) and skill (has this experience boosted your OWN ability in this particular area? In other words, could you make further changes WITHOUT the AI?).

This reminds me of people who get high and believe they have discovered amazing truths. They FEEL it; they don't have actual evidence. When asked to write down these amazing truths while high, all they produce in their notes are meaningless words. While these assistants are far more amenable to empirical testing, I don't believe most of the AI hypers (and I include you in that category) are actually approaching this with the rigour it entails. It is likely why people often conclude that none of you (people writing software for a living) are experienced in or qualified to understand and apply scientific principles to building software.

Arguably, AI hypers should lead with data, not anecdotal evidence. For all the grandiose claims, empirical data obtained under controlled conditions on this particular matter is conspicuous by its absence.
| ▲ | jacquesm 2 hours ago | parent | next [-] |
It's incredible that, within two minutes of posting, this comment is already grayed out, even though it makes a number of excellent points.

I've been playing with various AI tools and homebrew setups for a long time now, and while I see the occasional advantage, it isn't nearly as much of a revolution as some of the more ardent AI proponents here have led me to believe. This is starting to get into 'true believer' territory: you get two camps, 'for' and 'against', whereas the best way forward is to insist on data rather than anecdotes.

AI has served me well, no doubt about that. But it certainly isn't a passe-partout, and the number of times it has grossly wasted my time by insisting on chasing some rabbit simply because it was familiar with that rabbit adds up to a considerable loss in productivity.

The scientific method is a very powerful tool in such situations, and anybody insisting on it should be applauded. It separates fact from fiction and allows us to make impartial, non-emotional evaluations of both theories and technologies.
| ||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ▲ | KptMarchewa 2 hours ago | parent | prev | next [-] |
Why would you ever, outside flight and medical software, care about being 100% sure that the change did not introduce any bugs?
| ||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ▲ | mlrtime an hour ago | parent | prev [-] |
> 1. Are you 100% sure your code changes didn't introduce unexpected bugs?

How often have you written code and been 100% sure it didn't introduce ANY bugs? Seriously, for most of the code out there, who cares? Whether it's in a private or even a public repo, it doesn't matter.