| ▲ | neya 3 hours ago |
Yesterday someone was yapping about how AI is enough to replace senior software engineers and how they can just "vibe code their way" over a weekend into a full-fledged product - and that somehow the "gatekeeping" of software development was finally gone. I think of that person reading these answers and wonder if they've changed their opinion now :)
| ▲ | cyberrock 2 hours ago | parent | next [-] |
Does this mean we're back in favor of using weird riddles to judge programming skill now? Do we owe Google an apology for the invert-a-binary-tree incident?
| ▲ | Closi 2 hours ago | parent | prev | next [-] |
Humans aren't immune to getting questions like this wrong either, so I don't think it changes much in terms of AI's ability to replace jobs. I've seen senior software engineers get tricked by 'If YES spells yes, what does EYES spell?', or 'Say silk three times - what do cows drink?', or 'What do you put in a toaster?'. Even when there's no trick, lots of people get the 'A bat and a ball cost £1.10 in total. The bat costs £1 more than the ball. How much does the ball cost?' question wrong, or '5 machines take 5 minutes to make 5 widgets. How long do 100 machines take to make 100 widgets?', etc. There are obviously more complex variants of all of these with even lower success rates for humans. And being PhD-level in maths doesn't make a human immune to the toaster/toast question (assuming they haven't heard it before). So if we accept that humans are generally intelligent and can be senior software engineers, getting this sort of question confidently wrong isn't incompatible with being a competent senior software engineer.
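(For reference, the intuitive answers to those two arithmetic puzzles - 10p and 100 minutes - are the wrong ones. A minimal Python sketch of the correct arithmetic, with variable names chosen just for illustration:

    # Bat and ball: bat + ball = 1.10 and bat = ball + 1.00,
    # so ball = (1.10 - 1.00) / 2
    ball = (1.10 - 1.00) / 2
    bat = ball + 1.00
    print(f"ball £{ball:.2f}, bat £{bat:.2f}")  # ball £0.05, bat £1.05

    # Widgets: each machine makes one widget in 5 minutes,
    # so 100 machines make 100 widgets in the same 5 minutes (not 100).
    minutes_per_widget_per_machine = 5
    print(f"{minutes_per_widget_per_machine} minutes")  # 5 minutes
)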
| ▲ | LtWorf 16 minutes ago | parent | prev | next [-] |
No, those people refuse to let evidence get in the way.
| ▲ | arcfour 2 hours ago | parent | prev [-] |
What does this nonsensical question, which some LLMs get wrong some of the time and others never get wrong, have to do with anything? This isn't a "gotcha", even though you want it to be. It's just mildly amusing.