neya 3 hours ago

Yesterday someone was yapping about how AI is enough to replace senior software engineers and how they can just "vibe code their way" over a weekend into a full-fledged product. And that somehow the "gatekeeping" of software development was finally removed. I think of that person reading these answers and wonder if they've changed their opinion now :)

cyberrock 2 hours ago | parent | next [-]

Does this mean we're back in favor of using weird riddles to judge programming skill now? Do we owe Google an apology for the invert-a-binary-tree incident?

LtWorf 16 minutes ago | parent [-]

Not riddles but "requirements" :)

Closi 2 hours ago | parent | prev | next [-]

Humans aren't immune to getting questions like this wrong either, so I don't think it changes much in terms of the ability of AI to replace jobs.

I've seen senior software engineers get tricked by 'if YES spells yes, what does EYES spell?', or 'Say silk three times, what do cows drink?', or 'What do you put in a toaster?'.

Even when there's no trick, lots of people get the 'A bat and a ball cost £1.10 in total. The bat costs £1 more than the ball. How much does the ball cost?' question wrong, or 'If 5 machines take 5 minutes to make 5 widgets, how long do 100 machines take to make 100 widgets?' etc. There are obviously more complex variants of all of these with even lower success rates for humans.
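(For the record, the standard answers: the ball costs £0.05, since ball + (ball + £1) = £1.10 means 2 × ball = £0.10; and 100 machines take 5 minutes, since each machine makes one widget per 5 minutes regardless of how many machines are running.)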

In addition, being PhD-level in maths as a human doesn't make you immune to the 'toaster/toast' question (assuming you haven't heard it before).

So if we assume humans are generally intelligent and can be senior software engineers, getting this sort of question confidently wrong isn't incompatible with being a competent senior software engineer.

hapless 2 hours ago | parent [-]

humans without credentials are bad at basic algebra in a word problem, ergo the large language model must be substantially equivalent to a human without a credential

thanks but no thanks

i am often glad my field of endeavour does not require special professional credentials, but the advent of "vibe coding" and, just generally, unethical behavior industry-wide makes me wonder whether it wouldn't be better to have professional education and licensing

Closi 2 hours ago | parent [-]

Let's not forget that Einstein almost got a (reasonably simple) trick question wrong:

https://fs.blog/einstein-wertheimer-car-problem/
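(If memory serves, that's the Wertheimer hill problem: a car climbs a one-mile hill averaging 15 mph, and the question is how fast it must go down the second mile to average 30 mph for the whole trip. The trick is that it's impossible: averaging 30 mph over 2 miles allows 4 minutes in total, which the uphill mile has already used up.)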

And that many mathematicians got the Monty Hall problem wrong, despite it being intuitive to many kids.
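(The counterintuitive part being that switching doors wins 2/3 of the time: your initial pick is only right 1/3 of the time, and the host revealing a goat doesn't change that.)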

And being at the top of your field (regardless of the PhD) does not make you immune to falling for YES / EYES.

> humans without credentials are bad at basic algebra in a word problem, ergo the large language model must be substantially equivalent to a human without a credential

I'm not saying this - I'm saying the claim that 'AIs get this question wrong, ergo they cannot be senior software engineers' is wrong when senior software engineers will get analogous questions wrong. If you apply the same bar to software engineers, you get 'senior software engineers get this question wrong, so they can't be senior software engineers', which is obviously wrong.

LtWorf 16 minutes ago | parent | prev | next [-]

No, those people refuse to let evidence get in the way.

arcfour 2 hours ago | parent | prev [-]

What does this nonsensical question, which some LLMs get wrong some of the time and others never get wrong, have to do with anything? This isn't a "gotcha", even though you want it to be. It's just mildly amusing.