ipaddr a day ago

Is using AI cheating when it's part of the job now? Is not using AI signalling inexperience in the LLM department?

paxys a day ago | parent | next [-]

Copy pasting code from ChatGPT doesn't mean you have any kind of understanding of LLMs.

finnthehuman a day ago | parent | prev | next [-]

Yes, obviously. Cheating is subverting the tester's intent and being dishonest about it, not just whatever a lawyer can weasel-word their way around.

gopher_space a day ago | parent [-]

It’s not dishonest, it’s just business. I’m under the exact same burden of truth as the company interviewing me; zilch.

theamk a day ago | parent | next [-]

Fair enough! In this case, it seems that the policy "hard-fail any candidates who cheat, using AI or otherwise" is working as expected. An interview is supposed to be a candidate's best showing. If that includes cheating, better to fail them fast.

chefandy a day ago | parent | prev | next [-]

I wonder if OpenAI, Google, Microsoft, et al. would hire a developer who leaned heavily on ChatGPT to answer interview questions. Not that I expect ethical consistency from them when much more important factors (profit) are on the table, but after several years of their marketing pushing the idea that these are 'just tools' and that the output is tantamount to anything manually created by the prompter, it would look pretty blatantly hypocritical if they didn't.

zamalek a day ago | parent | prev | next [-]

Amazon uses Hackerrank and explicitly says not to use LLMs. In that case it would be cheating. However, given that everyone is apparently using it, I now feel dumb for not doing so.

deprecative a day ago | parent [-]

They made tools to make us redundant and are upset we're forced to use those tools to be competitive.

surgical_fire a day ago | parent | prev | next [-]

Depends on what kind of developer you are trying to hire, maybe.

ChrisMarshallNY a day ago | parent | prev | next [-]

That's actually a valid question. It looks like it was an unpopular one.

Personally, I despise these types of tests. In 25 years as a tech manager, I never gave one, and never made technical mistakes (though I did make a number of personality ones: great technical acumen is worthless if the person collapses under pressure).

But AI is going to be a ubiquitous tool, available to pretty much everyone, so testing for people who can use it is quite valid. Results matter.

But don't expect to have people on board who can operate without AI. That may be perfectly acceptable. The tech scene is so complex these days that not one of us can actually hold it all in our head. I freely admit to having powerful "google-fu" when it comes to looking up solutions to even very basic technical challenges, and I get excellent results.

gitremote a day ago | parent | prev | next [-]

So now there are job applicants not only pretending to know DSA by using ChatGPT, but also claiming they have "experience in the LLM department".

It's not part of the job now, unless you're too inexperienced to estimate how long it takes to find subtle bugs.

IshKebab a day ago | parent | prev [-]

It's cheating if you don't say you're using it.

hmottestad a day ago | parent [-]

At some point I assume that it’ll be so normal that you’ll almost have to say when you’re not using it.

I don't need to say that I'm using a text editor instead of hole-punched cards. It's also quite common to use an IDE instead of a plain text editor in coding interviews these days. When I was a student, I remember teachers saying that they considered an IDE cheating, since they wanted to test our ability to remember syntax and to keep a mental picture of our code in our heads.