▲ | ipaddr a day ago |
Is using AI cheating when it's part of the job now? Is not using AI signalling inexperience in the LLM department?
▲ | paxys a day ago | parent | next |
Copy-pasting code from ChatGPT doesn't mean you have any kind of understanding of LLMs.
▲ | finnthehuman a day ago | parent | prev | next |
Yes, obviously. Cheating is subverting the tester's intent and being dishonest about it, not just whatever a lawyer can weasel-word their way around.
▲ | chefandy a day ago | parent | prev | next |
I wonder if OpenAI, Google, Microsoft, et al. would hire a developer who leaned heavily on ChatGPT to answer interview questions. Not that I expect them to have ethical consistency when much more important factors (profit) are on the table, but after several years of their marketing pushing the idea that these are ‘just tools’ and the output is tantamount to anything manually created by the prompter, it would look pretty blatantly hypocritical if they didn't.
▲ | zamalek a day ago | parent | prev | next |
Amazon uses HackerRank and explicitly says not to use LLMs. In that case it would be cheating. However, given that everyone is apparently using it anyway, I now feel dumb for not doing so.
▲ | surgical_fire a day ago | parent | prev | next |
Depends on what kind of developer you're trying to hire, maybe.
▲ | ChrisMarshallNY a day ago | parent | prev | next |
That's actually a valid question. It looks like it was an unpopular one.

Personally, I despise these types of tests. In 25 years as a tech manager, I never gave one, and never made technical hiring mistakes (I did make a number of personality ones: great technical acumen is worthless if someone collapses under pressure).

But AI is going to be a ubiquitous tool, available to pretty much everyone, so testing for people who can use it is quite valid. Results matter. Just don't expect to have people on board who can operate without AI. That may be perfectly acceptable. The tech scene is so complex these days that not one of us can actually hold it all in our head. I freely admit to having powerful "google-fu" when it comes to looking up solutions to even very basic technical challenges, and I get excellent results.
▲ | gitremote a day ago | parent | prev | next |
So now there are job applicants not only pretending to know DSA by using ChatGPT, but also claiming they have "experience in the LLM department". It's not part of the job now, unless you're too inexperienced to estimate how long it takes to find subtle bugs.
▲ | IshKebab a day ago | parent | prev |
It's cheating if you don't say you're using it.