MrBrobot 6 days ago

> Candidates tell Fortune that AI interviewers make them feel unappreciated to the point where they’d rather skip out on potential job opportunities, reasoning the company’s culture can’t be great if human bosses won’t make the time to interview them. But HR experts argue the opposite; since AI interviewers can help hiring managers save time in first-round calls, the humans have more time to have more meaningful conversations with applicants down the line. “This gives me a bad feeling about your company” “But you’re wrong”

neilv 6 days ago | parent | next [-]

> “This gives me a bad feeling about your company” “But you’re wrong”

"Now you gave me two bad feelings about the company."

Loughla 6 days ago | parent [-]

It's not my actions causing this; it's just your perspective.

Rule number 1: everyone's perspective is their reality, regardless of your beliefs or intentions.

threetonesun 6 days ago | parent | prev | next [-]

Same argument for replacing customer service with chatbots or AI. It's entirely untrue, and it creates a much worse customer experience, but because people drop out, your KPIs / NPS are based only on the people who were willing to put up with shit to get to a real human.

DavidWoof 6 days ago | parent | next [-]

Give me an AI chatbot over someone with poor English skills reading a script any day of the week. My problem probably isn't unique, it's probably something fairly obvious that was vague in the instructions.

Now, the important thing is to offer a way to upgrade to a human. But I have no problem at all starting with AI; in fact I honestly prefer it.

rcxdude 6 days ago | parent | next [-]

That doesn't really match my experience. Usually if my problem is not unique it's already documented somewhere and I've solved it that way (And support generally puts some effort into documenting the non-unique problems to reduce their workload). If I'm calling support, it's because I've exhausted all other options and I've either concluded I need them to do something I can't do with an online form or the information is not at all accessible elsewhere, in which case first line support is nothing but an obstacle.

nijave 6 days ago | parent [-]

It's hit or miss. Sometimes screaming "give me a competent human" at a chatbot is quicker than pleading with tier 1. Sometimes it's not.

At least there's no hold time

threetonesun 6 days ago | parent | prev | next [-]

Sure, because you've already lived through 10+ years of enshittification in the process. Customer support used to be an in-house team that was actually trained to provide relevant support, not an outsourced call center that's as useless as (or more useless than) a chatbot.

In some ways it's not that different with hiring. I used to work with HR teams that knew the roles they were hiring for extremely well and could make reliable calls on whether or not to pass a candidate to a hiring manager. More recently I've seen HR get outsourced entirely, or staffed with cheaper employees that just shuffle documents through systems.

gowld 6 days ago | parent | prev | next [-]

The AI and the human are both programmed to avoid helping you.

MattGaiser 6 days ago | parent | prev [-]

At this point I find the humans know so little that an LLM referencing documentation or past support answers is superior.

AnimalMuppet 6 days ago | parent | prev [-]

Well... is a chatbot for customer service really all that much worse than a human who is not permitted to deviate from their script?

eviks 6 days ago | parent | next [-]

Certainly, because not deviating from the script also cuts off the infinite range of made-up nonsense a bot can hallucinate. And it's not like the bot has some magic authority to fix the real issue just because it isn't bound by the script, so in this regard there is no upside.

rafabulsing 6 days ago | parent [-]

Chatbots != LLMs.

We've had chatbots for a long time before LLMs, and while they're of course much more limited, since you have to explicitly program everything they should be able to do, by that very virtue hallucinating is a problem they do not have.

For this kind of customer service chat scenario, I find them much better than a free-style LLM trained on some internal docs.

(Though really, probably the ultimate solution is a hybrid one, where you have an explicitly programmed conversation tree the user can go down, but with an LLM decoding what the user says into one of the constrained options. So that if one of the options is "shipping issues", "my order is late" should take me there. While other forms of NLP can do that, LLMs would certainly shine in that application.)
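The hybrid idea above can be sketched in a few lines. This is a minimal illustration, not a real implementation: the conversation "tree" is a flat menu for brevity, and `classify()` is a trivial keyword stand-in for where an actual LLM call would go (e.g. a prompt asking "which of these options best matches the user's message?"). All names here are made up for the example.

```python
# Hybrid chatbot sketch: an explicitly programmed set of constrained
# options, with a classifier mapping free-form user text onto one of them.

# The programmed side: fixed options and the scripted response for each.
MENU = {
    "shipping issues": "You're in shipping support. Please enter your order number.",
    "billing": "You're in billing support. Please enter your invoice number.",
    "returns": "You're in returns. Please enter the item you'd like to return.",
}


def classify(user_text):
    """Map free-form text to one of MENU's keys, or None if unclear.

    In a real system this would be an LLM call constrained to return
    exactly one of the menu options; here a keyword match stands in.
    """
    keywords = {
        "shipping issues": ["late", "shipping", "delivery", "tracking"],
        "billing": ["charge", "invoice", "bill", "payment"],
        "returns": ["return", "refund", "broken"],
    }
    text = user_text.lower()
    for option, words in keywords.items():
        if any(w in text for w in words):
            return option
    return None  # no confident match: fall back to the explicit menu


def respond(user_text):
    """Route the user: scripted answer if classified, explicit menu if not."""
    option = classify(user_text)
    if option is None:
        return "Please choose one of: " + ", ".join(MENU)
    return MENU[option]
```

The point of the structure: the bot can only ever land on a pre-programmed branch, so swapping the keyword matcher for an LLM improves understanding ("my order is late" routes to "shipping issues" without the user saying those words) while keeping hallucinated answers impossible by construction.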

6 days ago | parent | prev [-]
[deleted]
dfxm12 6 days ago | parent | prev | next [-]

What is an AI interview going to glean that it can't already from a resume?

The power imbalance is already tipped so far toward the employer side. This verbiage doesn't even consider the applicant a human whose time is worth saving or who is worth having meaningful conversations with!

remyp 6 days ago | parent [-]

Gleaning information isn't the goal; whittling down the deluge of applicants is. For the company, candidate time is free and manager time is massively expensive. The AI tools are cheaper than hiring more HR staff, so companies buy them lest they be haunted by the ghost of Milton Friedman.

Anybody who has been on the hiring side post-GPT knows why these AI tools are getting built: people and/or their bots are blind-applying to every job everywhere regardless of their skillset. The last mid-level Python dev job I posted had 300 applicants in the first hour, with 1/4 of them being from acupuncturists and restaurant servers who have never written a line of code. Sure, they're easy to screen out, but there are thousands to sift through.

Having said that, I don't like AI interview tools and will not be using them. I do understand why others do, though.

rightbyte 5 days ago | parent | next [-]

> The last mid-level Python dev job I posted had 300 applicants in the first hour, with 1/4 of them being from acupuncturists and restaurant servers who have never written a line of code.

That has to be due to the policy failure of forcing people on benefits to apply for jobs to keep their benefits, even if they've already applied to all the suitable jobs there are right now?

RugnirViking 6 days ago | parent | prev | next [-]

> candidate time is free and manager time is massively expensive

This is a naive view of the proceedings. Why not hire literally the first person that applies? That would reduce your cost even further.

The point is to figure out who would be good at making you money. The question is: does an AI chatbot wasting your prospective candidates' time make you more, or less, likely to find people good at that? Perhaps it reduces the cost of reviewing applications, but I imagine it also drives away a good number of the better candidates, the ones who have more options. If you're cutting corners and costs this much, why are you even hiring? Surely the point of the exercise is looking toward future growth.

Naturally, there is a limit to that line of thinking too: spending weeks reviewing each of the ten thousand applications to your junior developer role wouldn't be the most efficient way to grow. But surely there are better filtering methods than this, which is IMO the equivalent of reducing the number of candidates by lining them up in a room for hours in sweltering heat and hurling verbal abuse at them until only a couple of the wretched ones without a shred of dignity are left.

jacksonjadden 6 days ago | parent | prev [-]

[dead]

aflag 6 days ago | parent | prev | next [-]

I don't want more time having meaningful conversations with human bosses. I just want to have a normal interview.

bluefirebrand 6 days ago | parent | prev | next [-]

> But HR experts argue the opposite

Once again proving that somehow HR has become captured by bug people

adamors 6 days ago | parent [-]

That happened when they started to refer to people as “resources”.

henry2023 6 days ago | parent | prev [-]

"HR Experts"