0xpgm 6 days ago

> “The truth is, if you want a job, you’re gonna go through this thing,” Adam Jackson, CEO and founder of Braintrust, a company that distributes AI interviewers, tells Fortune. “If there were a large portion of the job-seeking community that were wholesale rejecting this, our clients wouldn’t find the tool useful… This thing would be chronically underperforming for our clients. And we’re just not seeing that—we’re seeing the opposite.”

That is quite rich coming from Braintrust. The founder should spend less time giving press interviews and more time listening to feedback from his own community.

From the outside, I was intrigued by their unusual way of working and signed up to learn more. What immediately jumped out was community members complaining about failing the initial screening without any feedback at all. This initial screening is apparently an AI interview.

If the AI is so great, it should be trivial to have it explain why it rejected interviewees. Unless it has serious shortcomings that would be risky to publicize. Alternatively, this could be a sneaky way of collecting training data for the AI by preying on unsuspecting humans.
beefnugs 6 days ago

Half the benefit of AI screening is that upper management can bake in restrictions based on race, religion, or privately purchased personal data without HR ever knowing about these illegal settings.
| ||||||||
iainmerrick 6 days ago

> If the AI is so great, it should be trivial to get it to explain why it rejected interviewees. Unless it has serious shortcomings that would be risky to publicize.

Why should AIs be any different from human interviewers in this respect? It wouldn’t be much extra effort for humans to give a little feedback either, but this typically isn’t done.