Can an AI Interview You Better Than a Human?
Mon Jan 26 2026
We discuss “Voice in AI Firms: A Natural Field Experiment on Automated Job Interviews” by Brian Jabarian and Luca Henkel. The paper examines a randomized experiment with call center job applicants in the Philippines who were assigned to either AI-conducted voice interviews, human interviews, or given a choice between the two.
Key Findings:
* AI interviews led to higher job offer rates and proportionally higher retention rates
* No significant difference in involuntary terminations between groups
* Applicants actually preferred AI interviews—likely due to scheduling flexibility and immediate availability
* AI interviewers kept conversations more on-script with more substantive exchanges
* Online applicants saw especially large gains from AI interviews
Topics Discussed:
* The costs of recruitment and why interview efficiency matters
* Whether AI interviews find different workers or just reduce noise in screening
* How human recruiters interpret AI interview transcripts differently
* The “Coasean singularity” question: Will AI improve labor market matching overall?
* Limitations: scheduling confounds, external validity beyond call centers, unmeasured long-tail outcomes
* The coming arms race between AI interviewers and AI-coached applicants
Posterior Updates:
On the usefulness of current AI for job hiring:
* Seth: 40% → 90% confidence AI works for call center jobs; modest update for general jobs
* Andrey: 20% → 75% for call centers; 1% → 5% for general interviews (“we need to reorganize all of hiring first”)
On whether AI will improve job matching significantly on net in the next 5-10 years:
* Andrey: 55% → No Update
* Seth: “A bit more optimistic than Andrey” → +1pp update
Referenced Work/Authors:
* Prediction Machines
* Related episode on AI and labor signaling with Bo Cowgill.
Transcript:
[00:00:00] INTRODUCTION
Seth: Welcome to the Justified Posteriors podcast, the podcast that updates its priors about the economics of AI and technology. I’m Seth Benzell, an interviewer who will never stick to a standard script, coming to you from Chapman University in sunny Southern California.
Andrey: And I’m Andrey Fradkin, counting down the days until I can use an AI to pre-interview my podcast guests to see if they deserve to be on the show. Coming to you from San Francisco, California.
Seth: I don’t know. I think our filtering criteria is pretty good.
Andrey: I know.
Seth: Right. That’s one job we never want to automate—who becomes a friend of the podcast. That’s an un-automatable job.
Andrey: But it would be nice to pre-interview our guests so that we could prepare better for the actual show.
Seth: I was thinking about this, because there’s two possibilities, right? You do the pre-interview, and you get an unsurprising answer in this sort of pre-interview, and then that’s good, and then you should go with it. And then if you get a surprising one, then you would lean into it. What would you even get out of the pre-interview?
Andrey: Maybe what the guests would want to talk about.
Seth: Okay.
Andrey: But I agree with you. Mostly, it’s just hearing the guest talk, and then thinking about, “Oh, this is something that we want to really dig into,” versus, “This is something that might be not as interesting to our audience,” and knowing that ex ante.
[00:02:00] SETTING UP THE TOPIC
Seth: Yeah. We’ve been... So we’re talking about interviews. You’ll remember in a recent episode, we just talked to our friend Bo, who’s doing work on how maybe job applications are changing because of AI. So now I think what we want to think a little bit about is how job interviews are changing because of AI. Maybe we’ve heard before about how AI is changing how people talk to the hirer. Maybe we want to hear a little bit about how AI is changing how the hirer solicits information in an interview. We’ve got a very interesting paper to talk about just about that. But do you remember the last job interview you did, Andrey?
Andrey: Yes.
Seth: How did it go? Did you have fun? Did you feel like you stayed on topic?
Andrey: It was a very intense set of interviews that required me to fly halfway across the world, which was fun, but exhausting.
Seth: So fun. So you would describe the interview as a fun experience? Did you get more excited about the job after doing the interview?
Andrey: Yes, although I ultimately didn’t take it, but I did get—you know, I was impressed by the signaling value of having such an interview.
Seth: So the signaling value. So in other words, the signal to you from the interviewer about the fact that they were going to invest this much time. Is that right? It’s that direction of signal?
Andrey: Yes, yes. And also the sorts of people who they had talking to me, and just the fact that they were trying to pitch me so hard. Now, certain other companies lacked such efforts.
Seth: Right. So it seems like one important aspect of an interview is what the interviewee learns from the interview. But what about the other side? Do you feel like your interviewer learned a lot about you, or enough to justify all that time and expense?
Andrey: I’d like to think so. I mean, I’m not them, so I can’t really speak on their behalf. But it did seem like the interview process was fairly thought out for a certain set of goals, which might differ across companies. What about yourself, Seth?
Seth: Thank God, it has been a long time since I interviewed for a job, and I can tell you exactly what happened. I was on the academic job market, but I did throw out a couple of business applications, and so I got an interview at Facebook. Headed out to their headquarters, did all of the one-on-one interviews, and then there was a code screen, and I had not been grinding LeetCode for the previous five months and completely bombed it. And they said, “Thank you very much for your time.” So that was an example of, I think they probably could have saved the time for the interview if they had given me the code screen first.
Andrey: It’s funny, there was a time in my life where I interviewed at Facebook, too. I mean, this is probably 2014 or something.
Seth: Mm-hmm, mm-hmm.
Andrey: And they did do the coding screen before.
Seth: Who knows? Who knows, dude?
[00:05:15] THE PAPER
Seth: Okay, so interviews, we do them. People seem to give information, take information from them. How can this be made more efficient with AI? That’s today’s question. In order to learn more about that, we read Voice in AI Firms: A Natural Field Experiment on Automated Job Interviews, by friend of the show, Brian Jabarian, and Luca Henkel. I was interested in this paper because it’s kind of an interesting flip side of what we just saw from Bo.
I guess before we talk too much about what the paper actually does, it’s time for us to go into our priors.
═══════════════════════════════════════════════════════════════════
[00:06:00] PRIORS
Seth: Okay, so Andrey, when we’re thinking about AI being used in interviews, what sort of thoughts do you have about that going in? What sort of priors should we be exchanging?
Andrey: Yeah, I mean, I think just when I first saw this paper, I was kind of surprised that we were there already, honestly. I think interviewing via voice is a pretty delicate thing, and the fact that AI is potentially able to do it already was—I hadn’t been thinking—I didn’t think we were there yet, and I think just the very existence of this paper was a bit of a surprise when I first saw it.
But I guess a first natural prior that we can think about is: is using an AI to interview someone rather than using a human to interview someone, is that better or worse, or how do we think about that?
So, Seth, what do you think?
Seth: Well, it’s a big question, Andrey. I guess my first response is, like we always say in this podcast, context matters, partial equilibrium versus general equilibrium matters. The context that we’re going to be looking at in the paper is call center workers. So maybe I’ll give kind of a different answer for short-term call center workers than maybe longer term economy as a whole.
When I think about call center workers, I think about a job that seems to be—no offense to our friends of the show out there who are call center workers—but this does seem like one of the jobs that is going to be the first to be automated with generative AI, or most at risk, especially kind of low-skilled call center work. So if there was going to be any sort of domain where you could automatically verify whether someone was good at it, intuitively, it would be the domain that you’re kind of close to automating anyway. So if it was going to work anywhere, I would say it would work here.
And yet still, call center work, you might imagine, it requires a lot of personal empathy, it requires maybe some subtleties of voice and accent that an AI might not identify or even might hesitate to point out such deficits. I would say I kind of went in with the idea that for call center workers, maybe there’s a forty percent chance that AI would be better than a human interviewer. So maybe it’s slightly unlikely that it would be better. But if we were to expand out to kind of knowledge work as a whole, I would be more, even more pessimistic, maybe only a twenty-five percent chance or lower that the AI interviewer would be better. What do you think?
Andrey: Well, how would you—what do you mean by better?
Seth: Oh, well, better in terms of the hire is ultimately the correct match, right? That’s going to be operationalized in a specific way in this paper, what... How they’re going to measure better match, but, yeah, that’s what I would say. They hire someone who’s going to be productive and work with the firm for a long time.
Andrey: Yeah. I mean, so that’s ki