AI-driven Executive Search: 4 Pitfalls
AIs risk losing critical talent profiles.
AIs are increasingly training on AI-generated data, potentially leading to the disturbing phenomenon of 'model collapse.' Executive and board recruiters need to pay keen attention to issues that may compromise a hire.
Given all of this, could AI-driven executive search be a real possibility?
“The current reality (and it will probably change) is that AI works better on a clear profile with languages, hard skills,” says Mikael Norr, a member of the Amrop Global Board. “However, a CEO in a broad leadership role involves softer skills, such as being inspiring. It's much more difficult to work with AI.”
At senior level, keyword filtering and automation can be misleading, warns Jamal Khan, Managing Partner of Amrop Carmichael Fisher in Australia. “The rules-based AI system often rejects qualified candidates if their CVs aren’t ‘SEO-optimized’ with the right terms. Experienced recruiters can infer skills from context. They know that a CFO automatically knows about payroll. I'm sure Gen AI will get there, but I'm not sure it's there yet.”
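The gap Khan describes can be sketched in a few lines of code. This is a toy illustration, not any vendor's actual screening logic: the keyword set, the sample CV, and the `IMPLIED_SKILLS` map are all hypothetical. A verbatim keyword filter rejects a CFO whose CV never mentions 'payroll', while a recruiter-style screen that credits skills implied by a senior title lets the same CV through.

```python
# Illustrative sketch (hypothetical data, not a real screening system):
# a naive keyword filter vs. a screen that infers skills from context.

REQUIRED_KEYWORDS = {"payroll", "financial reporting"}

# Hypothetical map of skills a recruiter infers from a senior title:
IMPLIED_SKILLS = {"cfo": {"payroll", "financial reporting", "budgeting"}}

def keyword_filter(cv_text: str) -> bool:
    """Rules-based screen: pass only if every keyword appears verbatim."""
    text = cv_text.lower()
    return all(kw in text for kw in REQUIRED_KEYWORDS)

def contextual_screen(cv_text: str) -> bool:
    """Recruiter-style screen: credit skills implied by senior titles."""
    text = cv_text.lower()
    inferred = set()
    for title, skills in IMPLIED_SKILLS.items():
        if title in text:
            inferred |= skills
    explicit = {kw for kw in REQUIRED_KEYWORDS if kw in text}
    return REQUIRED_KEYWORDS <= (explicit | inferred)

cv = "Group CFO, 15 years leading finance functions at listed companies."
print(keyword_filter(cv))     # False: 'payroll' never appears verbatim
print(contextual_screen(cv))  # True: a CFO is credited with payroll
```

The design point is the one Khan makes: the first function matches strings, the second encodes (a crude version of) the contextual knowledge an experienced recruiter applies implicitly.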
It’s tempting to assume that machines think logically. But AIs can produce errors or unfair outcomes that are embedded in their training data or design. Sources of bias include the under-representation of certain groups in the data, the subjective judgments of human labelers, and algorithms built in ways that favor certain outcomes.
In 2018, Amazon abandoned its AI recruiting tool. The problems were rooted in training data drawn from ten years of job applications, most of them submitted by men, reflecting the gender imbalance in the tech industry.1 Interestingly, OpenAI recently announced plans to launch a recruitment and jobs platform to rival LinkedIn. But AI’s troubles are not over.2 Tackling bias requires diverse data, transparent design, and ongoing human oversight.
Diversity under threat: is ‘think different’ missing in action?
Outliers are often innovative and perseverant, but AI tends to overlook them. “Value isn’t created by the leaders with the most polished résumés, but by those who lived through black swan moments, crises, systemic shocks,” argues Amrop Global Board Member Oana Ciornei. Their resilience and agility are invaluable. “But if you’re using an AI to filter, these people will not match the algorithmic norm.” Moreover, candidates’ online CVs may omit controversial (and interesting) career moments. She seeks leadership knowledge created “in the space of the unexpected.”

Like compiling a top-flight soccer team, human scouts are needed, says Job Voorhoeve, Head of Amrop's global Digital Practice. “Then it's about trust. Do I want to talk to you? If the headhunter doesn’t have a solid reputation, these people won’t pick up the phone.”
Read the report
I Am Not a Robot: AI and Leadership Hiring
Part II - Pitfalls, Risks & Solutions
How do we break out of the echo chamber?
IBM writers recently signaled a problem:3 AI models increasingly learn from other AI-generated data, weakening the results. This is model collapse - AIs lose information from the tails of the data distribution. With each new model iteration, AIs risk becoming self-referential and detached from reality. They miss the richness of human experience.
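The mechanism can be shown with a toy numerical sketch. This is our own illustration, not IBM's analysis, and it makes two simplifying assumptions: a Gaussian distribution stands in for a generative model, and dropping samples beyond two fitted standard deviations crudely mimics a model underweighting rare, tail-of-distribution events. Each generation is fitted only to the previous generation's synthetic output, and the fitted spread (the tails) steadily shrinks.

```python
# Toy sketch of 'model collapse': each 'generation' is trained only on
# synthetic data sampled from the previous generation's model.
# Assumptions (ours): a Gaussian stands in for a generative model;
# clipping at 2 fitted standard deviations mimics lost tail events.
import random
import statistics

random.seed(0)

mu, sigma = 0.0, 1.0      # generation 0: the 'real' data distribution
spreads = [sigma]
for generation in range(10):
    raw = (random.gauss(mu, sigma) for _ in range(500))
    synthetic = [x for x in raw if abs(x - mu) < 2 * sigma]  # tails lost
    mu = statistics.mean(synthetic)       # refit the next model...
    sigma = statistics.pstdev(synthetic)  # ...on narrowed synthetic data
    spreads.append(sigma)

print(f"fitted spread: gen 0 = {spreads[0]:.2f}, gen 10 = {spreads[-1]:.2f}")
# The spread collapses toward zero: later models see ever less variety.
```

In this sketch the spread falls to roughly a quarter of its original value within ten generations: each model inherits a slightly impoverished picture of the world, which is the self-referential loop the IBM authors describe.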
The opacity of AI decisions makes it difficult for recruiters to ‘show their workings’ and demonstrate ‘procedural justice’. Costa Tzavaras, Amrop's Global Programs Director: “If I go into a database and select the parameters myself, I know the outcome is based on that. We’re still not quite trusting that black box.”

“I recently tried using an AI to analyze and summarize a candidate assessment with insights,” says Jamal Khan. “But it didn't understand the 0-10 grading system, so it gave incorrect answers. You have to keep refining it so it learns.”
All eyes on the control panel
He recalls an occasion when LinkedIn’s automated messaging function took off and flew solo. “I wrote a message and didn't notice before clicking ‘send’ that the AI had rewritten it: a terrible, cheesy missive. No one responded, whereas 40 to 50% normally do. And it kept doing it.” He quickly realized that the AI setting was enabled by default and had to be actively turned off.

Mikael Norr: “We cannot be totally sure who is talking to us. Fraud is omnipresent.” Jamal Khan also warns against automating outreach in business development. “You can't mass-market and send 200 emails out. You’re lucky if you get a 2% response. That’s not the business we’re in.”
AI & executive search | 4 challenges
1 - MISSING INFORMATION, LACK OF SUBTLETY
- Heavy reliance on online data
- Difficulty understanding context, nuance & 'silent knowledge'
- Transcriptions and summaries requiring manual correction
2 - BIAS, NARROW-MINDEDNESS
- Rejection of qualified or unconventional candidates due to rigid keyword filtering
- Historical biases in training data that can perpetuate unfair outcomes
3 - ECHO CHAMBER EFFECT
- AI models increasingly train on AI-generated data, potentially leading to 'model collapse' - loss of original data and inaccurate outputs
- A self-referential loop that detaches AI from real-world diversity
4 - OPACITY & TRUST ISSUES
- A lack of transparency & clarity in how AI reaches its conclusions
- "Hallucinations" & cover-up attempts
- Automated messaging reduces quality, authenticity & impact
Go here to download the full article.
Sources
1 Winick, E. (October 10, 2018). Amazon ditched AI recruitment software because it was biased against women. MIT Technology Review.
2 Gassam Asare, J. (June 23, 2025). What The Workday Lawsuit Reveals About AI Bias—And How To Prevent It. Forbes.
3 Gomstyn, A., & Jonker, A. (2024). What is model collapse? IBM.