Artificial intelligence is rapidly transforming how companies recruit new talent—promising faster, fairer, and more efficient hiring processes. But what happens when these systems overlook highly qualified candidates or reinforce existing biases instead of removing them?
This is exactly what Paksy Plackis-Cheng, Founder of impactmania and CSO & Co-Founder of Staex, is investigating. As part of the EU-funded FINDHR network, she works with companies, policymakers, and civil society to evaluate how AI is used in recruitment—and where it falls short. In an interview with #ai_berlin, she discusses the risks of algorithmic hiring, the gap between European AI policy and practice, and why trustworthy AI is not just a technical challenge, but a societal one.
Your research highlights that AI-driven hiring is often not as objective or efficient as companies claim. Was there a defining moment in your studies—perhaps a particular finding or a conversation—that made you fully grasp the scale of the challenge?
The idea that AI ensures meritocracy is misguided. While I believe AI has the potential to support a system where advancement is based on individual ability and achievement, we are not there yet. When we actually stress-tested AI systems, for example human recommendation software used in hiring, we found that the candidates with the most relevant qualifications and experience for the job, the top (Category A) candidates, were sometimes dismissed in favor of the least suitable (Category C) candidates.
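To make that kind of stress test concrete, here is a minimal sketch in Python. It is illustrative only, not the FINDHR test harness: `rank_resumes` is a hypothetical stand-in for whatever AI screening tool is under evaluation, and the resumes are assumed to be pre-labeled by human experts from Category A (most suitable) to Category C (least suitable).

```python
def audit_ranking(resumes, rank_resumes):
    """Flag cases where a Category C (least suitable) resume is ranked
    above a Category A (most suitable) resume by the tool under test."""
    ranking = rank_resumes(resumes)  # best-first order from the AI tool
    position = {r["id"]: i for i, r in enumerate(ranking)}
    cat_a = [r for r in resumes if r["category"] == "A"]
    cat_c = [r for r in resumes if r["category"] == "C"]
    return [(a["id"], c["id"])
            for a in cat_a for c in cat_c
            if position[c["id"]] < position[a["id"]]]

# Dummy "AI tool" that happens to rank r2 (Category C) above r1 (Category A);
# a real audit would call the system under test instead.
resumes = [{"id": "r1", "category": "A"},
           {"id": "r2", "category": "C"},
           {"id": "r3", "category": "B"}]
bad_tool = lambda rs: sorted(rs, key=lambda r: r["id"], reverse=True)
print(audit_ranking(resumes, bad_tool))  # -> [('r1', 'r2')]
```

Any (A, C) pair reported by such an audit is exactly the failure mode described above: a top candidate dismissed in favor of the least suitable one.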
"Trustworthy AI made in Europe" is a key ambition of EU policy. However, your research suggests that many companies are still far from deploying AI in hiring responsibly. Where do you see the biggest gap between aspiration and implementation?
This is a complex matter influenced by many factors. Many companies assume they can simply purchase an AI platform, deploy it, and that is that. In reality, most AI tools have been developed in the U.S. Even those built in the EU often rely on large language models (LLMs) originating from the U.S. These systems are then trained on your company’s data, which may be incomplete, skewed, or biased. The qualifications of the most desirable workforce from five years ago are probably not representative of today’s. The system then makes decisions based on flawed data, continues to learn from the flawed outcomes, and the cycle repeats. Ongoing testing and evaluation are required to ensure the validity of the outcomes. In our first report, we developed a set of recommendations for companies and policymakers. It is a starting point for using AI more accurately and therefore more beneficially.
The EU AI Act classifies AI-powered hiring tools as "high-risk" technology. Do you believe this classification is appropriate, or are there gray areas where companies might avoid regulation?
Any system used for “human recommendation” is considered “high risk” under the EU AI Act because it can directly affect a person’s livelihood. Sourcing, screening, and selecting people happens in recruitment, but it also happens (or will) when you apply for school, rent a home, apply for loans or subsidies, or wait for your spot on a medical waitlist.
That’s why we need strong checks and balances. AI must be paired with human oversight: people who can review and challenge its decisions. As we've seen in practice, even governments have misclassified their citizens due to inaccuracies in these systems. And often, individuals have no way of knowing that this has happened. The Dutch Tax Authority used a self-learning algorithm that wrongly accused 26,000 parents of fraud. The consequences were devastating: individuals suffered severe mental and physical health issues, one person died by suicide, many lost their jobs and homes, and 1,115 children were forcibly removed from their families. The algorithm had learned to associate factors such as “dual nationality” and “low income” with a higher risk of fraud, reflecting and amplifying existing biases. Even as the human cost became apparent, authorities continued to defer responsibility, hiding behind the algorithm’s decisions. The AI Act is there to protect us from such grave injustices, even when committed by governments.
In our own experiments simulating a hiring process, we evaluated the same resumes both manually and through an AI hiring system, cross-checking the recommendations made by a human HR leader against those made by the AI. While human decisions (even the biased ones) could be explained, most commercial AI systems, except for a few open-source models, provided no insight into why certain candidates were selected. Troublingly, we found that some resumes lacking relevant certifications or experience were recommended over those that clearly met the job requirements.
When people apply for jobs, they often don’t know whether their applications are being assessed by a human or an AI system—or to what extent. There's little recourse for job seekers to find out if they were a top-tier candidate who was passed over in favor of someone far less qualified, as we found in our testing. This lack of transparency is not unique to AI, but AI systems amplify the problem at scale. Whether regulations are in place or not, hiring remains a costly process—especially when the wrong decision is made. Companies investing in flawed or opaque systems risk missing out on top talent, which is a cost not just to them, but to society as a whole. Qualified people sitting on the sidelines is an inefficiency we can’t afford. To truly make AI beneficial, we must make it more accurate, more transparent, and more accountable.
Many companies turn to AI hiring tools to cut costs and increase efficiency. But your research suggests these systems often overlook highly skilled talent. Can you share a specific case where an AI system misjudged a strong candidate—and what the consequences were for both employers and job seekers?
A survey by the ifo Institute (Leibniz Institute for Economic Research at the University of Munich) found that 87% of family businesses in Germany are struggling due to worker shortages. At the same time, millions of skilled, educated, and motivated individuals remain excluded from the labor market. Germany alone faces a shortfall of nearly 2 million workers—yet 2.5 million people are actively seeking employment. While there isn’t a perfect one-to-one match between available jobs and job seekers, AI can play a key role in bridging the gap—by identifying transferable skills, supporting re-skilling efforts, and enabling smarter talent sourcing.
In the two research projects, we worked with hundreds of job seekers, many of them educated, experienced, and eager to work. One of the most striking findings was the disproportionate impact of “international education” on resume scores in AI screening tools: resumes with international degrees were penalized significantly, with 80% receiving lower rankings.
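As an illustration of how a finding like that can be quantified, here is a hedged sketch in Python. The scores are made up and the study’s actual data and scoring tool are not reproduced here; the idea is simply to compare screening scores for resume pairs that are identical except for where the degree was earned.

```python
def share_ranked_lower(paired_scores):
    """paired_scores: (domestic_score, international_score) tuples for resume
    pairs identical except for the country of the degree. Returns the fraction
    of pairs in which the international version scored lower."""
    lower = sum(1 for domestic, international in paired_scores
                if international < domestic)
    return lower / len(paired_scores)

# Made-up scores purely for illustration: 4 of 5 pairs penalize the
# international degree, i.e. 80% ranked lower.
pairs = [(0.82, 0.61), (0.75, 0.70), (0.68, 0.71), (0.90, 0.55), (0.66, 0.52)]
print(f"{share_ranked_lower(pairs):.0%} of international-degree resumes ranked lower")
```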
We encountered numerous candidates with clear qualifications for the roles they applied to—many of whom had submitted hundreds of applications. They were often met with silence, instant rejections, or automated responses sent at 11:00 PM—clear signs that their applications were never properly reviewed. This isn’t just a failure of hiring systems—it’s a loss of valuable talent for society. Rather than using AI to filter out qualified candidates, we must begin using it to actively connect people to opportunities. The goal should not be exclusion, but inclusion—matching people to roles where they can contribute and grow.
Berlin has been central to your research, but you've also worked with experts across Europe. How does Germany's approach to AI in hiring compare to other EU countries? Are there best practices that Berlin—or Germany as a whole—could learn from?
This multi-year research was initiated by the European Union and conducted through the FINDHR network, with collaboration across seven countries. In some European countries, government officials are completely unaware that companies are using AI systems in hiring. The sophistication of these tools also varies widely, from basic keyword matching between resumes and job descriptions (illustrated in the sketch below) to more advanced systems that assess candidates through video interviews.
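For readers unfamiliar with the low end of that spectrum, this is roughly what basic keyword matching amounts to (an illustrative sketch only; real ATS products add stemming, synonym lists, and weighting). It also shows why such matching can miss qualified candidates who describe the same skills in different words.

```python
import re

def keyword_match_score(resume_text: str, job_description: str) -> float:
    """Fraction of job-description keywords that appear verbatim in the resume."""
    tokenize = lambda text: set(re.findall(r"[a-z]+", text.lower()))
    stopwords = {"and", "or", "the", "a", "an", "with", "of", "in", "to", "for"}
    keywords = tokenize(job_description) - stopwords
    return len(keywords & tokenize(resume_text)) / len(keywords) if keywords else 0.0

# A qualified candidate using different wording scores low: "PostgreSQL"
# does not match the keyword "SQL", and "ETL workflows" does not match
# "data pipelines" under pure keyword matching.
jd = "Python developer with experience in data pipelines and SQL"
resume = "Built ETL workflows in Python; strong PostgreSQL skills"
print(f"{keyword_match_score(resume, jd):.2f}")  # -> 0.17
```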
In Germany, we interviewed a range of globally operating companies across industries: adidas, Siemens, Deutsche Telekom. All were either using or planning to adopt AI hiring tools. Applicant Tracking Systems (ATS)—which organize, sort, and manage resumes—are already widely used. Overall, German companies tend to approach AI hiring tools with thoughtfulness. However, once a system is implemented, it’s not uncommon for it to be handed off to an intern or junior staff member with minimal training and no clear guidance on how to evaluate the system’s ongoing performance.
In contrast, the U.S.—where talent technology (tools for talent acquisition, development, management, and retention) originated and is most prevalent—has taken AI hiring to another level. Some companies don’t even require resumes. Instead, they source candidate data from social media profiles and third-party databases containing personal and professional information, such as security clearance or certifications.
This stark difference highlights why the EU AI Act matters for citizens: it introduces safeguards that the U.S. largely lacks. That said, privacy protections don’t account for the massive amount of personal data we freely post and upload each day. This is a separate but urgent issue—one I encounter daily in my work. While the public conversation is currently centered on AI, I strongly advocate for a parallel discussion on data. Secure, accurate, and reliable data is even more critical—it is the fuel that powers AI. Without trustworthy data, no AI system can produce meaningful results.
You collaborate with companies, policymakers, and organizations like AlgorithmWatch. Where do you see the biggest tensions between these stakeholders when it comes to ensuring responsible AI in recruitment?
This is the real hurdle. We won’t solve cultural, economic, and societal challenges unless we begin treating one another as equal stakeholders. People often think AI is about tech, but really, it’s about us. Having worked across multiple sectors and industries, I’ve come to appreciate the range of challenges and objectives each brings to the table. Especially now, in a geopolitical and market rollercoaster, we need to make an effort to reach beyond our own positions and understand that, whether we are running a company with responsibilities toward shareholders, speaking on behalf of the people who elected us, or ensuring safety, health, and opportunities for the public at large, we are on this ride together. No one will arrive safely or successfully without the support of others.
This shared reality and collaborative spirit is how we approached our research. We built a diverse team and intentionally examined AI hiring from multiple perspectives. To truly understand the job market, we were advised by AI experts from tech corporations and NGOs alike, worked directly with hundreds of job seekers, collaborated with HR and talent acquisition leaders, and partnered with AI startups. The more we connect across sectors and industries, the better we understand one another’s priorities—and why certain issues are non-negotiable for them. Right now, we’re too siloed. It's time to shake things up.
Finally, what is the future of "Trustworthy AI made in Europe" in the context of hiring? What discussions do we need to have now to ensure AI in the workplace is truly responsible?
We need a shared understanding of what “Trustworthy AI made in Europe” means for job applicants, employees, and employers alike. We all have opinions on AI, but many of us are affected by it without even knowing. Only through dialogue with all parties involved, and through rigorous testing, can we understand how these systems actually behave. Testing AI tools was challenging for us because we could only test the open-source systems. We need continuous involvement from stakeholders across sectors and industries: people bringing their professional and personal experiences, across cultural divides, to participate in optimizing a system that could further human ability and opportunity. Europe is in the best position to lead because we value a healthy tension between the different stakeholders. Trustworthy AI isn’t just a technical standard; it’s a social contract. And it can only be built when everyone has a seat at the table.