Business expectations around AI are high. Gartner reports that leaders expect a 23% boost in functional productivity from generative AI over the next 12 months. Yet 47% of employees say they don’t know how to achieve those gains, and 77% feel that AI tools have actually increased their workload. The disconnect between ambition and reality is becoming clear.
Still, HR leaders are moving forward. Many are already adopting AI applications like employee-facing chatbots, automated job description creation, and skills data mapping. In fact, according to Forbes, over 80% of companies now use AI in recruitment, and nearly all Fortune 500 firms do too.
Yet as employers experiment, many are discovering that AI cannot replace human judgment in hiring. It can streamline job matching or automate scheduling, but it cannot yet uncover hidden potential in a CV or assess the soft skills, such as adaptability or emotional intelligence, that drive long-term performance. Both candidates and hiring managers worry that AI is making hiring feel impersonal, and over half fear that algorithmic bias could lead to unfair outcomes.
Trusting AI Models
At Aon, we have also seen growing reports of AI models actively misleading users with their responses. Recent research provided an empirical example of a large language model (LLM) engaging in 'alignment faking' (where the model strategically produces responses that appear to comply with its training objectives while privately maintaining conflicting preferences), without being explicitly prompted to do so.
None of this should come as a complete surprise, given that LLMs like Claude or ChatGPT are designed to become 'smarter' through reinforcement learning, a process similar to applying rewards and punishments when training a pet. Researchers at Apple have also identified 'fundamental limitations' in cutting-edge AI models, raising doubts about the industry's race to develop ever more powerful systems.
The Genie is Out of the Bottle
As scientists and AI developers in Silicon Valley grapple with these issues, what does this mean for the real world, especially for hiring and nurturing top global talent?
AI has arrived and is here to stay, and as we become aware of the potential pitfalls in its use, we are gaining valuable insight into how to deploy it responsibly rather than be misled by it. Organizations navigating this AI revolution are asking:
Are we selecting the right talent to thrive in a fast-changing, tech-enabled world?
Which AI-enabled tools can we trust for accuracy, transparency, and fairness?
Do we have the right mechanisms in place to detect bias, and human oversight to intervene?
Can we identify the future leaders and change champions who will drive transformation?
Combining AI Innovation with Scientific Integrity
At Aon, we help organizations harness AI responsibly, enhancing speed and efficiency without sacrificing fairness, trust, or decision quality. Our approach is guided by five core principles:
Valid, Reliable, and Fair: Every AI-enabled assessment is built on psychometric science and tested for consistency and accuracy.
Transparent: Insights from our tools are explainable and accessible to all users, not hidden behind black boxes.
Legally Defensible: We continuously monitor global legislation to ensure compliance and reduce risk.
Responsible Design, Ethical Use: Our application of AI follows responsible design standards grounded in scientific best practice.
Monitor & Improve: We continuously refine our application of AI using data insights and evolving technology advancements.
We have been pioneers in leveraging AI for assessment, from using large language models to enhance test development to enabling automated scoring of open-ended responses in interviews and simulations. Our approach ensures clients gain insights that are not only fast and scalable, but also explainable, ethical, and defensible.
Aon’s assessments and frameworks support decision-making across the talent lifecycle, whether leaders are seeking to identify potential, reduce bias, or enhance speed-to-hire.
Source: consultancy-me.com