According to recent analysis by Deloitte, 74% of organisations are now actively investing in AI or generative AI, with around 36% of their digital‑initiative budgets directed toward these technologies. This represents a fundamental pivot in how companies view risk management: as a strategic capability powered by automation and insight.
At the same time, the traditional risk cycle has been rendered obsolete. Threats emerge quickly, and executives increasingly demand models that forecast exposure before it materialises rather than reports that document it afterwards. It’s no surprise that only 18% of ERM leaders say they feel confident in their ability to identify emerging risks within legacy systems.
This shift is driving demand for a new kind of ERM talent.
AI is now embedded in frontline risk operations
Artificial intelligence is now used by risk functions in a multitude of ways: from predictive analytics and automated scenario modelling to real‑time risk scoring. Yet the industry’s own data exposes a disconnect between aspiration and capacity. Fragmented data remains one of the greatest obstacles, creating significant blind spots - because risk information is often scattered across business units, IT systems, and supply chains.
This fragmentation is precisely why unified data foundations - and the people who can build and maintain them - have become indispensable. AI may accelerate insight, but without technical expertise in data governance, modelling, and validation, those insights remain unreliable or unusable.
The knock‑on effect for hiring is immediate. Organisations are looking for risk managers who are not only comfortable working with AI‑enabled systems, but who understand how those systems operate, how models drift, and how automated outputs translate into business decisions.
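Model drift, for example, is often monitored with simple distribution‑shift statistics. A minimal sketch using the population stability index (PSI), a common drift heuristic in model risk work, with invented data throughout:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: a widely used heuristic for drift between a model's
    validation-time score distribution and its live scores."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard against log(0) in empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at model validation
live = rng.normal(0.5, 1.3, 10_000)      # shifted live scores
psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.3f}")  # rule of thumb: above ~0.2 suggests material drift
```

In practice a team would track a statistic like this per feature and per score band on a schedule; the point is that "understanding how models drift" is a concrete, monitorable skill, not a soft one.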
Specialists in model risk, AI governance, data architecture and quantitative analysis are becoming essential members of the ERM team. These are roles that didn’t exist in most corporate structures even five years ago.
The move toward predictive intelligence
The pursuit of predictive risk intelligence is reshaping expectations at the top of the organisation. Boards and executive teams no longer want static heat maps or broad qualitative assessments. They want clarity on financial exposure: quantitative outputs such as Monte Carlo simulations, probability‑based scenario models, and dynamic forecasting that directly inform capital allocation and strategic planning.
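To make the shift concrete, the kind of output boards are asking for can be sketched with a toy frequency‑severity Monte Carlo model; every parameter below is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
N_SIMS = 50_000

# Toy frequency-severity model: annual event counts are Poisson,
# individual loss amounts are lognormal (parameters are illustrative).
event_counts = rng.poisson(lam=3.0, size=N_SIMS)
annual_loss = np.array([
    rng.lognormal(mean=11.0, sigma=1.0, size=n).sum()
    for n in event_counts
])

expected_loss = annual_loss.mean()
var_99 = np.quantile(annual_loss, 0.99)  # roughly a 1-in-100-year annual loss
print(f"Expected annual loss: {expected_loss:,.0f}")
print(f"99th-percentile loss: {var_99:,.0f}")
```

A real capital model would calibrate these distributions to loss data and capture dependency between risk types; what matters here is that the output is a distribution of monetary exposure, not a heat map.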
This evolution places new pressure on recruitment. Employers need risk leaders who speak the language of finance. They need analysts who can build real‑time dashboards and interpret streams of data at speed. They need experts who understand how to spot anomalies in complex systems, and who can articulate their significance to senior leaders.
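The anomaly‑spotting skill mentioned above can start from something as simple as a trailing z‑score check on a metric stream; a minimal sketch with synthetic data:

```python
import numpy as np

def detect_spikes(series, window=30, threshold=3.0):
    """Flag points that deviate sharply from a trailing baseline
    (a simple rolling z-score check)."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

rng = np.random.default_rng(0)
metric = rng.normal(100.0, 5.0, 200)  # a well-behaved operational metric
metric[150] += 60.0                   # inject a spike worth investigating
flagged = np.flatnonzero(detect_spikes(metric))
print(flagged)
```

The mechanical part is straightforward; the skill employers are paying for is explaining to senior leaders what a flagged deviation actually means for the business.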
As a result, the ERM professional of 2026 looks very different from the one many companies hired in 2020. The role now blends technical expertise, strategic judgement, and communication skills at a level previously associated more with data science or investment analysis than with traditional compliance roles.
AI skills shortage intensifies the competition
For the first time, AI‑related capabilities have become the hardest skills to hire for worldwide, surpassing both engineering and traditional IT competencies. ManpowerGroup’s 2026 data shows that AI model and application development (20%) and AI literacy (19%) are now the most difficult to source, contributing to a talent shortfall affecting 72% of employers worldwide.
The implications for risk teams are profound. ERM leaders are competing directly with tech giants, financial institutions, cybersecurity firms, and high‑growth startups that are all chasing the same scarce pool of AI‑capable professionals. The pressure is pushing salaries upward and encouraging companies to rethink their hiring models, often blending permanent hires with contract specialists, outsourced analytics teams, and internal upskilling initiatives.
But hiring alone won’t solve the problem. The speed of change means organisations must also build AI literacy across the wider risk function, ensuring that even non‑technical staff can understand and interpret machine‑generated insights.
Regulatory pressure
Layered on top of these technological shifts is a surge in regulatory activity. Across the UK and EU, policymakers are pushing for more streamlined, data‑driven compliance frameworks, including the Digital Omnibus and new AML supervision structures.
Meanwhile, regulators are also moving toward tighter oversight of AI systems, requiring transparency, auditability, and robust governance: all of which demand specialised risk expertise.
This regulatory evolution amplifies the need for professionals skilled in AI governance, data lineage, model auditing, and emerging compliance technologies. In many organisations, responsibility for these capabilities is landing squarely within ERM functions.
Conclusion: ERM hiring is being rewritten around AI
AI is reshaping the risk function from the inside out: redefining what risk management is, how it operates, and who is qualified to lead it.
As risks escalate in both speed and complexity, companies must rethink how they recruit, retain, and develop risk talent. Organisations need to act early, by hiring AI‑literate risk professionals, cultivating cross‑functional data expertise, building continuous monitoring capabilities, and strengthening their governance frameworks before regulatory pressure forces their hand.