General Insurance Article - Lessons in implementing board level AI governance


Effective leaders have shifted from traditional risk management protocols to more dynamic and responsible governance models for managing AI. AI governance continues to evolve in 2025 for both boards and senior management teams. Recent studies suggest many companies are not achieving the desired return on investment from AI projects, and many boards are devoting time to AI governance without achieving the outcomes they seek.

 By John M. Bremen, Managing Director and Chief Innovation and Acceleration Officer, WTW

 Effective board members and leaders understand their role in safeguarding data, governing new technologies and ensuring the necessary skills in the boardroom and across the organization.

A widely circulated MIT study, The GenAI Divide: State of AI in Business 2025, reports that despite $30 to $40 billion in enterprise investment in AI, 95% of organizations are getting zero return. Gartner reports that even with an average spend of $1.9 million on generative AI (GenAI) initiatives last year, fewer than 30% of AI leaders say their CEOs are happy with AI investment returns.

OpenAI’s Sam Altman sees an AI bubble forming as industry spending surges without commensurate gains in performance. Studies from Harvard and Stanford show significant workforce implications.

 According to the National Association of Corporate Directors (NACD) 2025 Trends and Priorities Survey, three of the 10 top director trends for 2025 involve technology governance. Cybersecurity threats and AI remain at the center of directors’ technology concerns. In WTW’s most recent Emerging and Interconnected Risks Survey, executives worldwide listed AI and cyber risk as the top two out of 752 emerging risks. Additionally, WTW’s 2025 Directors’ & Officers’ Risk Survey reports data loss and cyberattacks are both within the top three risks.

As we covered in How board-level AI governance is changing, research from professor and corporate director Dr. Helmuth Ludwig and professor Dr. Benjamin van Giffen includes a four-category AI governance model. They recently updated their report with guidance and practices for boards on AI oversight, organized around four pillars and applicable under an array of different scenarios, with input from Dylan Sandlin, NACD program manager for digital and cybersecurity governance content; board members Rima Qureshi and Samantha Kappagoda; and staff from the Data & Trust Alliance.

4 pillars of AI governance

 Strategic oversight: According to NACD’s 2025 Public Company Board Practices and Oversight Survey, more than 62% of directors set aside agenda time to discuss AI. Yet while many directors note the potential for AI disruption to their company’s strategy and long-term viability, only 23% of boards have assessed how it might happen. Effective boards recognize AI as a material strategic enabler and differentiator that influences an organization’s competitive position and business model.

 They adopt three practices:

 Develop a shared understanding of AI’s strategic relevance and importance among the full board
 Establish a cadence for AI discussions and updates, regularly allocating agenda time for updates on the company’s AI initiatives and related issues
 Incorporate AI as a topic at the board’s annual strategy retreat, taking the opportunity for directors to receive a comprehensive view of the AI landscape and its impact on the company’s strategy

Capital allocation: Many boards identify proper allocation of capital resources as one of the challenges their organizations face in adopting AI technologies. Yet only 11% of boards have approved an annual budget for AI projects. Effective boards recognize AI has broad implications for business strategy and operations.

 They adopt two practices:

Include AI expenses in annual budgeting and approval, allowing the board to evaluate current investments and to determine whether more resources may be required to meet the organization’s strategic objectives (this may include providing support for AI pilots, experiments and scaling)
Regularly review the viability of and opportunities to use mergers, acquisitions and partnerships to acquire AI capabilities (this includes working with management to establish clear partnership and evaluation criteria for assessing opportunities)

AI risks: Effective boards treat risk oversight not only as a board’s core fiduciary responsibility but also as central to the responsible use of AI systems and maintaining trust among key stakeholders. They recognize that AI may help protect competitive advantage and can play a role in a company’s potential for value creation or destruction.

 They adopt two practices:

Integrate AI into the company’s enterprise risk management program through regular updates and reviews of AI-related risks, both at the audit or risk committee level and at the full board. Effective management teams leverage existing risk assessment frameworks to evaluate AI risk in economic terms so they can identify the most effective risk-mitigation actions and controls, sharing these reports with the board
 Receive briefings from internal and external AI risk experts as a regular item on board agendas. Effective boards take a strategic approach in selecting experts and establishing briefing agendas, prioritizing material risks and maintaining their role as a governing body rather than a technical body

AI technology competence: Effective technology governance, including AI and data oversight, requires full board engagement, with all directors maintaining at least a foundational knowledge of AI and its influence on the organization’s particular needs. These boards also ensure the CEO, management team and workforce have the technological competence to execute the company’s AI agenda.

 They adopt four practices:

Maintain board-level AI and technology proficiency aligned to corporate strategy and governance needs, achieved through board structure, processes, education and access to expertise
 Establish authority and responsibility for AI within the organization. This includes clearly designating leaders for strategic AI implementation and ensuring a consistent and coherent decision-making model, both across functions and at the business-unit level, with processes for escalation
 Ensure management and workforce readiness for AI transformation through compensation and human capital committees. This often includes working with management to ensure that the pipeline for necessary talent and skills is in place as AI is further deployed throughout the company
 Incorporate AI oversight roles and responsibilities into board and committee charters, mitigating gaps and overlaps and more efficiently using committee focus, expertise and agenda time

 Effective leaders have shifted from traditional risk management protocols to more dynamic and responsible governance models for managing AI’s growth across industries and applications while adhering to their values. These leaders adopt principle-based governance practices that allow their organizations to benefit from AI technologies while reducing risks and increasing trust and accountability.
