CyberIntel ⬡ News
◇ Industry News & Leadership Apr 29, 2026

AI Governance Moves From Theory to Practice

Data Breach Today Archived Apr 29, 2026 ✓ Full text saved

CIOs Face Growing Pressure on Risk, Data and Board Reporting

As AI moves deeper into enterprise operations, CIOs are being pushed to turn governance principles into practical controls, board reporting and risk oversight, according to a survey by The Conference Board's Governance and Sustainability Center.



Jennifer Lawinski • April 28, 2026

Artificial intelligence has taken root across the enterprise, firmly embedded in core business processes. And recent data on corporate risk disclosures show just how fast the technology landscape has changed.

In 2023, just 12% of S&P 500 companies disclosed AI as a material business risk in their annual filings. By 2025, that number had reached 83%. At the same time, executives are balancing optimism and concern: 80% expect AI to drive productivity gains, but 75% anticipate significant workforce disruption, according to a new report based on S&P 500 disclosure data and a survey of 130 senior executives from The Conference Board's Governance and Sustainability Center.

Those data points highlight the challenge for CIOs managing both the rapid pace of technology change and the impact these changes have across the business. Establishing governance to minimize risk adds another layer of complexity to an already complicated job.

"In the space of a couple of years, we've moved from experimentation and early-stage thinking to really starting to see integration of AI into the business," said Andrew Jones, principal researcher at The Conference Board and the report's author. "And with that, growing recognition of the very real risks."

Many organizations have begun to put structures around their AI deployments, and 70% of companies now include AI in risk inventories or heat maps.
Enterprise-wide AI principles have been established by 63% of companies, and 52% have created centralized AI councils to coordinate governance and cross-functional oversight. Despite the growing popularity of AI councils, governance remains uneven, Jones said: many organizations still approach it through a tech, legal and compliance lens, with less focus on workforce impact, sustainability and broader implications.

Juggling these priorities increasingly falls to the CIO. "The CIO isn't just helping the enterprise deploy AI," Jones said. "The CIO is increasingly helping the enterprise govern AI - which is a huge, significant shift."

Cybersecurity, data privacy and legal liability top the list of AI risks companies are prioritizing. That focus matches what Jones is seeing on the ground. "When we talk to CISOs now, it feels like everything is just AI," Jones said. "The attack surface has evolved, and it's definitely keeping them awake at night."

The focus on cybersecurity, privacy and liability requires tight alignment between the CIO and CISO, but each needs to own a distinct sphere so nothing is dropped. "The CISO particularly needs to own the whole piece around managing the attack surface, managing the defenses, that whole technical cybersecurity piece," Jones said. "The CIO's role leans more toward enterprise AI visibility, data governance and risk tiering."

Boards Are Interested, But Not Educated

Only 23% of governance leaders say their boards have high AI fluency. AI-specific expertise among S&P 500 independent directors remains low, rising from just 1.5% to 2.7% between 2021 and 2025. Broader technology expertise, meanwhile, has grown much faster, from 20% to 51% over the same period.

For CIOs, explaining AI and its attendant risks poses a new challenge.
They must deliver board-ready reports covering risks, use cases, governance, incidents and more. Figuring out which information is most critical will take practice, as will defining what rises to board level before incidents occur.

"The board needs a clear line of sight into what's actually happening within the company," Jones said. "Where is AI being used? Which use cases carry the highest risk? What data are these systems touching? What controls exist? And if there is an incident, is it being captured and escalated?"

The goal isn't to put board directors on the technology front lines. "I don't think anyone expects the board to be a board of AI engineers and data scientists," Jones said. "But they need sufficient fluency to ask the right questions and know what a good answer looks like. It's a two-way street."

Data Governance Is the Foundation

When it comes to data, CIOs are in agreement. Data governance and controls are cited as the top AI governance priority by 74% of executives. Regulatory readiness follows at 47%, and third-party risk management at 30%.

The ranking shows what CIOs are prioritizing, Jones said. "It's not glamorous, but it is the core, fundamental work - the solid foundation of clean, well-governed data with clear provenance and audit trails," he said. "Agentic AI works best when it's working with good data, not slop data."

No organization has ever had perfect data, but Jones said the rise of AI has forced a data reckoning for enterprises. "No organization ever has had perfect data. There are always going to be different systems, historic ways of working, databases that don't speak to each other," he said. "If you're going to have a competitive advantage in this space, building effective data infrastructure is part of what gets you there.
And interestingly, AI can help with that - we've heard of companies using AI to better clean their data, improve tagging and metadata, and create a stronger foundation for more sophisticated use cases."

Building a Governance Program

For CIOs, Jones offers a clear order of operations. Start by taking a comprehensive inventory of your AI use cases. "That needs to include internal tools, what you're getting from vendors, APIs and what employees are actually using, which could be a lot of things that are very visible, and a lot of things that aren't," he said. "You can't govern what you can't see."

The next step is evaluating that inventory and creating risk tiers, flagging anything that touches sensitive data, employment decisions or customer-facing functions. He also recommends linking AI governance to existing cybersecurity governance structures, and building out board reporting from that foundation, adding metrics on use cases, risk tiers, control ownership and incidents.

He cautions that AI governance is not a one-and-done project. It needs to be a living process. "Some companies that had a good AI governance program six months ago don't necessarily have one today, because the technology and the landscape have evolved so quickly," Jones said. "It's not just about building governance. It's about being able to constantly evolve it."
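Jones's order of operations - inventory the use cases, tier them by risk, then roll the tiers up into board-level metrics - lends itself to a simple sketch. The Python below is a hypothetical illustration only: every name in it (UseCase, risk_tier, board_report, the flag fields) is invented for this example and does not come from the article or The Conference Board report.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One entry in a hypothetical enterprise AI inventory."""
    name: str
    owner: str                          # control owner, for board reporting
    touches_sensitive_data: bool = False
    affects_employment: bool = False
    customer_facing: bool = False

def risk_tier(uc: UseCase) -> str:
    """Tier a use case by the three flags Jones calls out."""
    flags = sum([uc.touches_sensitive_data, uc.affects_employment, uc.customer_facing])
    if flags >= 2:
        return "high"
    if flags == 1:
        return "medium"
    return "low"

def board_report(inventory: list[UseCase]) -> dict[str, int]:
    """Count use cases per risk tier - one candidate board metric."""
    report = {"high": 0, "medium": 0, "low": 0}
    for uc in inventory:
        report[risk_tier(uc)] += 1
    return report

inventory = [
    UseCase("resume screening", "HR", touches_sensitive_data=True, affects_employment=True),
    UseCase("support chatbot", "CX", customer_facing=True),
    UseCase("internal code assistant", "Engineering"),
]
print(board_report(inventory))  # {'high': 1, 'medium': 1, 'low': 1}
```

A real program would of course capture far more - vendor and API usage, control ownership, incident escalation - but the point of the sketch is the shape of the process: you can only tier and report on what the inventory actually contains.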