
AI Governance Org Design | CAIO, AI Ethics Board, and Steering Committee Authority and Operations [2026 Edition]

2026-04-24 | Ryuta Hamamoto (濱本 隆太)

We organize the roles and authority of the CAIO (Chief AI Officer), AI Ethics Board, and AI Steering Committee, drawing on real examples from Microsoft, Google, IBM, Anthropic, Salesforce, Walmart, and the NTT Group. We also present recommended organizational models for Japanese companies of 500 and 5,000 employees.


Hello, this is Hamamoto from TIMEWELL.

When asked to "draw an AI governance organization chart," many companies freeze. Should we install a CAIO (Chief AI Officer)? Is an AI Ethics Board enough? How is that different from a Steering Committee? The number of role names around AI has exploded over the past two years. Without sorting them out, "let's just create a promotion office" leads, a few months later, to overlapping deliberations and an accountability vacuum appearing at the same time.

In this fifth installment of the series, we dissect the AI organizations of leading global companies and draw a blueprint that Japanese companies can adopt as is. The structures that pioneers like Microsoft, Google DeepMind, IBM, Anthropic, Salesforce, and Walmart have arrived at share clear common ground. The CAIO is the command center, the Ethics Board is the deliberative body, and the Steering Committee is the executive forum. When this trio meshes, AI investment decisions speed up by an order of magnitude.

This article assumes the guideline-compliance discussion from part 4 and the five-phase implementation framework from part 3. If you want to revisit "why we are building these boxes" before structuring the org, please read those first.

How to design the CAIO command center

CAIO stands for Chief AI Officer. According to IBM's 2025 survey of 2,300 companies, 26% have already installed a CAIO, double the 11% from two years earlier[^1]. PwC Japan's "CAIO Reality Survey 2025" reports that 22% of Japanese companies have established a formal CAIO position, and including those with people performing equivalent roles, the figure rises to 60%[^2]. As AI investments have become a board-level matter, the cry from the field is that decision-making cannot keep up without a dedicated role.

The CAIO's job breaks down into roughly five areas: setting AI strategy, choosing models and platforms, designing AI governance, recruiting and developing AI talent, and managing the ROI of AI investment. The CIO (Chief Information Officer) oversees the IT foundation, the CISO (Chief Information Security Officer) handles security, the CTO (Chief Technology Officer) leads technology selection, and the COO (Chief Operating Officer) runs operations. The CAIO cuts across all of these and holds decision rights specifically within the AI domain. Because the overlap is significant, I believe the first task on day one is to draft an "authority allocation table."
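There is no single canonical format for such an authority allocation table, but as a minimal sketch (the decision areas and role assignments below are hypothetical illustrations, not drawn from any company cited in this article), it can be modeled as a mapping from decision area to a single accountable approver:

```python
# Minimal sketch of an authority allocation table (hypothetical areas/roles).
# Each decision area maps to exactly one accountable approver, so
# "who makes the final call?" is answerable with a single lookup.

AUTHORITY_TABLE = {
    "ai_strategy": "CAIO",
    "model_selection": "CAIO",
    "it_infrastructure": "CIO",
    "security_review": "CISO",
    "technology_stack": "CTO",
    "business_operations": "COO",
}

def final_approver(decision_area: str) -> str:
    """Return the single accountable approver for a decision area.

    Raises ValueError if the area is unassigned, which is exactly the
    accountability vacuum the table is meant to prevent.
    """
    try:
        return AUTHORITY_TABLE[decision_area]
    except KeyError:
        raise ValueError(f"No accountable owner assigned for: {decision_area}")

print(final_approver("model_selection"))  # CAIO
```

The point of the structure is that an unassigned area fails loudly rather than silently defaulting to "whoever was in the meeting."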

Real examples bring the contours into sharp focus. In June 2024, the NTT Group enacted a Group AI Charter, appointed Co-CAIOs (Co-Chief AI Officers), and created a new AI Governance Office[^3]. Dentsu Digital simultaneously installed a CAIO and a CSO in December 2024 and reorganized AI utilization support and integrated services. Overseas, Anthropic placed Jared Kaplan (Co-Founder and Chief Science Officer) as Responsible Scaling Officer, concentrating accountability for implementing the Responsible Scaling Policy in one person[^4]. What gets organizations moving is treating the CAIO not as a title but as the act of designating "the one person who can press the stop button on an AI decision."

There are two pitfalls when installing a CAIO. The first is making it a concurrent role with the CIO or CDO (Chief Data Officer) without securing dedicated AI time. The second is handing full authority to an outside hire who has no field experience. McKinsey's 2026 report found that 52% of companies that produced AI outcomes had "documented processes for putting AI into production," compared with only 34% of others[^5]. That documentation rarely progresses without a CAIO to own it. Reconsider any appointment that does not secure both dedicated time and field understanding.

When deploying ZEROCK in enterprise environments, the first thing we ask is, "Who is the final approver for this AI deployment?" If the CAIO is set, the knowledge control policy and the publication scope of the prompt library can all be designed along that person's approval line. Before drawing the org chart, name one person. The conversation begins from there.


Composition and operating cadence of the AI Ethics Board

The AI Ethics Board is the deliberative body that checks the CAIO's decisions from ethical, legal, and social viewpoints. The cleanest precedent is probably Microsoft's AETHER (AI, Ethics, and Effects in Engineering and Research). Founded in 2016 by Eric Horvitz and Brad Smith, it is still chaired by Eric Horvitz today[^6]. AETHER serves as an advisory body and works with the Office of Responsible AI (ORA, established in 2019 and led by Natasha Crampton) and the Responsible AI Council co-chaired by CTO Kevin Scott and Vice Chair Brad Smith, handling reviews of high-risk cases.

IBM's AI Ethics Board uses a slightly different design. It is co-chaired by Christina Montgomery (Chief Privacy and Trust Officer) and Francesca Rossi (IBM Fellow, AI Ethics Global Leader). Each business unit has "AI Ethics Focal Points" who escalate field-level cases to the central board. In addition, an "Advocacy Network" of employee volunteers spreads AI ethics culture, and the CPO AI Ethics Project Office acts as the secretariat[^7]. The three-layer structure ensures that field discoveries always reach the top-level deliberation.

The proportion of external members is a point of debate. Salesforce's Office of Ethical and Humane Use (OEHU, established in 2018) is led by Chief Ethical and Humane Officer Paula Goldman and pairs with an external advisory council[^8]. SAP also uses a hybrid of internal committee and external advisory board. On the other hand, advisory boards composed only of external members are often criticized as "lacking decision authority and prone to becoming a formality." When I propose this to Japanese companies, I recommend a composition of 20 to 30% external members, 30 to 40% internal executives, and 30 to 40% field experts. Neither full external delegation nor purely internal self-evaluation.

A 90-minute monthly review plus a three-hour quarterly strategy session is realistic for cadence. Microsoft's published operational record shows AETHER combining monthly subcommittees with quarterly plenary meetings. High-risk cases are handled in ad hoc sessions as they arise. Just imitating this prevents agendas from clogging up. Francesca Rossi, the AAAI (Association for the Advancement of Artificial Intelligence) Global AI Initiatives and Policy Chair, has stated that "an ethics board needs both academic and practical wheels," and recommends including current AI researchers among external members along with legal scholars and ethicists.

The iron rule for authority is to document three tiers in advance. First, advisory authority. Second, the right to send a deliberation back for reconsideration (Veto with reconsideration). Third, the right to escalate to the management board. Microsoft's three-layer structure mirrors this division, with AETHER advising, ORA implementing policy, and the Responsible AI Council making the final ruling. Whenever Japanese companies ask, "What do we do when the Ethics Board disagrees with a business unit?" I always offer this three-tier framework as the answer.
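The three tiers can also be sketched as a simple decision flow. The risk labels and branching conditions below are illustrative assumptions for the sake of the sketch, not any company's actual review logic:

```python
# Sketch of the three documented tiers of Ethics Board authority
# (advisory, send-back, escalation). Thresholds are hypothetical.
from enum import Enum

class BoardAuthority(Enum):
    ADVISE = "advisory opinion attached; project proceeds"
    SEND_BACK = "deliberation returned to the business unit for rework"
    ESCALATE = "case escalated to the management board for final ruling"

def ethics_board_action(risk: str, concerns_resolved: bool) -> BoardAuthority:
    """Illustrative escalation logic.

    Resolved cases get advice only; unresolved concerns trigger a
    send-back; an unresolved high-risk case escalates to the board.
    """
    if risk == "high" and not concerns_resolved:
        return BoardAuthority.ESCALATE
    if not concerns_resolved:
        return BoardAuthority.SEND_BACK
    return BoardAuthority.ADVISE
```

Writing the branching down in advance, in whatever form, is what turns "what do we do when the Ethics Board disagrees?" from a crisis into a lookup.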

Relationship between the AI Steering Committee and the management board

The AI Steering Committee is the executive forum that prioritizes AI investments. While the Ethics Board deliberates on "whether we should do it," the Steering Committee decides "what we will do by when." Mixing these roles overloads the agenda and breaks monthly decision-making. Separating them is the heart of organizational design.

Standard membership has the CAIO chair, with the CIO, CISO, CTO, COO, CDO, head of legal, head of HR, and key business unit leaders. In my experience, 8 to 12 members function best. Above 15, opinion gathering takes too long; below 5, the field perspective drops out. OneTrust's published process for setting up an AI Governance Committee also recommends starting with around 10 initial members and expanding as needed.

Steering Committee agendas can be consolidated into four areas. The first is portfolio management of AI use cases, with quarterly reviews of investment allocation, priorities, and resources. The second is KPI monitoring, the monthly forum to check the AI Quality Score and business completion rates discussed in the KPI monitoring article of this series. The third is deciding risk appetite, including how to handle cases sent back by the Ethics Board and how much technical debt to tolerate. The fourth is external communication, including public updates to the Responsible Scaling Policy as Anthropic does.

The relationship with the management board is not "top-down command" but "bidirectional bridging." The management board approves AI strategy direction, annual budget, and key personnel. The Steering Committee makes execution calls within that envelope and reports monthly progress to the board. Walmart, since 2025, has been deploying a "super agent" model (Sparky, Associate Agent, etc.) anchored by Wallaby, a retail-specialized LLM. Public materials state that the decisions resulted from an executive-level AI strategy lead consolidating platform architecture, governance, and business priorities into a single roadmap[^9]. A good example of separating the management board from the Steering Committee while keeping them connected through the roadmap.

KPMG's "Federated Governance" model is also a useful reference[^10]. It combines centralized standard setting with decentralized execution. The headquarters Steering Committee defines standards, criteria, and ethics, while business unit subcommittees retain implementation freedom within that envelope. McKinsey's recommended hub-and-spoke model takes essentially the same approach, with the hub owning governance and infrastructure, and spokes handling business-specific AI deployments[^11]. Both "uniform across the company" and "fully delegated to business units," the patterns Japanese companies often fall into, fail. Choosing the federated middle ground is the best practice in 2026.
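The federated split can be sketched as policy resolution: central standards apply everywhere, and a business unit may only tighten them, never loosen them. The field names and tier scheme below are hypothetical illustrations, not KPMG's or McKinsey's actual model:

```python
# Sketch of federated governance policy resolution (hypothetical fields).
# The hub sets company-wide minimums; spokes may add stricter rules
# but cannot relax a central standard.

CENTRAL_STANDARDS = {
    "human_review_required": True,
    "max_model_risk_tier": 2,  # lower tier = lower permitted risk
}

def effective_policy(unit_overrides: dict) -> dict:
    """Merge a business unit's overrides onto the central standards."""
    policy = dict(CENTRAL_STANDARDS)
    for key, value in unit_overrides.items():
        if key == "max_model_risk_tier":
            # A spoke may only tighten (lower) the permitted risk tier.
            policy[key] = min(policy[key], value)
        elif key == "human_review_required":
            # A spoke cannot switch off a centrally mandated review.
            policy[key] = policy[key] or value
        else:
            # Unit-specific additions pass through unchanged.
            policy[key] = value
    return policy

print(effective_policy({"max_model_risk_tier": 1}))
```

The merge rule is the whole design: "uniform across the company" is the central dict alone, "fully delegated" is the overrides alone, and the federated middle is the constrained merge.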

Examples of AI governance organizations at large enterprises

Lining up the structures of major companies surfaces three commonalities: a command center (equivalent to a CAIO), an ethics review body, and an executive forum. Organizations missing any of these three layers struggle to manage AI investments regardless of size.

Microsoft is the textbook three-layer structure. The Senior Leadership Team gives final approval, the Office of Responsible AI (ORA) handles policy and case management, and the AETHER Committee advises. The Responsible AI Strategy in Engineering (RAISE) group also handles the engineering-side implementation[^6]. ORA's four functions, which are internal policy formulation, field deployment support, sensitive case management, and public policy engagement, can be reused as a template for Ethics Board secretariats at mid-sized companies.

Google DeepMind's Responsibility and Safety Council (RSC) is co-chaired by COO Lila Ibrahim and VP Helen King[^12]. The RSC reviews research, products, and collaborations against the AI Principles, while the AGI Safety Council (led by Co-Founder and Chief AGI Scientist Shane Legg) separately deliberates on extreme risks. For ordinary companies, an Ethics Board equivalent to the RSC is enough, but the example suggests that organizations handling frontier AI should set up a dedicated body equivalent to the AGI Safety Council.

Anthropic's design is the most sharply focused. Under the Responsible Scaling Policy (RSP) v3.0, Jared Kaplan (Co-Founder and Chief Science Officer) currently bears all responsibility as a single Responsible Scaling Officer (RSO)[^4]. Inheriting the role from Sam McCandlish, the RSO consolidates everything from policy update proposals to model deployment approval, contract review, and receipt of non-compliance reports. A new coordinating role of Head of Responsible Scaling has also been created. The choice is to concentrate accountability in one point and combine it with external independent reviews, rather than multiplying organizational layers.

IBM's design, as noted, is a three-layer structure where AI Ethics Focal Points, the Advocacy Network, and the CPO AI Ethics Project Office split the work of escalation, culture-building, and secretariat duties[^7]. Salesforce OEHU was established in 2018 with Paula Goldman as Chief Ethical and Humane Officer, and published Trusted AI Principles and five guidelines for generative AI[^8]. Walmart has built a "super agent" ecosystem with Wallaby (retail-specialized LLM), Sparky (customer-facing agent), and an Associate Agent for employees, and articulates external commitments through the Responsible AI Pledge[^9].

What these all share is a stance of "not being done once the org is built." Each company discloses its governance status in annual reports and updates the structure each year in response to external criticism and regulatory shifts. Many Japanese companies mistake establishing the org for the goal, and two years later, the actual operations have hollowed out. Writing annual review and revision cycles into the initial bylaws is, I believe, the most important lesson.

Finally, I show recommended organizational models for Japanese companies by scale, designed for minimum viable structure with implementability as the top priority.

For mid-sized companies of around 500 people, designate one CAIO equivalent as "AI Executive Officer" or "Head of AI Promotion Office," and strictly avoid concurrent roles with the CIO or CISO. Run an Ethics Board on a small scale with one external expert, two to three executives, and two field experts, meeting once a quarter. The Steering Committee meets every other month for 60 to 90 minutes, realistically using the opening slot of a management board meeting. Total membership stays at 10 to 15, and the operating effort comes in around 10 hours a month. When proposing this to mid-sized clients, I start with this composition and shift to monthly cadence once use cases grow.

At semi-major companies of 5,000 people, you need a dedicated CAIO and an AI Promotion Office (5 to 10 people). The Ethics Board should have 10 to 12 members in total, with two to three external members, three to four executives, and three to four representatives from legal, HR, and business units. In addition to monthly cadence, adopting Microsoft's business unit Focal Points model prevents field-level cases from clogging up. The Steering Committee meets once a month for 90 minutes, separately from the management board. The NTT Group's Co-CAIO plus AI Governance Office combo is a textbook reference for this size band[^3].

For enterprise scale (over 10,000 people), you can adopt the three-layer model used by Microsoft and SAP almost as is. Separate the final-approval layer equivalent to Senior Leadership, the policy implementation layer equivalent to a Responsible AI Office, and the ethics advisory layer equivalent to AETHER, and place a sub-Steering Committee in each business unit. Following Federated Governance, the center owns company-wide standards and business units own use-case specialization. Combining annual external audits with a Responsible AI Pledge equivalent allows you to address regulatory compliance and brand building at the same time.

Three iron rules apply at every scale. First, name one accountable owner. Second, document the three tiers of authority: advisory, send-back, and escalation. Third, build annual reviews into the bylaws. These three alone prevent 90% of the formality drift two years out. Conversely, an org chart missing any of these three will not function, no matter how grand it looks.

In TIMEWELL's deployment support for ZEROCK (our enterprise AI platform), organization design consulting is part of the standard package. Securing data sovereignty on AWS Japan regions, knowledge control through GraphRAG, and configuring publication scopes for the prompt library all rely on the approval lines of the CAIO and Ethics Board to operate. Whether you align the org first or align it alongside the technology rollout depends on the situation, but I would caution strongly against deploying the technology with no organization at all.

Conclusion: write the line of accountability before the org chart

What matters in AI governance org design is not building an impressive box. It is whether you can write in a single line, "Who makes the final call on AI?" Every example in this article, Microsoft, Google DeepMind, IBM, Anthropic, Salesforce, Walmart, and the NTT Group, had this single line clearly stated.

CAIO as the command center, AI Ethics Board as deliberation, Steering Committee as execution. Three role splits, plus three tiers of authority: advisory, send-back, and escalation. Document them and review annually. The work is, surprisingly, simple. What complicates it is internal politics that just want to spawn a new org, or external consultants over-engineering proposals.

Next time, in part six of this series, we will cover the technology stack supporting AI governance, namely how to choose audit logs, policy engines, and model evaluation tools. Only when organization and technology mesh does AI governance turn into a living, breathing system. Stay tuned.

References

[^1]: IBM, "Chief AI officer (CAIO)." https://www.ibm.com/think/topics/chief-ai-officer

[^2]: PwC Japan, "CAIO Reality Survey 2025: Conditions for Leaders Who Determine the Outcome of AI Management." https://www.pwc.com/jp/ja/knowledge/thoughtleadership/caio-survey-2025.html

[^3]: NTT, "Establishment of NTT Group AI Governance Regulations and the Promotion Structure for AI Governance" (June 2024). https://group.ntt/jp/newsrelease/2024/06/07/240607a.html

[^4]: Anthropic, "Responsible Scaling Policy Version 3.0." https://www.anthropic.com/news/responsible-scaling-policy-v3

[^5]: McKinsey, "The agentic organization: A new operating model for AI." https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-agentic-organization-contours-of-the-next-paradigm-for-the-ai-era

[^6]: Microsoft, "The building blocks of Microsoft's responsible AI program." https://blogs.microsoft.com/on-the-issues/2021/01/19/microsoft-responsible-ai-program/

[^7]: IBM, "A look into IBM's AI ethics governance framework." https://www.ibm.com/think/insights/a-look-into-ibms-ai-ethics-governance-framework

[^8]: Salesforce, "Ethical and Humane Use at Salesforce." https://www.salesforce.com/company/ethical-and-humane-use/

[^9]: Klover.ai, "Walmart's Integrated AI Ecosystem Is Forging Market Dominance." https://www.klover.ai/walmart-integrated-ai-ecosystem-forging-market-dominance-ai-strategy/

[^10]: KPMG, "The new model for AI governance." https://kpmg.com/kpmg-us/content/dam/kpmg/pdf/2025/new-model-ai-governance.pdf

[^11]: McKinsey, "The gen AI operating model: A leader's guide." https://www.mckinsey.com.br/capabilities/tech-and-ai/our-insights/a-data-leaders-operating-guide-to-scaling-gen-ai

[^12]: Google DeepMind, "Responsibility & Safety." https://deepmind.google/about/responsibility-safety/
