Hello, this is Hamamoto from TIMEWELL.
Inquiries about deploying Claude Code at the enterprise level have visibly accelerated. Less than a year after its May 2025 release, Claude Code reached an annualized revenue run rate of one billion dollars, with names like Netflix, Spotify, KPMG, L'Oréal, and Salesforce now featured as public reference customers. At the same time, on March 31, 2026, version 2.1.88 of @anthropic-ai/claude-code shipped with a 59.8 MB JavaScript source map accidentally bundled in, exposing 512,000 lines of source code on GitHub. Shortly after, a prompt injection vulnerability exploiting CLAUDE.md was disclosed. "We want it because it's powerful" and "We don't want it because of the risk" coexist inside the same companies. That is the reality of 2026.
This article organizes the differences between Pro, Team, and Enterprise plans; how to choose between Anthropic, AWS Bedrock, and Google Vertex AI; governance design that satisfies SOC 2 Type II and ISO 27001; and a six-step rollout from PoC to company-wide adoption. The intent is to give executives, IT, CISOs, and engineering leaders a single, end-to-end map.
The Three Decisions to Make First
When introducing Claude Code into your organization, three issues need to be settled up front: the plan, the foundation, and the governance. Without these, your PoC will never escape the gravitational pull of "individual developers playing with a tool."
The first is the plan. Pro, Max, Team, and Enterprise are the four tiers, but for serious enterprise use the choice narrows to Team or Enterprise. Team is built for organizations between five and 150 users and includes SAML 2.0 / OIDC SSO, domain capture, JIT (Just-in-Time) provisioning, role-based access controls, and per-workspace spend caps as standard; each seat includes Claude Code, plus connectors for Google Workspace, Microsoft 365, Slack, and GitHub. Enterprise, by contrast, starts at twenty dollars per seat per month, with a twenty-seat minimum and an annual contract. Seat fees grant access only; token usage is billed separately in a hybrid model. On top of that, Enterprise layers in SCIM, the Compliance API, the Admin API, Zero Data Retention (ZDR) contracts, data residency selection, and bulk policy distribution.
The second is the foundation. Connect to the Anthropic API directly, route through AWS Bedrock, or route through Google Vertex AI. The selection logic comes later in this article. The short version: companies already committed to AWS or Google Cloud should default to that cloud's path.
The third is governance: tool permissions, MCP server allowlisting, log forwarding, whether a ZDR contract is necessary, and developer education. Distribute seats without first deciding these, and you will watch convenience and incidents arrive simultaneously. The shortest path is to advance all three decisions (plan, foundation, and governance) in parallel with PoC design.
Plan Selection and the Real Cost Calculation
Here is the honest math on Claude Code enterprise pricing as of April 2026. Enterprise seats start at twenty dollars per user per month, with a twenty-seat minimum and an annual contract. Seat fees cover access only. Claude Opus 4.7 adds five dollars per million input tokens and twenty-five dollars per million output tokens; Claude Sonnet 4.6 adds three dollars input and fifteen dollars output. Opus 4.7, released April 16, 2026, holds the same nominal pricing as Opus 4.6, but a new tokenizer can count the same input text as up to 35% more tokens, which means real costs may actually rise. Prompt caching offers up to 90% discounts and batch processing offers 50%, so how aggressively your team exploits these mechanisms determines monthly invoices in a meaningful way.
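To make the invoice math concrete, here is a minimal sketch using the per-token prices above. The monthly token volumes and the cache-hit rate are illustrative assumptions, not benchmarks:

```python
# Illustrative monthly cost model for Claude Opus 4.7 API usage.
# Prices are from this article; volumes and cache-hit rate are assumptions.

INPUT_PER_MTOK = 5.00    # USD per million input tokens (Opus 4.7)
OUTPUT_PER_MTOK = 25.00  # USD per million output tokens
CACHE_DISCOUNT = 0.90    # prompt caching: up to 90% off cached input reads

def monthly_cost(input_mtok, output_mtok, cache_hit_rate=0.0):
    """Monthly cost in USD, given token volumes in millions of tokens."""
    cached = input_mtok * cache_hit_rate
    uncached = input_mtok - cached
    input_cost = uncached * INPUT_PER_MTOK \
        + cached * INPUT_PER_MTOK * (1 - CACHE_DISCOUNT)
    return round(input_cost + output_mtok * OUTPUT_PER_MTOK, 2)

# A hypothetical 50-developer team: 2,000M input / 400M output tokens a month.
print(monthly_cost(2000, 400))                      # → 20000.0 (no caching)
print(monthly_cost(2000, 400, cache_hit_rate=0.7))  # → 13700.0
```

Even at a hypothetical 70% cache-hit rate, only the input side shrinks; output tokens dominate once agents write a lot of code, which is why caching discipline alone does not cap the bill.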
The Team plan retains the legacy subscription model, where usage limits are tied to seats. For organizations of fifty users where the goal is "let everyone try it and see what productivity looks like," Team is sufficient. The transition point to Enterprise emerges when you cross fifty users and start needing to stream Compliance API logs into your SIEM, sign a ZDR contract, or fix data residency to the EU or Japan.
ROI calculations should always start with a pre-deployment baseline. Deployment frequency, lead time, PR counts, and review rework cycles—at minimum, capture these four metrics for one month before deployment, ideally a full quarter. Faros AI's published case study reports an incremental cost of $37.50 per additional PR offset by $150 of saved developer time, landing at a 4:1 ROI. Looking inside a fifty-developer team, the typical distribution was twelve developers using Claude Code autonomously, twenty-three confined to dialogue mode, and fifteen barely using it at all. ROI calculations that assume "everyone benefits equally" almost always miss. Surfacing those fifteen non-users is a problem the rollout plan needs to solve explicitly.
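The baseline step is mechanical once you export timestamps from your Git host. A minimal sketch of one of the four metrics, median PR lead time, assuming you have exported (opened, merged) timestamp pairs; the sample data is hypothetical:

```python
from datetime import datetime
from statistics import median

def median_lead_time_hours(prs):
    """Median open-to-merge lead time in hours.

    `prs` is a list of (opened_at, merged_at) ISO-8601 string pairs,
    e.g. exported from your Git host's API before the rollout starts.
    """
    deltas = [
        (datetime.fromisoformat(merged)
         - datetime.fromisoformat(opened)).total_seconds() / 3600
        for opened, merged in prs
        if merged  # skip still-open PRs
    ]
    return round(median(deltas), 1)

baseline = median_lead_time_hours([
    ("2026-03-01T09:00:00", "2026-03-02T09:00:00"),  # 24h
    ("2026-03-03T10:00:00", "2026-03-03T16:00:00"),  # 6h
    ("2026-03-04T08:00:00", "2026-03-05T20:00:00"),  # 36h
])
print(baseline)  # → 24.0
```

Freeze a number like this per team before seat one ships; the post-deployment comparison is meaningless without it.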
Choosing Between Anthropic Direct, Bedrock, and Vertex AI
A flat operational comparison of the three foundations.
Anthropic direct is the fastest path. Authenticate with a Pro/Team/Enterprise seat or an API key and manage everything through the Anthropic Console. The Admin API exposes 25 endpoints for user management, and the Compliance API streams usage logs in real time. Time to first value is unmatched. The trade-off: billing is consolidated under Anthropic alone, and the physical location of your data depends on the infrastructure Anthropic selects.
AWS Bedrock is a single environment variable away: set CLAUDE_CODE_USE_BEDROCK=1 and the mode switches. Anthropic's official AWS guidance (aws-solutions-library-samples/guidance-for-claude-code-with-amazon-bedrock) recommends Direct OIDC Federation with Okta, Azure AD, Auth0, or Cognito User Pools, distributing temporary IAM credentials per user while preserving audit attribution. SSO via AWS IAM Identity Center is also supported. Bedrock supports prompt caching for Claude, which substantially trims real costs given Claude Code's heavily multi-turn usage pattern. Billing folds into your existing AWS contract, and audit logs flow through CloudWatch and CloudTrail—a decisive advantage for any organization already committed to AWS.
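The switchover itself is exactly that small. A sketch of the session-level setup, where CLAUDE_CODE_USE_BEDROCK is the documented switch and the region and profile values are placeholders for your own environment:

```shell
# Route Claude Code through AWS Bedrock for this shell session.
# Region and profile names below are placeholders, not recommendations.
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-east-1          # a Bedrock region serving Claude models
export AWS_PROFILE=claude-code-dev   # profile holding the temporary IAM credentials
```

In the OIDC-federation setup from the AWS guidance, the short-lived credentials behind that profile are what preserve per-user audit attribution in CloudTrail.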
Google Vertex AI follows the same pattern: set CLAUDE_CODE_USE_VERTEX=1, ANTHROPIC_VERTEX_PROJECT_ID, and CLOUD_ML_REGION (regions include us-east5, europe-west1, and global), and the tool will pick up gcloud credentials automatically. Provisioned Throughput lets you reserve capacity for peak periods, which makes Vertex AI a strong fit for any workload where downtime during business hours is unacceptable. Claude Opus 4.7 is available on Vertex AI as of April 2026. DBS Bank in Singapore has publicly described combining Gemini and Claude on Vertex AI Provisioned Throughput to bring an internal AI assistant to commercial-grade quality.
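The Vertex AI side mirrors this. All three variables are the ones named above; the project ID is a placeholder for your own GCP project:

```shell
# Route Claude Code through Google Vertex AI for this shell session.
# The project ID is a placeholder; credentials come from `gcloud auth`.
export CLAUDE_CODE_USE_VERTEX=1
export ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project
export CLOUD_ML_REGION=us-east5      # or europe-west1 / global
```

Because credentials ride on the gcloud login, no separate API key needs distributing, which simplifies the secret-management story for the pilot.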
In my experience, the deciding axis is "where do you want billing and logs to consolidate?" AWS-centric companies pick Bedrock; Google Cloud-centric companies pick Vertex AI; companies that lean on neither, or that have not yet settled on a cloud strategy, pick Anthropic direct. That covers about 70% of cases. The remaining 30% is decided by the strictness of data residency requirements and the necessity of ZDR. ZDR is contractable on Anthropic direct Enterprise, but Bedrock and Vertex AI provide "no data leaves the cloud region" by default, which makes the cloud routes overwhelmingly easier to push through finance, healthcare, and public sector approval committees.
Security and Governance: SOC 2 Type II, ISO 27001, ZDR, and Confronting the Source Leak
Anthropic currently holds SOC 2 Type II, ISO 27001:2022, and ISO/IEC 42001:2023 certifications. Evidence is published on its Trust Center, and the baseline materials a CISO needs are available. That said, certifications are a floor, not a ceiling. Auditors examine your own access controls, log retention, and vendor risk evaluations—Anthropic's certifications do not exempt you from any of those. Mistaking the two and concluding "Anthropic has the certifications, so we can skip our work" is a near-certain way to fail.
The default data retention behavior is that API inputs and outputs are kept for 30 days and not used for model training. Enterprise customers can sign a ZDR contract, which compresses retention to zero. Requests are scanned in real time for abuse detection and immediately discarded—no prompt, output, or metadata is retained. I have watched legal and IT teams who were initially opposed to feeding business documents into generative AI flip to a positive stance after seeing the ZDR contract.
The center of gravity for governance is centralized management via managed-settings.json and managed-mcp.json. Claude Code is designed so that user-level settings cannot override administrator-distributed configuration. Distribution paths are: server management through the Anthropic Console; MDM via Jamf, Kandji, or Microsoft Intune; group policy through the HKLM\SOFTWARE\Policies\ClaudeCode registry on Windows; and direct file placement (/etc/claude-code/managed-settings.json on Unix, C:\Program Files\ClaudeCode\managed-settings.json on Windows). Place a managed-settings.d/ directory next to those files and multiple JSON fragments will merge alphabetically, which makes per-team partitioned management practical.
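As a concrete anchor, here is a minimal managed-settings.json sketch. The allow/deny rule syntax follows Claude Code's documented permission format; the specific entries are illustrative policy choices, not a recommended baseline:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(npm run test:*)",
      "Read(~/projects/**)"
    ],
    "deny": [
      "Bash(curl:*)",
      "Read(./.env)",
      "Read(~/.aws/**)"
    ]
  }
}
```

Distributed through MDM or the registry, a file like this becomes the floor no developer can lower; per-team fragments in managed-settings.d/ then tighten it further where needed.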
Tool permissions should be allowlist-first, with deny rules added as forced blocks where necessary; the documentation is unambiguous that deny rules override allow rules. Layer in PreToolUse hooks and you can intercept the JSON payload immediately before any tool call, applying custom approve/deny/modify logic. A hook that exits with code 2 blocks the call outright, providing a tight mechanism for scrutinizing shell commands or outbound HTTP requests against internal policy.
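A minimal sketch of that hook logic, assuming the documented PreToolUse payload shape (tool_name / tool_input); the credential-path blocklist is an illustrative policy, not a complete one:

```python
# PreToolUse hook policy sketch. The payload shape follows Claude Code's
# documented hook event; the blocklist below is illustrative only.
BLOCKED_SUBSTRINGS = ("~/.aws/credentials", ".env", "id_rsa")

def decide(payload):
    """Return (exit_code, message) for a PreToolUse event.

    Exit code 2 tells Claude Code to deny the tool call; the message
    is what a real hook would print to stderr.
    """
    if payload.get("tool_name") == "Bash":
        command = payload.get("tool_input", {}).get("command", "")
        if any(s in command for s in BLOCKED_SUBSTRINGS):
            return 2, "Blocked by policy: command touches a credential path"
    return 0, ""

# A real hook reads the event as JSON on stdin (json.load(sys.stdin)),
# prints the message to stderr, and calls sys.exit(code). Simulated here:
event = {"tool_name": "Bash", "tool_input": {"command": "cat ~/.aws/credentials"}}
print(decide(event))  # → (2, 'Blocked by policy: command touches a credential path')
```

Keeping the decision in a pure function like decide() also makes the policy unit-testable, which matters once security reviews start asking for evidence that the hook does what the runbook claims.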
And then March 31, 2026 happened. Version 2.1.88 of @anthropic-ai/claude-code shipped with a 59.8 MB source map mistakenly bundled in, publishing 512,000 lines of TypeScript through npm and propagating to GitHub mirrors within hours. SecurityWeek followed up days later with a critical vulnerability traced to that leak. InfoWorld reported that crafting CLAUDE.md to chain over fifty subcommands could bypass safety controls along a path that looks like a legitimate build. Oasis Security, in March, demonstrated an attack chain capable of exfiltrating chat history from claude.ai itself via invisible prompt injection. Trend Micro observed campaigns distributing attack payloads through fake Claude Code GitHub releases. The probability of secrets leaking through Claude Code-authored commits has been measured at 3.2%, against a public GitHub average of 1.5%.
In other words, every input the agent reads—READMEs, issue bodies, third-party API responses, historical logs, third-party MCP servers—is part of your attack surface. Allowlists, PreToolUse hooks, secret scanning, and an internal MCP server allowlist are the minimum viable defenses today.
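On the secret-scanning leg of that defense, production teams should reach for a dedicated scanner (gitleaks and trufflehog are the usual choices), but the mechanism is easy to show. A sketch with deliberately small, illustrative patterns, using AWS's published example access key ID as test data:

```python
import re

# Illustrative patterns only; real scanners ship far larger, tested rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)\b(?:api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan(diff_text):
    """Return the names of secret patterns found in a commit diff."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(diff_text)]

# AKIAIOSFODNN7EXAMPLE is AWS's documented non-functional example key.
print(scan('aws_key = "AKIAIOSFODNN7EXAMPLE"'))  # → ['aws_access_key']
```

Wired into a PreToolUse hook on git commit, or into CI, a check like this is what pushes that 3.2% leak rate back toward zero.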
Six-Step Rollout from PoC to Company-Wide Adoption
In my field experience, staged rollouts are paradoxically the fastest. The playbook published by systemprompt.io describes peer-to-peer adoption stages reaching company-wide deployment in four weeks and continuing usage growth at six months. The structure below is built on similar principles.
Step 1 is requirements definition (weeks 0–2). Identify the target departments, the confidentiality classification of the assets they handle, data residency, ZDR necessity, the IdP for SSO, and the destination for usage logs. Drawing lines such as "code at the financial subsidiary requires ZDR" or "IR and HR systems are off-limits" up front prevents friction during the PoC.
Step 2 is the PoC (weeks 3–6). Onboard 5–10 developers against a restricted set of repositories, and validate Anthropic direct, Bedrock, and Vertex AI on real workloads. Team seats are sufficient at this stage. Running /install-github-app to set up GitHub Actions-based AI review in parallel lets the team build muscle memory early for the "mention @claude in a PR comment" workflow.
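For orientation, a sketch approximating the workflow that /install-github-app generates. Field names and the action version tag may differ from what the installer writes in your repo, so treat this as a reading aid rather than a drop-in file:

```yaml
# .github/workflows/claude.yml — illustrative sketch of the generated workflow.
name: Claude Code Review
on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
jobs:
  claude:
    if: contains(github.event.comment.body, '@claude')
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```

During the PoC, the point of reading the file is governance: confirm which trigger events, repository permissions, and secrets the workflow is granted before it reaches any repository with sensitive code.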
Step 3 is governance build-out (weeks 5–8). Centralize tool permissions and MCP servers through managed-settings.json, switch over SSO, wire log forwarding into your SIEM, and distribute internal CLAUDE.md templates. If ZDR is required, this is the right window to negotiate the Enterprise switchover. For Compliance API output, which tool receives the logs—Splunk, OpenSearch, or whatever your team already monitors—matters less than picking one your team actually watches and sticking with it. Consistency wins.
Step 4 is pilot department deployment (weeks 8–12). Expand to a single department of 10–30 users. The Claude Code Analytics API surfaces daily PR counts, commits, sessions, per-user token consumption, and cost. Compare against the pre-deployment baseline. Some developers always lag. Institutionalizing one-on-one time where in-house champions (the autonomous-mode users) sit beside lagging colleagues moves the adoption curve dramatically.
Step 5 is company-wide deployment (weeks 12–20). Expand outward starting with the highest-adoption teams. Spend caps and PreToolUse hooks should be packaged as internal libraries at this stage to stabilize operations. If your fleet mixes Windows and macOS, ensure the same managed-settings.json distributes through both Intune and Jamf.
Step 6 is continuous improvement (week 20 and beyond). Continue measurement through Faros AI, internal DX dashboards, or your own BI, and gradually layer in Claude Cowork (which exited preview and shipped formally in early 2026), automated review on GitHub Actions, and integration with internal knowledge bases. By the one-year mark, the discussion of distributing Claude seats to non-engineering roles will start.
For concrete usage patterns, see our prior pieces: 45 Claude Code Skills, the Superpowers plugin, the agent-team formation guide, and from an enterprise perspective AI agent trends from Google Cloud Next 2025. Read together, they give you a continuous picture of the ecosystem.
How TIMEWELL Supports Enterprise Deployment
At TIMEWELL, we provide ZEROCK as an enterprise AI foundation. Built to run GraphRAG on AWS infrastructure inside Japan, it integrates the controls needed to safely connect internal knowledge to AI from day one. We are receiving an increasing number of inquiries from enterprises looking to set up the search foundation for internal documents and code assets in parallel with Claude Code deployment.
For organizations with the position "we want to put Claude Code in front of developers immediately, but governance design is beyond our bandwidth" or "we need to debate Bedrock versus Vertex AI in the context of our specific situation," our AI consulting service WARP walks beside you. WARP pairs seasoned enterprise DX and data-strategy specialists with your team on a monthly-renewal model and executes the implementation work alongside you: Claude Code rollout design, managed-settings.json template development, Compliance API to SIEM integration, and ZDR contract negotiation support, adapted to each company's specific circumstances.
One closing thought. Claude Code is not a "useful developer tool"—it is "an agent foundation that touches your organization's code and decision-making." Productivity does not magically rise simply by distributing seats. Decide on plan, foundation, and governance before starting the PoC. Draw the six-step rollout. Design the metrics that will measure ROI. The difference in this preparation determines whether internal adoption six months out will be twice or half what it could have been. What you need is a design that can absorb the incidents while capturing the benefits. This is the question every IT and CISO team in the world is wrestling with right now.
A practical note for leaders kicking off this work: do not treat the rollout as a procurement event. Treat it as the standing-up of an internal capability. Whoever owns Claude Code in your organization—whether that is a platform team, a developer experience team, or a security and compliance partnership—needs the budget, the authority, and the calendar time to maintain managed-settings.json, refresh CLAUDE.md templates, watch the Compliance API, respond to vulnerabilities like the March 2026 source map leak within hours rather than weeks, and continuously curate the internal MCP server allowlist. Vendors will help, but the hands on the controls have to be inside the company.
If you are evaluating Claude Code right now, the single most useful thing you can do this week is run a tabletop exercise. Walk through what would happen if a malicious CLAUDE.md instructed an agent to exfiltrate AWS credentials from a developer's local machine, what your detection latency would be, and which logs you would consult in the first ten minutes. Most enterprises discover that their answers are aspirational at best. That gap is the actual project. Closing it is also the work that produces the durable benefit—productivity that you can defend in front of a board, an auditor, or your own security team a year from now.
