Hello, this is Hamamoto from TIMEWELL.
"We want to roll out Claude Code across the company, but legal has told us to lock down the Act on the Protection of Personal Information and the AI Business Operator Guideline first." A head of IT at a manufacturer told me this last week. I have received more than ten of the same questions over the past six months from banks, insurers, medical device makers, and SIers serving local governments.
Honestly, the regulatory side is moving so fast that it is hard to keep up. The AI Business Operator Guideline went from v1.0 in April 2024, to v1.1 in March 2025, to v1.2 in March 2026 - a revision every year. A major reform direction for the Act on the Protection of Personal Information was also published in 2025, and an amendment bill introducing administrative monetary penalties was approved by the Cabinet for the 2026 Diet. Anthropic, for its part, switched its consumer terms to an opt-out approach in August 2025.
This article organizes, in one pass, the points Japanese enterprises must address when integrating Claude Code into business operations as of April 2026, from the structure of the regulation through practical implementation. I have aimed at a level of detail readable by both legal and IT teams.
1. Structure of the AI Business Operator Guideline v1.1 and v1.2, and Where Claude Code Fits
The "AI Business Operator Guideline," jointly published by the Ministry of Internal Affairs and Communications (MIC) and the Ministry of Economy, Trade and Industry (METI), was released as v1.0 on 19 April 2024, v1.1 on 28 March 2025, and v1.2 on 31 March 2026[^1][^2]. As declared in the document, it is a Living Document and the substance shifts every year. v1.0 set the broad frame, v1.1 deepened coverage of generative AI and AI agents, and v1.2 articulates risk management for the AI agent era.
The axis of this guideline is to divide AI business activities into three roles - "AI Developer," "AI Provider," and "AI User" - while requiring all three to follow ten common principles: Human-Centric, Safety, Fairness, Privacy Protection, Security, Transparency, Accountability, Education and Literacy, Fair Competition, and Innovation[^3]. The lineup is consciously aligned with the OECD AI Principles and the G7 Hiroshima AI Process.
Mapped to Claude Code use cases, most Japanese enterprises sit on the "AI User" side. Anthropic is the AI Developer, and the firms reselling the API or building Claude into their SaaS are AI Providers, while the operating company that uses them is the AI User. What deserves attention is that when an enterprise builds its own subagents or MCP servers on Claude Code and then provides them to group companies or customers, the enterprise wears both User and Provider hats. v1.1 explicitly handles the case of a single business operator holding multiple categories[^4]. In that case, the Provider obligations - accountability, fault response contact, and presenting points of attention to users - must all be put in place as a set.
You may feel that "we just use Claude Code, this does not apply to us," but the moment you turn it into an internal tool and hand it to another team, or let a subsidiary use the agent, Provider responsibility kicks in. I have seen this boundary missed many times, so I recommend building the inventory from day one on the assumption that "we may end up being a Provider."
The guideline itself is non-binding soft law. However, the Cabinet Office's Act on Promoting Research, Development, and Utilization of AI-related Technologies (the AI Promotion Act) came fully into force on 1 September 2025, and the Cabinet adopted the "AI Basic Plan" on 23 December 2025[^5]. The guideline's standing has been elevated through linkage to the AI Basic Plan, and is no longer an item that can be ignored.
2. Three Issues at the Intersection of the Privacy Law and Generative AI
The discussion at the intersection of the Act on the Protection of Personal Information and generative AI sorts into roughly three issues: leakage into training data, handling of special-care personal information, and cross-border transfer.
The first issue, training-data leakage, refers to cases where personal data entered by users into prompts is reused for AI model training. On 2 June 2023, the Personal Information Protection Commission published "Cautionary Notice Regarding the Use of Generative AI Services"[^6], stating clearly that for personal information handling business operators, entering personal data into prompts beyond the original purpose of use can constitute a violation of Article 18. On the same day, the Commission also issued Japan's first administrative guidance related to generative AI, directed at OpenAI L.L.C., the developer of ChatGPT[^7].
The second issue, special-care personal information, is the category in Article 2 (3) of the Act covering race, creed, medical history, criminal record, and similar data. The Commission required OpenAI not to acquire special-care personal information without consent, to exclude such information from data collected for machine learning and delete it immediately after collection, and to provide notification of the purpose of use in Japanese[^7]. The same framework applies to Claude Code. Letting users feed medical history or belief data into prompts as test data during code review is risky even before the question of anonymization arises.
The third issue, cross-border transfer, sits under Article 28 of the Act. Providing personal data to Anthropic in the United States constitutes "provision to a third party in a foreign country," and in principle requires the data subject's consent[^8]. There are exceptions for countries with equivalent protection (the EU and the UK), for operators that continuously implement equivalent measures, and for entrustment or joint use, but the United States is not designated as an equivalent jurisdiction. So in practice you either establish the exception through contractual safeguards with Anthropic, or you obtain the data subject's consent.
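The branching just described can be sketched as a small decision helper. This is an illustrative simplification only - the function, its inputs, and the jurisdiction set are assumptions for exposition, and any real determination under Article 28 needs legal review:

```python
def cross_border_basis(country: str, has_contractual_safeguards: bool,
                       has_subject_consent: bool) -> str:
    """Return which Article 28 basis (if any) covers a transfer, per the options above."""
    EQUIVALENT = {"EU", "UK"}  # jurisdictions designated as providing equivalent protection
    if country in EQUIVALENT:
        return "equivalent-jurisdiction exception"
    if has_contractual_safeguards:
        return "equivalent-measures exception (verify continuously)"
    if has_subject_consent:
        return "data subject consent"
    return "no lawful basis: do not transfer"

# A US transfer backed by contractual safeguards with the vendor:
print(cross_border_basis("US", has_contractual_safeguards=True, has_subject_consent=False))
```

Writing it down this way makes one thing visible: the order of checks mirrors the order of preference in practice, with structural exceptions first and individual consent as the fallback.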
I should also touch on the legislative reform momentum that has been building since 2025. In January 2026, the government published a reform direction proposing to relax consent requirements only for statistical use that does not identify individuals and that serves AI development and training[^9][^10]. However, that relaxation targets parties who use data for training; it does not cover enterprise use of Claude Code, where personal data is fed into prompts in the course of business. I often hear "since it is AI-related, consent is no longer required" - that is a misreading.
A subtle line that comes up in practice is "is an email address in a comment column sent to Claude Code personal information?" My position is conservative: an email address alone has identifying potential. The reason is that I have seen multiple past cases where an internal whistleblower hotline ended up identifying an employee from an email address. There is room for legal debate, but as a matter of operational design, building on the assumption of masking is the configuration that produces fewer accidents.
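As one concrete form of "build on the assumption of masking," here is a minimal sketch in Python. The regex and placeholder token are illustrative assumptions; production masking would need broader PII coverage than email addresses alone:

```python
import re

# Simple illustrative pattern; real PII masking needs more than email detection.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def mask_emails(text: str) -> str:
    """Replace email addresses with a fixed placeholder before text leaves the company."""
    return EMAIL_RE.sub("[MASKED_EMAIL]", text)

comment = "# Reported by taro.yamada@example.co.jp on 2026-01-15"
print(mask_emails(comment))  # -> "# Reported by [MASKED_EMAIL] on 2026-01-15"
```

Running this as a pre-send filter, rather than relying on each user to notice addresses in comment columns, is exactly the "fewer accidents" configuration argued for above.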
3. Responsibility Split Among Developer, Provider, and User, and TIMEWELL's Position
The three-category split in the guideline becomes most useful when read as a responsibility-decomposition model.
The AI Developer is the entity that designs, trains, and validates AI models. Anthropic sits here. The principal duties are securing fairness in training data, safety validation, technical documentation, and model transparency.
The AI Provider is the entity that takes models built by AI Developers and embeds them into its own services. Claude API resellers, vendors selling MCP servers for Claude Code as SaaS, and SIers offering Claude-embedded business applications belong here. Providers are expected to present user-facing terms of use, respond to faults, and warn against use outside the intended scope.
The AI User is the entity that uses AI systems and services received from AI Providers in its own business. Most Japanese enterprises sit here. The duties are appropriate use, validation of outputs, final human judgment, and log management[^4][^11].
For TIMEWELL, we are a composite player. When using Anthropic Claude as-is, we are the AI User. When embedding AI for clients via ZEROCK or TRAFEED, we are the AI Provider. As developers of our own GraphRAG engine, we also have an AI Developer aspect. As a company that delivers enterprise AI to clients, we organize internally with Provider responsibilities at the center.
Concretely, we maintain "Points of Attention for Users" documentation per product. Aligned with the v1.1 appendix checklist, we document accuracy limits of outputs, data types that must not be entered, log retention period, sub-processor list, and support contact. Separately, we maintain "User Responsibility Compliance Rules" for our own use of Claude Code internally. The basics: do not enter production personal data into prompts, use Claude for Enterprise with a Zero Data Retention agreement, remove customer names from in-code comments, and so on.
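Rules like "do not enter production personal data into prompts" can be partially mechanized as a deny-list pre-check. Every pattern and rule name below is a hypothetical example; real rules would come from your own data-classification policy, not from this sketch:

```python
import re

# Hypothetical deny-list; names and patterns are illustrative only.
NG_PATTERNS = {
    "my_number": re.compile(r"\b\d{12}\b"),            # 12-digit My Number (illustrative)
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "customer_tag": re.compile(r"(?:顧客名|customer_name)\s*[:=]"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of deny-list rules the prompt violates (empty list = OK to send)."""
    return [name for name, pat in NG_PATTERNS.items() if pat.search(text)]

violations = check_prompt("customer_name: Yamada, contact: y@example.com")
print(violations)  # -> ['email', 'customer_tag']
```

A check like this will never catch everything, which is why the compliance rules above still pair it with training and review; its job is to stop the obvious cases cheaply.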
The point I want to underline is that the three categories of responsibility overlap without contradiction. There is no "as a Developer, User responsibility is light," nor "as a User, Provider responsibility is irrelevant." The moment you build a subagent on Claude Code and share it with a colleague, you become a small Provider.
For readers who want to lock down both the technical and the operational side, the related internal article Total Design for Adopting Claude Code in the Enterprise is a useful companion.
4. Running an AI-Aware PIA (Privacy Impact Assessment)
A PIA (Privacy Impact Assessment) is a mechanism for identifying and mitigating risks before launching operations or systems that handle personal information. It is not a legal obligation in Japan, but on 30 June 2021 the Personal Information Protection Commission published "On Promoting PIA Adoption"[^12], and JIS X 9251:2021 (the Japanese translation of ISO/IEC 29134:2017) has been established as the domestic standard.
A standard PIA proceeds in four stages: preparation, assessment, reporting, and review. An AI-aware PIA must always weave in three additional points.
The first is provenance of training data. Claude Code itself contractually does not use customer prompts for model training (API, Team, and Enterprise plans), but you must verify that the assumption is genuinely backed by the actual contract clauses. Anthropic's official FAQ explicitly states that data via commercial plans is not used for training[^13][^14]. Capture the corresponding contract clause and the Zero Data Retention configuration as evidence.
The second is handling of prompt and output logs. The Free, Pro, and Max plans changed terms on 29 August 2025 to a structure that, depending on settings, retains data for up to five years[^15]. For enterprise use, the right answer is a commercial or Enterprise plan with a Zero Data Retention agreement. ZDR also applies to Claude Code under an Enterprise contract[^14]. In the PIA, write out the log retention period, retention location, access privileges, and deletion procedures in full.
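To make "write out in full" concrete, the log-handling facts can be captured as a structured record rather than free prose, so a reviewer can diff them year over year. The field names and sample values are illustrative assumptions, not any vendor schema:

```python
from dataclasses import dataclass, field

@dataclass
class LogHandlingRecord:
    """One PIA evidence entry describing how prompt/output logs are handled."""
    system: str
    retention_days: int               # 0 means no vendor-side retention (ZDR)
    storage_location: str
    access_roles: list[str] = field(default_factory=list)
    deletion_procedure: str = ""

# Hypothetical entry for an Enterprise deployment under a ZDR agreement.
record = LogHandlingRecord(
    system="Claude Code (Enterprise, ZDR)",
    retention_days=0,
    storage_location="not retained by vendor (per ZDR agreement)",
    access_roles=["security-admin"],
    deletion_procedure="N/A vendor-side; local session logs purged after 90 days",
)
print(record.system, record.retention_days)
```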
The third is re-identification risk. There are real cases of customer names being recovered from "anonymized" code comments, and of internal test data left in prompts leaking into the next conversation. Samsung Electronics had a 2023 incident where confidential data leaked through employee ChatGPT inputs and the company shifted to a generative-AI prohibition policy. That is a classic combination of re-identification and secondary use.
Templates for PIA reports are available from JIPDEC (the Japan Institute for Promotion of Digital Economy and Community) and the Personal Information Protection Commission. In our ZEROCK adoption projects, we run PIAs as a mandatory process with clients. They are often dismissed as "tedious paperwork," but in practice a PIA raises the quality of requirements definition, which usually wins acceptance on the ground.
A PIA is not a one-time deliverable; the principle is to re-evaluate annually or upon major changes. Claude Code's terms also change every year, so build re-evaluation triggers into your internal rules to stay safe.
5. Violation Cases and Penalties: Pitfalls You'll Actually Hit
Finally, look at real violation cases and the penalty landscape that is forming.
On 2 June 2023, the Personal Information Protection Commission issued Japan's first administrative guidance to a generative-AI service, directed at OpenAI[^7]. The three points were: do not use special-care personal information for machine learning without consent, do not acquire special-care personal information from users and non-users, and notify the purpose of use in Japanese. As administrative guidance, no monetary penalty was attached, but OpenAI's response remains under the Commission's continued oversight. Overseas, Italy's Garante temporarily banned ChatGPT in March 2023 and conditionally lifted the ban in April of the same year.
A publicly available reference case among domestic firms is Samsung Electronics. After a 2023 incident in which employees pasted source code and meeting content into ChatGPT and triggered leakage, the company switched to a default prohibition of internal use. Several Japanese firms, mainly financial institutions, also reportedly paused enterprise rollout of generative AI.
The penalty landscape will shift with the amendment to the Act on the Protection of Personal Information that the Cabinet approved for the 2026 Diet[^16]. The plan is to introduce an administrative monetary penalty (kacho-kin) that levies, on business operators that wrongfully obtained or used the personal data of 1,000 or more individuals, an amount equivalent to the gain. Until now, monetary exposure was limited to criminal punishment for violations of orders. Once in force, the kacho-kin system can be triggered nimbly as an administrative measure, dramatically raising risk.
From experience, three pitfalls stand out on the ground.
The first is "prompt leakage": a customer name slips out in a Slack screenshot of a Claude Code exchange. The sender often does not realize it, so the only real defense is a review culture combined with automated masking. The second is "shadow AI": outside IT's reach, field teams start using personal Claude accounts. Suppress this by consolidating onto enterprise contracts and introducing usage-log management tools; I have written more on enterprise Claude Code operations in Claude Code Security Implementation. The third is "lack of cross-border awareness": many people on the ground do not even recognize that data flowing to Anthropic in the US is a cross-border transfer. Run a data governance training program at least once a year.
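For the shadow-AI pitfall, one cheap monitoring angle is scanning gateway logs for AI-service access by users not covered by the enterprise contract. The log format, domain list, and user set below are all hypothetical; real gateway logs and account rosters would replace them:

```python
import re
from collections import Counter

# Hypothetical "user domain" log lines; real log formats vary by gateway product.
SAMPLE_LOGS = [
    "alice claude.ai",
    "bob api.anthropic.com",
    "alice claude.ai",
    "carol internal-wiki.example.jp",
]

ENTERPRISE_USERS = {"bob"}  # users covered by the corporate Claude contract
AI_DOMAINS = re.compile(r"(claude\.ai|api\.anthropic\.com)$")

def shadow_ai_hits(logs: list[str]) -> Counter:
    """Count AI-service accesses by users outside the enterprise contract."""
    hits = Counter()
    for line in logs:
        user, domain = line.split()
        if AI_DOMAINS.search(domain) and user not in ENTERPRISE_USERS:
            hits[user] += 1
    return hits

print(shadow_ai_hits(SAMPLE_LOGS))  # alice accessed AI services twice outside the contract
```

The output is a lead list for consolidation onto the enterprise contract, not evidence of wrongdoing; treating it punitively tends to drive usage further underground.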
Do not forget the geopolitical-risk angle. I covered the issue of Chinese IT services in Japanese local governments in Issues in Excluding Chinese-Made IT Services from Japanese Local Governments, but Anthropic is a US company and the questions are on a different axis. Industries falling under the Economic Security Promotion Act's specified critical materials or specified critical infrastructure services may also have data concentration in US firms evaluated.
Summary
Below is a checklist version of the points covered, designed for use in the field.
- Use the three categories in v1.1 of the AI Business Operator Guideline (AI Developer, Provider, User) to map your organization, recognizing that creating internal tools triggers Provider responsibility
- Articles 18 (purpose of use), 20 (2) (special-care personal information), and 28 (cross-border transfer) of the Act on the Protection of Personal Information are the three central provisions for generative AI use
- Data sent via commercial plans (API, Team, Enterprise) is contractually not used for model training. Free, Pro, and Max switched to an opt-out model effective 28 September 2025, so business use should rely solely on commercial plans
- Claude Code under Claude for Enterprise can be put under a Zero Data Retention agreement, where prompts and responses are not stored
- An AI-aware PIA must weave in three points: provenance of training data, handling of logs, and re-identification risk
- The amendment to the Act on the Protection of Personal Information submitted to the 2026 ordinary Diet introduces an administrative monetary penalty (kacho-kin) system; penalty risk rises significantly once it is enforced
Honestly, "perfect everything before using AI" is unrealistic. My recommendation is: first lock down a Claude for Enterprise contract and ZDR settings, then write a first draft of the PIA in three weeks, and iterate over a year of operation. Launching at 60 points and adding monthly beats waiting for 100 points - it lands better with the field.
At TIMEWELL, with our enterprise AI platform ZEROCK as the backbone, we provide end-to-end support from AI Business Operator Guideline alignment through PIA design under the Act on the Protection of Personal Information to internal Claude Code rollout rules. If you can come to us at the level of "where should we draw the line in our case," we can move quickly to safe operation.
References
[^1]: Ministry of Economy, Trade and Industry, "AI Business Operator Guideline (v1.1) Overview," 28 March 2025 https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20250328_2.pdf
[^2]: Ministry of Internal Affairs and Communications, "AI Business Operator Guideline (v1.1)," 28 March 2025 https://www.soumu.go.jp/main_content/001002576.pdf
[^3]: Factory Journal, "Government formulates an integrated 'AI Guideline' for business operators - 10 principles including human-centric" https://factoryjournal.jp/42511/
[^4]: ITOCHU Techno-Solutions, "An Outline Understanding of the AI Business Operator Guideline" https://www.ctc-g.co.jp/keys/blog/detail/ai-business-guidelines-key-points
[^5]: Cabinet Office, "AI Basic Plan," Cabinet decision of 23 December 2025 https://www8.cao.go.jp/cstp/ai/ai_plan/aiplan_20251223.pdf
[^6]: Personal Information Protection Commission, "Cautionary Notice Regarding the Use of Generative AI Services," 2 June 2023 https://www.ppc.go.jp/news/careful_information/230602_AI_utilize_alert/
[^7]: Personal Information Protection Commission, public material, "Cautionary Notice Regarding the Use of Generative AI Services" https://www.ppc.go.jp/files/pdf/230602_kouhou_houdou.pdf
[^8]: Personal Information Protection Commission, "Guidelines on the Act on the Protection of Personal Information (Provision to Third Parties Located in Foreign Countries)" https://www.ppc.go.jp/personalinfo/legal/guidelines_offshore/
[^9]: Nikkei, "AI development personal information without obtaining consent: Government prepares legal amendment" https://www.nikkei.com/article/DGXZQOUA0639F0W5A200C2000000/
[^10]: Jiji Press, "'Race and creed' AI use without consent: Government considering relaxation of personal information protection requirements" https://www.jiji.com/jc/article?k=2025022200393&g=pol
[^11]: PwC Japan Group, "AI Business Operator Guideline Draft - Commentary" https://www.pwc.com/jp/ja/knowledge/column/ai-governance/ai-guideline.html
[^12]: Personal Information Protection Commission, "On Promoting PIA Adoption," 30 June 2021 https://www.ppc.go.jp/files/pdf/pia_promotion.pdf
[^13]: Anthropic Privacy Center, "Zero data retention agreements" https://privacy.claude.com/en/articles/8956058-i-have-a-zero-data-retention-agreement-with-anthropic-what-products-does-it-apply-to
[^14]: Claude Code Docs, "Zero data retention" https://code.claude.com/docs/en/zero-data-retention
[^15]: Lifehacker Japan, "Privacy settings to revisit immediately if you do not want Claude conversations used for AI training" https://www.lifehacker.jp/article/2508-anthropic-training-ai-claude-user-conversations/
[^16]: Nikkei, "Personal Information Protection: Cabinet approves amendment imposing administrative monetary penalty on violators" https://www.nikkei.com/article/DGXZQOUA065240W6A400C2000000/
![Claude Code x Japan Privacy Law and AI Business Operator Guideline Compliance | Regulations and Implementation for Japanese Enterprises [2026 Latest]](/images/columns/claude-code-japan-privacy-law-ai-guideline-compliance/cover.png)