Hello, this is Hamamoto from TIMEWELL.
"We don't sell anything in the EU, so the EU AI Act doesn't apply to us, right?" I have lost count of how many times I have heard this from heads of AI initiatives this spring. Each time, I walk over to the whiteboard and draw three scenarios: an EU subsidiary using the parent's AI system, an AI service distributed through an EU-based reseller, and a generative-AI chatbot running on a public website that is also accessible to EU residents. Almost without exception, at least one of those scenarios applies to the company in front of me.
On August 2, 2026, the heart of the EU AI Act starts to beat. The European Commission's enforcement powers activate, and document requests, evaluations, market restrictions, recalls, and administrative fines against general-purpose AI (GPAI) models become real instruments[^1][^2]. The maximum penalties are 7% of global revenue or EUR 35M (whichever is higher) for violations of prohibited AI practices, 3% or EUR 15M for high-risk AI system breaches, and 3% or EUR 15M for GPAI providers[^3][^4]. If you continue to dismiss this as "an EU thing", you may find yourself receiving a board-level report next spring that begins, "Our European subsidiary has been ordered to recall…"
This article cross-references the EU AI Office's primary sources, analyses from European law firms, the NIST framework, and McKinsey's most recent surveys to translate "what to actually do" for Japanese enterprises deploying GPAI models—down to a checklist you can put on your desk. It is written for the practitioner who knows that extraterritorial scope exists but is not sure where to start.
August 2, 2026: the day the EU AI Act starts fining your company in earnest
Once you understand the overall EU AI Act timeline, the gravity of the August 2 date comes into sharp relief. The Act came into force on August 1, 2024. From February 2, 2025, the prohibited-practice rules (Article 5) and AI literacy requirements (Article 4) became applicable. From August 2, 2025, GPAI provider obligations and the governance machinery, including the EU AI Office, started operating[^5][^6]. Then, on August 2, 2026, the obligations for high-risk AI systems listed in Annex III, the transparency obligations of Article 50, and the Commission's enforcement powers all activate together[^5][^7].
The EU AI Office has been operating since August 2025, but for GPAI providers the past year has been positioned as an "adaptation period"[^1][^8]—a window for documentation, building risk-evaluation frameworks, and considering whether to sign the Code of Practice. On August 2, 2026, that adaptation period ends and the Commission moves into a phase where it can wield legally binding tools.
The substance of those enforcement powers is broader than people realize. Through the EU AI Office, the Commission can request documents and information from providers, conduct evaluations of documentation and the model itself, demand compliance measures, demand systemic-risk mitigation, demand market restrictions / recalls / withdrawals, and impose administrative fines[^1][^2]. In Japan, "an inquiry from a regulator" often conjures up the image of a single phone call. But when the European Commission moves at this level, expect months of work pulling in legal, governance, engineering, and procurement teams just to assemble the requested materials.
The fine structure, in numbers. Article 99 sets out: EUR 35M or 7% of preceding-year global annual turnover (whichever is higher) for violations of the prohibited AI practices in Article 5; EUR 15M or 3% for high-risk AI system requirement violations; and EUR 7.5M or 1% for the supply of incorrect or misleading information[^3]. For GPAI providers, Article 101 separately caps fines at EUR 15M or 3% of preceding-year global turnover, whichever is higher[^9]. Apply 7% to the consolidated revenue of a Toyota-scale Japanese enterprise and the fine ceiling runs into the trillions of yen.
In my view, the fines are not actually the scariest part—"recall" and "market restriction" are. Fines can be absorbed through accounting reserves. The moment a sales-stop order hits in the European market, however, revenue from that product line stops the same day. For companies shipping GPAI-embedded products or services into Europe, the business-continuity risk is the more fundamental issue.
Three misconceptions Japanese enterprises tend to hold
There are three misconceptions I see repeatedly in conversations with Japanese enterprises. They have to be cleared up before the rest of the discussion holds together.
The first misconception: "We develop and operate AI in Japan, so the EU AI Act has nothing to do with us." The EU AI Act is a textbook extraterritorial law, and Article 2 explicitly defines its scope. Even if you have no establishment in the EU, you fall in scope when you place an AI system or GPAI model on the EU market, or when the output of an AI system is used in the EU[^10][^11]. This "market location principle" is the same logic that the EU has applied repeatedly since GDPR.
What does that look like in practice? Selling your AI products through EU-based resellers; running a SaaS with non-Japanese UI options that is reachable by EU residents; an internal generative-AI chatbot used by employees of an EU subsidiary; releasing a GPAI model on Hugging Face that is accessible to EU developers as part of an R&D output. All of these, to varying degrees, fall within the extraterritorial reach of the Act[^11][^12]. "We don't sell directly into the EU market" is no longer a sufficient defense.
The second misconception: "We are a user of AI, not a developer, so our liability is light." The EU AI Act explicitly distinguishes between "providers" and "deployers" (users) and assigns separate obligations to each. For GPAI models, the provider (whoever develops the model and places it on the EU market) is responsible for technical documentation, publication of training-data summaries, copyright policies, and—for systemic-risk models—additional evaluation, mitigation, and notification duties[^13][^14]. The deployer, on the other hand, is responsible for the transparency obligations of Article 50 (e.g., disclosing AI-generated content, labelling deepfakes), Fundamental Rights Impact Assessments (FRIAs) when using high-risk AI systems, ensuring human oversight, and retaining logs[^7][^15].
The crucial point: depending on the business model, you can flip from deployer to provider. If you call the OpenAI API, embed it in your own product, add output guardrails or fine-tuning, and make the resulting service available in the EU market, then for that service your company is the provider in EU-AI-Act terms. The fact that OpenAI is the upstream provider does not absolve you. My recommendation is to add a single field to your internal AI-use intake form that explicitly asks: "Is this use case classified as provider, deployer, or both?" That one field changes how teams on the ground frame the question.
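As an illustration only, that classification can live as one required field in whatever system backs the intake form. The schema below is a minimal Python sketch under assumed names; nothing here is prescribed by the Act itself:

```python
from dataclasses import dataclass
from enum import Enum

class AIActRole(str, Enum):
    """Role of your company for one AI use case, in EU AI Act terms."""
    PROVIDER = "provider"  # you place the model or system on the EU market
    DEPLOYER = "deployer"  # you use an AI system under your own authority
    BOTH = "both"          # e.g., a fine-tuned API model shipped as your own EU service

@dataclass
class AIUseCaseIntake:
    """One entry in the internal AI-use intake form (hypothetical schema)."""
    use_case: str
    upstream_model: str
    reaches_eu_market: bool
    ai_act_role: AIActRole  # the single field this section recommends adding

intake = AIUseCaseIntake(
    use_case="Customer-support chatbot on a public EU-reachable site",
    upstream_model="OpenAI API, fine-tuned",
    reaches_eu_market=True,
    ai_act_role=AIActRole.BOTH,
)
```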
The third misconception: "Someone upstream in the supply chain will handle compliance, so we don't need to do anything." Like GDPR, the EU AI Act distributes responsibility across the supply chain. The training data's copyright handling, the systemic-risk evaluation results, and the technical documentation about the model's capabilities and limitations—the information flow that hands these from the provider to the deployer is at the heart of the Code of Practice[^14][^16]. If you embed a GPAI into your operations as a deployer without knowing what was done upstream, you cannot fulfill your own documentation and risk-evaluation duties. The downstream side then ends up exposed to fines and market restrictions.
What I see most often in practice is the response, "Our vendor said they are compliant with the EU AI Act, so we are fine." Whether you actually are fine depends on whether the vendor has signed the Code of Practice, whether they publish the Model Documentation Form, whether they ship update information on a quarterly cadence, and whether your contract has explicit AI Act compliance clauses. Issuing a green light based on a verbal assurance from a vendor is exactly how you discover, months later, that your own deployer obligations were not being met.
Mapping obligations: GPAI "providers" vs "deployers"
Now to the practical layer. Let me map the major obligations for GPAI models, separating the provider view from the deployer view. The Act on its own reads like an abstract document drafted for lawyers. Read together with the European Commission's "Guidelines for providers of general-purpose AI models" (published July 2025) and the Code of Practice, however, it descends to a level of detail that operators can actually act on[^13][^14][^16].
Key obligations on GPAI providers (eight of the twelve major items):
- Drafting and maintaining technical documentation: Maintain documentation of model design, training methodology, dataset overview, evaluation methodology, capabilities and limitations, and intended downstream use cases—retained for at least ten years[^14][^16].
- Publishing a training-data summary: Publish a sufficiently detailed summary, including handling of copyright-protected content, in the format specified by the EU AI Office[^13][^17].
- Information for downstream providers: Provide the technical and capability information needed by downstream operators who build other AI systems on top of your model[^14].
- Compliance policy with EU copyright law (DSM Directive): Maintain a policy that respects opt-out rights for text and data mining[^14][^16].
- Considering whether to sign the Code of Practice: Not legally required, but signing functions as a means of demonstrating compliance. As of writing, 26 organizations have signed, including Amazon, Anthropic, Google, IBM, Microsoft, OpenAI, and Mistral AI[^17][^18].
- Notification duties for systemic-risk designation: For models exceeding training-compute thresholds (e.g., 10^25 FLOPs), notify the EU AI Office that the model qualifies as systemic-risk[^13][^15].
- Evaluation and mitigation for systemic-risk models: Safety evaluations, adversarial testing, cybersecurity protections, and serious-incident reporting[^13][^14].
- Designating an EU representative: Non-EU providers must appoint a legal representative inside the EU[^14].
Key obligations on GPAI deployers (high-risk and transparency-driven):
- Transparency obligations under Article 50: Disclose to users that content has been generated or manipulated by AI; label deepfakes[^7][^19].
- AI literacy under Article 4: Provide sufficient technical understanding and training to staff who operate AI systems[^6][^15].
- Human oversight: Design and operate human-in-the-loop controls when using high-risk AI systems[^15].
- Fundamental Rights Impact Assessment (FRIA): Mandatory for certain deployments, such as by public bodies or for credit assessment (Article 27)[^20].
I recommend laying these twelve items (and adjacent ones) out in a spreadsheet, with each cell carrying status ("applicable / not applicable / under review"), the responsible internal team, and a deadline. A growing number of companies operate this in Notion or a similar database. Keeping it as a living document changes your responsiveness to a Commission inquiry by orders of magnitude. It also pairs naturally with the reporting culture of Japanese enterprises.
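For teams that prefer code to spreadsheets, the same living document is a short list of records. A minimal sketch, with field names and status values assumed rather than prescribed:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ObligationRow:
    """One row of the obligation matrix (hypothetical field names)."""
    obligation: str           # e.g., "Technical documentation"
    role: str                 # "provider" or "deployer"
    status: str               # "applicable" / "not applicable" / "under review"
    owner: str                # responsible internal team
    deadline: Optional[date]  # internal target date, not the statutory one

matrix = [
    ObligationRow("Technical documentation", "provider", "applicable", "AI Platform", date(2026, 5, 29)),
    ObligationRow("Article 50 transparency", "deployer", "under review", "Product Legal", date(2026, 6, 30)),
]

# A Commission inquiry starts with "show us your status": filter, don't scramble.
open_items = [row for row in matrix if row.status == "under review"]
```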
The signature decision around the Code of Practice deserves particular attention. The final version of the Code was published on July 10, 2025, and is structured into three chapters: Transparency, Copyright, and Safety & Security[^16][^21]. Signature is voluntary, but signing is recognized by the Commission as a "means of compliance demonstration"—effectively a passport for practitioners[^14][^21]. Providers that have chosen not to sign (such as Meta and certain Chinese AI labs) bear the burden of proving equivalent compliance through their own framework. For Japanese enterprises that intend to deploy GPAIs on the EU market, working through the Code of Practice in detail is unavoidable.
What the fines actually look like
Let me make the fine numbers more concrete. Putting Article 99's three-tier structure side by side with Article 101's GPAI-specific frame yields the following[^3][^4][^9]:
Article 99 (general AI systems):
- Prohibited AI practice violations (Article 5): EUR 35M or 7% of global annual turnover, whichever is higher
- High-risk AI system requirement violations: EUR 15M or 3%, whichever is higher
- Supply of incorrect or misleading information: EUR 7.5M or 1%, whichever is higher
Article 101 (GPAI providers, enforced directly by the European Commission):
- GPAI obligation violations: EUR 15M or 3% of global annual turnover, whichever is higher
Rather than viewing the numbers in the abstract, plug your own consolidated revenue into the formula. For a Japanese enterprise with JPY 1 trillion (around USD 6.5B) in consolidated revenue, the Article 5 ceiling lands at JPY 70B, and the GPAI-obligation ceiling at JPY 30B. For comparison, GDPR maxes out at EUR 20M or 4%—the AI Act sits above that.
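The underlying ceiling logic is simply "the higher of a flat cap and a share of turnover", which fits in a few lines of code or one spreadsheet formula. A minimal sketch, using the illustrative USD 6.5B turnover from above; these are ceilings, not predictions:

```python
def fine_ceiling(turnover: float, flat_cap: float, pct: float) -> float:
    """EU AI Act fine ceilings are the *higher* of a flat cap and a turnover share.
    Use one currency consistently for both monetary arguments."""
    return max(flat_cap, turnover * pct)

turnover = 6.5e9  # illustrative: roughly JPY 1 trillion, as in the example above

print(fine_ceiling(turnover, 35e6, 0.07))   # Art. 99 prohibited practices -> 455,000,000.0
print(fine_ceiling(turnover, 15e6, 0.03))   # Art. 99 high-risk / Art. 101 GPAI -> 195,000,000.0
print(fine_ceiling(turnover, 7.5e6, 0.01))  # Art. 99 misleading information -> 65,000,000.0
```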
From a DPIA / FRIA perspective, the important point is that fines can stack[^22]. Failing to carry out a required DPIA under GDPR can draw up to EUR 10M or 2% of global turnover (Article 83(4)), and failing to perform a FRIA under the AI Act up to EUR 15M or 3%; theoretically, a stacked exposure of up to EUR 25M or 5% is conceivable. In practice, getting fined twice for the same violation is rare, but if separate violations are recognized, both fines apply. Some legal practitioners refer to this as "double penalty exposure".
For comparable historical context, the largest GDPR fines to date have been against Meta (EUR 1.2B in 2023) and Amazon (EUR 746M in 2021). The EU AI Act has only just begun, but the Commission is explicitly required to ensure fines are "dissuasive", and there is a real possibility of a marquee enforcement case in the first year of operation[^3][^23]. My expectation, in line with the broader industry view, is that one or two enforcement actions against major providers, in the high hundreds of millions of euros, will land within the first one to two years.
You should also estimate the cost beyond the fine itself. A market-restriction or recall order means European revenue from that product line drops to zero overnight, with cascading costs for retrieval, alternative provisioning, reputational damage, shareholder communications, and customer support staffing. In my experience, the "spillover costs" exceed the headline fine by a wide margin. Boards that only see the fine number tend to debate the wrong question. Surfacing the spillover-cost estimate alongside the fine is where governance leaders earn their keep.
Four internal-readiness steps before August 2026
If you have read this far, the question becomes "where do I start?" My recommendation is to work in four prioritized steps.
Step 1: Extend the scope of your DPIA (Data Protection Impact Assessment)
If you already run DPIAs under GDPR, start by inventorying your "processing operations involving AI systems" and bringing them into scope. France's CNIL has stated that a DPIA is in principle required when developing or deploying high-risk AI systems under the EU AI Act involving personal-data processing[^22][^24]. For foundation models and GPAI systems, a DPIA is generally required at the development stage because their use cases cannot be exhaustively specified.
Extending the DPIA into a FRIA is also a useful pattern. AI Act Recital 96 explicitly contemplates positioning a FRIA as a complement to an existing DPIA[^20][^22]. Adding AI-specific fields—training data, bias-test results, human-oversight design, status of staff training, output re-identification risk—to your existing GDPR DPIA template lets you cover both with a single document. Threading NIST's AI Risk Management Framework (AI RMF) functions—Govern, Map, Measure, Manage—into the chapters of the DPIA gives you a two-for-one[^25][^26].
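As one way to operationalize this, the AI-specific fields can be grouped under the four AI RMF functions so a single record serves both documents. A minimal sketch; the grouping below is my own illustrative mapping, not NIST's or the Commission's:

```python
# Illustrative mapping of the AI-specific DPIA/FRIA fields named above onto
# the four NIST AI RMF functions (Govern, Map, Measure, Manage).
FRIA_EXTENSION = {
    "Govern": ["human-oversight design", "status of staff training (Article 4)"],
    "Map": ["training data provenance", "intended and foreseeable use cases"],
    "Measure": ["bias-test results", "output re-identification risk"],
    "Manage": ["mitigations and residual-risk sign-off", "incident escalation path"],
}

def missing_fields(dpia_record: dict) -> list[str]:
    """Flag extension fields not yet filled in an existing DPIA record."""
    required = [f for fields in FRIA_EXTENSION.values() for f in fields]
    return [f for f in required if not dpia_record.get(f)]

print(missing_fields({"bias-test results": "see report 2026-Q2"}))  # seven gaps remain
```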
Step 2: Audit your GPAI model vendors
If you are deploying a GPAI internally, audit your upstream vendor's compliance posture on a documentation basis. The check is straightforward: have they signed the Code of Practice; do they publish the Model Documentation Form; can you obtain the training-data summary; is there a systemic-risk evaluation summary; do your contracts contain explicit AI Act compliance clauses; do they commit to quarterly update information? A vendor that fails on these six points puts your own deployer-side documentation duties at risk.
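Those six checks translate directly into a reusable audit record. A minimal sketch, with the vendor answers invented for illustration:

```python
# The six vendor checks named above, as a reusable audit checklist.
VENDOR_CHECKS = [
    "signed the GPAI Code of Practice",
    "publishes the Model Documentation Form",
    "training-data summary obtainable",
    "systemic-risk evaluation summary available",
    "contract contains explicit AI Act compliance clauses",
    "commits to quarterly update information",
]

def audit_gaps(answers: dict[str, bool]) -> list[str]:
    """Return the checks a vendor fails; an empty list is the only green light."""
    return [check for check in VENDOR_CHECKS if not answers.get(check, False)]

# Hypothetical vendor that passes only the first two checks:
print(audit_gaps({VENDOR_CHECKS[0]: True, VENDOR_CHECKS[1]: True}))  # four open items
```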
In McKinsey's 2026 AI Trust Maturity Survey, roughly two-thirds of respondents named "security and risk concerns" the top blocker to scaling agentic AI—above "regulatory uncertainty" and "technical limitations". 74% rated "inaccuracy" a high-relevance risk and 72% rated "cybersecurity" similarly[^27]. In the field, GPAI accuracy and regulatory readiness are now debated on the same page. Skipping vendor audits means you eventually walk into your legal team's office unable to answer "what was this model trained on?"
Step 3: Decide on Code of Practice signature for your own organization
If your company itself is in the GPAI provider role, signing the Code of Practice deserves serious consideration. With 26 signatories so far, including Anthropic, OpenAI, Google, Microsoft, Amazon, IBM, and Mistral AI, the European Commission recognizes signature as a means of demonstrating compliance[^17][^18]. Signing means working within the three-chapter framework (Transparency, Copyright, Safety & Security), which is materially less burdensome than building a unique compliance framework from scratch.
That said, signature carries operational weight: annual reports to file, safety reports for systemic-risk models, and ongoing maintenance of the Model Documentation Form. My own view: if you have a strategy for sustained EU presence with your own model, sign. If your EU presence is limited or one-off, you can defend a bespoke approach. Sign-or-not is a strategic decision that should reach the executive committee.
Step 4: Internal training (AI literacy)
Article 4 sets the AI literacy requirement, mandating sufficient technical understanding, risk awareness, and compliance training for "anyone who operates AI systems"[^6][^15]. The article has been applicable since February 2025, but from August 2026 it becomes subject to enforcement powers. A perfunctory e-learning is not enough; training has to be tailored to actual job duties. The use cases that sales, legal, engineering, and customer support touch are different, so the curriculum has to differ accordingly.
What I recommend: a quarterly refresher, an onboarding session whenever a new AI use case is introduced, and an annual organization-wide comprehension test—the three together. On top of that, place an AI Literacy Officer directly under the executive team and centralize training records. When the EU AI Office asks "produce records of your staff training", you want to be in a position to answer the same day.
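Centralized training records are what make that same-day answer possible. A minimal sketch of one such record, with field names assumed for illustration:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TrainingRecord:
    """One AI-literacy training event, kept centrally so 'produce records of
    your staff training' can be answered the same day."""
    employee_id: str
    role: str               # curriculum differs by duty: sales, legal, engineering, ...
    session: str            # "quarterly refresher" / "use-case onboarding" / "annual test"
    completed_on: date
    score: Optional[float]  # only the annual comprehension test is scored

records = [
    TrainingRecord("E1024", "legal", "quarterly refresher", date(2026, 4, 14), None),
    TrainingRecord("E1024", "legal", "annual test", date(2026, 11, 2), 0.92),
]

# Who has a scored annual comprehension test on file?
tested = {r.employee_id for r in records if r.session == "annual test" and r.score is not None}
```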
TIMEWELL's ZEROCK is built around a design philosophy of running on AWS domestic servers with GraphRAG and data sovereignty, and ships standard with a prompt library, usage logs, and source tracing aligned with internal AI use cases. For governance support, WARP provides end-to-end coverage, from global regulatory mapping (including EU AI Act) to designing the operating model of a governance committee. If your four-step checklist is currently being driven by a single overworked governance lead, bringing in outside expertise typically lowers the total cost of compliance.
Operationalizing AI governance for Japanese enterprises with ZEROCK and WARP
Sustaining the four steps above purely through paper policies and manual review is, realistically, not feasible. Everything the EU AI Act requires—documentation, log retention, risk evaluations, training records, vendor audits—lives in the world of "evidence". Stacking obligations on top of an organization that has no system for producing evidence simply burns out the operators in the field.
TIMEWELL's ZEROCK integrates the three foundations needed for enterprise AI. First, a data-sovereignty foundation that runs the model in a closed environment within the AWS Tokyo region, a placement designed to satisfy the three layers of EU GDPR, Japan's Act on the Protection of Personal Information, and the AI Act simultaneously. Second, source tracing through GraphRAG and knowledge control through a prompt library, implementing output explainability, cross-organizational knowledge management, and user training at the system level. Third, full retention of usage logs and automation of audit response. When a Commission inquiry arrives, the documents required for the response are produced on demand, without putting the operations team in crisis mode.
WARP covers the governance-strategy side. WARP is a monthly-update AI consulting service: ex-major-firm DX and data-strategy specialists support international regulatory mapping (EU AI Act, Japan's AI Business Operator Guidelines, US NIST AI RMF), governance-committee design, and the production of board materials for CTOs and CFOs. The structural challenges Japanese enterprises tend to carry—"global headquarters governance is lagging" or "each local entity is doing its own thing"—are precisely the situations WARP's hands-on experience addresses.
You do not need a perfect system on day one. In my experience, the path is: organize your applicability across the three patterns (EU subsidiary, EU-direct sales, GPAI use for EU customers), then run DPIAs and FRIAs on the three to five highest-risk use cases. That alone gets you to a state where the executive committee can speak credibly about AI governance, within six months. The rest is incremental—quarterly reviews, an annual policy update. It is unglamorous work, but it is from this disciplined base that companies start building durable competitive advantage in the European market.
The EU AI Act is not "rising compliance cost" for globally operating Japanese enterprises—it is "an opportunity to convert governance quality into competitive advantage". As McKinsey's survey shows, AI trust is now a top-of-the-house topic[^27]. Compliance is the floor; on top of that floor, output reliability, explainability, data sovereignty, and training maturity are where companies will differentiate over the next five years. August 2, 2026 is the day that fault line becomes visible.
For an organized walk-through of your own EU AI Act readiness, we offer 30-minute consultations through ZEROCK; for governance strategy more broadly, WARP takes those conversations. For wider context, see also Operationalizing Japan's AI Business Operator Guidelines, SOC 2 / ISO 27001 / ISO 42001 Audit Controls, and Claude Code SOC2 / ISO 27001 Compliance Guide.
References
[^1]: EU Artificial Intelligence Act, "Enforcement of Chapter V under the EU AI Act". https://artificialintelligenceact.eu/enforcement-of-chapter-v-under-the-eu-ai-act/
[^2]: AI Act Service Desk (European Commission), "Article 88: Enforcement of obligations of providers of general-purpose AI models". https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-88
[^3]: EU Artificial Intelligence Act, "Article 99: Penalties". https://artificialintelligenceact.eu/article/99/
[^4]: Holistic AI, "Penalties of the EU AI Act: The High Cost of Non-Compliance" (2025). https://www.holisticai.com/blog/penalties-of-the-eu-ai-act
[^5]: EU Artificial Intelligence Act, "Implementation Timeline". https://artificialintelligenceact.eu/implementation-timeline/
[^6]: DLA Piper, "Latest wave of obligations under the EU AI Act take effect: Key considerations" (August 2025). https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect
[^7]: EU Artificial Intelligence Act, "Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems". https://artificialintelligenceact.eu/article/50/
[^8]: Kennedys Law, "The EU AI Act implementation timeline: understanding the next deadline for compliance" (2026). https://www.kennedyslaw.com/en/thought-leadership/article/2026/the-eu-ai-act-implementation-timeline-understanding-the-next-deadline-for-compliance/
[^9]: EU Artificial Intelligence Act, "Article 101: Fines for Providers of General-Purpose AI Models". https://artificialintelligenceact.eu/article/101/
[^10]: National Law Review, "Extraterritorial Scope of the EU AI Act" (2026). https://natlawreview.com/article/extraterritorial-scope-eu-ai-act
[^11]: Morgan Lewis, "The EU AI Act Is Here—With Extraterritorial Reach" (July 2024). https://www.morganlewis.com/pubs/2024/07/the-eu-artificial-intelligence-act-is-here-with-extraterritorial-reach
[^12]: Afriwise, "Extraterritorial Application of the EU AI Act: What Non-EU Companies Should Know". https://www.afriwise.com/blog/extraterritorial-application-of-the-eu-ai-act-what-non-eu-companies-should-know
[^13]: European Commission, "Guidelines for providers of general-purpose AI models". https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers
[^14]: Latham & Watkins, "EU AI Act: GPAI Model Obligations in Force and Final GPAI Code of Practice in Place" (2025). https://www.lw.com/en/insights/eu-ai-act-gpai-model-obligations-in-force-and-final-gpai-code-of-practice-in-place
[^15]: EU Artificial Intelligence Act, "Article 55: Obligations for Providers of General-Purpose AI Models with Systemic Risk". https://artificialintelligenceact.eu/article/55/
[^16]: European Commission, "The General-Purpose AI Code of Practice". https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai
[^17]: EU AI Act Code of Practice, "Final Version" (July 10, 2025). https://code-of-practice.ai/
[^18]: EU AI Act Newsletter #83, "GPAI Rules Now Apply". https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-83-gpai
[^19]: Herbert Smith Freehills Kramer, "Transparency obligations for AI-generated content under the EU AI Act: From principle to practice" (March 2026). https://www.hsfkramer.com/notes/ip/2026-03/transparency-obligations-for-ai-generated-content-under-the-eu-ai-act-from-principle-to-practice
[^20]: EU Artificial Intelligence Act, "Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems". https://artificialintelligenceact.eu/article/27/
[^21]: EU Artificial Intelligence Act, "Overview of the Code of Practice". https://artificialintelligenceact.eu/code-of-practice-overview/
[^22]: Paperclipped, "DSGVO AI Agents Compliance 2026: DPIA Now Mandatory". https://www.paperclipped.de/en/blog/dsgvo-ai-agents-compliance-2026/
[^23]: White & Case, "Long awaited EU AI Act becomes law after publication in the EU's Official Journal". https://www.whitecase.com/insight-alert/long-awaited-eu-ai-act-becomes-law-after-publication-eus-official-journal
[^24]: CNIL, "Carrying out a data protection impact assessment if necessary". https://www.cnil.fr/en/carrying-out-protection-impact-assessment-if-necessary
[^25]: NIST, "AI Risk Management Framework". https://www.nist.gov/itl/ai-risk-management-framework
[^26]: NIST, "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST.AI.600-1)" (July 2024). https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-generative-artificial-intelligence
[^27]: McKinsey, "State of AI trust in 2026: Shifting to the agentic era" (2026). https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/tech-forward/state-of-ai-trust-in-2026-shifting-to-the-agentic-era
