EU AI Act Goes Live August 2026: Fines up to €35M — Five Steps Japanese Companies Should Take Now

2026-05-15 · Ryuta Hamamoto

In August 2026, EU AI Act obligations for high-risk AI systems come into full force, with fines up to €35M or 7% of global revenue. We cover the three extraterritorial patterns that catch Japanese companies, the four risk categories, and a five-step implementation roadmap anchored to ISO/IEC 42001.


I'm Ryuta Hamamoto from TIMEWELL.

“EU AI Act? We don't have any EU operations, so it doesn't apply, right?” Until about three months ago, 80-90% of Japanese executives I spoke to reacted this way.

That reading is wrong. On August 2, 2026, the bulk of obligations for high-risk AI systems come into full force, and fines top out at €35M or 7% of global revenue, whichever is higher. Reading the text carefully reveals three patterns under which Japanese-headquartered companies are caught extraterritorially.

In practice, companies to which the Act genuinely doesn't apply are the minority. This article covers the live timeline, when Japanese companies are in scope, and a five-step roadmap to start now.

TL;DR

  • EU AI Act is rolling out in stages: prohibited AI and AI literacy obligations from February 2025, General-Purpose AI (GPAI) from August 2025, most high-risk obligations from August 2, 2026
  • Japanese companies are caught extraterritorially in three patterns: (1) AI products/services placed on the EU market, (2) AI outputs used in the EU, (3) AI used by an EU subsidiary
  • Fines reach €35M or 7% of global revenue, well above GDPR's €20M
  • The pragmatic path forward is a five-step approach anchored to ISO/IEC 42001 with high-risk requirements layered on top

Get the EU AI Act timeline right

EU AI Act took effect on August 1, 2024, but obligations apply in waves:

  • Feb 2, 2025: Bans on prohibited AI (social scoring, emotion recognition in workplaces/schools, etc.); AI literacy obligation (workforce training)
  • Aug 2, 2025: GPAI provider obligations; fines and penalties become enforceable
  • Aug 2, 2026: Most high-risk AI obligations (conformity assessment, technical documentation, logging, human oversight, data governance, transparency, etc.)
  • Aug 2, 2027: Additional requirements for high-risk AI embedded in products already covered by EU safety legislation (toys, medical devices, vehicles)

As of May 2026, the prohibited AI, AI literacy, GPAI provider, and penalty provisions are already live. The remaining major event is the August 2, 2026 high-risk deadline—roughly three months away.

Penalty structure—heavier than GDPR

Article 99 sets tiered penalties:

  • Use of prohibited AI (social scoring, etc.): €35M or 7% of global revenue (whichever is higher)
  • Major violations of high-risk AI obligations: €15M or 3% of global revenue
  • Supplying false information to authorities: €7.5M or 1% of global revenue

GDPR maxed at €20M / 4% of global revenue. EU AI Act tops out at €35M / 7%. The EU is signaling—through the size of the fines—its intention to set the global rule for AI.

Three extraterritorial patterns that catch Japanese companies

Article 2 defines scope. The main patterns where Japan-headquartered companies are caught:

Pattern 1: AI products/services on the EU market

A Japanese company places AI systems or AI models into the EU market (sales, SaaS, API, free of charge—all count).

Examples:

  • A cosmetics maker uses AI image recommendations on its EU e-commerce site
  • An automotive parts maker embeds AI control in products sold to EU OEMs
  • A SaaS company offers generative AI features to EU customers

Pattern 2: AI outputs used inside the EU

AI outputs (predictions, classifications, images) are used within the EU.

Examples:

  • A recruitment AI run from Japan evaluates employees at EU subsidiaries
  • A Japan-based credit scoring AI feeds transaction decisions at an EU subsidiary
  • Marketing AI analyses run in Japan feed advertising targeting in the EU

Pattern 3: An EU subsidiary becomes a “deployer”

If an EU subsidiary uses an AI system in its operations, the subsidiary becomes a “deployer” with its own obligations—separate from headquarters' risk management.

Examples:

  • EU subsidiary uses AI-based performance reviews
  • EU subsidiary uses CRM-embedded generative AI features
  • EU subsidiary uses AI agents for internal document summarization

Even if your company doesn't ship products to the EU, AI used daily at an EU subsidiary is often in scope.

Four risk categories and their obligations

EU AI Act classifies AI into four risk tiers:

  • Prohibited (social scoring, subliminal manipulation, emotion recognition in workplace/school, etc.): use forbidden; already in force since Feb 2025
  • High-risk (AI in recruitment/HR, credit scoring, medical diagnosis, education, critical infrastructure operation, law enforcement): conformity assessment, technical documentation, logging, human oversight, data governance, transparency, etc.; in force from Aug 2026
  • Limited risk (chatbots, deepfakes, AI-generated content): transparency obligation (disclosure)
  • Minimal risk (spam filters, in-game AI, etc.): voluntary; code of conduct encouraged

For most Japanese companies, the practical issues are high-risk and limited-risk. Companies using recruitment / HR AI, credit scoring, or medical diagnosis aids need to plan for compliance by August 2, 2026.

A five-step implementation roadmap via ISO/IEC 42001

Reading the EU AI Act text alone is too abstract to act on. The pragmatic answer: use ISO/IEC 42001 (the AI Management System standard, published December 2023) as the scaffolding and layer EU AI Act high-risk requirements on top.

Step 1: AI system inventory (1-2 months)

Enumerate every AI system the company develops or uses.

  • In-house, SaaS, embedded—all of it
  • Purpose, business unit, data involved
  • Tentative EU AI Act risk classification

Deliverable: AI system registry (Excel is fine).
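A registry like this is just structured rows, so it can start as code as easily as a spreadsheet. The sketch below shows one possible schema, assuming nothing beyond the fields listed above; every field name here is illustrative, not an official template.

```python
import csv
from dataclasses import dataclass, asdict, fields

# Minimal sketch of a Step-1 AI system registry entry.
# All field names are illustrative assumptions, not a prescribed format.
@dataclass
class AISystemEntry:
    name: str              # e.g. "Resume screening assistant"
    source: str            # "in-house" | "saas" | "embedded"
    purpose: str           # business purpose of the system
    business_unit: str     # owning department
    data_categories: str   # kinds of data the system processes
    eu_touchpoint: str     # how (if at all) the system touches the EU
    tentative_risk: str    # "prohibited" | "high" | "limited" | "minimal" | "tbd"

def export_registry(entries, path):
    """Write the registry to CSV so it can live alongside an Excel sheet."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(AISystemEntry)]
        )
        writer.writeheader()
        writer.writerows(asdict(e) for e in entries)

entry = AISystemEntry(
    name="Resume screening assistant",
    source="saas",
    purpose="Shortlist applicants for HR",
    business_unit="HR",
    data_categories="CVs, interview notes",
    eu_touchpoint="Used for hires at EU subsidiary",
    tentative_risk="high",
)
export_registry([entry], "ai_registry.csv")
```

Starting from a typed schema rather than free-form Excel columns makes Step 2's classification pass easier to automate later.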

Step 2: Identify high-risk and limited-risk systems (2-3 weeks)

Map each inventoried AI system against EU AI Act Annex III (high-risk use-case list).

Pay extra attention to:

  • AI involved in hiring, promotion, firing → high-risk
  • AI for education assessment or admissions → high-risk
  • AI assisting medical diagnosis or treatment → high-risk
  • AI for credit scoring or lending decisions → high-risk
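With a registry in hand, the Annex III mapping can be given a mechanical first pass. The sketch below flags entries whose stated purpose matches the high-risk keywords above; the keyword list is an assumption for illustration, and a heuristic like this only triages — legal review makes the final classification.

```python
# Rough triage helper for Step 2: flag registry entries whose stated purpose
# matches Annex III-style high-risk use cases. The keyword list is an
# illustrative assumption; a substring heuristic is only a first pass and
# legal review makes the final call.
HIGH_RISK_KEYWORDS = {
    "hiring", "promotion", "firing", "recruitment",   # employment
    "admissions", "exam scoring",                     # education
    "diagnosis", "treatment",                         # medical
    "credit scoring", "lending",                      # creditworthiness
}

def triage(purpose: str) -> str:
    """Return a tentative tier: 'high (review)' or 'needs manual review'."""
    text = purpose.lower()
    if any(kw in text for kw in HIGH_RISK_KEYWORDS):
        return "high (review)"
    return "needs manual review"

print(triage("AI-assisted credit scoring for consumer lending"))
```

Note that even systems that match no keyword still need a human look — Annex III is a legal text, not a keyword list.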

Step 3: Build the ISO/IEC 42001 AIMS scaffold (3-6 months)

Implement core ISO 42001 requirements:

  • Board-approved AI governance policy
  • Documented AI risk assessment procedures
  • Defined AI development and operations lifecycle
  • Incident management process
  • Education and literacy plan

Step 4: Fulfill high-risk-specific requirements (3-6 months)

Implement EU AI Act Chapter III, Section 2 obligations:

  • Risk management system (Article 9)
  • Data governance (Article 10)
  • Technical documentation (Article 11, Annex IV)
  • Logging (Article 12)
  • Transparency and user information (Article 13)
  • Human oversight (Article 14)
  • Accuracy, robustness, security (Article 15)
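The logging obligation (Article 12) is the most directly implementable of these. As one possible shape, the sketch below records each inference as an append-only JSON Lines event; the field set, file format, and retention policy are all assumptions for illustration — what must actually be captured, and for how long, is a decision for your compliance team.

```python
import json
import time
import uuid

# Sketch of Article 12-style event logging: each inference is recorded with
# a timestamp, an input reference, the output, and the responsible operator,
# in append-only JSON Lines. The field set here is an illustrative
# assumption, not the regulation's required schema.
def log_inference(log_path, system_id, input_ref, output, operator):
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "input_ref": input_ref,   # reference to the input, not the data itself
        "output": output,
        "operator": operator,     # who triggered the run (human oversight trail)
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["event_id"]

event_id = log_inference(
    "audit.jsonl", "resume-screener-v2",
    "applicant-1234", "shortlisted", "hr-reviewer-07",
)
```

Logging references rather than raw inputs keeps the audit trail useful without turning the log itself into a personal-data liability.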

Step 5: Conformity assessment and ongoing operations (continuous)

  • Conformity assessment for each high-risk AI system (Article 43)
  • CE marking where applicable
  • Serious incident reporting to authorities (Article 73)
  • Audit log retention and periodic review

Achieving ISO 42001 certification provides evidence of “appropriate AI management” in the EU AI Act sense. That is why PwC, EY, and Deloitte sell ISO 42001 certification support as a high-priced product.

Executive decisions required

EU AI Act compliance requires explicit executive decisions:

  1. Scope determination: enumerate all AI at the Japan headquarters that touches the EU
  2. Owner appointment: name a CAIO or an AI governance lead under the CISO
  3. Budget allocation: end-to-end ISO 42001 + EU AI Act compliance runs ¥30M-100M annually for a mid-sized company
  4. Timeline declaration: hit August 2, 2026 or accept staged compliance—either way, decide with risk acceptance in hand
  5. Compliance vs. business: partial exit or reduction of EU-targeted services is a valid management decision

How WARP SECURITY treats this

TIMEWELL's WARP SECURITY Executive DAY covers the EU AI Act timeline, the penalty structure, and the relationship to ISO 42001, and includes a hands-on AI system inventory workshop.

Participants receive a Step-1 AI registry template and an ISO 42001 quick-check roadmap as course materials.

Summary

  • August 2, 2026: EU AI Act high-risk obligations enter full force
  • Japanese companies are caught in three extraterritorial patterns—“not us” is a risky reading
  • Fines reach €35M or 7% of global revenue—above GDPR
  • Pragmatic path: a five-step approach via ISO/IEC 42001—inventory, classification, AIMS scaffold, high-risk requirements, conformity assessment
  • Three months to go. Without executive decisions, you will miss it

EU AI Act gets characterized as “overly aggressive,” but the U.S. Executive Order and Japan's AI Business Guidelines v1.2 are heading the same direction. The only choice is whether to do it now or later.

