I'm Ryuta Hamamoto from TIMEWELL.
“We haven't banned ChatGPT, but we haven't endorsed it either. We've just been quietly tolerating it.” I've heard this from CISOs and IT leads more times than I can count over the past ten months.
By May 2026, that posture has clearly broken. The Reco 2025 study found that 71% of knowledge workers use unauthorized AI for work and 68% use ChatGPT through personal accounts; 57% of those have pasted sensitive data. Meanwhile, Japan's Ministry of Economy, Trade and Industry (METI) and Ministry of Internal Affairs and Communications (MIC) published “AI Business Guidelines version 1.2” on March 31, 2026, expanding the framework to cover AI agents and physical AI.
The cost of “just tolerating it” has never been higher. This article unpacks the data and lays out a three-phase policy design aligned with the guidelines.
TL;DR
- Shadow AI's defining feature: data leaks whether you ban it, tolerate it, or say nothing
- Japan's AI Business Guidelines v1.2 now imposes explicit obligations on AI users (i.e., the employer)
- The answer is neither blanket bans nor silent tolerance—it is a three-phase approach: visibility → sanctioned alternatives → continuous education
Why shadow AI doesn't stop
Shadow AI differs from other shadow IT in one critical way: employees can feel the productivity gain themselves.
- Meeting transcripts and summaries: 30 minutes → 5 minutes
- English email drafting: 15 minutes → 5 seconds
- Instant bug fix suggestions for code
Against this lived experience, abstract guardrails like “company policy,” “no-learning contract clauses,” and “data classification policies” are almost powerless.
The Reco 2025 study revealed:
- 71% use unauthorized AI for work
- 68% use ChatGPT through personal accounts
- 57% of those have entered sensitive data
Governance is not missing, and the message has reached employees. The gravitational pull of productivity is simply stronger.
Samsung: what happened, what changed
You cannot discuss shadow AI without referencing the March 2023 Samsung Electronics incident.
Within 20 days, employees pasted confidential material into ChatGPT in three separate incidents: semiconductor measurement source code, internal meeting recordings, and confidential transcripts. Samsung banned internal ChatGPT use entirely and pivoted to building its own internal AI assistant.
The lesson is not “ban it because Samsung did.” The lesson is that even companies brave enough to ban will see operations stall if a sanctioned alternative isn't ready in time. Samsung managed because it had the in-house resources to swap quickly. Most mid-market Japanese companies do not.
Japan AI Business Guidelines v1.2 — March 2026 highlights
Version 1.2 (March 31, 2026) follows v1.0 (April 2024) and v1.1 (March 2025). Key updates:
| Lens | What's new in v1.2 |
|---|---|
| Subject parties | Three distinct categories—AI developers, AI providers, and AI users (businesses)—are clarified |
| AI agents | Responsibility allocation when AI agents act autonomously |
| Physical AI | Considerations for AI acting in the physical world (robots, autonomous vehicles) |
| Common principles | Human-centric, safety, fairness, privacy, security, transparency, explainability, education and literacy, accountability, innovation |
The crucial change: the AI user (the employing business) now carries explicit responsibility. “We didn't build it, so it doesn't apply to us” is no longer a defense. Every company using ChatGPT for work is in scope as a user.
Beyond bans vs. tolerance — a three-phase approach
The real answer goes beyond the binary.
Phase 1: Visibility
Start by seeing who is using what, and how much, inside the company.
Tooling:
- CASB / SSE for visibility into generative AI services (Netskope, Zscaler, etc.)
- Proxy / DNS log analysis for access to chat.openai.com, claude.ai, copilot.microsoft.com, etc.
- Endpoint DLP (Symantec DLP, Microsoft Purview) to detect transmission of sensitive data
Policy without visibility is a sermon you cannot measure.
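Before procuring a CASB, you can get a first read from logs you already collect. Below is a minimal sketch in Python, assuming a tab-separated proxy export with timestamp, user, and destination-host columns; the column layout and domain list are illustrative assumptions, not any vendor's format.

```python
# shadow_ai_scan.py: count generative AI traffic per user in a proxy log export.
# Assumes a tab-separated file with columns: timestamp, user, dest_host.
# The column layout and domain list are illustrative, not a vendor format.
import csv
from collections import Counter

# Known generative AI endpoints to flag; extend as new services appear.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "copilot.microsoft.com", "gemini.google.com",
}

def scan(log_path: str) -> Counter:
    """Return per-user hit counts against the AI domain list."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 3:
                continue  # skip malformed lines
            _timestamp, user, dest_host = row[:3]
            # Match the domain itself or any subdomain of it.
            if any(dest_host == d or dest_host.endswith("." + d) for d in AI_DOMAINS):
                hits[user] += 1
    return hits

if __name__ == "__main__":
    for user, count in scan("proxy_export.tsv").most_common(10):
        print(f"{user}\t{count}")
```

Even this crude count answers the first governance question: how many people, and which teams, are already there.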
Phase 2: Sanctioned alternatives
In parallel with visibility, provide an approved generative AI environment. Without this, every ban pushes employees back to personal accounts.
Options:
- ChatGPT Enterprise / Microsoft 365 Copilot / Claude for Work — SaaS with clear DPAs and no-training contracts
- Azure OpenAI Service / Amazon Bedrock — LLM usage within your cloud
- TIMEWELL ZEROCK — Enterprise AI with GraphRAG, in-Japan AWS hosting, and knowledge controls
Non-negotiables:
- Contractual guarantee that inputs are not used for training
- SSO/IdP integration to distinguish corporate from personal accounts
- Log capture and DLP integration (see the gateway sketch after this list)
- Use-case-specific system prompt management
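For the log-capture and system-prompt points above, a thin internal gateway is one common pattern: it injects a vetted system prompt, runs a crude DLP check, and logs every request before forwarding to the sanctioned endpoint. A minimal sketch, assuming the OpenAI Python SDK (v1) pointed at whatever tenant your contract covers; the block patterns, model name, and log path are illustrative placeholders.

```python
# ai_gateway.py: sanctioned-AI wrapper with a DLP check, a vetted system
# prompt, and request logging. Block patterns, model name, and log path
# are illustrative placeholders, not production values.
import logging
import re

from openai import OpenAI  # pip install openai (v1 SDK)

logging.basicConfig(filename="ai_usage.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Crude DLP patterns: credit-card-like numbers and 12-digit IDs (e.g. My Number).
BLOCK_PATTERNS = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),
    re.compile(r"\b\d{12}\b"),
]

SYSTEM_PROMPT = "You are the company assistant. Never reproduce confidential data."

# Reads OPENAI_API_KEY; set base_url to route through your sanctioned tenant.
client = OpenAI()

def ask(user: str, prompt: str) -> str:
    """Forward a prompt to the sanctioned model, blocking obvious sensitive data."""
    if any(p.search(prompt) for p in BLOCK_PATTERNS):
        logging.info("BLOCKED user=%s reason=dlp_pattern", user)
        return "Blocked: the prompt appears to contain sensitive data."
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder: whichever model your contract covers
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": prompt}],
    )
    logging.info("OK user=%s prompt_chars=%d", user, len(prompt))
    return resp.choices[0].message.content or ""
```

A real deployment would do DLP server-side (CASB or Purview, not regexes) and attach the user identity from SSO rather than a function argument; the point is that logging and prompt control live in one choke point you own.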
Phase 3: Continuous literacy
Policy and infrastructure are not enough. If employees don't understand why a rule exists, deviations re-emerge on schedule.
Pillars:
- Add a 30-minute generative AI usage module to onboarding
- Twice-yearly sharing of real-world incidents (Samsung, Air Canada, hallucination lawsuits)
- Executives modeling correct behavior—“I use ChatGPT for meeting summaries too, but only through our Enterprise tenant”
- Publish violation handling outcomes (anonymized, framed for prevention)
A 10-point checklist for an internal policy
The minimum a v1.2-aligned policy should cover:
- Scope: employees, contractors, outsourced staff, and vendor partners
- Whitelisted AI services: maintained on a rolling basis (see the sketch after this list)
- Prohibited inputs: PII, customer lists, non-public financials, source code, contracts, medical data, etc.
- Output verification: treat AI output as untrusted input, require source citations, and mandate human review
- External disclosure obligation: label AI-generated content (publications, advertising, articles)
- Incident reporting channel: where to report accidental sensitive-data inputs, problematic outputs, and hallucinations
- Monitoring and audit: employee consent for log capture
- Sanctions: tiered response from minor to severe
- Review cadence: quarterly revision review
- Responsibility split: Legal, Compliance, IT, HR
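The whitelist and the prohibited-input categories are easier to keep current when they live as machine-readable data rather than prose in a PDF, since the same file can feed the gateway above and the quarterly review. A minimal sketch with illustrative entries:

```python
# ai_policy.py: the whitelist and prohibited-input categories from the
# checklist as machine-readable data. All entries are illustrative.
POLICY_VERSION = "2026-05-01"  # bump at each quarterly review

APPROVED_SERVICES = {
    "chatgpt-enterprise": {"sso_enforced": True, "dpa_signed": True},
    "m365-copilot": {"sso_enforced": True, "dpa_signed": True},
}

PROHIBITED_INPUT_CATEGORIES = [
    "pii", "customer_lists", "non_public_financials",
    "source_code", "contracts", "medical_data",
]

def is_approved(service: str) -> bool:
    """A service counts as sanctioned only with SSO enforced and a DPA signed."""
    entry = APPROVED_SERVICES.get(service, {})
    return bool(entry.get("sso_enforced") and entry.get("dpa_signed"))
```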
How WARP SECURITY treats this
TIMEWELL's WARP SECURITY program treats the Samsung-style leakage scenario as the first of its five simulated incident exercises.
In Executive DAY, leaders role-play the first 72 hours after shadow AI is discovered. Responsibility splits between IT, Legal, HR, and Communications are debated and designed on the spot.
In Practitioner DAY, we go hands-on with CASB deployment, DLP configuration, sanctioned AI selection criteria (including ZEROCK), and SSO-based personal-account separation.
Participants receive a policy template fully aligned with Japan's AI Business Guidelines v1.2 as a course benefit.
Summary
- Shadow AI use (71% of knowledge workers) will not be stopped by the “ban vs. tolerate” binary
- Japan's AI Business Guidelines v1.2 imposes explicit responsibilities on AI users
- The answer is the three-phase approach: visibility → sanctioned alternatives → continuous education
- Companies that feel safe because “we banned it” are the most exposed
Shadow AI is a management problem, not a technical one. Policy without visibility is decoration; bans without alternatives are hollow; rules without ongoing education go stale. The three phases must run in parallel.
