20 Frequently Asked Questions on Enterprise AI Security
Hamamoto, TIMEWELL.
AI is now part of everyday business operations — but security concerns are growing alongside adoption. For the first time, IPA's "Top 10 Information Security Threats 2026" includes AI-related security risks. "I want to use AI, but I'm worried about data leaks." "Our company has no AI security policy." "I wouldn't know how to answer if auditors asked." This article addresses those concerns head-on with 20 questions about enterprise AI security.
Data Leak Risks
Q1: If I enter confidential company information into an AI, could it leak?
Yes, it could. With publicly available AI services (such as the free version of ChatGPT), input data may be used for model training — meaning what you type could eventually appear in responses to other users. Enterprise plans and API access typically have contractual provisions preventing this, but always verify the terms of service. Assuming "enterprise plan = safe" without checking is the riskiest thing you can do.
Q2: I'm concerned about shadow IT — employees using AI tools without company approval.
That's a legitimate concern. At one client company we surveyed, more than 30% of employees were using external AI tools for work without company authorization. There are three countermeasures: explicitly communicate which AI tools the company approves; establish a policy prohibiting unapproved tools; and monitor usage regularly. Prohibition alone doesn't work. Unless you pair the ban with approved tools employees actually find useful, they'll simply keep using unapproved ones quietly.
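As one concrete way to "monitor usage regularly," a sketch like the following can flag traffic to known consumer AI endpoints in an exported proxy log. This is a minimal illustration: the domain list and the log's column names (user, host) are assumptions, not a definitive inventory, so adapt both to your proxy's actual export format.

```python
import csv
from collections import Counter

# Illustrative list only; maintain your own inventory of consumer AI domains.
AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT (consumer)",
    "gemini.google.com": "Gemini (consumer)",
    "claude.ai": "Claude (consumer)",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests to known AI tool domains in a CSV proxy log.

    Assumes one row per request with 'user' and 'host' columns;
    adjust the parsing to match your proxy's real schema.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_TOOL_DOMAINS.get(row["host"])
            if tool:
                hits[(row["user"], tool)] += 1
    return hits

if __name__ == "__main__":
    for (user, tool), count in scan_proxy_log("proxy_log.csv").most_common():
        print(f"{user}\t{tool}\t{count} requests")
```

A report like this is a conversation starter, not a disciplinary tool: it tells you which approved alternatives to prioritize.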
Q3: What information should never be entered into ChatGPT or Copilot?
Here's a list of information that should be off-limits:
- Personal data (names, addresses, phone numbers, etc.)
- Customer information (client names, contract details, etc.)
- Confidential business information (products under development, business strategy, etc.)
- Credentials (user IDs, passwords, API keys, etc.)
My personal recommendation: make "when in doubt, don't enter it" your default rule. It's the simplest and most effective approach.
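A lightweight pre-submission filter can enforce part of that rule mechanically. The sketch below is a minimal illustration using a few regular expressions; the patterns and categories are assumptions for demonstration, and a production DLP control would need far broader coverage and tuning.

```python
import re

# Illustrative patterns only; real DLP needs much broader coverage.
BLOCK_PATTERNS = {
    "phone number": re.compile(r"\b0\d{1,4}-\d{1,4}-\d{3,4}\b"),  # JP-style numbers
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api key": re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
}

def check_prompt(text: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [label for label, pattern in BLOCK_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this: contact Tanaka at 03-1234-5678 or tanaka@example.co.jp"
findings = check_prompt(prompt)
if findings:
    # "When in doubt, don't enter it": block and tell the user why.
    print(f"Blocked before sending to the AI service: {', '.join(findings)}")
```

Pattern matching catches credentials and contact details reasonably well; confidential business information has no regex, which is exactly why the human default rule still matters.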
Q4: Have AI-related data leaks actually happened?
Yes. The 2023 Samsung incident — where employees entered semiconductor trade secrets into ChatGPT — is well-known. There have also been cases where confidential information leaked through the responses of internal chatbots. In most cases, the information wasn't entered maliciously — it was entered because the AI was convenient. That's exactly why you need systems to prevent it, not just policies.
Q5: Which has stronger security — cloud AI or on-premises AI?
This question comes up constantly, and the honest answer is that neither is categorically superior. Cloud AI benefits from robust security infrastructure provided by the vendor, but the data does leave your premises. With on-premises AI, data stays internal, but you own the security responsibility. A practical approach for IT leaders: handle high-sensitivity data processing on-premises or through domestic cloud providers, and use cloud services for general business tasks. ZEROCK runs in the AWS Tokyo region, so data never leaves Japan — which is a meaningful reassurance for many organizations.
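One way to operationalize that split is a classification-based router. The sketch below is a minimal illustration under assumed conditions: the Sensitivity levels and endpoint URLs are hypothetical placeholders for your actual on-premises and cloud deployments.

```python
from enum import Enum

class Sensitivity(Enum):
    GENERAL = 1        # public or low-impact business content
    CONFIDENTIAL = 2   # customer data, trade secrets, personal data

# Hypothetical endpoints; substitute your actual deployments.
ROUTES = {
    Sensitivity.GENERAL: "https://cloud-llm.example.com/v1/chat",            # cloud service
    Sensitivity.CONFIDENTIAL: "https://llm.internal.example.co.jp/v1/chat",  # on-prem / domestic
}

def route(prompt: str, sensitivity: Sensitivity) -> str:
    """Pick the endpoint by data classification, not by convenience."""
    endpoint = ROUTES[sensitivity]
    print(f"Sending to {endpoint}")
    return endpoint

route("Draft a meeting agenda", Sensitivity.GENERAL)
route("Review this customer contract clause", Sensitivity.CONFIDENTIAL)
```

The design point is that the classification decision is made explicitly at the boundary, rather than leaving each employee to guess which tool is safe for which data.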
Data Protection
Q6: Where is the data I enter into an AI stored?
It depends on the service. For cloud-based AI, data is sent to the service provider's servers. Enterprise plans typically specify data retention periods and storage locations in the contract. The one thing to always verify: is the data processed in domestic data centers, or is it transferred overseas? That single factor significantly changes the risk profile.
Q7: What does "encryption" actually cover?
You need to check two things: encryption in transit and encryption at rest. Encryption in transit (TLS/SSL) is standard, but at-rest encryption varies by service. When selecting an enterprise AI service, confirm that both types of encryption are in place.
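Of the two, encryption in transit is the one you can verify yourself. The following sketch uses Python's standard ssl module to report the negotiated TLS version and cipher for an endpoint; at-rest encryption can only be confirmed through the vendor's contract and documentation. The endpoint shown is just an example.

```python
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    """Report the negotiated TLS version and cipher for a service endpoint.

    This verifies encryption in transit only; encryption at rest must be
    confirmed through the vendor's contract and documentation.
    """
    context = ssl.create_default_context()  # also validates the certificate chain
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, _, bits = tls.cipher()
            print(f"{host}: {tls.version()}, {cipher_name} ({bits}-bit)")

check_tls("api.openai.com")  # example endpoint; test the services you actually use
```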
Q8: What does Japan's Act on the Protection of Personal Information mean for AI use?
When entering personal information into AI, you must ensure it falls within the stated purpose of use and that appropriate safety management measures are in place. Sending personal information to a cloud AI service may qualify as "entrustment," which creates supervisory obligations. In light of the 2025 amendments to Japan's personal information protection law, coordinate with your legal team to get this sorted.
Q9: Do we need to comply with GDPR, APEC CBPR, or other international regulations?
If you handle data belonging to EU residents, or if your organization has offices in the EU, GDPR compliance is required. Cross-border data transfer restrictions mean you should verify whether the AI service's data processing locations comply with applicable regulations. If your business is entirely domestic, domestic law is your primary concern — but if you have global ambitions, addressing this early is advisable.
Internal Policy
Q10: What should an AI use policy cover?
At minimum, five things: a list of approved AI tools; a definition of information that must not be input; rules for using AI output (including human review requirements); the reporting process for incidents; and consequences for violations. Build in a review cycle of every three to six months. AI evolves fast — a policy written six months ago is already out of date.
Q11: Should the policy be uniform across the company, or differentiated by department?
The recommended structure is: core principles that apply company-wide, with operational rules tailored by department. "Don't enter confidential information into AI" is a universal rule. "Which tools to use for which tasks" depends on what each department does. Legal, HR, and other high-sensitivity functions may warrant additional restrictions on top of the baseline.
Q12: Are there public guidelines we can use as a starting point for policy development?
The most useful resource is the "AI Business Guidelines (Version 1.1)" issued jointly by Japan's Ministry of Economy, Trade and Industry (METI) and Ministry of Internal Affairs and Communications (MIC). Published in March 2025, it clearly lays out the requirements for AI developers, providers, and users respectively — and serves as a solid foundation for building your own internal policy.
Audit Readiness
Q13: What do we need to prepare if auditors ask about our AI use?
Four things: documentation of your AI use policy; a list of AI tools in use along with records confirming their security requirements; access logs and usage logs; and incident response history. In short, you need to be able to explain which tools are being used, by whom, for what purpose, and how they're being managed. Honestly, a lot of organizations haven't gotten this far yet.
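Usage logs don't need to be elaborate to satisfy auditors. A minimal sketch, assuming the questions are who used which tool, for what purpose, and when, could append JSON-line records like the following; the field names and file path are illustrative, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

def log_ai_usage(user: str, tool: str, purpose: str, path: str = "ai_usage.jsonl") -> None:
    """Append one auditable AI usage record as a JSON line.

    Field names are illustrative; align them with what your auditors
    actually ask for (who, which tool, what purpose, when).
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "purpose": purpose,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_ai_usage("hamamoto", "ZEROCK", "summarize internal meeting notes")
```

Append-only JSON lines are easy to ship to a SIEM later, and easy to hand over as-is when an auditor asks for evidence.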
Q14: How do we ensure the explainability of AI decision-making?
If you're using RAG (Retrieval-Augmented Generation), attaching the source information the AI referenced to each response provides a basis for the AI's reasoning — that's the first step toward explainability. The second step: maintain logs that track which document and which section each AI response is based on. With these two elements in place, you can clearly explain to auditors how the AI arrived at its conclusions.
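To make the shape of this concrete, here is a minimal sketch of a response object that carries its sources and writes the audit record described above. The actual retrieval and generation calls are deliberately omitted (they depend on your stack), and SourceRef and the file name are hypothetical.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SourceRef:
    document: str   # which document the retrieved chunk came from
    section: str    # which section within that document

def answer_with_sources(question: str, retrieved: list[tuple[str, SourceRef]]) -> dict:
    """Build a response object that carries its evidence with it.

    `retrieved` pairs each chunk of text with its SourceRef; the LLM call
    itself is stubbed out here and depends on your stack.
    """
    context = "\n".join(chunk for chunk, _ in retrieved)
    answer = f"(model output grounded in {len(retrieved)} retrieved chunks)"  # placeholder
    response = {
        "question": question,
        "answer": answer,
        "sources": [asdict(ref) for _, ref in retrieved],
    }
    # Step two: persist the attribution so auditors can trace every answer.
    with open("rag_audit.jsonl", "a") as f:
        f.write(json.dumps(response, ensure_ascii=False) + "\n")
    return response

refs = [("Travel expenses are capped at ...", SourceRef("expense_policy.pdf", "3.2 Travel"))]
print(answer_with_sources("What is the travel expense cap?", refs))
```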
Q15: How should AI be integrated into internal controls?
When embedding AI in business processes, the foundational design principle is: "AI output → human review → approval." Build a workflow that prevents AI output from becoming a final judgment without human oversight — and verify periodically that the workflow is actually functioning. That's how AI fits into internal controls.
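As an illustration of that control point in code, the sketch below wraps AI output in a small state machine so it cannot be published without an explicit human approval step. The class and state names are hypothetical; the point is that the "no approval, no final output" rule is enforced by the system, not by convention.

```python
from enum import Enum, auto

class Status(Enum):
    AI_DRAFT = auto()
    PENDING_REVIEW = auto()
    APPROVED = auto()
    REJECTED = auto()

class ReviewedOutput:
    """Wraps AI output so it cannot be finalized without a human approver."""

    def __init__(self, content: str):
        self.content = content
        self.status = Status.AI_DRAFT
        self.approver: str | None = None

    def submit_for_review(self) -> None:
        self.status = Status.PENDING_REVIEW

    def approve(self, reviewer: str) -> None:
        if self.status is not Status.PENDING_REVIEW:
            raise ValueError("Output must be submitted for review first")
        self.status, self.approver = Status.APPROVED, reviewer

    def publish(self) -> str:
        # The control point: no human approval, no final output.
        if self.status is not Status.APPROVED:
            raise PermissionError("AI output cannot be finalized without human approval")
        return self.content

doc = ReviewedOutput("AI-drafted customer reply ...")
doc.submit_for_review()
doc.approve(reviewer="hamamoto")
print(doc.publish())
```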
Regulatory Landscape
Q16: Are there AI-specific laws in Japan?
As of February 2026, Japan has no law imposing binding, penalty-backed regulatory obligations specific to AI. However, existing laws — including the Act on the Protection of Personal Information, the Unfair Competition Prevention Act, and the Copyright Act — apply to AI use. The government is advancing discussions toward legislation, and there's a reasonable possibility that formal AI regulations based on the AI Business Guidelines will be introduced in the future.
Q17: Does the EU AI Act apply to Japanese companies?
If you offer AI products or services within the EU, yes. The EU AI Act, enacted in 2024, classifies AI by risk level and imposes strict obligations on high-risk AI systems. Companies with EU-facing business should determine which risk category their AI falls into.
Q18: Who owns the copyright in AI-generated content?
Under Japan's Copyright Act, content autonomously generated by AI is not subject to copyright protection. However, if a human uses AI as a tool and contributes meaningfully to the creative process, the output may qualify as a copyrightable work. When publishing or selling AI-generated output, consult your legal team first.
Q19: Is AI being weaponized for cyberattacks?
The risk is rising. AI-crafted phishing emails are becoming more sophisticated, deepfake-based impersonation is increasing, and AI is being used to automate vulnerability scanning. The reality is that defenders need AI-based threat detection and log analysis to keep up — you can't manage it manually anymore. The mindset of "fight AI with AI" is increasingly practical, not theoretical.
Q20: What should we do right now?
Three things. First, develop an AI use policy — if you don't have one, start today. Second, review the security of AI tools already in use (contractual terms, data handling). Third, run AI literacy training for employees, including security awareness. It doesn't need to be perfect — just create a "minimum viable ruleset" and iterate from there. Waiting on this means you'll be scrambling when an incident happens.
Summary
Key points for enterprise AI security:
- Entering confidential information into AI carries leak risk. Even on enterprise plans, always verify the terms of service
- Shadow IT countermeasures need both a prohibition policy AND approved alternatives — prohibition alone doesn't work
- AI use policy works best as two layers: company-wide principles + department-level operational rules
- Audit readiness hinges on maintaining proper logs and ensuring explainability
- Regulations are evolving — build in a review cycle every three to six months
AI security isn't something you perfect before you start using AI — it's something you build as you go. The first step is creating a draft AI use policy. For a secure AI environment, ZEROCK runs in the AWS Tokyo region, never uses your data for model training, and fully maintains access and usage logs — covering your audit requirements out of the box.
References
- IPA "Top 10 Information Security Threats 2026," January 2026
- METI/MIC "AI Business Guidelines (Version 1.1)," March 2025
- NRI Secure "Data Security in the Age of Generative AI," 2025
