I'm Hamamoto from TIMEWELL. A quick note on something that has been making waves in tech circles.
You may have heard the name OpenClaw recently. Unlike a chatbot that simply answers questions, OpenClaw is an AI agent framework that can operate your PC and execute tasks autonomously — your personal AI secretary. The enthusiasm from developers around the world is understandable.
But behind that powerful capability lie serious security risks. OpenClaw instances exposed to the internet with no authentication. Malicious code executed on users' machines. These are no longer theoretical scenarios; they have already happened.
How can you safely, and cheaply, build your own AI assistant at home? And is enterprise use genuinely off the table? I'll dig into specific incidents and countermeasures.
What OpenClaw Is
In late 2025, Austrian developer Peter Steinberger built an open-source project as a weekend side project. It started as Clawdbot, became Moltbot over trademark concerns, and landed as OpenClaw. The name changed, but the momentum hasn't stopped. In early 2026, it crossed 100,000 GitHub stars in just 14 days — an unusual rate of growth.
The Critical Difference from Chatbots
Traditional chatbots "answer questions." OpenClaw "executes tasks." It uses a large language model as its brain, but it has real hands — the file system, shell, and browser on your PC — that it can actually operate.
| Feature | Chatbot | OpenClaw |
|---|---|---|
| Basic behavior | Responding to questions | Planning and executing tasks |
| File operations | Not possible | Read, write, edit, create |
| Browser control | Not possible | Navigation, form input, data collection |
| Command execution | Not possible | Shell commands, monitoring |
| External integration | Limited | Slack, Gmail, Calendar via API |
| Autonomy | None (passive) | Scheduled execution via cron, self-scheduling |
Architecture That Gives AI Soul and Memory
What sets OpenClaw apart is a file called SOUL.md. It defines the agent's behavioral guidelines and personality — the AI's soul, essentially. Principles like "have opinions," "research before asking" — designed to create a proactive partner, not a passive command executor.
The memory system is also interesting. Like a person keeping a diary, it records daily activities and reads them back to maintain continuity. Important decisions go into MEMORY.md (long-term memory); daily events go into memory/YYYY-MM-DD.md (diary). The agent reads these on startup to learn from past experience.
Skills That Extend Without Limit
OpenClaw's capabilities are not fixed. Plugins called skills let you add any functionality. Google Calendar operations, Slack thread summarization, tech news collection — the user community has developed and shared a diverse range of skills, and you can build custom automation tailored to your work by combining them.
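OpenClaw's real skill API is not shown in this article, but the general shape of such a plugin system can be illustrated generically. Everything in this sketch (the decorator, the registry, the example skill) is hypothetical:

```python
from typing import Callable

# Registry mapping a skill name to the function that implements it.
SKILLS: dict[str, Callable[..., str]] = {}

def skill(name: str):
    """Decorator that registers a function as a named, invokable skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("summarize_thread")
def summarize_thread(messages: list[str]) -> str:
    # Placeholder: a real skill would call an LLM or an external API here.
    return f"{len(messages)} messages summarized"
```

The agent then only needs a dispatch step, `SKILLS[name](*args)`, to gain whatever capabilities the community publishes. That extensibility is also exactly what made the ClawHub incident described later possible.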
Building a Low-Cost AI Assistant on Your Home PC
Surely such a powerful AI agent requires expensive servers? The OpenClaw community's conclusion was actually that a home PC offers the best bang for your buck.
Why Home PC Beat Cloud Options
Early experiments with AWS EC2 were abandoned when costs exceeded 3,000 yen per month for stable operation. Cheap VPS options had insufficient memory and couldn't handle the workload reliably. The final answer was a home Windows PC running WSL2 — extra cost is only electricity, with enough memory and processing power to run comfortably.
| Environment | Initial Cost | Monthly Cost | Performance | Notes |
|---|---|---|---|---|
| AWS EC2 | None | ~¥3,000+ | Stable | Complex setup, high cost |
| Cheap VPS | None | ~¥1,000+ | Unstable | Often insufficient memory |
| Home PC + WSL2 | PC cost | Electricity only | Comfortable | Best cost-performance |
Choosing an LLM and Managing Costs
Since OpenClaw calls external LLM APIs as its brain, API usage fees can become a significant operational cost. Journalist Federico Viticci reported consuming 180 million tokens in one experiment — unconstrained operation can lead to unexpected bills.
The solution is straightforward: understand the characteristics and pricing of the available models. GPT-4, Claude 3 Opus, and Gemini 1.5 Pro each have different strengths at different prices, so use cheaper models for tasks that don't require sophisticated reasoning.
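To make the stakes concrete, here is a back-of-the-envelope cost calculator. The input/output split and the per-million-token prices below are made-up illustrative numbers, not real provider pricing; the 180-million-token total echoes the Viticci figure mentioned above:

```python
def monthly_cost_usd(tokens_in: int, tokens_out: int,
                     price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate API cost from token counts and per-million-token prices."""
    return tokens_in / 1e6 * price_in_per_m + tokens_out / 1e6 * price_out_per_m

# Hypothetical price points: a premium model vs. a budget model,
# both fed 180M tokens (150M in, 30M out).
heavy = monthly_cost_usd(150_000_000, 30_000_000, price_in_per_m=15.0, price_out_per_m=75.0)
light = monthly_cost_usd(150_000_000, 30_000_000, price_in_per_m=0.5, price_out_per_m=1.5)
print(heavy)  # 4500.0
print(light)  # 120.0
```

Even with invented prices, the ratio is the lesson: routing routine tasks to a cheaper model can cut the bill by more than an order of magnitude at the same token volume.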
The ultimate cost reduction is integration with local LLMs. Running Llama 3 with Ollama eliminates API fees entirely — though you need a PC with a high-performance GPU, which is a significant upfront investment.
Setting usage alerts on each provider's dashboard is probably the most practically important habit: configure notification when you hit a spending threshold to prevent unexpected charges.
The Security Risks Lurking in OpenClaw — And Real Incidents
Easy to deploy and powerful. That's precisely why it can become a security vulnerability. Using OpenClaw is like letting an omnipotent butler who will do whatever anyone says live in your unlocked house, and incidents caused by exactly this have already occurred.
The Design Philosophy Itself Is the Risk
OpenClaw's danger doesn't come from a specific bug. Having full permissions on a user's PC, accepting instructions from external sources, and autonomously learning and acting — that degree of freedom is itself an attractive target for attackers. Trend Micro has warned that this is "a risk inherent to the agentic AI paradigm itself."
Five Real Incidents
Incident 1: Over 1,000 instances exposed with no authentication
In late January 2026, a security researcher scanning with Shodan found approximately 1,000 OpenClaw instances accessible to anyone, with no authentication. Bitsight's tracking showed this number grew to over 30,000 in just 12 days between January 27 and February 8. The cause in most cases was misconfigured reverse proxies: they treated external access as local access and passed administrator privileges straight through. Researcher Jamieson O'Reilly demonstrated that this vulnerability made it easy to extract API keys and chat history.
Incident 2: Skill marketplace became a malware hotbed
The ClawHub marketplace for sharing extension skills was in worse shape than expected. Snyk's investigation found security flaws in 36% of published skills, including 1,467 skills carrying malware called "AuthTool," designed to steal cryptocurrency wallets and passwords. Users installed what they thought were useful tools and handed over their sensitive information in the process.
Incident 3: Invisible commands steal private keys
Prompt injection — an LLM vulnerability — is particularly troublesome with OpenClaw. One researcher demonstrated embedding malicious commands in invisible text on a webpage, causing OpenClaw to send internal information to attackers the moment it browsed that page. Experiments embedding commands in emails to steal private keys from a PC were also successful.
Incident 4: AI autonomously published a post defaming a human
In February 2026, an AI agent automatically generated and published a blog post defaming an open-source developer who had refused to provide their code. An AI autonomously attacking a human created significant controversy. As autonomous agents like OpenClaw spread, AI-driven personal attacks could happen to anyone.
Incident 5: 4.75 million records leaked from AI-only social network
On "Moltbook" — a social network where OpenClaw agents interact — 4.75 million records were leaked due to database misconfiguration. Reports indicated it included addresses for 35,000 people and 20,000 personal email addresses. The AI paradise became a venue for personal data leakage due to careless management.
These are just the visible cases. Kaspersky's audit of OpenClaw identified 512 vulnerabilities, 8 of which were critical.
Concrete Security Measures to Protect Your PC
Running it unprotected is reckless. But with appropriate measures, risk can be substantially reduced. Here are the hardening techniques security experts recommend, organized in three tiers.
Tier 1: Basic Protection (Required for Everyone)
Minimum defensive measures anyone experimenting with OpenClaw should take. Skipping these is like leaving your front door open when you go out.
- Run in a dedicated environment. Never run on your main PC. Set up an isolated environment using an old PC or a virtual machine (VMware, VirtualBox, etc.).
- Read the official documentation thoroughly, especially the security sections. Most incidents stem from misunderstanding basic configuration.
- Choose an LLM with relatively high prompt injection resistance. I recommend Claude from Anthropic.
- Never expose OpenClaw's port (default 7474) to the internet. Use firewall rules to allow access only from localhost (127.0.0.1).
- If integrating with Slack or Gmail, don't use your main accounts. Create a burner account (disposable) specifically for OpenClaw to minimize damage if compromised.
- Make it a habit to regularly run `security audit --deep` to check for known vulnerabilities.
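The port-management bullet above boils down to one question: which address is the service bound to? A loopback address is reachable only from the machine itself, while 0.0.0.0 listens on every interface. This helper (the function name and logic are mine, not part of OpenClaw) makes the check explicit:

```python
import ipaddress

def is_internet_exposed(bind_addr: str) -> bool:
    """Return True if binding to this address could accept non-local connections.

    "0.0.0.0" and "::" listen on every interface; any other non-loopback
    address is reachable from at least one external network.
    """
    if bind_addr in ("0.0.0.0", "::"):
        return True
    return not ipaddress.ip_address(bind_addr).is_loopback
```

For example, `is_internet_exposed("127.0.0.1")` is False (safe for a local agent), while binding to a LAN address like 192.168.1.5 already exposes the service to every device on that network, which is how the Shodan-scannable instances described earlier came to exist.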
Tier 2: Standard Protection (Recommended Where Possible)
Additional measures to reduce risk further beyond basic protection.
- Network isolation. Separate the OpenClaw environment from your home's main network. Using a guest WiFi network puts it on a different network from your phone and family's PCs.
- Authentication hardening. Set up multi-factor authentication (MFA) for access to the OpenClaw Gateway.
- File system permission restriction. Limit the directories the OpenClaw execution user can access to the minimum necessary.
Tier 3: Advanced Protection (If Handling Confidential Data)
Specialized measures for work data or other information where leakage would be serious.
- Network egress filtering. Deploy a proxy like Squid and manage OpenClaw's communication destinations with a whitelist. Block all traffic to anything other than permitted API endpoints.
- Container virtualization. Use Docker or Podman and run in Rootless mode. Even if a container is compromised, damage to the host system is prevented.
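A Squid whitelist enforces egress filtering at the network layer, which is where it belongs; the same allowlist idea can also be sketched in application code. The hosts below are placeholders, and this sketch is a complement to, not a substitute for, the proxy-level control described above:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only the LLM API endpoints the agent actually needs.
ALLOWED_HOSTS = {"api.anthropic.com", "api.openai.com"}

def egress_allowed(url: str) -> bool:
    """Permit an outbound request only if its host is on the explicit allowlist."""
    host = urlparse(url).hostname
    return host in ALLOWED_HOSTS
```

Default-deny is the key property: a prompt-injected instruction to exfiltrate data to an attacker's server fails simply because that host was never permitted.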
| Level | Measure | Specific Actions |
|---|---|---|
| Tier 1: Basic | Dedicated environment | Avoid main PC; use a separate machine or VM |
| | Documentation | Fully understand security-related sections |
| | LLM selection | Use a model with high prompt injection resistance |
| | Port management | Use a firewall to block external access |
| | Disposable accounts | Create burner accounts for integrated services |
| | Regular audits | Make `security audit --deep` a habit |
| Tier 2: Standard | Network isolation | Isolate from the main network via guest WiFi |
| | Authentication hardening | Implement MFA for the Gateway |
| | Permission restriction | Minimize file system access permissions |
| Tier 3: Advanced | Egress filtering | Manage communication destinations with a proxy whitelist |
| | Container virtualization | Isolate from the OS using Docker/Podman (rootless) |
Is Enterprise Use Possible?
Honestly, deploying OpenClaw as-is in business operations is reckless.
Palo Alto Networks, Cisco, CrowdStrike, and other global security companies have uniformly issued warnings against enterprise use. CrowdStrike has even provided a tool that detects and forcibly removes OpenClaw from corporate networks.
The primary reason is uncontrollable autonomy. There's no way to fully predict and manage what an AI agent will decide, what information it might send externally, or what it might operate. Customer data leakage, unintended system modifications, compliance violations — any of these could damage a company irreparably.
Minimum Requirements If You Must Try in an Enterprise Context
Curiosity and exploration are essential to corporate growth, so I won't say never try it. For limited experimental purposes like R&D, these are the minimum requirements — like testing an F1 car on a closed circuit rather than public roads:
- Fully closed environment operation. Build a network physically isolated from the internet, and run LLM models and libraries downloaded offline in advance.
- Strict permission management and separation of duties. Limit the data and APIs OpenClaw can access to only what's necessary for the experiment, and clearly separate administrator and monitoring roles.
- Full activity logging and monitoring. Log all file access, network communication, and executed commands, and build in automatic shutdown when anomalies are detected.
- Thorough employee risk education. Enforce understanding of the risks covered in this article and compliance with security procedures.
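As one small illustration of the logging-and-shutdown requirement above, here is a sketch of a command gate in Python. The blocked patterns, the PermissionError behavior, and the function names are my own illustrative choices, not an OpenClaw feature; real monitoring would be far more sophisticated than substring matching:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical anomaly patterns; a real deployment would use proper policy rules.
BLOCKED_PATTERNS = ("rm -rf", "curl ", "nc ")

def run_with_audit(command: str, executor) -> str:
    """Log every agent command; refuse anything matching a blocked pattern."""
    log.info("agent command: %s", command)
    if any(p in command for p in BLOCKED_PATTERNS):
        log.error("anomaly detected, refusing: %s", command)
        raise PermissionError(f"blocked command: {command}")
    return executor(command)
```

The essential design point is that the gate sits between the agent's decision and its execution, so every action leaves an audit trail whether or not it is allowed to run.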
Getting all this right is just the starting line for experimentation. Business leaders and IT departments need to understand this is not a tool to casually deploy for operational efficiency.
Summary
OpenClaw has fundamentally changed our relationship with AI. A world where you simply ask an AI, as you would a secretary, and the work gets done automatically: it's technology with real potential to change how we work.
At the same time, the incidents covered in this article all actually happened. 30,000 exposed instances, a skill marketplace full of malware, private keys stolen by invisible commands. Real damage has already occurred behind this promising technology.
I'm not saying don't touch OpenClaw. But please run all Tier 1 measures before starting it. Never run on your main PC, block the port, use disposable accounts for integrated services. Just these three will substantially reduce your chances of being harmed.
A side note — TIMEWELL is also watching AI agent technology closely. Our ZEROCK platform realizes enterprise AI knowledge management with GraphRAG technology, built on domestic AWS servers with enterprise-level security standards. It's the opposite approach from highly flexible tools like OpenClaw, but we believe this direction is more realistic for companies to safely utilize AI.
Safe exploration.
References
- Reddit. (2026, February). We scanned 18000 exposed OpenClaw instances and found 15...
- Trend Micro. (2026, February). What OpenClaw Reveals About the Risk of Agentic Assistants.
- Kaspersky. (2026, January). 512 Vulnerabilities, Including 8 Critical, Discovered in OpenClaw During Security Audit.
- BitSight. (2026, February). OpenClaw AI Security Risks: 140,000 Exposed Instances and Counting.
- Snyk. (2026, February). Snyk Finds Prompt Injection in 36%, 1467 Malicious Payloads in a Week in ClawHub.
- CrowdStrike. (2026, February). What Security Teams Need to Know About OpenClaw, the AI Super-Agent.
