Using OpenClaw: Things to Consider as an MSP
With many publicly declaring OpenClaw to be the next evolution of autonomous AI agents, one that acts as your full-time assistant, it seems only appropriate for an Evolving MSP like you to be leading the change into this next evolution! So let's dive into what OpenClaw is all about and how you can benefit from it.
Big Questions Arise as Big Changes Happen
Since OpenAI hired Peter Steinberger, creator of OpenClaw, many of you may have questions about the future. Let's address some of these:
- What will become of OpenClaw? It will live in open source and OpenAI has committed to providing full support.
- Why did OpenAI hire Peter Steinberger? They beat Anthropic and Google to him and made the best offer. Given what Steinberger accomplished in just a few days, taking Clawdbot to Moltbot to OpenClaw, imagine what he can do with more time and the resources of a frontier model maker behind him. Expect more interesting news about him soon.
- Should we experiment with OpenClaw or wait to see what Peter Steinberger does with ChatGPT? This one is really up to you. You can anticipate that whatever he does will at least resemble what he did with OpenClaw, so working with it now can give you early experience with the underlying technologies. But I wouldn’t encourage deploying OpenClaw for customers until OpenAI demonstrates that it is safe and secure.
OpenClaw has gone from curiosity to crisis at a speed that MSPs simply cannot ignore. Within weeks it amassed roughly 145,000 to 150,000 GitHub stars, with analysts noting that “OpenClaw, the open-source AI agent formerly known as Clawdbot and Moltbot hit 150,000+ GitHub stars in 72 hours.” This surge signaled a shift from experiment to mainstream adoption. One security study found that 22% of enterprise OpenClaw deployments were unauthorized and that more than half of those ran with privileged access, while token and configuration leaks have been observed “in hundreds of misconfigured instances.” CyberArk calls it “an early view of the identity-focused risks” that enterprise agents will bring, while Fortune quotes one CTO saying, “The only rule is that it has no rules… that game can turn into a security nightmare.”
But first, what is OpenClaw?
OpenClaw itself is an open-source, autonomous AI agent framework that grew out of earlier projects known as Clawdbot and Moltbot, originally built by independent developer Peter Steinberger to show what happens when you let an AI actually do things on its own machine rather than just chat.
It is unusual because it combines several ideas that were previously separate: it runs as a persistent local or cloud service, has long-term memory and scheduled tasks, and can plug into a wide range of tools, including messaging apps, browsers, CRMs and code repositories, while orchestrating external language models behind the scenes.
Its creator, Peter Steinberger, pitches it as a bet on “specialized intelligence,” arguing that hostable, hackable agents like OpenClaw are a better path than chatbots. Early adopters seized on that flexibility to wire it deeply into their workflows. That same mix of openness, self-hostability and broad integration is precisely what makes it different from a typical SaaS copilot, and it is why security researchers now treat it as a landmark case in how fast an autonomous agent can escape the lab and reshape real environments.
Look Before You Leap into OpenClaw
So, to what extent should you consider using OpenClaw? Think of OpenClaw as a training ground, not a production tool. You need hands-on experience with agentic behavior, but under controlled lab conditions. That means trialing it in a segregated environment, under its own low-privilege identity, pointed at non-critical systems, test tenants and synthetic data.
Let your engineers see what it actually does when it can call tools, hit APIs and take actions that persist over time. Introduce friction on purpose: require approvals for anything potentially destructive or financially binding, keep clear logs, and maintain a simple, well-documented kill switch.
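To make the kill switch concrete, here is a minimal sketch of a sentinel-file gate a lab agent wrapper could check before each action. The file path and function name are illustrative assumptions for a lab setup, not OpenClaw features:

```shell
#!/bin/sh
# Minimal kill-switch sketch for a lab agent (illustrative; not an OpenClaw feature).
# Touch the sentinel file to halt the agent; remove it to resume.
KILL_FILE=/tmp/agent-killswitch   # illustrative path; pick one your team documents

agent_may_act() {
  if [ -e "$KILL_FILE" ]; then
    # Log the refusal with a UTC timestamp so the halt is auditable.
    echo "$(date -u +%FT%TZ) kill switch engaged; refusing action" >&2
    return 1
  fi
  return 0
}

# Gate every potentially destructive step through the check:
if agent_may_act; then
  echo "proceeding with scheduled task"
else
  echo "halted by kill switch"
fi
```

With a gate like this in place, any engineer can stop the agent instantly with `touch /tmp/agent-killswitch`, no process hunting required.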
CyberArk’s advisory outlines what you would want to avoid when using OpenClaw for the first time: a developer installs OpenClaw on a corporate laptop, it inherits local privileges and access to SSH keys and source repos, and suddenly you have “a high-risk gateway where autonomous agents operate outside the oversight of traditional IAM controls.” Your goal should be to learn from that scenario without ever living it.
Promote the Practice of Prudence with Customers
With customers, your stance should be less “Should we roll this out?” and more “How do we recognize and domesticate it when it shows up?”
The reality is that departments are already experimenting. Bitsight and others describe OpenClaw as “gaining rapid adoption” across business environments, often deployed by individual teams rather than through formal procurement. VentureBeat frames this as part of a broader “OpenClaw moment” where autonomous agents start to displace SaaS seats and human headcount, creating what it calls a “SaaSpocalypse” for traditional software models. Your value here is to intercept the experimentation before it becomes an incident. That starts with adding explicit questions about OpenClaw and similar agents to your assessments and onboarding. It continues with a repeatable engagement:
- Discovery of where the agent is running, what it can touch, which identities and secrets it holds, and who “owns” it today.
- Redesign to narrow its scope, reduce its privileges, wrap it in monitoring and governance, so it becomes a bounded, supportable component instead of a rogue script.
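The discovery step can start as simply as a per-host sweep. Here is a minimal sketch; the process patterns and config paths are assumptions for illustration, since a real deployment may use different names and locations:

```shell
#!/bin/sh
# Hedged sketch: sweep one host for OpenClaw-style agent artifacts.
# Process patterns and config paths are illustrative assumptions.
found=0
for pattern in openclaw moltbot clawdbot; do
  # pgrep -f matches full command lines and excludes itself.
  if pgrep -f "$pattern" >/dev/null 2>&1; then
    echo "running process matches '$pattern'"
    found=1
  fi
done
for dir in "$HOME/.openclaw" "$HOME/.config/openclaw"; do
  if [ -d "$dir" ]; then
    echo "config directory present: $dir (check it for tokens and API keys)"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then
  echo "no agent artifacts found on this host"
fi
```

Run the sweep across every endpoint in scope and feed the hits into the redesign step; anything it finds is, by definition, shadow AI until someone claims ownership.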
You want to help the customer get from “shadow AI script on a dev box” to “bounded agent with a clear mandate” without pretending that the genie can be put back into the bottle.
Is it Really Secure?
We now know how best to use OpenClaw, but the question remains: is it secure? Expert opinion here is divided, and usefully so. On the enthusiast side, David Heinemeier Hansson describes OpenClaw as “giving AI its own machine, long-term memory, reminders and persistent execution,” and calls it “a sneak peek at a future where everyone has a personal agent assistant.”
IBM researchers, looking at the same phenomenon from the other side, argue that OpenClaw shows community-driven agents can achieve “true autonomy” without being vertically integrated by big tech. “Fair point,” notes The Biggish, “if you’re running it on a laptop with no corporate access.”
But the security voices have been louder. Cisco’s analysis describes OpenClaw’s trajectory as “the largest security incident in sovereign AI history,” highlighting nine vulnerabilities in a single popular skill and warning that “there are no sufficient guardrails” in the default architecture. CyberArk’s team writes that “OpenClaw’s tooling itself isn’t enterprise-grade,” but that it “offers a useful blueprint for understanding how autonomous agents can affect enterprise security,” precisely because it shows how quickly identity-related risks appear when agents “operate with broad permissions and unpredictable behavior.”
The Only Rule is that there are no Rules
Fortune quotes Ben Seri of Zafran Security saying, “The excitement about OpenClaw… is that it has no restrictions… The only rule is that it has no rules. That’s part of the game,” and then spells out the obvious conclusion: the same lack of rules that thrills hackers and indie builders “can turn into a security nightmare” for enterprises.
Two issues of particular concern:
- Shadow AI and unauthorized deployments
Token Security found that “22 percent of enterprise customers had unauthorized OpenClaw deployments” and that over half of those had privileged access. CyberArk calls this “shadow AI,” where an employee installs an agent “on a developer’s laptop or within the corporate network” and hooks it into Slack, Teams, Salesforce, or other systems, creating a high-risk gateway that sits completely outside standard IAM and change-control workflows.
For you, that means your threat surface now includes tools you never sold, never approved and may not see until the day that something goes wrong. Your response is to assume their presence, discover them proactively using agent scanners where appropriate and bring them under policy, rather than pretending they shouldn’t exist.
- Privilege, identity and exposure of secrets
Multiple studies have shown OpenClaw instances exposing API keys and OAuth tokens in dashboards and logs, with Palo Alto documenting “hundreds of misconfigured instances” leaking credentials, and The Biggish reporting that the platform “has no built-in sandboxing.”
CyberArk’s mapping of the attack surface points to a familiar pattern: agents often require high local privileges to be useful, so a compromised OpenClaw instance can read SSH keys, modify source code or pull sensitive data at machine speed.
Bitsight’s and Bitdefender’s advisories also highlight how attackers abuse central hubs like ClawHub to gain initial access, then move laterally by riding the agent’s privileges into messaging apps, calendars and web content. In plain terms, the agent becomes a concentrated bundle of secrets and access rights, so once an attacker compromises it, they get everything it can reach.
Not Yet Scalable
Installation and operations are where you turn this from theory into a repeatable service. The mainstream OpenClaw story is “one-liner install, connect your tools, unlock magic,” and tutorials show exactly that: a shell or PowerShell bootstrap that pulls down a bundle, then quick configuration of messaging and model integrations.
That is fine for enthusiasts, but not for regulated customers. You should insist on seeing what the scripts do, pinning versions and running the agent under a dedicated, low-privilege account with confined filesystem and network access. Review where configuration and secrets are stored, how updates are applied, which ports are exposed and how backups and logs are handled.
The Biggish notes that OpenClaw’s architecture “doesn’t scale to enterprise without fundamental changes to privilege isolation and third-party skill vetting,” pointing to real incidents where malicious skills silently exfiltrated data via HTTP calls. That’s exactly the kind of configuration debt you will inherit when a customer says, “It’s already running—just make it safe.”
Suggested Strategy for MSPs
OpenClaw is the first widely visible example of what autonomous agents will look like in your customers’ environments. It is also a preview of the mess those agents create when deployed without governance.
The right MSP response is not to ban it outright or to turn it into an SKU. Instead, build a structured, reusable response. That means formalizing your own internal policy for experimentation, building a hardened lab and runbooks, adding explicit questions about OpenClaw-style agents to your assessments and packaging an AI Agent Hardening & Governance offer that can wrap around whatever tool a customer has already chosen.
As Mark Kraynak put it in Forbes, “OpenClaw showed the future of AI security, and it’s going to be rough,” because researchers could quickly demonstrate “complete remote exploitation” from its default posture. Your job as an MSP is to make that future survivable for your customers and profitable for you by being the ones who understand the agents, not just the headlines. As always, share your personal OpenClaw stories with me for future updates and insight!
Posted by Howard M. Cohen on February 26, 2026