Cybersecurity researchers are sounding the alarm over a new AI personal assistant called Clawdbot, warning that misconfigured deployments could expose sensitive personal data and API credentials to the public.
On Tuesday, blockchain security firm SlowMist said it identified a “gateway exposure” in Clawdbot that puts “hundreds of API keys and private chat logs at risk.”
“Multiple unauthenticated instances are publicly accessible, and several code flaws could lead to credential theft or even remote code execution,” SlowMist said.
The issue was first detailed on Sunday by security researcher Jamieson O’Reilly, who said that “hundreds of people” have unintentionally exposed their Clawdbot control servers to the internet in recent days.
Clawdbot is an open-source AI assistant built by developer and entrepreneur Peter Steinberger that runs locally on users’ devices. Interest in the tool surged over the weekend, with online discussion reaching “viral” levels, according to Mashable.
Exposed Clawdbot Control servers leak credentials
Clawdbot’s gateway links large language models to messaging platforms and allows command execution through a web-based admin interface known as “Clawdbot Control.”
O’Reilly said the vulnerability arises when the gateway is deployed behind an unconfigured reverse proxy, allowing attackers to bypass authentication. Using internet scanning tools such as Shodan, he was able to quickly locate exposed servers by searching for distinctive HTML identifiers.
“Searching for ‘Clawdbot Control’ took seconds and returned hundreds of results,” O’Reilly said.
According to the researcher, exposed instances could grant access to API keys, bot tokens, OAuth secrets, signing keys, full chat histories across platforms, the ability to send messages as the user, and command execution privileges.
“If you’re running agent infrastructure, audit your configuration today,” O’Reilly warned. “Check what’s actually exposed to the internet, understand what you’re trusting with that deployment, and what you’re trading away.”
“The butler is brilliant. Just make sure he remembers to lock the door,” he added.
Extracting a private key took five minutes
Researchers warned that the AI assistant could also be exploited in more serious ways, including attacks that compromise crypto assets.
Matvey Kukuy, CEO of Archestra AI, demonstrated the risk by using prompt injection to extract a private key. He shared a screenshot showing an email he sent to Clawdbot that was crafted to manipulate the assistant into reading the message and then transmitting a private key from the compromised machine.
According to Kukuy, the entire process “took five minutes.”

Clawdbot differs from many other agentic AI tools in that it has full system-level access to users’ machines, allowing it to read and write files, run commands, execute scripts, and control browsers.
“Running an AI agent with shell access on your machine is… spicy,” the Clawdbot FAQ notes. “There is no ‘perfectly secure’ setup.”
The FAQ also outlines the threat model, warning that malicious actors may attempt to trick the AI into performing harmful actions, socially engineer access to user data, or probe for infrastructure details.
SlowMist advised users to mitigate these risks by applying strict IP whitelisting on any exposed ports.
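In practice, IP whitelisting of the kind SlowMist recommends is usually enforced at the firewall or reverse proxy. As a conceptual sketch only (the networks below are documentation-range examples, not values from SlowMist’s advisory), the check reduces to testing whether a client address falls inside an explicit allowlist:

```python
# Conceptual IP-allowlist check using the standard-library ipaddress
# module. The networks are illustrative placeholders; in a real
# deployment this policy would typically live in a firewall rule
# (e.g. ufw/iptables) rather than application code.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("127.0.0.0/8"),      # local machine
    ipaddress.ip_network("203.0.113.0/24"),   # example trusted range
]

def ip_allowed(addr: str) -> bool:
    """Return True only if addr is inside an explicitly allowed network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_NETWORKS)

# A client in the trusted range is admitted; everyone else is refused.
assert ip_allowed("203.0.113.42") is True
assert ip_allowed("198.51.100.1") is False
```

The design point is default-deny: any address not explicitly listed is rejected, which is the opposite of the exposed deployments O’Reilly found, where anyone who could reach the port was served.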

