TL;DR
- What: OpenClaw is a conversation-first AI bot that runs locally on your machine and connects to 12+ messaging platforms.
- Who it’s for: Developers who want a self-hosted AI assistant across WhatsApp, Slack, Discord, Telegram, and more.
- Time to set up: 15-30 minutes for a basic single-channel deployment.
- Caution: A recent security audit found 512 vulnerabilities, 8 of them critical. Read the hardening section before deploying.
What Is OpenClaw?
OpenClaw — formerly known as Moltbot, and before that Clawbot — is an open-source AI bot framework that lets you run large language models locally and pipe them into the messaging platforms you already use. Created by Peter Steinberger (who recently joined OpenAI), the project has exploded in popularity: 196,000+ GitHub stars, over 2 million weekly visitors to its documentation site, and an active ecosystem of community-built skills and plugins.
The core idea is simple. Instead of switching between ChatGPT in one tab, Claude in another, and Gemini in a third, you configure OpenClaw once, point it at your preferred AI provider, and interact with it through WhatsApp, Slack, Discord, or whatever channel your team already lives in. It runs as a local daemon, your API keys never leave your machine, and you own the entire conversation history.
That said, OpenClaw is not without problems. The project has had a rough security track record, the plugin ecosystem needs careful vetting, and the documentation — while improving — still has gaps. I’ll cover all of that honestly in this guide.
| Spec | Details |
|---|---|
| Latest Version | v2026.2.17 |
| GitHub | github.com/openclaw/openclaw |
| GitHub Stars | 196,000+ |
| Supported AI Providers | Anthropic (Claude), OpenAI (GPT), Google (Gemini), MiniMax |
| Supported Channels | WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat, BlueBubbles, Matrix, Zalo |
| License | MIT |
| Config Location | ~/.openclaw/openclaw.json |
| Known CVEs | CVE-2026-25253 (CVSS 8.8) — patched in v2026.2.17 |
Prerequisites
Before you install OpenClaw, make sure you have the following:
- Operating system: macOS 13+, Ubuntu 22.04+ / Debian 12+, or Windows 11 with WSL2
- Node.js: v20 LTS or later (OpenClaw’s daemon runs on Node)
- Git: Any recent version
- At least one AI provider API key: Anthropic, OpenAI, or Google. You can add more later.
- A messaging platform account: WhatsApp, Telegram, Slack, Discord, or any of the 12 supported channels
- curl: For the install script (pre-installed on macOS and most Linux distros)
Optional but recommended: a dedicated user account or VM for running the daemon. I run mine in a Proxmox LXC container with 2GB RAM, and it barely breaks a sweat.
A quick note on API keys: if you do not already have one, here is where to get them:
- Anthropic: console.anthropic.com — sign up, add a payment method, generate an API key. Claude Sonnet is the best balance of cost and quality for a chat bot.
- OpenAI: platform.openai.com — same process. GPT-4o is the flagship model.
- Google: aistudio.google.com — Google offers a generous free tier for Gemini API access.
Have your key ready before starting the installation. The onboarding wizard will ask for it, and having it on hand keeps the process smooth.
Installation
macOS and Linux
The official installer handles everything — dependencies, PATH configuration, and the daemon binary:
curl -fsSL https://openclaw.ai/install.sh | bash
After the installer finishes, verify it worked:
openclaw --version
# Expected output: openclaw v2026.2.17
If you prefer not to pipe a remote script into bash (and I respect that instinct), you can clone the repo and build from source:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
npm install
npm run build
npm link
Windows (via WSL2)
OpenClaw does not run natively on Windows. You need WSL2 with Ubuntu:
# Open PowerShell as Administrator
wsl --install -d Ubuntu-24.04
# Once inside WSL2, run the same Linux installer
curl -fsSL https://openclaw.ai/install.sh | bash
Everything from this point forward works identically to Linux. The daemon runs inside your WSL2 instance, and messaging channels connect through it normally.
One gotcha with WSL2: make sure your .env file uses Unix line endings (LF), not Windows line endings (CRLF). If you edit the file with Notepad, you will end up with invisible carriage return characters appended to your API keys, and every API call will fail with an authentication error. Use VS Code, nano, or vim inside WSL instead.
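If you suspect a file has already picked up Windows line endings, you can confirm and fix it from inside WSL with standard tools (nothing OpenClaw-specific here):
# "with CRLF line terminators" in the output means the file is affected
file ~/.openclaw/.env
# Strip the trailing carriage returns in place
sed -i 's/\r$//' ~/.openclaw/.env
After fixing the file, restart the daemon so it re-reads the keys.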
Updating an Existing Installation
If you already have an older version (including the Moltbot or Clawbot era), the installer detects it and upgrades in place:
curl -fsSL https://openclaw.ai/install.sh | bash
# It will detect the existing install and offer to upgrade
Important: If you’re upgrading from anything before v2026.2.0, update immediately. The v2026.2.17 release patches CVE-2026-25253, a critical vulnerability with a CVSS score of 8.8 that allowed remote code execution through crafted skill payloads.
Initial Configuration: The Setup Wizard
OpenClaw ships with an interactive onboarding wizard that walks you through the first-time setup. Run it like this:
openclaw onboard --install-daemon
The --install-daemon flag tells the wizard to also configure the background daemon that keeps your bot running. Without it, you’d need to manually start OpenClaw every time.
The wizard will ask you to:
- Choose your primary AI provider — Anthropic, OpenAI, Google, or MiniMax
- Enter your API key — the wizard stores this in ~/.openclaw/.env
- Select a messaging channel — you can start with one and add more later
- Configure the daemon — auto-start on boot, logging preferences, port selection
When it finishes, your configuration lives in two files:
- ~/.openclaw/openclaw.json — general settings, channel configs, skill preferences
- ~/.openclaw/.env — API keys and secrets (more on securing this in the hardening section)
The wizard also creates a systemd service (on Linux) or a launchd plist (on macOS) so the daemon starts automatically on boot. You can manage it with standard system commands:
# Linux (systemd)
systemctl --user status openclaw
systemctl --user restart openclaw
# macOS (launchd)
launchctl list | grep openclaw
launchctl kickstart -k gui/$(id -u)/com.openclaw.daemon
If you skipped the --install-daemon flag during onboard, you can add it later:
openclaw daemon install
Configuring AI Models
OpenClaw supports multiple AI providers simultaneously. You can route different channels to different models — for example, Claude for your personal WhatsApp and GPT-4 for your team’s Slack workspace.
Anthropic (Claude)
The v2026.2.17 release added full support for Anthropic’s latest Claude models, including Claude Opus 4. To configure:
# Add to ~/.openclaw/.env
ANTHROPIC_API_KEY=sk-ant-your-key-here
Then in ~/.openclaw/openclaw.json, set the model:
{
"ai": {
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"maxTokens": 4096,
"temperature": 0.7
}
}
OpenAI (GPT)
# Add to ~/.openclaw/.env
OPENAI_API_KEY=sk-your-key-here
{
"ai": {
"provider": "openai",
"model": "gpt-4o",
"maxTokens": 4096,
"temperature": 0.7
}
}
Google (Gemini)
# Add to ~/.openclaw/.env
GOOGLE_API_KEY=your-google-api-key-here
{
"ai": {
"provider": "google",
"model": "gemini-2.0-flash",
"maxTokens": 4096,
"temperature": 0.7
}
}
MiniMax
# Add to ~/.openclaw/.env
MINIMAX_API_KEY=your-minimax-key-here
MiniMax support is newer and less battle-tested; it works, but for production use I would stick with Anthropic or OpenAI for now.
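If you do want to experiment with it, the config block follows the same shape as the other providers. The provider string and model value below are placeholders I have not verified against the docs, so check the MiniMax section of the official documentation for the exact names:
{
  "ai": {
    "provider": "minimax",
    "model": "your-minimax-model-id",
    "maxTokens": 4096,
    "temperature": 0.7
  }
}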
Choosing the Right Model
Your model choice affects response quality, speed, and cost. Here is my practical take after running OpenClaw across multiple channels for several weeks:
- Claude Sonnet 4 — Best overall for conversational use. Fast, capable, and cost-effective. This is my default recommendation for most OpenClaw deployments.
- Claude Opus 4 — The most capable model available, but slower and more expensive. Use it if you need complex reasoning or long-form analysis through chat.
- GPT-4o — Strong all-rounder with good multimodal capabilities. Slightly faster response times than Claude in my testing.
- Gemini 2.0 Flash — The budget option. Fast and cheap, with a massive context window. Quality is a step below Claude and GPT-4o, but for simple Q&A and summarization it is perfectly fine.
For most developers running OpenClaw as a personal assistant, Claude Sonnet or GPT-4o will cost $5-15 per month in API fees. If you are deploying for a team, multiply that by active users and add a 20% buffer.
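To put numbers on that, here is the back-of-envelope I use, assuming a mid-range figure of $10 per active user per month (your real per-user cost depends on model choice and traffic):
# 10 active users x ~$10/month each, plus a 20% buffer
echo $((10 * 10 * 120 / 100))   # prints 120 -> budget roughly $120/month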
Multi-Provider Setup
You can define all your API keys in .env and then assign different providers per channel in openclaw.json:
{
"channels": {
"whatsapp": {
"ai": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" }
},
"slack": {
"ai": { "provider": "openai", "model": "gpt-4o" }
}
}
}
This is one of OpenClaw’s strongest features. You configure once, and each channel can use a different backend without any additional infrastructure.
Connecting Your First Channel
I’ll walk through Telegram as the first channel because it has the simplest setup process. The pattern is similar for other platforms.
Telegram Setup
- Open Telegram and message @BotFather
- Send /newbot and follow the prompts to create a bot
- Copy the API token BotFather gives you
- Add it to your OpenClaw config:
# Add to ~/.openclaw/.env
TELEGRAM_BOT_TOKEN=your-telegram-bot-token
Then enable the channel in openclaw.json:
{
"channels": {
"telegram": {
"enabled": true,
"allowedUsers": ["your_telegram_username"]
}
}
}
Restart the daemon:
openclaw daemon restart
Message your bot on Telegram. If everything is configured correctly, it should respond using your chosen AI provider.
If it does not respond, check the daemon logs first. Nine times out of ten, the issue is a misconfigured bot token or the daemon not running. The openclaw doctor command (covered in the troubleshooting section) will pinpoint the problem quickly.
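Concretely, these are the first two commands I run when a channel goes quiet; both are covered in more depth in the troubleshooting section:
# Recent daemon activity and errors
tail -50 ~/.openclaw/logs/openclaw.log
# Structural check of config, tokens, and channel connectivity
openclaw doctor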
Quick Notes on Other Channels
- WhatsApp: Requires a WhatsApp Business API account or a linked device session. More complex setup but well-documented in the official wiki.
- Slack: Create a Slack App, enable Socket Mode, and add the bot token to .env.
- Discord: Create a Discord Application, generate a bot token, and invite the bot to your server with message permissions.
- iMessage/BlueBubbles: Requires a Mac running BlueBubbles server. This is the most involved setup but lets you use iMessage as an AI interface, which is surprisingly useful.
- Signal: Uses the signal-cli bridge. Solid privacy choice but fiddly to configure.
- Microsoft Teams: Requires Azure AD app registration. Enterprise-friendly but heavy on the setup.
- Matrix: Self-hosted option for privacy-focused teams. Requires a Matrix homeserver like Synapse.
- Google Chat: Needs a Google Workspace account and a published Chat app. Works well for organizations already on Google Workspace.
- Zalo: Popular in Vietnam. Setup requires a Zalo Official Account.
- WebChat: A built-in web interface you can embed on any website. Useful for internal tools or customer-facing bots.
You can run multiple channels simultaneously. I currently have Telegram for personal use, Slack for my team, and WebChat embedded on an internal dashboard. Each channel maintains its own conversation history and can be configured with different AI providers and system prompts.
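As a sketch of that layout, here is a trimmed multi-channel block. The per-channel ai settings follow the multi-provider example above; the systemPrompt field is my assumption about how per-channel prompts are named, so verify the exact key against the config reference for your version:
{
  "channels": {
    "telegram": {
      "enabled": true,
      "allowedUsers": ["your_username"],
      "ai": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" },
      "systemPrompt": "You are my personal assistant. Keep answers brief."
    },
    "slack": {
      "enabled": true,
      "ai": { "provider": "openai", "model": "gpt-4o" },
      "systemPrompt": "You are a helpful assistant for the engineering team."
    }
  }
}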
Essential Security Hardening
This is the section you should not skip. OpenClaw’s security track record has been concerning. A recent independent audit found 512 vulnerabilities, 8 of which were rated critical. The most severe, CVE-2026-25253 (CVSS 8.8), allowed remote code execution through malicious skill payloads. It has been patched in v2026.2.17, but the audit revealed deeper systemic issues.
Here is what I recommend as minimum hardening:
1. Lock Down Your .env File
Your API keys live in ~/.openclaw/.env. This file should be readable only by your user:
chmod 600 ~/.openclaw/.env
chmod 700 ~/.openclaw/
Verify the permissions:
ls -la ~/.openclaw/.env
# Should show: -rw------- 1 youruser youruser
If you see anything other than -rw-------, fix it immediately. Anyone with read access to this file has your API keys.
2. Restrict the Config Directory
chmod 700 ~/.openclaw/
chmod 600 ~/.openclaw/openclaw.json
chmod 600 ~/.openclaw/.env
3. Use allowedUsers on Every Channel
Never leave a channel open to all users. Always specify exactly who can interact with your bot:
{
"channels": {
"telegram": {
"enabled": true,
"allowedUsers": ["your_username"],
"allowGroups": false
}
}
}
Without allowedUsers, anyone who discovers your bot can send it messages — and your API bill will reflect that.
4. Disable Automatic Skill Installation
By default, OpenClaw can auto-install skills from ClawHub when a user requests functionality. Turn this off:
{
"skills": {
"autoInstall": false,
"allowUntrusted": false
}
}
This is critical. Of the 341 malicious skills discovered on ClawHub, many were designed to exfiltrate API keys from the .env file. Manual review before installing any skill is not optional — it is a requirement.
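There is no official vetting tool for this, so my review is a manual grep pass over the skill's source before installing (the directory name below is just a placeholder for whatever skill you downloaded):
# Look for the patterns malicious skills rely on
grep -rnE "process\.env" ./some-claw-skill/            # environment variable reads
grep -rnE "child_process|execSync" ./some-claw-skill/  # shell execution
grep -rnE "fetch\(|axios|https?://" ./some-claw-skill/ # outbound network calls
grep -rnE "readFile|\.env" ./some-claw-skill/          # file reads targeting secrets
None of these patterns is proof of malice on its own, but a "calendar" skill that reads process.env and makes outbound HTTP calls to an unfamiliar domain deserves a hard look.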
5. Enable Logging and Audit
{
"logging": {
"level": "info",
"file": "~/.openclaw/logs/openclaw.log",
"maxSize": "50MB",
"maxFiles": 5
}
}
Review your logs periodically. Look for unexpected skill installations, unknown user interactions, or API calls to providers you did not configure.
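A keyword grep over the log file covers most of that review; these patterns are generic search terms, not a documented log format, so tune them to whatever your version actually writes:
# Skill installs and permission denials from the recent log
grep -iE "skill|install" ~/.openclaw/logs/openclaw.log | tail -20
grep -iE "denied|unauthorized|rejected" ~/.openclaw/logs/openclaw.log | tail -20
# Anything logged at warn or error level deserves a closer look
grep -iE "warn|error" ~/.openclaw/logs/openclaw.log | tail -50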
6. Run as a Dedicated User
If you are running OpenClaw on a server (not your personal machine), create a dedicated system user:
sudo useradd -r -m -s /bin/bash openclaw
sudo -u openclaw openclaw onboard --install-daemon
This limits the blast radius if something goes wrong. The daemon only has access to its own home directory, not yours.
7. Keep It Updated
The OpenClaw team has been responsive about patching vulnerabilities once reported. Run the update regularly:
curl -fsSL https://openclaw.ai/install.sh | bash
openclaw daemon restart
Useful Skills and Plugins
OpenClaw’s plugin system, called Skills, extends the bot’s functionality beyond basic AI chat. Skills are distributed through ClawHub, the community marketplace.
However, approach ClawHub with extreme caution. As mentioned above, 341 malicious skills have been identified and removed. Before installing any skill:
- Check the author’s GitHub profile and history
- Read the skill’s source code (skills are just JavaScript/TypeScript modules)
- Look for suspicious network calls, file system access, or environment variable reads
- Prefer skills from verified authors (marked with a blue checkmark on ClawHub)
Recommended Skills (Verified Authors)
- claw-web-search — Adds web search capability using your preferred search API
- claw-calendar — Google Calendar integration for scheduling through chat
- claw-code-runner — Sandboxed code execution for quick scripts (use with caution even though it is verified)
- claw-summarizer — URL and document summarization
- claw-reminder — Timed reminders delivered through your connected channels
Install a skill manually:
openclaw skill install claw-web-search
openclaw daemon restart
List installed skills:
openclaw skill list
Remove a skill:
openclaw skill remove skill-name
openclaw daemon restart
Troubleshooting
The openclaw doctor Command
OpenClaw ships with a built-in diagnostic tool that checks your entire setup:
openclaw doctor
This validates:
- Node.js version compatibility
- Config file syntax and required fields
- API key presence and format (it does not test the keys against the API)
- Daemon status and port availability
- Channel connectivity
- File permissions on sensitive files
- Installed skill integrity
If openclaw doctor reports all green, your setup is structurally sound. If it flags issues, it provides specific remediation steps.
Common Issues and Fixes
Daemon won’t start
# Check if the port is already in use
lsof -i :3100
# Check daemon logs
tail -50 ~/.openclaw/logs/openclaw.log
# Try starting in foreground mode for verbose output
openclaw daemon start --foreground
Channel not connecting
Most channel connection issues come down to token problems. Double-check that:
- The token is in ~/.openclaw/.env with the correct variable name
- The token has no trailing whitespace or newline characters
- The bot has the required permissions on the platform (message read/write at minimum)
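To check the whitespace point, print the file with non-printing characters made visible; a clean entry ends with $ and nothing else, while a stray carriage return shows up as ^M:
# Works on both Linux and macOS
cat -et ~/.openclaw/.env
# A broken entry looks like: TELEGRAM_BOT_TOKEN=abc123^M$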
AI responses are empty or erroring
# Test your API key directly
openclaw test-provider anthropic
# Check your API key has credits/quota
# Visit your provider's dashboard to verify billing status
High memory usage
OpenClaw keeps conversation history in memory by default. If it is consuming too much RAM, limit the history:
{
"ai": {
"maxHistory": 20,
"maxHistoryTokens": 8000
}
}
“Permission denied” errors
Usually a file permission issue. Reset everything:
chmod 700 ~/.openclaw/
chmod 600 ~/.openclaw/.env
chmod 600 ~/.openclaw/openclaw.json
chown -R $(whoami) ~/.openclaw/
Frequently Asked Questions
Is OpenClaw free?
Yes, OpenClaw itself is free and open source under the MIT license. However, you will pay for AI provider API usage (Anthropic, OpenAI, Google, or MiniMax) based on how much you use the bot. For light personal use, expect $5-15/month in API costs depending on your provider and model.
Can I run OpenClaw on a Raspberry Pi?
Technically yes, but I would not recommend it for anything beyond experimentation. The daemon’s memory usage with active conversation history can push past 1GB, and ARM performance with Node.js is not ideal. A small VPS or an old laptop running Ubuntu is a better choice.
Is it safe to use after the security audit findings?
The critical CVE-2026-25253 has been patched in v2026.2.17. The OpenClaw team has committed to a formal security program going forward. That said, 512 vulnerabilities in a single audit is a lot, and not all of them are resolved yet. If you follow the hardening steps in this guide, disable auto-install for skills, and keep the software updated, the risk is manageable. For enterprise or sensitive use, I would wait for the next audit cycle.
Can I use multiple AI providers at the same time?
Yes. You can assign different providers to different channels, or even configure fallback providers. If your primary provider’s API goes down, OpenClaw can automatically route to a secondary. See the multi-provider setup section above.
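As an illustration, a fallback assignment in openclaw.json looks something like this; I am treating the fallback field name as an assumption, so confirm it against the config reference for your version before depending on it:
{
  "ai": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-20250514",
    "fallback": { "provider": "openai", "model": "gpt-4o" }
  }
}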
How does OpenClaw compare to Claude Code or other AI agents?
Different tools for different jobs. OpenClaw is a messaging bot framework — it connects AI models to chat platforms. Claude Code is a development-focused CLI agent that works inside your terminal and codebase. You might use both: Claude Code for writing and reviewing code, and OpenClaw for team communication and quick AI queries through Slack or Telegram. See our detailed comparison for the full breakdown.
Final Thoughts
OpenClaw fills a real gap in the AI tooling landscape. Most AI interfaces force you into a browser tab or a proprietary app. OpenClaw lets you meet the AI where you already are — in your team’s Slack, your personal WhatsApp, or any of the dozen platforms it supports. The multi-provider architecture means you are not locked into a single AI vendor, and the local-first approach keeps your data under your control.
The security concerns are real and should not be dismissed. But with proper hardening, careful skill vetting, and regular updates, OpenClaw is a powerful addition to a developer’s toolkit. Just treat it like any other piece of infrastructure: lock it down, monitor it, and keep it patched.
If you are evaluating OpenClaw for your team, start with a single channel and a single AI provider. Get comfortable with the configuration, understand the security model, and then expand. The worst thing you can do is enable every channel and every skill on day one without understanding what each one does. Take it slow, read the source of any skill you install, and keep your .env locked down. That approach has served me well in 25 years of running production systems, and it applies just as much to AI tooling as it does to traditional infrastructure.
Last updated: February 2026 | OpenClaw v2026.2.17