Disclosure: RunAICode.ai may earn a commission when you purchase through links on this page. This doesn’t affect our reviews or rankings. We only recommend tools we’ve tested and believe in. Learn more.

TL;DR

  • What: OpenClaw is a conversation-first AI bot that runs locally on your machine and connects to 12+ messaging platforms.
  • Who it’s for: Developers who want a self-hosted AI assistant across WhatsApp, Slack, Discord, Telegram, and more.
  • Time to set up: 15-30 minutes for a basic single-channel deployment.
  • Caution: A recent security audit found 512 vulnerabilities, 8 of them critical. Read the hardening section before deploying.

What Is OpenClaw?

OpenClaw — formerly known as Moltbot, and before that Clawbot — is an open-source AI bot framework that lets you run large language models locally and pipe them into the messaging platforms you already use. Created by Peter Steinberger (who recently joined OpenAI), the project has exploded in popularity: 196,000+ GitHub stars, over 2 million weekly visitors to its documentation site, and an active ecosystem of community-built skills and plugins.

The core idea is simple. Instead of switching between ChatGPT in one tab, Claude in another, and Gemini in a third, you configure OpenClaw once, point it at your preferred AI provider, and interact with it through WhatsApp, Slack, Discord, or whatever channel your team already lives in. It runs as a local daemon, your API keys never leave your machine, and you own the entire conversation history.

That said, OpenClaw is not without problems. The project has had a rough security track record, the plugin ecosystem needs careful vetting, and the documentation — while improving — still has gaps. I’ll cover all of that honestly in this guide.

OpenClaw at a Glance
  • Latest Version: v2026.2.17
  • GitHub: github.com/openclaw/openclaw
  • GitHub Stars: 196,000+
  • Supported AI Providers: Anthropic (Claude), OpenAI (GPT), Google (Gemini), MiniMax
  • Supported Channels: WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat, BlueBubbles, Matrix, Zalo
  • License: MIT
  • Config Location: ~/.openclaw/openclaw.json
  • Known CVEs: CVE-2026-25253 (CVSS 8.8), patched in v2026.2.17

Prerequisites

Before you install OpenClaw, make sure you have the following:

  • A machine running macOS, Linux, or Windows with WSL2
  • Node.js and npm (only needed if you build from source instead of using the installer)
  • An API key from at least one supported provider: Anthropic, OpenAI, Google, or MiniMax

Optional but recommended: a dedicated user account or VM for running the daemon. I run mine in a Proxmox LXC container with 2GB RAM, and it barely breaks a sweat.

A quick note on API keys: if you do not already have one, each provider issues keys through its own developer console (Anthropic, OpenAI, Google, or MiniMax).

Have your key ready before starting the installation. The onboarding wizard will ask for it, and having it on hand keeps the process smooth.

Installation

macOS and Linux

The official installer handles everything — dependencies, PATH configuration, and the daemon binary:

curl -fsSL https://openclaw.ai/install.sh | bash

After the installer finishes, verify it worked:

openclaw --version
# Expected output: openclaw v2026.2.17

If you prefer not to pipe a remote script into bash (and I respect that instinct), you can clone the repo and build from source:

git clone https://github.com/openclaw/openclaw.git
cd openclaw
npm install
npm run build
npm link

Windows (via WSL2)

OpenClaw does not run natively on Windows. You need WSL2 with Ubuntu:

# Open PowerShell as Administrator
wsl --install -d Ubuntu-24.04

# Once inside WSL2, run the same Linux installer
curl -fsSL https://openclaw.ai/install.sh | bash

Everything from this point forward works identically to Linux. The daemon runs inside your WSL2 instance, and messaging channels connect through it normally.

One gotcha with WSL2: make sure your .env file uses Unix line endings (LF), not Windows line endings (CRLF). If you edit the file with Notepad, you will end up with invisible carriage return characters appended to your API keys, and every API call will fail with an authentication error. Use VS Code, nano, or vim inside WSL instead.
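
If you suspect line-ending trouble, here is a quick check and an in-place fix from inside WSL2 (standard grep and sed, nothing OpenClaw-specific):

# Count lines with stray carriage returns in the env file (0 means you are fine)
grep -c $'\r' ~/.openclaw/.env

# Strip CRLF endings in place
sed -i 's/\r$//' ~/.openclaw/.env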

Updating an Existing Installation

If you already have an older version (including the Moltbot or Clawbot era), the installer detects it and upgrades in place:

curl -fsSL https://openclaw.ai/install.sh | bash
# It will detect the existing install and offer to upgrade

Important: If you’re upgrading from anything before v2026.2.0, update immediately. The v2026.2.17 release patches CVE-2026-25253, a critical vulnerability with a CVSS score of 8.8 that allowed remote code execution through crafted skill payloads.

Initial Configuration: The Setup Wizard

OpenClaw ships with an interactive onboarding wizard that walks you through the first-time setup. Run it like this:

openclaw onboard --install-daemon

The --install-daemon flag tells the wizard to also configure the background daemon that keeps your bot running. Without it, you’d need to manually start OpenClaw every time.

The wizard will ask you to:

  1. Choose your primary AI provider — Anthropic, OpenAI, Google, or MiniMax
  2. Enter your API key — the wizard stores this in ~/.openclaw/.env
  3. Select a messaging channel — you can start with one and add more later
  4. Configure the daemon — auto-start on boot, logging preferences, port selection

When it finishes, your configuration lives in two files: ~/.openclaw/openclaw.json for settings (channels, models, skills) and ~/.openclaw/.env for API keys and tokens.

The wizard also creates a systemd service (on Linux) or a launchd plist (on macOS) so the daemon starts automatically on boot. You can manage it with standard system commands:

# Linux (systemd)
systemctl --user status openclaw
systemctl --user restart openclaw

# macOS (launchd)
launchctl list | grep openclaw
launchctl kickstart -k gui/$(id -u)/com.openclaw.daemon

If you skipped the --install-daemon flag during onboard, you can add it later:

openclaw daemon install

Configuring AI Models

OpenClaw supports multiple AI providers simultaneously. You can route different channels to different models — for example, Claude for your personal WhatsApp and GPT-4 for your team’s Slack workspace.

Anthropic (Claude)

The v2026.2.17 release added full support for Anthropic’s latest Claude models, including Claude Opus 4. To configure:

# Add to ~/.openclaw/.env
ANTHROPIC_API_KEY=sk-ant-your-key-here

Then in ~/.openclaw/openclaw.json, set the model:

{
  "ai": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-20250514",
    "maxTokens": 4096,
    "temperature": 0.7
  }
}

OpenAI (GPT)

# Add to ~/.openclaw/.env
OPENAI_API_KEY=sk-your-key-here

Then in ~/.openclaw/openclaw.json:

{
  "ai": {
    "provider": "openai",
    "model": "gpt-4o",
    "maxTokens": 4096,
    "temperature": 0.7
  }
}

Google (Gemini)

# Add to ~/.openclaw/.env
GOOGLE_API_KEY=your-google-api-key-here

Then in ~/.openclaw/openclaw.json:

{
  "ai": {
    "provider": "google",
    "model": "gemini-2.0-flash",
    "maxTokens": 4096,
    "temperature": 0.7
  }
}

MiniMax

# Add to ~/.openclaw/.env
MINIMAX_API_KEY=your-minimax-key-here

MiniMax support is newer and less battle-tested. If you are experimenting, it works, but for production use I would stick with Anthropic or OpenAI for now.

Choosing the Right Model

Your model choice affects response quality, speed, and cost. Here is my practical take after running OpenClaw across multiple channels for several weeks:

For most developers running OpenClaw as a personal assistant, Claude Sonnet or GPT-4o will cost $5-15 per month in API fees. If you are deploying for a team, multiply that by active users and add a 20% buffer.
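
For example, a ten-person team averaging $10 each in API usage works out to roughly $100 per month, or about $120 per month once you apply the 20% buffer.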

Multi-Provider Setup

You can define all your API keys in .env and then assign different providers per channel in openclaw.json:

{
  "channels": {
    "whatsapp": {
      "ai": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" }
    },
    "slack": {
      "ai": { "provider": "openai", "model": "gpt-4o" }
    }
  }
}

This is one of OpenClaw’s strongest features. You configure once, and each channel can use a different backend without any additional infrastructure.

Connecting Your First Channel

I’ll walk through Telegram as the first channel because it has the simplest setup process. The pattern is similar for other platforms.

Telegram Setup

  1. Open Telegram and message @BotFather
  2. Send /newbot and follow the prompts to create a bot
  3. Copy the API token BotFather gives you
  4. Add it to your OpenClaw config:
# Add to ~/.openclaw/.env
TELEGRAM_BOT_TOKEN=your-telegram-bot-token

Then enable the channel in openclaw.json:

{
  "channels": {
    "telegram": {
      "enabled": true,
      "allowedUsers": ["your_telegram_username"]
    }
  }
}

Restart the daemon:

openclaw daemon restart

Message your bot on Telegram. If everything is configured correctly, it should respond using your chosen AI provider.

If it does not respond, check the daemon logs first. Nine times out of ten, the issue is a misconfigured bot token or the daemon not running. The openclaw doctor command (covered in the troubleshooting section) will pinpoint the problem quickly.
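
A quick triage pass looks like this (the same commands are covered in more depth in the troubleshooting section):

# Confirm the daemon, keys, and channel config are healthy
openclaw doctor

# Scan recent log entries for Telegram-related errors
tail -100 ~/.openclaw/logs/openclaw.log | grep -i telegram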

Quick Notes on Other Channels

You can run multiple channels simultaneously. I currently have Telegram for personal use, Slack for my team, and WebChat embedded on an internal dashboard. Each channel maintains its own conversation history and can be configured with different AI providers and system prompts.

Essential Security Hardening

This is the section you should not skip. OpenClaw’s security track record has been concerning. A recent independent audit found 512 vulnerabilities, 8 of which were rated critical. The most severe, CVE-2026-25253 (CVSS 8.8), allowed remote code execution through malicious skill payloads. It has been patched in v2026.2.17, but the audit revealed deeper systemic issues.

Here is what I recommend as minimum hardening:

1. Lock Down Your .env File

Your API keys live in ~/.openclaw/.env. This file should be readable only by your user:

chmod 600 ~/.openclaw/.env
chmod 700 ~/.openclaw/

Verify the permissions:

ls -la ~/.openclaw/.env
# Should show: -rw------- 1 youruser youruser

If you see anything other than -rw-------, fix it immediately. Anyone with read access to this file has your API keys.

2. Restrict the Config Directory

chmod 700 ~/.openclaw/
chmod 600 ~/.openclaw/openclaw.json
chmod 600 ~/.openclaw/.env

3. Use allowedUsers on Every Channel

Never leave a channel open to all users. Always specify exactly who can interact with your bot:

{
  "channels": {
    "telegram": {
      "enabled": true,
      "allowedUsers": ["your_username"],
      "allowGroups": false
    }
  }
}

Without allowedUsers, anyone who discovers your bot can send it messages — and your API bill will reflect that.

4. Disable Automatic Skill Installation

By default, OpenClaw can auto-install skills from ClawHub when a user requests functionality. Turn this off:

{
  "skills": {
    "autoInstall": false,
    "allowUntrusted": false
  }
}

This is critical. Of the 341 malicious skills discovered on ClawHub, many were designed to exfiltrate API keys from the .env file. Manual review before installing any skill is not optional — it is a requirement.

5. Enable Logging and Audit

{
  "logging": {
    "level": "info",
    "file": "~/.openclaw/logs/openclaw.log",
    "maxSize": "50MB",
    "maxFiles": 5
  }
}

Review your logs periodically. Look for unexpected skill installations, unknown user interactions, or API calls to providers you did not configure.
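
A periodic grep is usually enough. The exact log strings depend on your version and log level, so treat these patterns as a starting point rather than a definitive filter:

# Flag skill installs and permission-related events in the current log
grep -iE 'skill (install|installed)|unauthorized|denied|blocked' ~/.openclaw/logs/openclaw.log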

6. Run as a Dedicated User

If you are running OpenClaw on a server (not your personal machine), create a dedicated system user:

sudo useradd -r -m -s /bin/bash openclaw
sudo -u openclaw openclaw onboard --install-daemon

This limits the blast radius if something goes wrong. The daemon only has access to its own home directory, not yours.

7. Keep It Updated

The OpenClaw team has been responsive about patching vulnerabilities once reported. Run the update regularly:

curl -fsSL https://openclaw.ai/install.sh | bash
openclaw daemon restart
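
To see whether you are behind, compare your local version against the latest tagged release. This assumes the project publishes releases on GitHub, which matches the repository linked above:

openclaw --version
curl -s https://api.github.com/repos/openclaw/openclaw/releases/latest | grep '"tag_name"'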

Useful Skills and Plugins

OpenClaw’s plugin system, called Skills, extends the bot’s functionality beyond basic AI chat. Skills are distributed through ClawHub, the community marketplace.

However, approach ClawHub with extreme caution. As mentioned above, 341 malicious skills have been identified and removed. Before installing any skill, read its source code, check who the author is and whether they are verified, and confirm the skill is still maintained.
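
If you want to inspect a skill on disk after installing it but before restarting the daemon, the files should live under your OpenClaw directory. The skills path below is an assumption based on the config layout, so adjust it if your install puts them elsewhere:

# Locate where the skill landed (claw-web-search used as an example)
find ~/.openclaw -type d -iname '*claw-web-search*'

# Scan installed skills for outbound network calls or env access
grep -rEn 'https?://|fetch|process\.env' ~/.openclaw/skills/ 2>/dev/null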

Recommended Skills (Verified Authors)

Install a skill manually:

openclaw skill install claw-web-search
openclaw daemon restart

List installed skills:

openclaw skill list

Remove a skill:

openclaw skill remove skill-name
openclaw daemon restart

Troubleshooting

The openclaw doctor Command

OpenClaw ships with a built-in diagnostic tool that checks your entire setup:

openclaw doctor

This validates your configuration end to end: config file syntax, API keys, channel tokens, daemon status, and file permissions.

If openclaw doctor reports all green, your setup is structurally sound. If it flags issues, it provides specific remediation steps.

Common Issues and Fixes

Daemon won’t start

# Check if the port is already in use
lsof -i :3100

# Check daemon logs
tail -50 ~/.openclaw/logs/openclaw.log

# Try starting in foreground mode for verbose output
openclaw daemon start --foreground

Channel not connecting

Most channel connection issues come down to token problems. Double-check that the token is copied into ~/.openclaw/.env exactly (no stray whitespace or CRLF characters), that you restarted the daemon after editing the config, and that your username is listed in allowedUsers for that channel.
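
For Telegram specifically, you can verify the token independently of OpenClaw with the Bot API's getMe method; a valid token returns your bot's details with "ok": true:

# Load the token from .env and ask Telegram who it belongs to
source ~/.openclaw/.env
curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getMe"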

AI responses are empty or erroring

# Test your API key directly
openclaw test-provider anthropic

# Check your API key has credits/quota
# Visit your provider's dashboard to verify billing status

High memory usage

OpenClaw keeps conversation history in memory by default. If it is consuming too much RAM, limit the history:

{
  "ai": {
    "maxHistory": 20,
    "maxHistoryTokens": 8000
  }
}
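
To confirm the limit helps, compare the daemon's resident memory before and after a restart. The process-name match below is an assumption about how the daemon registers itself, so adjust the pattern if nothing shows up:

# Resident set size of the daemon process, in MB
ps -eo rss,args | grep -i '[o]penclaw' | awk '{printf "%.0f MB  %s\n", $1/1024, $2}'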

“Permission denied” errors

Usually a file permission issue. Reset everything:

chmod 700 ~/.openclaw/
chmod 600 ~/.openclaw/.env
chmod 600 ~/.openclaw/openclaw.json
chown -R $(whoami) ~/.openclaw/

Frequently Asked Questions

Is OpenClaw free?

Yes, OpenClaw itself is free and open source under the MIT license. However, you will pay for AI provider API usage (Anthropic, OpenAI, Google, or MiniMax) based on how much you use the bot. For light personal use, expect $5-15/month in API costs depending on your provider and model.

Can I run OpenClaw on a Raspberry Pi?

Technically yes, but I would not recommend it for anything beyond experimentation. The daemon’s memory usage with active conversation history can push past 1GB, and ARM performance with Node.js is not ideal. A small VPS or an old laptop running Ubuntu is a better choice.

Is it safe to use after the security audit findings?

The critical CVE-2026-25253 has been patched in v2026.2.17. The OpenClaw team has committed to a formal security program going forward. That said, 512 vulnerabilities in a single audit is a lot, and not all of them are resolved yet. If you follow the hardening steps in this guide, disable auto-install for skills, and keep the software updated, the risk is manageable. For enterprise or sensitive use, I would wait for the next audit cycle.

Can I use multiple AI providers at the same time?

Yes. You can assign different providers to different channels, or even configure fallback providers. If your primary provider’s API goes down, OpenClaw can automatically route to a secondary. See the multi-provider setup section above.

How does OpenClaw compare to Claude Code or other AI agents?

Different tools for different jobs. OpenClaw is a messaging bot framework — it connects AI models to chat platforms. Claude Code is a development-focused CLI agent that works inside your terminal and codebase. You might use both: Claude Code for writing and reviewing code, and OpenClaw for team communication and quick AI queries through Slack or Telegram. See our detailed comparison for the full breakdown.

Final Thoughts

OpenClaw fills a real gap in the AI tooling landscape. Most AI interfaces force you into a browser tab or a proprietary app. OpenClaw lets you meet the AI where you already are — in your team’s Slack, your personal WhatsApp, or any of the dozen platforms it supports. The multi-provider architecture means you are not locked into a single AI vendor, and the local-first approach keeps your data under your control.

The security concerns are real and should not be dismissed. But with proper hardening, careful skill vetting, and regular updates, OpenClaw is a powerful addition to a developer’s toolkit. Just treat it like any other piece of infrastructure: lock it down, monitor it, and keep it patched.

If you are evaluating OpenClaw for your team, start with a single channel and a single AI provider. Get comfortable with the configuration, understand the security model, and then expand. The worst thing you can do is enable every channel and every skill on day one without understanding what each one does. Take it slow, read the source of any skill you install, and keep your .env locked down. That approach has served me well in 25 years of running production systems, and it applies just as much to AI tooling as it does to traditional infrastructure.

Last updated: February 2026 | OpenClaw v2026.2.17
