
Running OpenClaw in Docker Compose

Getting Started

The first thing is to clone the repo, https://github.com/openclaw/openclaw. We have to build the Docker image ourselves.

Before running the docker-setup.sh script to kick things off, note that the script will always overwrite any .env you create yourself. If you want to preconfigure .env variables, do it in docker-setup.sh instead; for example, you can see where OPENCLAW_CONFIG_DIR and OPENCLAW_WORKSPACE_DIR are being defined.
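A minimal sketch of that kind of edit, assuming the script uses ordinary shell defaults (the `/srv/openclaw/...` paths are placeholders; use your own):

```shell
# Hypothetical snippet for docker-setup.sh: preset the two directories
# before the script writes .env, so your values survive the overwrite.
OPENCLAW_CONFIG_DIR="${OPENCLAW_CONFIG_DIR:-/srv/openclaw/config}"
OPENCLAW_WORKSPACE_DIR="${OPENCLAW_WORKSPACE_DIR:-/srv/openclaw/workspace}"
echo "config:    $OPENCLAW_CONFIG_DIR"
echo "workspace: $OPENCLAW_WORKSPACE_DIR"
```

The `${VAR:-default}` form keeps any value already exported in your environment, so you can also override these per-run without touching the script again.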

Now run docker-setup.sh.

I skipped pretty much everything in the interactive setup, because I wasn't able to configure Ollama right off the bat.

When you access the web UI for the first time, you need to authenticate your browser. Run the commands shown to list pending requests and to authorize them. You can get a shell inside the container with

sudo docker exec -it openclaw-gateway bash

assuming you set container_name to openclaw-gateway.

Adding Ollama

This can be configured in the web application under Config -> Models.

If using the form, go to Model Providers, create a custom entry for Ollama, set the API adapter to ollama, and give it a nonsense key. Ollama doesn't need a key, but OpenClaw won't use the provider without one being set.

When looking at the Raw config, you can set up the models that you have available in ollama like this:

"models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://192.168.4.45:7869",
        "apiKey": "__OPENCLAW_REDACTED__",
        "api": "ollama",
        "injectNumCtxForOpenAICompat": false,
        "models": [
          {
            "id": "qwen3:8b",
            "name": "qwen3:8b",
            "api": "ollama",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 16000,
            "maxTokens": 8192
          },
          {
            "id": "qwen3:4b",
            "name": "qwen3:4b",
            "api": "ollama",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 16000,
            "maxTokens": 8192
          },
          {
            "id": "qwen3:1.7b",
            "name": "qwen3:1.7b",
            "api": "ollama",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 16000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "github-copilot/gpt-4.1"
      },
      "models": {
        "ollama/qwen3:4b": {},
        "github-copilot/gpt-4o": {},
        "github-copilot/gpt-4.1": {},
        "ollama/qwen3:8b": {},
        "ollama/qwen3:1.7b": {}
      },
      "workspace": "/home/node/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      },
      "sandbox": {
        "mode": "off"
      }
    }
  },
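To sanity-check that `baseUrl` points at a live Ollama instance, you can hit its `/api/tags` endpoint (`curl -s http://192.168.4.45:7869/api/tags`) and compare the returned model names against the `id` fields above. A sketch, with a canned response standing in for the live call:

```shell
# Abbreviated sample of what /api/tags returns; against a live server
# this would be:  tags="$(curl -s http://192.168.4.45:7869/api/tags)"
tags='{"models":[{"name":"qwen3:8b"},{"name":"qwen3:4b"},{"name":"qwen3:1.7b"}]}'
# Extract just the model names so they can be diffed against openclaw.json.
echo "$tags" | grep -o '"name":"[^"]*"' | cut -d'"' -f4
```

If a name printed here doesn't match an `id` in the config, OpenClaw will be pointing at a model Ollama doesn't serve.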

Adding GitHub Copilot

First, authorize the app:

openclaw models auth login-github-copilot

Then you can add the various models you have access to through Copilot. For example:

openclaw models set github-copilot/gpt-4o
openclaw models set github-copilot/gpt-5.1
openclaw models set github-copilot/gpt-5.2
openclaw models set github-copilot/claude-sonnet-4.6
openclaw models set github-copilot/claude-opus-4.6
openclaw models set github-copilot/claude-haiku-4.5
openclaw models set github-copilot/gpt-5-mini
openclaw models set github-copilot/grok-code-fast-1
openclaw models set github-copilot/raptor-mini
openclaw models set github-copilot/gemini-2.5-pro
openclaw models set github-copilot/gpt-4.1

To view a list of models

openclaw models list

You should see something like

Model                                      Input      Ctx      Local Auth  Tags
github-copilot/gpt-4.1                     text+image 63k      no    yes   default,configured
ollama/qwen3:4b                            text       16k      no    yes   configured
github-copilot/gpt-4o                      text+image 63k      no    yes   configured
ollama/qwen3:8b                            text       16k      no    yes   configured
ollama/qwen3:1.7b                          text       16k      no    yes   configured

Adding agents to Mattermost

Read up about multiple agents in the docs: https://docs.openclaw.ai/concepts/multi-agent.

Most importantly, each agent gets the following directories and an entry in openclaw.json.

Config: ~/.openclaw/openclaw.json (or OPENCLAW_CONFIG_PATH)
State dir: ~/.openclaw (or OPENCLAW_STATE_DIR)
Workspace: ~/.openclaw/workspace (or ~/.openclaw/workspace-<agentId>)
Agent dir: ~/.openclaw/agents/<agentId>/agent (or agents.list[].agentDir)
Sessions: ~/.openclaw/agents/<agentId>/sessions
openclaw.json
"agents": {
    "defaults": {
      "model": {
        "primary": "github-copilot/gpt-4.1"
      },
      "models": {
        "ollama/qwen3:4b": {},
        "github-copilot/gpt-4o": {},
        "github-copilot/gpt-4.1": {},
        "ollama/qwen3:8b": {},
        "ollama/qwen3:1.7b": {}
      },
      "workspace": "/home/node/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      },
      "sandbox": {
        "mode": "off"
      }
    },
    "list": [
      {
        "id": "scrum",
        "workspace": "/home/node/.openclaw/workspace-scrum",
        "agentDir": "/home/node/.openclaw/agents/scrum/agent",
        "model": {
          "primary": "github-copilot/gpt-4.1"
        }
      },
      {
        "id": "lead",
        "workspace": "/home/node/.openclaw/workspace-lead",
        "agentDir": "/home/node/.openclaw/agents/lead/agent",
        "model": {
          "primary": "github-copilot/gpt-4.1"
        }
      },
      .... 
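Each entry in agents.list names its own `workspace` and `agentDir`. If you want to create that layout up front (OpenClaw may well create it on demand), a sketch for the agents above, using a throwaway base path in place of /home/node/.openclaw:

```shell
# Pre-create the per-agent directories matching agents.list.
# BASE is a placeholder; inside the container it would be /home/node/.openclaw.
BASE=/tmp/openclaw-demo
for agent in scrum lead; do
  mkdir -p "$BASE/workspace-$agent" \
           "$BASE/agents/$agent/agent" \
           "$BASE/agents/$agent/sessions"
done
ls "$BASE"
```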

Go to the System Console and find Integrations -> Bot Accounts. Set both values to true. Now go back to Mattermost itself and open Integrations (not the System Console), then Bot Accounts. Hit Add Bot Account for each of the agents you want to be able to chat with. You can now add them as members to your channels.

Configure OpenClaw

Also see https://docs.openclaw.ai/channels/mattermost.

"channels": {
    "mattermost": {
      "enabled": true,
      "baseUrl": "https://chat.domain.com",
      "dmPolicy": "open",
      "chatmode": "onchar",
      "oncharPrefixes": [
        ">"
      ],
      "groupPolicy": "open",
      "team": "Wildium",
      "channels": [
        "town-square"
      ],
      "accounts": {
        "default": {
          "botToken": "<token>",
          "baseUrl": "https://chat.domain.com",
          "chatmode": "onchar"
        },
        "ted": {
          "botToken": "<token>",
          "baseUrl": "https://chat.domain.com",
          "chatmode": "onchar"
        },

You don't have to set the chat mode to onchar, but it was the only reliable way I found to send a single prompt to all bots and force them all to reply. Either way, a direct @name mention still works.
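With the ">" prefix configured in oncharPrefixes, a single channel message like this should trigger every bot in the channel (the wording is just an example):

```
> give me a one-line status update
```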

YouTrack

You can create a YouTrack user for your bot. After signing in, go to Profile -> Account Security and generate a permanent token.
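The permanent token is then sent as a Bearer token on each request. A sketch, where the host and token are placeholders and /api/users/me is YouTrack's standard endpoint for the authenticated user:

```shell
YT_BASE="https://youtrack.example.com"   # placeholder: your YouTrack URL
YT_TOKEN="perm:abc123"                   # placeholder: the permanent token
AUTH="Authorization: Bearer $YT_TOKEN"
echo "$AUTH"
# Cheap smoke test once the bot user exists:
#   curl -s -H "$AUTH" "$YT_BASE/api/users/me?fields=login,name"
```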

Modifying Dockerfile and Rebuilding

sudo docker build -t openclaw:local . && sudo docker compose down && sudo docker compose up -d

mcporter

After installing, I needed to run

mcporter list --config /home/node/.openclaw/mcporter.json

In mcporter.json, I was able to get a GitHub tool working with

{
  "mcpServers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/",
      "headers": {
        "Authorization": "Bearer <token>"
      }
    }
  }
}

The only issue is that this tool is now available to all agents.

YouTrack

Getting an MCP server to work was probably harder than just creating a skill with the endpoints I actually cared about.

I cloned the repo from https://github.com/devstroop/youtrack-mcp/tree/main
