# Agent Workspace Execution: Local, SSH, and Docker Compared

*Mentiko Team*
Every agent in a chain needs a place to work. It reads files, writes output, runs scripts, and produces events. The workspace is where all of that happens. Mentiko supports three execution modes: local filesystem, SSH remote, and Docker container. Each has clear tradeoffs, and choosing wrong means either unnecessary complexity or unnecessary risk.
## What workspace execution actually means
When an agent runs, it needs a filesystem, a shell, and access to tools. The workspace defines all three. It determines:
- Where the agent's input files are read from
- Where the agent writes its output
- What binaries and tools are available (python, node, curl, etc.)
- What environment variables and secrets are accessible
- How isolated the agent is from other agents and the host system
In Mentiko, the workspace is configured per-agent in the chain definition. Different agents in the same chain can use different workspaces. A research agent might run on a remote server with database access while a writing agent runs locally where you can review output in real time.
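Because each agent carries its own `workspace` block, reading per-agent configuration is just a matter of walking the chain definition. A minimal sketch (the chain shape follows the examples in this post; this is illustrative, not a Mentiko API):

```python
import json

# A trimmed chain definition in the shape used throughout this post.
chain = json.loads("""
{
  "chain": "research-and-write",
  "agents": [
    {"agent": "researcher", "workspace": {"type": "ssh", "host": "data-server.internal"}},
    {"agent": "writer", "workspace": {"type": "local", "directory": "./workspaces/writer"}}
  ]
}
""")

# Map each agent to its workspace type.
modes = {a["agent"]: a["workspace"]["type"] for a in chain["agents"]}
print(modes)  # {'researcher': 'ssh', 'writer': 'local'}
```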
## Local filesystem execution

The simplest mode. Agents run directly on the machine where Mentiko is installed, using the local filesystem for reads and writes.

```json
{
  "agent": "data-formatter",
  "workspace": {
    "type": "local",
    "directory": "./workspaces/data-formatter"
  }
}
```
Strengths:
- Zero setup. Works immediately after install.
- Fastest I/O. No network overhead, no container startup time.
- Full access to local tools, packages, and language runtimes.
- Easy to inspect output. Just open the workspace directory.
Weaknesses:
- No isolation. A misconfigured agent can write anywhere on your filesystem.
- Environment pollution. Agents share the same PATH, packages, and dependencies.
- Not reproducible. Works on your machine, might break on a teammate's.
Best for: Development, prototyping, single-user setups, chains where you trust every agent's behavior.
### Local setup

No configuration needed beyond the chain definition. Mentiko creates the workspace directory if it doesn't exist. Make sure the tools your agents need (python, node, jq, etc.) are installed on the host.

```bash
# Verify your agents' dependencies are available
which python3 node jq curl
```
## SSH remote execution

Agents run on a remote server over SSH. Mentiko opens an SSH session, executes the agent's work in a real PTY, and streams events back.

```json
{
  "agent": "database-analyst",
  "workspace": {
    "type": "ssh",
    "host": "analytics-server.internal",
    "user": "mentiko-agent",
    "key": "$SECRET_SSH_KEY",
    "directory": "/opt/mentiko/workspaces/db-analyst"
  }
}
```
Strengths:
- Run agents where the data lives. No need to sync databases or large datasets to your local machine.
- Access production or staging infrastructure safely through a dedicated agent user with limited permissions.
- Real PTY sessions. Agents interact with the remote shell exactly like a human would.
- Scale horizontally. Run compute-heavy agents on beefy servers while keeping Mentiko on a lightweight machine.
Weaknesses:
- Network dependency. SSH disconnects kill the agent mid-run (Mentiko's watchdog detects this and marks the run as failed).
- Setup overhead. SSH keys, user accounts, firewall rules, and host verification.
- Latency. Every file read/write goes over the network.
Best for: Accessing production data, running on GPU servers, agents that need tools not available locally, multi-machine setups.
### SSH setup

- Create a dedicated user on the remote server:

```bash
# On the remote server
sudo useradd -m -s /bin/bash mentiko-agent
sudo mkdir -p /opt/mentiko/workspaces
sudo chown mentiko-agent:mentiko-agent /opt/mentiko/workspaces
```

- Generate and deploy an SSH key:

```bash
# On your local machine
ssh-keygen -t ed25519 -f ~/.ssh/mentiko-agent -N ""
ssh-copy-id -i ~/.ssh/mentiko-agent.pub mentiko-agent@analytics-server.internal
```

- Store the key in Mentiko's secrets vault:

```bash
mentiko secret set SSH_KEY --file ~/.ssh/mentiko-agent
```

- Reference it in your chain definition as `$SECRET_SSH_KEY`. Mentiko injects it at runtime without writing it to disk.
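The `$SECRET_` prefix convention above (vault key `SSH_KEY`, referenced as `$SECRET_SSH_KEY`) amounts to a substitution step at runtime. A rough sketch of that resolution, with a plain dict standing in for the vault (the `resolve` helper is hypothetical, not Mentiko's implementation):

```python
# Hypothetical stand-in for Mentiko's secrets vault.
vault = {"SSH_KEY": "-----BEGIN OPENSSH PRIVATE KEY-----..."}

def resolve(value, vault):
    """Replace a $SECRET_<NAME> reference with the vault entry for <NAME>."""
    prefix = "$SECRET_"
    if isinstance(value, str) and value.startswith(prefix):
        return vault[value[len(prefix):]]
    return value

workspace = {"type": "ssh", "key": "$SECRET_SSH_KEY", "user": "mentiko-agent"}
resolved = {k: resolve(v, vault) for k, v in workspace.items()}
```

Non-secret values pass through untouched, so the same pass can run over every field of the workspace block.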
Security tip: Give the agent user only the permissions it needs. Read-only access to data directories, write access to its workspace, no sudo. If an agent goes off the rails, the damage is contained by Unix permissions.
## Docker container execution

Agents run inside ephemeral Docker containers. Each agent gets a fresh container from a specified image, does its work, and the container is removed when the agent completes.

```json
{
  "agent": "code-executor",
  "workspace": {
    "type": "docker",
    "image": "mentiko/agent-workspace:python3.12",
    "directory": "/workspace",
    "mounts": [
      { "host": "./input-data", "container": "/data", "readonly": true }
    ],
    "resources": {
      "memory": "2g",
      "cpus": "2"
    }
  }
}
```
Strengths:
- Full isolation. Agents cannot affect the host or other agents.
- Reproducible environments. The image defines exactly what's installed. No "works on my machine."
- Resource limits. Cap memory and CPU per agent.
- Disposable. The container is destroyed after the run. No cleanup needed.
- Safe for untrusted workloads. User-submitted chains from a marketplace can run without risking your infrastructure.
Weaknesses:
- Container startup overhead. Adds 1-3 seconds per agent, depending on image size.
- Image management. You need to build and maintain images with the right tools installed.
- Networking complexity. Agents that need to reach external services require explicit network configuration.
- Docker must be installed and running on the Mentiko host.
Best for: Production workloads, untrusted agent code, CI/CD pipelines, multi-tenant environments, anything where isolation matters.
### Docker setup

- Pull or build your workspace image:

```bash
# Use Mentiko's pre-built image
docker pull mentiko/agent-workspace:python3.12

# Or build your own
cat > Dockerfile <<'DOCKERFILE'
FROM python:3.12-slim
RUN pip install pandas numpy requests
RUN apt-get update && apt-get install -y jq curl && rm -rf /var/lib/apt/lists/*
WORKDIR /workspace
DOCKERFILE
docker build -t my-agent-workspace .
```

- Verify Docker is accessible by the Mentiko process:

```bash
docker run --rm mentiko/agent-workspace:python3.12 echo "workspace ready"
```
- For GPU workloads, install the NVIDIA container toolkit and set the `gpu` flag:

```json
{
  "workspace": {
    "type": "docker",
    "image": "mentiko/agent-workspace:cuda12",
    "gpu": true
  }
}
```
## Choosing by use case

| Scenario | Recommended mode | Why |
|---|---|---|
| Local development | Local | Fast iteration, no setup overhead |
| Staging / QA | Docker | Matches production isolation, reproducible |
| Production | Docker | Isolation, resource limits, disposability |
| Accessing remote databases | SSH | Run where the data lives |
| GPU / ML workloads | SSH or Docker (GPU) | Needs hardware not on your laptop |
| Marketplace chains | Docker | Never trust third-party agent code |
| Quick one-off chains | Local | Minimal friction |
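The table collapses to a small decision rule. As an illustrative sketch (the heuristics are this post's recommendations; `recommend_mode` is a hypothetical helper, not a Mentiko API):

```python
def recommend_mode(trusted: bool, remote_data: bool, needs_isolation: bool) -> str:
    """Pick a workspace mode using the heuristics from the table above."""
    if not trusted or needs_isolation:
        return "docker"   # untrusted or production-grade work gets isolation
    if remote_data:
        return "ssh"      # send the agent to the data
    return "local"        # default: minimal friction

print(recommend_mode(trusted=True, remote_data=False, needs_isolation=False))  # local
```

Isolation outranks data gravity here: an untrusted agent goes in Docker even if the data is remote.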
## Multi-workspace chains

The most practical pattern: different agents in the same chain use different workspaces. This isn't a special feature. It's just how the chain definition works -- each agent specifies its own workspace independently.
A common pattern for content pipelines:
```json
{
  "chain": "research-and-write",
  "agents": [
    {
      "agent": "researcher",
      "workspace": {
        "type": "ssh",
        "host": "data-server.internal",
        "user": "mentiko-agent",
        "key": "$SECRET_SSH_KEY",
        "directory": "/opt/mentiko/workspaces/research"
      },
      "triggers": ["research:start"],
      "emits": ["research:complete"]
    },
    {
      "agent": "writer",
      "workspace": {
        "type": "local",
        "directory": "./workspaces/writer"
      },
      "triggers": ["research:complete"],
      "emits": ["draft:ready"]
    },
    {
      "agent": "code-validator",
      "workspace": {
        "type": "docker",
        "image": "mentiko/agent-workspace:node20",
        "directory": "/workspace"
      },
      "triggers": ["draft:ready"],
      "emits": ["validation:complete"]
    }
  ]
}
```
The researcher runs on a remote server where it has access to internal databases. Its output event (`research:complete`) is written to the event bus. The writer picks it up locally where you can watch the draft appear in real time. The code validator runs in Docker so any code snippets in the draft are tested in isolation.
Event handoff between workspaces is automatic. Mentiko's event bus transfers event files regardless of where the agents run. The agents don't know or care about each other's workspace types.
## Practical recommendations
Start local, add isolation as needed. Don't over-engineer your first chain with Docker. Get the agent logic right locally, then wrap agents in containers when you move to production.
Use SSH for data gravity. If your agent needs to read 50GB of logs, don't transfer the data -- send the agent to the data. SSH workspaces make this trivial.
Pin Docker images to digests for production chains. Tags can change. Digests don't. A chain that worked yesterday shouldn't break because latest moved.
```json
{
  "image": "mentiko/agent-workspace@sha256:a1b2c3d4..."
}
```
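A pre-deploy check for this is easy to script. A sketch that flags tag-based image references in a chain definition (`unpinned_images` is a hypothetical helper; the `@sha256:` form is standard Docker digest syntax, and the chain shape follows this post's examples):

```python
import re

# A Docker digest reference ends in @sha256: followed by 64 hex characters.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def unpinned_images(agents):
    """Return agents whose Docker image is referenced by tag, not digest."""
    return [
        a["agent"]
        for a in agents
        if a["workspace"].get("type") == "docker"
        and not DIGEST_RE.search(a["workspace"].get("image", ""))
    ]

agents = [
    {"agent": "ok", "workspace": {"type": "docker",
        "image": "mentiko/agent-workspace@sha256:" + "a" * 64}},
    {"agent": "risky", "workspace": {"type": "docker",
        "image": "mentiko/agent-workspace:python3.12"}},
]
print(unpinned_images(agents))  # ['risky']
```

Wire a check like this into CI so a chain definition with a floating tag never reaches production.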
Set resource limits on Docker workspaces. An agent stuck in a loop will consume all available memory. The `resources` field prevents one misbehaving agent from taking down the host.
Rotate SSH keys. Store them in Mentiko's secrets vault and rotate on a schedule. Never hardcode keys in chain definitions.
Test workspace compatibility before long runs. A 20-agent chain that fails on agent 15 because jq isn't installed in the Docker image wastes everyone's time. Run a smoke test first:
```bash
mentiko workspace test --chain ./chains/my-chain.json
```
This spins up each workspace, verifies the required tools are present, and reports any missing dependencies before you commit to a full run.
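For local workspaces you can approximate the tool check with the standard library alone. A minimal sketch using `shutil.which` (the tool names are examples; this covers only PATH lookup, not the full workspace verification described above):

```python
import shutil

def missing_tools(required):
    """Return the required tools that are not found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

# Example tool list; adjust to what your agents actually invoke.
print(missing_tools(["definitely-not-a-real-tool"]))  # ['definitely-not-a-real-tool']
```

Running this on the host before a long chain catches the "agent 15 has no jq" class of failure in seconds.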