Environment Variables and Secrets in Mentiko
Mentiko Team
Every agent chain touches external services. Your research agent calls an LLM API. Your notification agent posts to Slack. Your data agent queries a database. Each of these needs credentials, and how you manage those credentials determines whether your chain is production-ready or a security incident waiting to happen.
Mentiko separates configuration into two categories: environment variables (non-sensitive config) and secrets (sensitive credentials). They're stored differently, accessed differently, and have different lifecycle rules.
Environment variables vs. secrets
Environment variables are plain configuration values: model names, output formats, retry counts, feature flags. They're not sensitive. Seeing MODEL=gpt-5.4 in a log file is fine.
Secrets are credentials: API keys, database passwords, webhook tokens, service account files. Seeing OPENAI_API_KEY=sk-abc123... in a log file is a breach.
Mentiko treats these differently:
- Environment variables are stored in plain text in your chain config or .env files. They appear in event metadata and debug output.
- Secrets are stored in Mentiko's encrypted vault. They're injected into agent workspaces at runtime and scrubbed from all output -- event files, logs, and debug traces.
Setting environment variables
There are three ways to set environment variables for a chain.
In the chain definition:
{
"name": "content-pipeline",
"variables": {
"MODEL": "gpt-5.4",
"OUTPUT_FORMAT": "markdown",
"MAX_TOKENS": "4000"
},
"agents": [
{
"name": "writer",
"prompt": "Write a {OUTPUT_FORMAT} article using {MODEL}. Stay under {MAX_TOKENS} tokens.",
"triggers": ["chain:start"],
"emits": ["chain:complete"]
}
]
}
Variables defined in the chain JSON are the defaults. They travel with the chain definition, they're version-controlled, and they're visible to anyone reading the config.
At runtime, override them from the CLI:
mentiko run content-pipeline \
--var MODEL="claude-4" \
--var MAX_TOKENS="8000"
Runtime overrides take precedence over chain defaults. The OUTPUT_FORMAT stays as markdown from the chain definition. MODEL and MAX_TOKENS use the values passed at runtime.
Or set them in a .env file for workspace-level defaults:
# .env
MODEL=gpt-5.4
OUTPUT_FORMAT=markdown
MAX_TOKENS=4000
LOG_LEVEL=info
The precedence order: runtime flags override .env values, which override chain definition defaults.
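The precedence rules amount to a layered merge where later sources win. Here's an illustrative sketch in Python (`resolve_variables` is hypothetical, not Mentiko's actual implementation):

```python
def resolve_variables(chain_defaults, dotenv_values, runtime_flags):
    """Merge variable sources in precedence order; later sources win."""
    resolved = {}
    for source in (chain_defaults, dotenv_values, runtime_flags):
        resolved.update(source)
    return resolved

chain_defaults = {"MODEL": "gpt-5.4", "OUTPUT_FORMAT": "markdown", "MAX_TOKENS": "4000"}
dotenv_values = {"LOG_LEVEL": "info"}
runtime_flags = {"MODEL": "claude-4", "MAX_TOKENS": "8000"}

env = resolve_variables(chain_defaults, dotenv_values, runtime_flags)
# MODEL and MAX_TOKENS come from the runtime flags;
# OUTPUT_FORMAT keeps its chain-definition default.
```

This matches the earlier example: `--var` flags win, `.env` fills in workspace defaults, and the chain JSON is the fallback.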
Managing secrets with the vault
Secrets never go in chain definitions. They never go in .env files committed to git. They go in Mentiko's secrets vault.
Add a secret:
mentiko secret set OPENAI_API_KEY
# Prompts for value (not echoed to terminal)
mentiko secret set ANTHROPIC_API_KEY
mentiko secret set SLACK_WEBHOOK_URL
mentiko secret set DB_PASSWORD
The vault encrypts secrets at rest using a key derived from your instance configuration. Secrets are decrypted only when an agent workspace starts, and only the secrets that agent is authorized to access.
List stored secrets (values are never shown):
mentiko secret list
# Output:
# OPENAI_API_KEY set 2026-03-20 used by: content-pipeline, research-chain
# ANTHROPIC_API_KEY set 2026-03-20 used by: code-review-chain
# SLACK_WEBHOOK_URL set 2026-03-21 used by: alerting-chain
# DB_PASSWORD set 2026-03-21 used by: data-enrichment
The used by column shows which chains reference each secret. If you rotate a key, you know exactly which chains are affected.
Referencing secrets in chain definitions
Secrets are referenced in chain definitions using the secrets block, not the variables block:
{
"name": "research-chain",
"variables": {
"MODEL": "gpt-5.4",
"TOPIC": "default topic"
},
"secrets": ["OPENAI_API_KEY"],
"agents": [
{
"name": "researcher",
"prompt": "Research {TOPIC} thoroughly.",
"triggers": ["chain:start"],
"emits": ["chain:complete"]
}
]
}
The secrets array lists the vault keys this chain needs. When the chain runs, Mentiko resolves those keys from the vault and injects them as environment variables into each agent's workspace. The agent's code or LLM calls can access OPENAI_API_KEY as a standard environment variable -- but it never appears in the chain JSON, event files, or logs.
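From the agent's perspective, a vault-injected secret is indistinguishable from any other environment variable. A minimal sketch (the demo value and header shape are illustrative; in a real run Mentiko injects the value before the agent starts):

```python
import os

# For demonstration only: stand in for the value Mentiko would inject.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-not-a-real-key")

# Agent code reads the secret like any environment variable.
api_key = os.environ["OPENAI_API_KEY"]

# Pass it to whatever client the agent uses, e.g. as an auth header.
headers = {"Authorization": f"Bearer {api_key}"}
```

The key never appears in the chain JSON, and the scrubber removes its value from any output the agent produces.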
Per-agent secret scoping
Not every agent in a chain needs every secret. A research agent needs the LLM API key. A notification agent needs the Slack webhook. Neither needs the other's credentials.
Scope secrets per agent:
{
"name": "research-and-notify",
"agents": [
{
"name": "researcher",
"prompt": "Research the topic and summarize findings.",
"triggers": ["chain:start"],
"emits": ["research:complete"],
"secrets": ["OPENAI_API_KEY"]
},
{
"name": "notifier",
"prompt": "Post the research summary to the team channel.",
"triggers": ["research:complete"],
"emits": ["chain:complete"],
"secrets": ["SLACK_WEBHOOK_URL"]
}
]
}
The researcher's workspace only has OPENAI_API_KEY. The notifier's workspace only has SLACK_WEBHOOK_URL. If the researcher's execution is compromised -- a prompt injection, a malicious tool call -- it can't access the Slack webhook because it was never injected.
This is the principle of least privilege applied to agent workspaces. Each agent sees only what it needs.
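Conceptually, the injection step builds each workspace environment only from that agent's secrets array. A hypothetical sketch (the vault values are placeholders):

```python
# Decrypted vault contents (placeholder values).
vault = {
    "OPENAI_API_KEY": "sk-demo",
    "SLACK_WEBHOOK_URL": "https://hooks.slack.example/T000/B000",
}

agents = [
    {"name": "researcher", "secrets": ["OPENAI_API_KEY"]},
    {"name": "notifier", "secrets": ["SLACK_WEBHOOK_URL"]},
]

def workspace_env(agent, vault):
    """Inject only the secrets this agent is authorized to access."""
    return {key: vault[key] for key in agent.get("secrets", [])}

researcher_env = workspace_env(agents[0], vault)
notifier_env = workspace_env(agents[1], vault)
# The researcher never sees the Slack webhook, and vice versa.
```

A compromised agent can only leak what was injected into its own environment, which is exactly the blast-radius limit the scoping is for.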
Secret scrubbing
Even with vault storage and per-agent scoping, secrets can leak through agent output. An LLM might echo back the API key in its response. A debug log might print environment variables. A stack trace might include connection strings.
Mentiko scrubs known secret values from all output paths:
- Agent stdout/stderr -- scrubbed before writing to the run log
- Event files -- secret values replaced with [REDACTED] in event metadata
- Debug traces -- scrubbed before display in the dashboard
- Error messages -- scrubbed before routing to error handlers
The scrubbing is value-based, not heuristic: Mentiko knows the exact values of all injected secrets (it just decrypted them) and replaces any occurrence in output text. This catches cases where the LLM or tool output inadvertently includes credential values.
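The replacement pass itself can be sketched in a few lines (illustrative only, not Mentiko's implementation):

```python
def scrub(text, secret_values, placeholder="[REDACTED]"):
    """Replace every known secret value in text with a placeholder."""
    # Replace longer values first so a shorter secret that happens to be
    # a substring of a longer one doesn't leave fragments behind.
    for value in sorted(secret_values, key=len, reverse=True):
        if value:
            text = text.replace(value, placeholder)
    return text

injected = ["sk-abc123supersecret", "hunter2"]
log_line = "calling API with key sk-abc123supersecret as user with pw hunter2"
print(scrub(log_line, injected))
# calling API with key [REDACTED] as user with pw [REDACTED]
```

Because the scrubber matches exact decrypted values, it works on any output path without needing to guess what a credential looks like.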
Rotating secrets
When you rotate an API key, update it in the vault:
mentiko secret set OPENAI_API_KEY
# Enter new value
The next chain run uses the new key. No chain definitions to update, no deployments to trigger, no containers to restart. The vault is the single source of truth for credential values.
For zero-downtime rotation on scheduled chains, Mentiko supports dual-key mode:
mentiko secret set OPENAI_API_KEY --version 2
# Enter new key
# Both versions are active until you finalize
mentiko secret finalize OPENAI_API_KEY --version 2
# Old version is archived
During the transition period, running chains use version 1 and new chain runs use version 2. Once you've confirmed the new key works, finalize it to archive the old one.
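One way to picture the dual-key behavior: in-flight runs stay pinned to the version they started with, while new runs resolve to the latest active version. A hypothetical sketch (the version store and `resolve` function are illustrative, not Mentiko's internals):

```python
# Versioned secret store with both keys active during rotation.
versions = {"OPENAI_API_KEY": {1: "sk-old", 2: "sk-new"}}

def resolve(key, pinned_version=None):
    """Return the pinned version for a running chain, else the newest."""
    available = versions[key]
    if pinned_version is not None and pinned_version in available:
        return available[pinned_version]
    return available[max(available)]

in_flight = resolve("OPENAI_API_KEY", pinned_version=1)  # still "sk-old"
new_run = resolve("OPENAI_API_KEY")                      # gets "sk-new"
```

After `mentiko secret finalize`, version 1 would be archived and every resolution would return the new key.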
Common mistakes
Do not put secrets in the variables block. Variables are logged. Secrets are scrubbed. Mixing them up means your API key shows up in event files.
Do not hardcode secrets in prompts. "prompt": "Call the API with key sk-abc123" defeats every protection layer. Use "secrets": ["API_KEY"] and let the agent read it from the environment.
Do not commit .env files with secrets. Use .env for non-sensitive defaults only. Secrets go in the vault.
Do not share secrets across agents unnecessarily. Use per-agent scoping. Fewer access paths means fewer exposure vectors.
Environment variables handle configuration. The vault handles credentials. Per-agent scoping limits blast radius. Scrubbing catches accidental exposure. That's the baseline for running agent chains in production.
For a deeper look at the security architecture, see Securing API Keys in Agent Workflows and Securing AI Agent Workflows.