
Migrating from LangChain to Mentiko: A Step-by-Step Guide

Mentiko Team

You've been running LangChain in production for a while. The agents work. The chains produce output. But you're hitting walls: debugging opaque chain internals, managing Python environments across servers, fighting abstraction layers that make simple things complicated, and maintaining code that only the person who wrote it understands.

Mentiko takes a fundamentally different approach to agent orchestration. This guide walks you through what maps between the two systems, what changes, and how to migrate without rewriting everything at once.

What LangChain does well

Credit where it's due. LangChain pioneered accessible agent tooling. Its strengths are real:

  • Deep Python ecosystem integration. If there's a Python library for it, LangChain probably has a wrapper.
  • Massive community. Lots of examples, lots of people who've solved the problem you're hitting.
  • Rapid prototyping. You can go from zero to a working agent in a Jupyter notebook fast.
  • Model abstraction. Swap between OpenAI, Anthropic, local models, and others with minimal code changes.

If you're in the prototyping phase, running everything on a single machine, and the developer who built it is the one maintaining it -- LangChain might be exactly what you need. This guide is for teams that have outgrown that stage.

Where the friction starts

The problems that drive teams to migrate usually fall into a few categories:

Debugging is archaeology. When a LangChain chain fails, you're reading stack traces through multiple abstraction layers. The actual prompt, response, and decision are buried under AgentExecutor, LLMChain, OutputParser, and custom callbacks. Mentiko writes every input, output, and decision to an event file. cat events/research-complete.event -- that's your debugging story.
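
To make that concrete, an event file might contain something like the following. The schema here is illustrative, not Mentiko's documented format -- the point is that the prompt, output, and timing are sitting in a plain file you can cat, grep, and diff:

```json
{
  "event": "research:complete",
  "agent": "researcher",
  "timestamp": "2025-06-02T09:14:07Z",
  "input": "Research the topic: agent orchestration",
  "output": {
    "findings": ["..."],
    "sources": ["..."]
  }
}
```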

Deployment is a Python problem. Your LangChain agents need the right Python version, the right packages, and compatibility between packages that update weekly. Mentiko agents run in workspaces -- local, Docker, or SSH -- defined in the chain config, not a requirements.txt that's three months stale.

Chains are code, not configuration. A LangChain chain is a Python program. You can't look at it and understand the flow without reading the code. Mentiko chains are JSON. A product manager can read the chain definition and understand what it does. DevOps can review it in a PR. It diffs cleanly, versions cleanly, and deploys without a build step.

Observability requires instrumentation. LangChain has LangSmith and callback handlers, but you have to opt in and wire them up. In Mentiko, observability is structural. Every event is a file. Every run has a timeline. You get tracing for free because the architecture is the trace.

Conceptual mapping

Here's how LangChain concepts map to Mentiko:

  • Chain -> Chain. Python classes wired together become a JSON file with agents, triggers, and events.
  • Agent -> Agent. ReAct loop with tools becomes a prompt with a workspace and emitted events.
  • Tool -> Workspace. Python wrapper functions become execution environments (bash, Docker, SSH) where the agent uses existing CLI tools directly.
  • Memory -> Events. Conversation history objects become event files passed between agents.
  • Callbacks -> Event Stream. Custom callback handlers become the event system itself -- observability is structural, not opt-in.

Step-by-step migration

Step 1: Map your existing chain

Write down what your LangChain chain actually does. Not the code -- the workflow. "Agent A researches a topic. Agent B writes a draft from the research. Agent C edits the draft." Strip away the LangChain abstractions and describe the pipeline in plain English.

If you can't describe what each step does without referencing LangChain internals, that's a sign the abstractions have become the architecture. Simplify before migrating.

Step 2: Define agents in Mentiko

For each step in your workflow, create a Mentiko agent. The key fields:

{
  "name": "researcher",
  "prompt": "Research the given topic. Focus on recent developments, key statistics, and expert opinions. Output structured findings as JSON.",
  "triggers": ["chain:start"],
  "emits": ["research:complete"],
  "workspace": "docker",
  "model": "claude-sonnet-4-20250514"
}

The prompt replaces your LangChain agent's system message plus any prompt template logic. The triggers and emits replace the Python wiring that connected your LangChain components. The workspace replaces your tool definitions -- instead of wrapping requests.get as a LangChain tool, the agent can just curl in its workspace.

Step 3: Create the chain JSON

Assemble your agents into a chain definition:

{
  "name": "content-pipeline",
  "description": "Research, write, and edit content on a given topic.",
  "agents": [
    {
      "name": "researcher",
      "prompt": "Research {TOPIC}. Output findings as structured JSON.",
      "triggers": ["chain:start"],
      "emits": ["research:complete"],
      "workspace": "docker"
    },
    {
      "name": "writer",
      "prompt": "Write a 1500-word article from the research findings.",
      "triggers": ["research:complete"],
      "emits": ["draft:complete"]
    },
    {
      "name": "editor",
      "prompt": "Edit for clarity, accuracy, and tone. Fix grammar. Tighten prose.",
      "triggers": ["draft:complete"],
      "emits": ["chain:complete"]
    }
  ]
}

This is your entire pipeline. No imports, no class hierarchies, no callback handlers. The chain definition is the documentation.

Step 4: Migrate your tools to workspaces

This is where the migration diverges most. In LangChain, you write Python wrapper functions for every capability. In Mentiko, you configure a workspace and let the agent use existing tools directly.

A search_web tool that wraps the Serper API becomes a workspace with curl and the API key as an environment variable. A run_sql tool becomes a workspace with psql available. The mental shift: stop wrapping capabilities in Python functions. The agent already knows how to use curl, jq, psql, and grep.
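
As a sketch of what that looks like in practice: instead of a Python search_web wrapper, the workspace just needs curl and jq. The live call below is commented out because it needs a real key, and the endpoint and field names are assumptions based on Serper's public API; the parsing half runs against a canned payload:

```shell
# What the agent might run in its workspace. Live call (requires
# SERPER_API_KEY; endpoint and response shape are assumptions):
#
# curl -s -X POST https://google.serper.dev/search \
#   -H "X-API-KEY: $SERPER_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d '{"q":"agent orchestration"}' > results.json
#
# Parsing the response is a one-liner, shown here against a canned payload:
printf '%s' '{"organic":[{"title":"First result"},{"title":"Second result"}]}' > results.json
jq -r '.organic[].title' results.json
```

No wrapper function, no tool schema, no registration step -- the capability is just a binary on the PATH.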

Step 5: Test the chain

Run the chain with the same inputs you used in LangChain. Compare outputs. The first run won't be identical -- prompts tuned for LangChain's ReAct loop need adjustment for Mentiko's event-driven flow. Common adjustments:

  • Remove references to "tools" in prompts. The agent doesn't have tools; it has a workspace.
  • Add explicit output format instructions. LangChain's output parsers handled this silently. In Mentiko, tell the agent what format the next agent expects.
  • Simplify complex prompts. LangChain's multi-step ReAct reasoning often requires elaborate instructions. Mentiko agents do one thing per step, so each prompt can be more focused.
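
For the output-format point in particular, spelling out the contract in the prompt is usually enough. A hypothetical researcher definition (the field names follow the earlier examples; the prompt wording is just one way to do it):

```json
{
  "name": "researcher",
  "prompt": "Research {TOPIC}. Output a single JSON object with exactly two keys: 'findings' (array of strings) and 'sources' (array of URLs). No prose outside the JSON.",
  "triggers": ["chain:start"],
  "emits": ["research:complete"],
  "workspace": "docker"
}
```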

Step 6: Add scheduling and error handling

Once the chain produces correct output, add the production features your LangChain setup probably handles through cron jobs and try/except blocks:

{
  "schedule": "0 9 * * 1-5",
  "retry": { "max_attempts": 3, "backoff": "exponential" },
  "dead_letter": { "enabled": true, "alert_channel": "slack:pipeline-alerts" }
}

Scheduling, retries, and dead-letter queues are declarative in the chain config. No external cron, no wrapper scripts, no custom error handling code.

Common gotchas

Prompt tuning takes a pass. Prompts optimized for LangChain's agent loop (with tool-use instructions, ReAct formatting, and output parser expectations) need rewriting for Mentiko's single-task-per-agent model. Budget a day for prompt tuning per chain.

Stateful agents need rethinking. If your LangChain agents maintain conversation memory across calls, you'll need to pass that state through events or use Mentiko's workspace persistence. Mentiko agents are stateless by default -- state lives in the events, not the agent.
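
One way to carry that state is to put it in the event payload itself, so the next agent receives both the artifact and the context that produced it. The shape below is illustrative, not a documented Mentiko schema:

```json
{
  "event": "draft:complete",
  "payload": {
    "draft": "...",
    "context": {
      "topic": "agent orchestration",
      "decisions": ["use case-study structure", "cite primary sources"]
    }
  }
}
```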

Custom output parsers disappear. LangChain's PydanticOutputParser and friends don't exist in Mentiko. Instead, specify the output format in the prompt and validate in the next agent. This is actually simpler, but it's a different pattern.
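
The validate-in-the-next-agent pattern can be as small as a jq check at the top of the downstream agent's work. This sketch assumes the researcher wrote its findings to research.json in a shared workspace, and that the required keys are whatever your own prompt specified:

```shell
# Stand-in for the upstream agent's output (normally already on disk):
printf '%s' '{"findings":["..."],"sources":["..."]}' > research.json

# Fail fast if the upstream output is missing required keys.
# jq -e sets the exit status from the expression's truthiness.
if jq -e 'has("findings") and has("sources")' research.json > /dev/null; then
  echo "input valid, proceeding"
else
  echo "upstream output malformed" >&2
  exit 1
fi
```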

Python-specific integrations need wrappers. If your LangChain chain calls a Python library that has no CLI equivalent, you'll need a small script in the workspace that exposes it. Most common integrations (APIs, databases, file processing) have CLI tools, but niche Python libraries might need a thin wrapper.

When NOT to migrate

Don't migrate if: you're a solo developer and LangChain's complexity isn't hurting you, your chain is deeply integrated with Python libraries that have no CLI equivalent, or you're in rapid prototyping mode where the chain changes daily. Stabilize first.

Migration makes sense when you need: multi-user access, visual chain editing, self-hosted deployment, clean observability, or when your LangChain codebase has become unmaintainable.

The migration path is incremental

Start with one chain -- ideally one that's been stable and runs on a schedule. Migrate it, run both in parallel for a week, compare outputs, then cut over. Each subsequent migration gets faster because you've already solved the common patterns.

Start your first Mentiko chain with our getting started guide, or see how Mentiko compares on our comparison page.
