
Why JSON Chain Definitions Beat Python Code for Agent Orchestration

Mentiko Team

Every agent orchestration framework makes a choice: how do users define their workflows? The two dominant approaches are imperative code (write Python that calls APIs) and declarative definitions (write JSON/YAML that describes the pipeline). Most frameworks chose Python. We chose JSON. Here's why.

Imperative vs declarative: what's the actual difference?

An imperative definition tells the system how to do something. A declarative definition tells the system what you want.

Imperative (Python):

from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Research the topic thoroughly",
    backstory="You are an expert researcher...",
    llm="gpt-5.4"
)
writer = Agent(
    role="Writer",
    goal="Write a compelling blog post",
    backstory="You are a skilled writer...",
    llm="gpt-5.4"
)
research_task = Task(
    description="Research {topic}",
    agent=researcher,
    expected_output="Research notes"
)
write_task = Task(
    description="Write blog post from research",
    agent=writer,
    expected_output="Blog post",
    context=[research_task]
)
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
result = crew.kickoff(inputs={"topic": "AI agents"})

Declarative (JSON):

{
  "name": "content-pipeline",
  "agents": [
    {
      "name": "researcher",
      "model": "gpt-5.4",
      "prompt": "Research {topic} thoroughly. Compile findings with sources.",
      "triggers": ["chain:start"],
      "emits": ["research:complete"]
    },
    {
      "name": "writer",
      "model": "gpt-5.4",
      "prompt": "Write a compelling blog post from the research findings.",
      "triggers": ["research:complete"],
      "emits": ["chain:complete"]
    }
  ]
}

Both define the same workflow. The Python version is about 26 lines; the JSON version is 19. But the differences go far beyond line count.

Where JSON wins

Portability

A JSON chain definition runs on any system that has a Mentiko runtime. It doesn't depend on a Python interpreter, a specific library version, or an OS with the right packages installed. You can execute it on a developer's laptop, a CI server, a Docker container, or a production cluster without changing a character.

Python orchestration code is tied to its runtime. The CrewAI example above requires Python 3.10+, the crewai package at a compatible version, and any transitive dependencies. Move it to a new machine and you're troubleshooting pip install failures. Put it in a container and you're maintaining a Dockerfile. The workflow definition is inseparable from its execution environment.

JSON is the lingua franca of APIs. Every language, every platform, every tool can read and write it. Your chain definition is data, not code. That distinction matters.
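Because the definition is data, any language with a JSON parser can load and inspect it. A minimal Python sketch, using a trimmed version of the content-pipeline definition above:

```python
import json

chain_json = """
{
  "name": "content-pipeline",
  "agents": [
    {"name": "researcher", "model": "gpt-5.4",
     "triggers": ["chain:start"], "emits": ["research:complete"]},
    {"name": "writer", "model": "gpt-5.4",
     "triggers": ["research:complete"], "emits": ["chain:complete"]}
  ]
}
"""

# The workflow is now an ordinary data structure -- no orchestration
# framework imported, no classes instantiated.
chain = json.loads(chain_json)
for agent in chain["agents"]:
    print(agent["name"], "->", agent["model"])
```

The same file could just as easily be read by a Go service, a TypeScript UI, or a shell script piping through `jq`.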

Version control and diffing

JSON diffs are clean and reviewable. When someone changes an agent's prompt or switches a model, the diff shows exactly what changed:

  {
    "name": "researcher",
-   "model": "gpt-5.4",
+   "model": "claude-sonnet",
    "prompt": "Research {topic} thoroughly."
  }

Python diffs are noisy. Changing the same thing in Python code might touch multiple lines, involve import changes, or require updating constructor arguments across several objects. Code review for workflow changes becomes code review for application changes -- you're reviewing logic, not intent.

This matters for teams. When the marketing team wants to tweak an agent's prompt, they should be editing a string in a JSON file, not navigating Python class hierarchies. When the change lands in a PR, the reviewer sees "prompt changed from X to Y" not "line 47 of orchestrator.py has a new string."

Visual builder compatibility

JSON is trivially renderable as a visual graph. Each agent is a node. Triggers and emits are edges. A drag-and-drop chain builder reads and writes JSON -- it can't read and write arbitrary Python.

This is why Mentiko's visual chain builder exists and why frameworks built on arbitrary Python definitions struggle to offer one. You can't reliably parse arbitrary Python into a visual graph and reassemble it after the user drags a node. You can absolutely do this with JSON.

Visual builders aren't just for non-technical users. They're the fastest way for anyone to understand a chain's structure at a glance. A 10-agent chain in JSON renders as a clear flowchart. The same chain in Python is 100+ lines that you have to read sequentially to understand the data flow.
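The node-and-edge mapping is mechanical. A minimal sketch, assuming the `triggers`/`emits` schema shown above: an edge runs from agent A to agent B whenever an event A emits appears among B's triggers.

```python
import json

chain = json.loads("""
{
  "agents": [
    {"name": "researcher", "triggers": ["chain:start"],
     "emits": ["research:complete"]},
    {"name": "writer", "triggers": ["research:complete"],
     "emits": ["chain:complete"]}
  ]
}
""")

# Nodes are agents; an edge A -> B exists when an event A emits
# intersects B's triggers. This list is all a renderer needs.
edges = [
    (a["name"], b["name"])
    for a in chain["agents"]
    for b in chain["agents"]
    if set(a["emits"]) & set(b["triggers"])
]
print(edges)  # [('researcher', 'writer')]
```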

Runtime flexibility

Because JSON is data, the runtime can do things with it that are impossible with code:

Validation before execution. The runtime can validate the entire chain before running anything. Are all trigger events matched by an emitter? Are there circular dependencies? Is every specified model valid? With Python, you discover these issues at runtime -- maybe 3 agents into a 10-agent chain.
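The unmatched-trigger check, for example, is a few lines over the structured definition. A minimal sketch, assuming the schema above and treating `chain:start` as always available; a real runtime would also check cycles, model names, and tool references:

```python
import json

def validate(chain: dict) -> list[str]:
    """Static checks that run before any agent executes (sketch only)."""
    errors = []
    # Every event some agent emits, plus the built-in start event.
    emitted = {e for a in chain["agents"] for e in a["emits"]} | {"chain:start"}
    for agent in chain["agents"]:
        for trigger in agent["triggers"]:
            if trigger not in emitted:
                errors.append(
                    f"{agent['name']}: trigger '{trigger}' is never emitted"
                )
    return errors

# A broken chain: nothing ever emits research:complete.
chain = json.loads("""
{"agents": [
  {"name": "writer", "triggers": ["research:complete"],
   "emits": ["chain:complete"]}
]}
""")
print(validate(chain))
```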

Dynamic modification. A running chain can be modified without restarting. Swap a model, change a prompt, add a new agent -- update the JSON and the next run picks up the changes. With Python, you redeploy.
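A minimal sketch of the reload side, assuming the runtime re-reads the definition file at the start of each run (`load_chain` is a hypothetical helper for illustration, not a Mentiko API):

```python
import json
import pathlib

def load_chain(path: str) -> dict:
    """Re-read the definition on every run: edits to the JSON file
    take effect on the next execution, with no redeploy."""
    return json.loads(pathlib.Path(path).read_text())
```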

Introspection. The runtime knows everything about the chain because the definition is structured data. It can calculate cost estimates before running, visualize execution plans, identify optimization opportunities (parallel agents that are defined sequentially), and generate documentation automatically.
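Cost estimation, for instance, reduces to a walk over the agent list. An illustrative sketch only: the per-call prices below are made-up placeholders, and a real estimator would account for token counts:

```python
# Placeholder prices for illustration -- not real pricing for any model.
PRICE_PER_CALL = {"gpt-5.4": 0.02, "claude-opus": 0.05}

def estimate_cost(chain: dict) -> float:
    """Rough pre-run estimate: assume one model call per agent per run.
    Only possible because the chain is structured data."""
    return sum(
        PRICE_PER_CALL.get(a.get("model", ""), 0.0)
        for a in chain["agents"]
    )

chain = {"agents": [{"name": "researcher", "model": "gpt-5.4"},
                    {"name": "writer", "model": "claude-opus"}]}
print(f"${estimate_cost(chain):.2f}")  # $0.07
```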

Multi-runtime execution. The same JSON chain can run on different runtimes: a lightweight CLI runner for development, a distributed runner for production, a mock runner for testing. The Python version would need to be rewritten or wrapped for each.

Where Python wins

This isn't a one-sided argument. Python orchestration has real advantages.

Complex conditional logic

When your workflow requires logic that's more complex than "if score > threshold, route to A else B," Python is more natural. Nested conditionals, loops over dynamic data, error handling with retry logic -- these are awkward in declarative JSON. You end up inventing a mini programming language inside your JSON schema, which is worse than just using a real programming language.
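For comparison, retry-with-backoff and score-based routing take a few lines of ordinary Python (`agent_call` is a stand-in for whatever invokes the agent); expressing the same in JSON means designing schema keys for loops, conditions, and error handling:

```python
import time

def run_with_retry(agent_call, max_attempts=3, backoff=1.0):
    """Retry with exponential backoff -- trivial in Python, but a whole
    mini-language if you try to encode it in a JSON schema."""
    for attempt in range(1, max_attempts + 1):
        try:
            return agent_call()
        except RuntimeError:
            if attempt == max_attempts:
                raise
            time.sleep(backoff * 2 ** (attempt - 1))

def route(score: float) -> str:
    """Score-based routing: nested conditionals read naturally here."""
    if score > 0.9:
        return "publish"
    if score > 0.5:
        return "revise"
    return "discard"
```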

Custom tool integration

If an agent needs to call a custom function -- query a database, hit an internal API, run a computation -- Python makes this trivial. Import the library, call the function. In JSON, custom tools need to be registered in the runtime and exposed to the chain definition. It's doable but it's an extra step.

Rapid prototyping

When you're exploring an idea and don't know what the chain looks like yet, writing Python in a notebook is faster than editing JSON and running it through a CLI. The feedback loop is tighter. You can inspect intermediate results, tweak prompts, and iterate in the same environment.

Ecosystem

Python's ML/AI ecosystem is enormous. LangChain, LlamaIndex, Hugging Face, pandas, scikit-learn -- if your chain needs to integrate with these tools, Python gives you direct access. JSON definitions need explicit integration points.

The hybrid approach

The best answer isn't pure JSON or pure Python. It's JSON for structure, Python for tools.

Define your chain's topology -- agents, triggers, models, prompts -- in JSON. When an agent needs to execute custom logic, it calls a tool defined in Python. The chain definition stays portable, diffable, and visual-builder compatible. The complex logic lives in Python functions that are registered as tools.

{
  "name": "analysis-pipeline",
  "agents": [
    {
      "name": "data-fetcher",
      "triggers": ["chain:start"],
      "tools": ["fetch_sales_data"],
      "prompt": "Use the fetch_sales_data tool to get Q1 numbers for {company}.",
      "emits": ["data:ready"]
    },
    {
      "name": "analyzer",
      "triggers": ["data:ready"],
      "model": "claude-opus",
      "prompt": "Analyze the sales data and identify trends.",
      "emits": ["chain:complete"]
    }
  ]
}
# tools/fetch_sales_data.py
def fetch_sales_data(company: str, quarter: str) -> dict:
    """Registered as a tool available to agents."""
    db = connect_to_warehouse()  # illustrative: your warehouse client here
    return db.query(
        "SELECT * FROM sales WHERE company = %s AND quarter = %s",
        (company, quarter),
    )

The chain definition is still JSON. Anyone can read it, the visual builder can render it, diffs are clean. But the data-fetcher agent has access to a Python function that does the complex work. The orchestration layer is declarative. The execution layer is imperative where it needs to be.

What this means in practice

Teams that use JSON chain definitions share some common traits:

Faster onboarding. New team members read the chain JSON and understand the workflow in minutes. They don't need to understand the orchestration framework's Python API.

More contributors. Product managers, data analysts, and ops people can modify chain definitions. They can't (and shouldn't) modify Python orchestration code.

Cleaner deploys. Updating a chain is a JSON file change. No dependency updates, no build step, no risk of breaking unrelated code.

Better observability. The runtime knows exactly what the chain will do before it runs. Cost estimation, execution planning, and performance analysis are built-in, not bolted on.

The tradeoff is flexibility. Python lets you do anything. JSON constrains you to what the schema supports. For 90% of agent workflows -- sequential pipelines, fan-out/fan-in, conditional routing, scheduled chains -- JSON covers it cleanly. For the other 10%, use the hybrid approach: JSON structure, Python tools.

If your team is building standard agent workflows and you want them to be portable, reviewable, and editable by anyone, JSON is the better default. If you're building a research prototype that changes hourly and touches five different ML libraries, use Python. Most teams should start with JSON and reach for Python only when JSON can't express what they need.
