The MCPToolkit connects to one or more MCP (Model Context Protocol) servers and exposes their tools as native agent tools. Your agent can call any tool from any connected MCP server without additional configuration.

Basic Usage

from definable.agent import Agent
from definable.mcp import MCPToolkit, MCPConfig, MCPServerConfig
from definable.model.openai import OpenAIChat

config = MCPConfig(servers=[
    MCPServerConfig(
        name="filesystem",
        transport="stdio",
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    ),
])

toolkit = MCPToolkit(config=config)

async with toolkit:
    agent = Agent(
        model=OpenAIChat(id="gpt-4o"),
        toolkits=[toolkit],
    )
    output = await agent.arun("List the files in /tmp")
    print(output.content)

Parameters

config (MCPConfig, required)
MCP configuration with server definitions.

tool_name_prefix (str, default: "")
Prefix added to all tool names (e.g., "mcp_").

include_server_prefix (bool, default: true)
Whether to include the server name in tool names (e.g., filesystem_list_files).

require_confirmation (bool, default: false)
Require user confirmation before executing any MCP tool.
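
To see how tool_name_prefix and include_server_prefix combine, here is a stand-alone sketch of the name composition. It is illustrative only, not the library's actual implementation, and the exact joining rules may differ:

```python
def make_tool_name(server: str, tool: str,
                   tool_name_prefix: str = "",
                   include_server_prefix: bool = True) -> str:
    # Compose the agent-facing name from the optional global prefix,
    # the server name, and the tool's own name.
    parts = []
    if tool_name_prefix:
        parts.append(tool_name_prefix.rstrip("_"))
    if include_server_prefix:
        parts.append(server)
    parts.append(tool)
    return "_".join(parts)

print(make_tool_name("filesystem", "list_files"))
# filesystem_list_files
print(make_tool_name("filesystem", "list_files", tool_name_prefix="mcp_"))
# mcp_filesystem_list_files
```

Server-prefixed names like filesystem_list_files keep same-named tools from different servers from colliding.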

Multiple Servers

Connect to several MCP servers at once. Tools from all servers are available to the agent:

config = MCPConfig(servers=[
    MCPServerConfig(
        name="filesystem",
        transport="stdio",
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    ),
    MCPServerConfig(
        name="github",
        transport="stdio",
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
    ),
])

toolkit = MCPToolkit(config=config)

Context Manager

When managing the lifecycle yourself, use MCPToolkit as an async context manager so that server connections are opened and closed properly:

async with MCPToolkit(config=config) as toolkit:
    agent = Agent(model=model, toolkits=[toolkit])
    output = await agent.arun("Do something")

Alternatively, manage the lifecycle manually:

toolkit = MCPToolkit(config=config)
await toolkit.initialize()

try:
    agent = Agent(model=model, toolkits=[toolkit])
    output = await agent.arun("Do something")
finally:
    await toolkit.shutdown()
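
The two styles are equivalent: entering the context manager calls initialize() and exiting it calls shutdown(), even if the body raises. A minimal stand-in class (not the real MCPToolkit) makes the pattern concrete:

```python
import asyncio

class FakeToolkit:
    # Stand-in with the same lifecycle shape as MCPToolkit (illustrative only).
    def __init__(self):
        self.initialized = False

    async def initialize(self):
        self.initialized = True

    async def shutdown(self):
        self.initialized = False

    async def __aenter__(self):
        await self.initialize()
        return self

    async def __aexit__(self, *exc):
        await self.shutdown()

async def main():
    # Context-manager style: initialize on enter, shutdown on exit.
    async with FakeToolkit() as tk:
        assert tk.initialized
    assert not tk.initialized

asyncio.run(main())
```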

Tool Discovery

After initialization, inspect available tools:

async with MCPToolkit(config=config) as toolkit:
    for t in toolkit.tools:
        print(f"{t.name}: {t.description}")
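
With include_server_prefix enabled, the discovered names can be grouped back by their originating server. A small sketch over plain name strings (the names below are hypothetical):

```python
from collections import defaultdict

# Hypothetical tool names as they might appear after discovery.
tool_names = [
    "filesystem_list_files",
    "filesystem_read_file",
    "github_create_issue",
]

by_server = defaultdict(list)
for name in tool_names:
    # Split on the first underscore: server prefix, then the tool's own name.
    server, _, tool = name.partition("_")
    by_server[server].append(tool)

print(dict(by_server))
# {'filesystem': ['list_files', 'read_file'], 'github': ['create_issue']}
```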

Refreshing Tools

If server tools change at runtime, refresh the tool list:

await toolkit.refresh_tools()
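
If you expect servers to add or remove tools while the agent is running, you can call refresh_tools() on a timer. A generic polling sketch using only asyncio; the fake_refresh coroutine stands in for toolkit.refresh_tools:

```python
import asyncio

async def periodic_refresh(refresh, interval: float, stop: asyncio.Event):
    # Call `refresh` every `interval` seconds until `stop` is set.
    # In real use, pass `toolkit.refresh_tools` as `refresh`.
    while not stop.is_set():
        try:
            await asyncio.wait_for(stop.wait(), timeout=interval)
        except asyncio.TimeoutError:
            await refresh()

async def main():
    calls = []

    async def fake_refresh():
        calls.append(1)

    stop = asyncio.Event()
    task = asyncio.create_task(periodic_refresh(fake_refresh, 0.01, stop))
    await asyncio.sleep(0.05)
    stop.set()
    await task
    return len(calls)

print(asyncio.run(main()))
```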

Combining with Other Toolkits

The MCP toolkit works alongside other toolkits and individual tools:

agent = Agent(
    model=model,
    tools=[my_custom_tool],
    toolkits=[MCPToolkit(config=mcp_config), MathToolkit()],
)

Agent-Managed Lifecycle

Instead of manually managing the toolkit lifecycle with async with, you can let the agent handle it automatically. When you pass an uninitialized MCPToolkit to an agent, the agent will initialize it on first use and shut it down when the agent shuts down:

from definable.agent import Agent
from definable.mcp import MCPToolkit, MCPConfig, MCPServerConfig
from definable.model.openai import OpenAIChat

config = MCPConfig(servers=[
    MCPServerConfig(
        name="filesystem",
        transport="stdio",
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    ),
])

# No `async with toolkit:` needed — agent handles lifecycle
toolkit = MCPToolkit(config=config)
agent = Agent(model=OpenAIChat(id="gpt-4o"), toolkits=[toolkit])
output = await agent.arun("List the files in /tmp")

# Or use the agent as a context manager for explicit cleanup
async with Agent(model=OpenAIChat(id="gpt-4o"), toolkits=[toolkit]) as agent:
    output = await agent.arun("List the files in /tmp")

If you pre-initialize the toolkit yourself (via async with toolkit: or await toolkit.initialize()), the agent detects this and does not take ownership; you remain responsible for shutting it down.

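A rough sketch of that ownership rule, with hypothetical class names rather than the library's internals: the agent takes ownership only of toolkits it initialized itself.

```python
import asyncio

class Toolkit:
    # Minimal stand-in for a toolkit with an explicit lifecycle.
    def __init__(self):
        self.initialized = False

    async def initialize(self):
        self.initialized = True

    async def shutdown(self):
        self.initialized = False

class AgentLifecycle:
    # Sketch of the ownership rule: initialize only toolkits that are
    # not already initialized, and shut down only those we initialized.
    def __init__(self, toolkits):
        self.toolkits = toolkits
        self._owned = []

    async def start(self):
        for tk in self.toolkits:
            if not tk.initialized:
                await tk.initialize()
                self._owned.append(tk)

    async def stop(self):
        for tk in self._owned:
            await tk.shutdown()
        self._owned.clear()

async def main():
    pre = Toolkit()
    await pre.initialize()   # caller-owned: pre-initialized
    fresh = Toolkit()        # agent-owned: initialized on start
    agent = AgentLifecycle([pre, fresh])
    await agent.start()
    await agent.stop()
    # The pre-initialized toolkit is untouched; the fresh one was shut down.
    print(pre.initialized, fresh.initialized)

asyncio.run(main())
# True False
```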
For more on MCP configuration, transports, and advanced features, see the MCP section.