API Reference

This section provides comprehensive documentation for the mcp_use API, including all components, methods, their arguments, and when to use different options.

MCPClient

The MCPClient is the core class for interacting with MCP servers. It handles connection management, session creation, and communication with MCP servers.

Initialization Methods

From Config File

from mcp_use import MCPClient

client = MCPClient.from_config_file(config_path="config.json")
Parameter    Type  Required  Description
config_path  str   Yes       Path to the JSON configuration file

From Dictionary

from mcp_use import MCPClient

config = {
  "mcpServers": {
    "my_server": {
      "command": "npx",
      "args": ["@my-mcp/server"],
      "env": {
        "PORT": "3000"
      }
    }
  }
}

client = MCPClient.from_dict(config=config)
Parameter  Type  Required  Description
config     dict  Yes       Dictionary containing MCP server configuration
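
A quick structural check can catch a malformed dictionary before it reaches the client. The validate_config helper below is hypothetical, not part of the mcp_use API; it only mirrors the shape shown above (a non-empty mcpServers object whose entries each define a command):

```python
# Hypothetical sanity check -- not part of mcp_use, just a sketch of the
# structure MCPClient.from_dict expects.
def validate_config(config: dict) -> list[str]:
    errors = []
    servers = config.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        errors.append("config must contain a non-empty 'mcpServers' object")
        return errors
    for name, spec in servers.items():
        if "command" not in spec:
            errors.append(f"server '{name}' is missing required 'command'")
    return errors

config = {"mcpServers": {"my_server": {"command": "npx", "args": ["@my-mcp/server"]}}}
print(validate_config(config))  # []
```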

Core Methods

create_session

Creates a new session with an MCP server.

session = await client.create_session(server_name="my_server")
Parameter    Type   Required  Default  Description
server_name  str    Yes       -        Name of the server as defined in config
timeout      float  No        30.0     Connection timeout in seconds
retry_count  int    No        3        Number of connection retry attempts

When to use:

  • Use a longer timeout for servers that take more time to initialize
  • Increase retry_count in unstable network environments
  • Use specific server_name when working with multiple servers in the same config

close_session

Closes a specific session.

await client.close_session(session_id="session_id")
Parameter   Type  Required  Description
session_id  str   Yes       ID of the session to close

close_all_sessions

Closes all active sessions.

await client.close_all_sessions()

When to use:

  • Always call this at the end of your application to clean up resources
  • Use when switching between different tasks that require different servers
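
A common pattern is to pair session creation with cleanup in a try/finally block so sessions are released even when a task fails. The sketch below uses a hypothetical DummyClient stand-in to keep the example self-contained and runnable; in practice the object would be a real MCPClient exposing the same two methods:

```python
import asyncio

# DummyClient is a hypothetical stand-in so the sketch runs on its own;
# a real MCPClient exposes the same create/close methods used here.
class DummyClient:
    def __init__(self):
        self.closed = False

    async def create_session(self, server_name: str):
        return object()  # placeholder for a real session

    async def close_all_sessions(self):
        self.closed = True

async def run_task(client) -> None:
    session = await client.create_session(server_name="my_server")
    try:
        pass  # ... use the session ...
    finally:
        # Clean up even if the work above raises.
        await client.close_all_sessions()

client = DummyClient()
asyncio.run(run_task(client))
```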

get_server

Gets a server instance by name.

server = client.get_server(name="my_server")
Parameter  Type  Required  Description
name       str   Yes       Name of the server as defined in config

MCPAgent

The MCPAgent class combines an LLM with an MCPClient to create an intelligent agent capable of using MCP tools.

Initialization

from mcp_use import MCPAgent, MCPClient
from langchain_openai import ChatOpenAI

agent = MCPAgent(
    llm=ChatOpenAI(model="gpt-4o", temperature=0.7),
    client=MCPClient.from_config_file("config.json"),
    max_steps=30,
    session_options={"timeout": 60.0},
    auto_initialize=True,
    memory_enabled=True,
    system_prompt=None,
    system_prompt_template=None,
    additional_instructions=None
)
Parameter                Type                 Required  Default  Description
llm                      BaseLanguageModel    Yes       -        Any LangChain-compatible language model
client                   MCPClient            No        None     The MCPClient instance
connectors               list[BaseConnector]  No        None     List of connectors if not using client
server_name              str                  No        None     Name of the server to use
max_steps                int                  No        5        Maximum number of steps the agent can take
auto_initialize          bool                 No        False    Whether to initialize automatically
memory_enabled           bool                 No        True     Whether to enable memory
system_prompt            str                  No        None     Custom system prompt
system_prompt_template   str                  No        None     Custom system prompt template
additional_instructions  str                  No        None     Additional instructions for the agent
session_options          dict                 No        None     Additional options for session creation
output_parser            OutputParser         No        None     Custom output parser for LLM responses

When to use different parameters:

  • llm:

    • mcp_use supports ANY LLM that is compatible with LangChain
    • You can use models from OpenAI, Anthropic, Google, Mistral, Groq, Cohere, or any other provider with a LangChain integration
    • You can even use open source models via LlamaCpp, HuggingFace, or other interfaces
    • Custom or self-hosted models are also supported as long as they implement LangChain’s interface
  • max_steps:

    • Increase for complex tasks that require many interactions
    • Decrease for simpler tasks to improve efficiency
    • Use higher values (50+) for web browsing or multi-stage tasks
    • Use lower values (10-20) for targeted, specific tasks
  • system_prompt / system_prompt_template:

    • Use to customize the initial instructions given to the LLM
    • Helps shape the agent’s behavior and capabilities
    • Use for specialized tasks or custom interaction patterns
  • memory_enabled:

    • Enable to maintain conversation history
    • Disable for stateless operation or to save on token usage
  • session_options:

    • Customize timeout for long-running server operations
    • Set retry parameters for unstable connections
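
To make the max_steps trade-off concrete, the toy loop below (not mcp_use internals) shows how a step budget bounds an agent: a generous budget lets a long task finish, while a tight one cuts it off early:

```python
# Toy illustration of a step budget; a real agent decides per step whether
# to call a tool or finish, but the bound works the same way.
def run_with_budget(steps_needed: int, max_steps: int) -> str:
    for step in range(1, max_steps + 1):
        if step >= steps_needed:
            return f"done in {step} steps"
    return "stopped: step budget exhausted"

print(run_with_budget(3, 30))   # done in 3 steps
print(run_with_budget(40, 30))  # stopped: step budget exhausted
```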

Core Methods

run

Runs the agent with a given query.

result = await agent.run(
    query="Find information about Python libraries",
    max_steps=25,
    stop_on_first_result=False
)
Parameter             Type  Required  Default  Description
query                 str   Yes       -        The query to run
max_steps             int   No        None     Overrides the instance max_steps
stop_on_first_result  bool  No        False    Whether to stop at first result
server_name           str   No        None     Specific server to use
callbacks             list  No        None     Callback functions for events

When to use different parameters:

  • max_steps: Override the instance default for specific queries
  • stop_on_first_result: Use True for simple lookups, False for thorough exploration
  • server_name: Specify when using multiple servers for different tasks
  • callbacks: Add for monitoring or logging specific runs

reset

Resets the agent state.

agent.reset()

When to use:

  • Between different tasks to clear context
  • When starting a new conversation thread
  • When agent gets stuck in a particular strategy

get_history

Gets the agent’s interaction history.

history = agent.get_history()

When to use:

  • For debugging agent behavior
  • When implementing custom logging
  • To provide context for follow-up queries

Configuration Details

MCP Server Configuration Schema

{
  "mcpServers": {
    "server_name": {
      "command": "command_to_run",
      "args": ["arg1", "arg2"],
      "env": {
        "ENV_VAR": "value"
      },
      "timeout": 30.0,
      "retry": {
        "max_attempts": 3,
        "backoff_factor": 1.5
      }
    }
  }
}
Field                 Type    Required  Description
command               string  Yes       The command to start the MCP server
args                  array   No        Arguments to pass to the command
env                   object  No        Environment variables for the server
timeout               number  No        Connection timeout in seconds
retry                 object  No        Retry configuration
retry.max_attempts    number  No        Maximum retry attempts
retry.backoff_factor  number  No        Backoff multiplier between retries

When to use different options:

  • command & args: Vary based on the specific MCP server implementation

  • env:

    • Set environment-specific variables needed by the server
    • Override default server settings (ports, directories)
    • Set display settings for GUI-based servers
  • timeout:

    • Increase for servers with longer startup times
    • Lower for simpler servers to fail fast
  • retry configuration:

    • Adjust for different network conditions
    • Increase max_attempts in unstable environments
    • Adjust backoff_factor based on server behavior
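
To see how backoff_factor shapes the wait between attempts, the sketch below computes a geometric delay schedule from the retry block's fields. The 1.0-second base delay is an assumption for illustration, not a documented mcp_use default:

```python
# Sketch of how a client might derive retry delays from the "retry" block;
# the base delay of 1.0s is an assumption, not a documented default.
def retry_delays(max_attempts: int, backoff_factor: float, base: float = 1.0) -> list[float]:
    """Delay before attempt n: base * backoff_factor ** (n - 1)."""
    return [base * backoff_factor ** n for n in range(max_attempts)]

print(retry_delays(3, 1.5))  # [1.0, 1.5, 2.25]
```

A backoff_factor of 1.0 retries at a fixed interval; values above 1.0 spread later attempts out, which helps when a server needs time to recover.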

Error Handling

mcp_use provides several exception types to handle different error scenarios:

Exception               Description                        When It Occurs
MCPConnectionError      Connection to MCP server failed    Network issues, server not running
MCPAuthenticationError  Authentication with server failed  Invalid credentials or tokens
MCPTimeoutError         Operation timed out                Server takes too long to respond
MCPServerError          Server returned an error           Internal server error
MCPClientError          Client-side error                  Invalid configuration or parameters
MCPError                Generic MCP-related error          Any other MCP-related issue

Handling Strategies:

from mcp_use.exceptions import MCPConnectionError, MCPTimeoutError

try:
    result = await agent.run("Find information")
except MCPConnectionError:
    # Handle connection issues
    print("Failed to connect to the MCP server")
except MCPTimeoutError:
    # Handle timeout issues
    print("Operation timed out")
except Exception as e:
    # Handle other exceptions
    print(f"An error occurred: {e}")

Advanced Usage

Multi-Server Configuration

Configure and use multiple MCP servers in a single application:

from mcp_use import MCPClient, MCPAgent
from langchain_openai import ChatOpenAI

# Create client with multiple servers
client = MCPClient.from_dict({
    "mcpServers": {
        "browser": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"]
        },
        "custom_server": {
            "command": "python",
            "args": ["-m", "my_custom_mcp_server"]
        }
    }
})

# Create agent
agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client)

# Run with specific server
result_browser = await agent.run(
    "Search the web for Python libraries",
    server_name="browser"
)

# Run with different server
result_custom = await agent.run(
    "Perform custom operation",
    server_name="custom_server"
)

Custom Output Parsing

Implement custom output parsers for specialized MCP servers:

from langchain_core.output_parsers import BaseOutputParser
from mcp_use import MCPAgent, MCPClient

class CustomOutputParser(BaseOutputParser):
    def parse(self, text: str):
        # Custom parsing logic, e.g. strip whitespace or extract a JSON field
        return text.strip()

# Use the custom parser
agent = MCPAgent(
    llm=llm,
    client=client,
    output_parser=CustomOutputParser()
)

This approach is useful when:

  • The MCP server returns structured data that needs special handling
  • You need to extract specific information from responses
  • You’re integrating with custom or specialized MCP servers