An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API for state-of-the-art LLM-as-a-judge (LLMJ) evaluation.
Learn more about Atla here. Learn more about the Model Context Protocol here.
- `evaluate_llm_response`: Evaluate an LLM's response to a prompt using a given evaluation criterion. This function uses an Atla evaluation model under the hood to return a dictionary containing a score for the model's response and a textual critique with feedback on the model's response.
- `evaluate_llm_response_on_multiple_criteria`: Evaluate an LLM's response to a prompt across multiple evaluation criteria. This function uses an Atla evaluation model under the hood to return a list of dictionaries, each containing an evaluation score and critique for a given criterion.

To use the MCP server, you will need an Atla API key. You can find your existing API key here or create a new one here.
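To make the return shapes concrete, here is a rough sketch of what results from the two tools might look like on the client side. The exact keys (`score`, `critique`, `criteria`) and value types are illustrative assumptions, not the server's confirmed schema:

```python
# Hypothetical result shapes; the keys ("score", "critique", "criteria")
# are assumptions for illustration, not a confirmed schema.
single_result = {
    "score": "4",
    "critique": "The response is accurate but omits a key caveat.",
}

multi_result = [
    {"criteria": "accuracy", "score": "5", "critique": "Factually correct."},
    {"criteria": "conciseness", "score": "3", "critique": "Somewhat verbose."},
]

def scores_by_criterion(results):
    """Collect per-criterion scores from a multi-criteria evaluation."""
    return {r["criteria"]: r["score"] for r in results}

print(scores_by_criterion(multi_result))  # → {'accuracy': '5', 'conciseness': '3'}
```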
We recommend using `uv` to manage the Python environment. See here for installation instructions.
```shell
git clone https://github.com/atla-ai/atla-mcp-server.git
cd atla-mcp-server
uv venv
source .venv/bin/activate
```
```shell
# Basic installation
uv pip install -e .

# Installation with development tools (recommended)
uv pip install -e ".[dev]"
pre-commit install
```
Add your `ATLA_API_KEY` to your environment:

```shell
export ATLA_API_KEY=<your-atla-api-key>
```
Once you have installed the server, you can connect to it using any MCP client.
Here, we provide specific instructions for connecting to some common MCP clients.
In what follows:

- If you are having issues with `uv`, you might need to pass in the full path to the `uv` executable. You can find it by running `which uv` in your terminal.
- `path/to/atla-mcp-server` is the path to the `atla-mcp-server` directory, which is the path to the repository you cloned in step 1.
Having issues or need help connecting to another client? Feel free to open an issue or contact us!
For more details on using the OpenAI Agents SDK with MCP servers, refer to the official documentation.
```shell
pip install openai-agents
```
```python
import os

from agents import Agent
from agents.mcp import MCPServerStdio

async with MCPServerStdio(
    params={
        "command": "uv",
        "args": ["run", "--directory", "/path/to/atla-mcp-server", "atla-mcp-server"],
        "env": {"ATLA_API_KEY": os.environ.get("ATLA_API_KEY")},
    }
) as atla_mcp_server:
    ...  # use the server here, e.g. by passing it to an Agent via mcp_servers
```
For more details on configuring MCP servers in Claude Desktop, refer to the official MCP quickstart guide.
Add the following to your `claude_desktop_config.json` file:

```json
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/atla-mcp-server",
        "run",
        "atla-mcp-server"
      ],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
```
You should now see options from `atla-mcp-server` in the list of available MCP tools.
For more details on configuring MCP servers in Cursor, refer to the official documentation.
Add the following to your `.cursor/mcp.json` file:

```json
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/atla-mcp-server",
        "run",
        "atla-mcp-server"
      ],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
```
You should now see `atla-mcp-server` in the list of available MCP servers.
If you are using an MCP client, you will generally not need to run the server locally.
Running the server locally can be useful for development and debugging. After installation, you can run the server in several ways:
1. Using `uv run` (recommended):

```shell
cd path/to/atla-mcp-server
uv run atla-mcp-server
```

2. Using Python directly:

```shell
cd path/to/atla-mcp-server
python -m atla_mcp_server
```

3. Using the MCP Inspector:

```shell
cd path/to/atla-mcp-server
uv run mcp dev src/atla_mcp_server/debug.py
```
All methods will start the MCP server with `stdio` transport, ready to accept connections from MCP clients. The MCP Inspector will provide a web interface for testing and debugging the MCP server.
Contributions are welcome! Please see the CONTRIBUTING.md file for details.
This project is licensed under the MIT License. See the LICENSE file for details.