A secure Python code execution service that provides a self-hosted alternative to OpenAI's Code Interpreter or Anthropic's Claude analysis tool. Built with FastAPI and IPython kernels, it supports session-based code execution and integrates with LLM function calling.
CodeBox-AI also supports the Model Context Protocol (MCP), allowing LLM applications (such as Claude Desktop) to interact with your code execution service in a standardized way.
You can run the MCP server in several ways:
Standalone (for MCP clients or Claude Desktop):
uv run mcp dev mcp_server.py
This starts the MCP server in development mode for local testing and debugging.
Register with Claude Desktop:
uv run mcp install mcp_server.py --name "CodeBox-AI"
This will make your server available to Claude Desktop as a custom tool.
Combined FastAPI + MCP server:
uv run run.py
This starts both the FastAPI API and the MCP server (MCP available at /mcp).
MCP server only:
uv run run.py --mode mcp
The MCP server exposes the following tools and resources:
- execute_code: Execute Python code and return results
- session://{session_id}: Get info about a session
- sessions://: List all active sessions

To try them out interactively, run the server in development mode:
uv run mcp dev mcp_server.py
Edit the file ~/Library/Application Support/Claude/claude_desktop_config.json. The following is an example configuration:
{
"mcpServers": {
"CodeBox-AI": {
"command": "uv",
"args": [
"run",
"--project",
"/Users/username/src/codebox-ai",
"/Users/username/src/codebox-ai/mcp_server.py",
"--mount",
"/Users/username/Downloads"
]
}
}
}
Unfortunately, all paths need to be absolute. This example shows how to mount the Downloads directory into the container.
git clone https://github.com/yourusername/codebox-ai.git
cd codebox-ai
# Install uv if you don't have it yet
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create a virtual environment and install dependencies in one step
uv sync
uv run -m codeboxai.main
The API will be available at http://localhost:8000
For development, install with the development extras:
uv sync --extra dev --extra docs
If you encounter a "file not found" DockerException when running the server on macOS, you may need to set the DOCKER_HOST environment variable. First, find out which Docker context you are using by running:
docker context ls
Then set the DOCKER_HOST environment variable to the correct endpoint:
export DOCKER_HOST="unix:///Users/tconte/.docker/run/docker.sock"
To run the OpenAI example, create a .env file in the project root:
AZURE_OPENAI_ENDPOINT=https://xxx.cognitiveservices.azure.com/
AZURE_OPENAI_API_KEY=foo
AZURE_OPENAI_DEPLOYMENT=gpt-4o
OPENAI_API_VERSION=2024-05-01-preview
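The example script picks these variables up from the environment. A minimal sketch of the configuration check this implies (the variable names come from the .env above; the commented-out AzureOpenAI construction is an assumption about how the openai package would consume them and needs real credentials):

```python
import os

REQUIRED = (
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_DEPLOYMENT",
    "OPENAI_API_VERSION",
)

def load_azure_config(env=None):
    """Return the required Azure OpenAI settings, failing loudly if any are missing."""
    env = os.environ if env is None else env
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {k: env[k] for k in REQUIRED}

# Demo with the stand-in values from the .env example above:
cfg = load_azure_config({
    "AZURE_OPENAI_ENDPOINT": "https://xxx.cognitiveservices.azure.com/",
    "AZURE_OPENAI_API_KEY": "foo",
    "AZURE_OPENAI_DEPLOYMENT": "gpt-4o",
    "OPENAI_API_VERSION": "2024-05-01-preview",
})
print(cfg["AZURE_OPENAI_DEPLOYMENT"])  # gpt-4o

# With the openai package installed, the client would be built roughly as:
# from openai import AzureOpenAI
# client = AzureOpenAI(
#     azure_endpoint=cfg["AZURE_OPENAI_ENDPOINT"],
#     api_key=cfg["AZURE_OPENAI_API_KEY"],
#     api_version=cfg["OPENAI_API_VERSION"],
# )
```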
uv sync --extra examples
uv run examples/example_openai.py
This will start an interactive session where you can chat with GPT-4 and have it execute Python code. The script maintains state between executions, so variables and imports persist across interactions.
If you want to expose a local directory to the container, set the CODEBOX_MOUNT_PATH environment variable. For example, to mount your Downloads directory:
export CODEBOX_MOUNT_PATH=/Users/username/Downloads
uv run examples/example_openai.py
curl -X POST http://localhost:8000/sessions \
-H "Content-Type: application/json" \
-d '{
"dependencies": ["numpy", "pandas"]
}'
curl -X POST http://localhost:8000/execute \
-H "Content-Type: application/json" \
-d '{
"code": "x = 42\nprint(f\"Value of x: {x}\")",
"session_id": "YOUR_SESSION_ID"
}'
curl -X GET http://localhost:8000/execute/YOUR_REQUEST_ID/status
curl -X GET http://localhost:8000/execute/YOUR_REQUEST_ID/results
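Since execution is asynchronous, the status endpoint lends itself to a simple poll loop. A minimal sketch, with the fetch function injected so the loop can be followed without a live server (a real implementation would issue the GET shown above; the status strings used here are illustrative assumptions, not the API's documented values):

```python
import time

BASE = "http://localhost:8000"

def poll_until_done(request_id, fetch_status, interval=0.0, max_polls=100):
    """Poll the status URL until the request leaves an in-progress state.

    fetch_status(url) -> str is injected so this can run without a server;
    in real use it would perform the HTTP GET and extract the status field.
    """
    url = f"{BASE}/execute/{request_id}/status"
    for _ in range(max_polls):
        status = fetch_status(url)
        if status not in ("pending", "running"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"{url} still in progress after {max_polls} polls")

# Exercise the loop with a canned sequence standing in for the server:
responses = iter(["pending", "running", "completed"])
final = poll_until_done("YOUR_REQUEST_ID", lambda url: next(responses))
print(final)  # completed
```

Once the loop returns a terminal status, fetch the output from the results endpoint as shown above.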
curl -X POST http://localhost:8000/execute \
-H "Content-Type: application/json" \
-d '{
"code": "print(f\"x is still: {x}\")",
"session_id": "YOUR_SESSION_ID"
}'
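The curl calls above can be wrapped in a small Python client. This sketch covers only the endpoints shown; the transport is stubbed behind one method so the request construction is visible without a running server (a real implementation would send via requests or httpx, and response field names such as session_id would need checking against the actual API):

```python
import json

class CodeBoxClient:
    """Thin client sketch over the endpoints demonstrated above."""

    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url

    def _request(self, method, path, payload=None):
        # A real implementation would do e.g.:
        #   return requests.request(method, self.base_url + path, json=payload).json()
        raise NotImplementedError

    def create_session(self, dependencies=()):
        return self._request("POST", "/sessions", {"dependencies": list(dependencies)})

    def execute(self, session_id, code):
        return self._request("POST", "/execute", {"code": code, "session_id": session_id})

    def status(self, request_id):
        return self._request("GET", f"/execute/{request_id}/status")

    def results(self, request_id):
        return self._request("GET", f"/execute/{request_id}/results")

# Record calls instead of sending them, to show the wire format:
class RecordingClient(CodeBoxClient):
    def __init__(self):
        super().__init__()
        self.calls = []

    def _request(self, method, path, payload=None):
        self.calls.append((method, path, payload))
        return {}

c = RecordingClient()
c.create_session(["numpy"])
c.execute("YOUR_SESSION_ID", "x = 42")
c.status("YOUR_REQUEST_ID")
for method, path, payload in c.calls:
    print(method, path, json.dumps(payload))
```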
curl -X POST http://localhost:8000/sessions \
-H "Content-Type: application/json" \
-d '{
"execution_options": {
"mount_points": [
{
"host_path": "/Users/tconte/Downloads",
"container_path": "/data/downloads",
"read_only": true
}
],
"timeout": 300
}
}'
curl -X POST http://localhost:8000/execute \
-H "Content-Type: application/json" \
-d '{
"code": "import os\nprint(\"Files in mounted directory:\")\nfor file in os.listdir(\"/data/downloads\"):\n print(f\" - {file}\")",
"session_id": "YOUR_SESSION_ID"
}'
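The mount_points structure can also be built programmatically. A small sketch that assembles the same session payload and enforces the absolute-path requirement noted earlier (the field names match the request above; the validation itself is an assumption, not something the API necessarily performs client-side):

```python
import os

def make_mount(host_path, container_path, read_only=True):
    """Build one mount_points entry; both paths must be absolute."""
    if not os.path.isabs(host_path) or not os.path.isabs(container_path):
        raise ValueError("mount paths must be absolute")
    return {
        "host_path": host_path,
        "container_path": container_path,
        "read_only": read_only,
    }

# Payload mirroring the curl call above:
payload = {
    "execution_options": {
        "mount_points": [make_mount("/Users/tconte/Downloads", "/data/downloads")],
        "timeout": 300,
    }
}
print(payload["execution_options"]["mount_points"][0]["container_path"])  # /data/downloads
```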
The REST API exposes the following endpoints:
- POST /sessions - Create a new session
- POST /execute - Execute code in a session
- GET /execute/{request_id}/status - Get execution status
- GET /execute/{request_id}/results - Get execution results
- DELETE /sessions/{session_id} - Cleanup a session

MIT License - See LICENSE file for details.
This code was pair-programmed with Claude 3.5 Sonnet (yes, an AI helping to build tools for other AIs - very meta). While I handled the product decisions and architecture reviews, Claude did most of the heavy lifting in terms of code generation and documentation. Even this README was written by Claude, which makes this acknowledgment a bit like an AI writing about an AI writing about AI tools... we need to go deeper 🤖✨
Humans were (mostly) present during the development process. No AIs were harmed in the making of this project, though a few might have gotten slightly dizzy from the recursion.
A prototype implementation, not intended for production use without additional security measures.