A Model Context Protocol (MCP) server that tries to solve codebase indexing (until agents can).

It exposes a `get_codebase_context` tool for AI clients (like Cursor, Claude Desktop). Internally, the server uses Gemini with its own tools (`list_repository_structure`, `read_files`, `grep_codebase`) to understand the code.

Create a virtual environment and install the dependencies:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
`tree`: Ensure the `tree` command is available on your system.

macOS (Homebrew):

```bash
brew install tree
```

Debian/Ubuntu:

```bash
sudo apt update && sudo apt install tree
```
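To verify the dependency before launching the server, a quick standalone check (plain Python, not part of the project's code) can confirm the binary is on your PATH:

```python
import shutil

# `tree` is presumably what the server shells out to when listing
# the repository structure, so it must be resolvable on PATH.
path = shutil.which("tree")
if path is None:
    raise SystemExit("`tree` not found - install it first (brew/apt)")
print(f"tree found at {path}")
```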
Copy `.env.example` to `.env`, then edit `.env` and add your Google Gemini API Key:

```
GEMINI_API_KEY="YOUR_ACTUAL_API_KEY"
```
Alternatively, you can pass the key with the `--gemini-api-key` command-line argument.

By default, the server runs in SSE mode, which allows it to run independently of any client and accept connections over HTTP.

Run the server:

```bash
python kontxt_server.py --repo-path /path/to/your/codebase
```

PS: you can use `pwd` to print the absolute path of the project.

The server will start on `http://127.0.0.1:8080/sse` by default.
For additional options:

```bash
python kontxt_server.py --repo-path /path/to/your/codebase --host 0.0.0.0 --port 6900
```
The server can be stopped by pressing `Ctrl+C` in the terminal where it's running; it will attempt to shut down gracefully within a 3-second timeout.
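To sanity-check the endpoint outside of an editor, you can call the tool with the official `mcp` Python SDK. This is a minimal sketch: the tool name comes from this README, but the `query` argument name is an assumption about the tool's input schema.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Connect to the running kontxt server over SSE.
    async with sse_client("http://127.0.0.1:8080/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "query" is an assumed argument name, shown for illustration.
            result = await session.call_tool(
                "get_codebase_context",
                {"query": "How is authentication handled in this repo?"},
            )
            print(result.content)


asyncio.run(main())
```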
Once your server is running, you can connect Cursor to it by editing your `~/.cursor/mcp.json` file:
```json
{
  "mcpServers": {
    "kontxt-server": {
      "serverType": "sse",
      "url": "http://localhost:8080/sse"
    }
  }
}
```
PS: remember to refresh the MCP server in Cursor Settings (or your client's equivalent) so the client connects to the MCP via SSE.
If you prefer to have the client start and manage the server process:
```bash
python kontxt_server.py --repo-path /path/to/your/codebase --transport stdio
```
For this mode, configure your `~/.cursor/mcp.json` file like this:
```json
{
  "mcpServers": {
    "kontxt-server": {
      "serverType": "stdio",
      "command": "python",
      "args": ["/absolute/path/to/kontxt_server.py", "--repo-path", "/absolute/path/to/your/codebase", "--transport", "stdio"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
- `--repo-path PATH`: Required. Absolute path to the local code repository to analyze.
- `--gemini-api-key KEY`: Google Gemini API Key (overrides `.env` if provided).
- `--token-threshold NUM`: Target maximum token count for the context (only a fixed set of values is allowed).
- `--gemini-model NAME`: Specific Gemini model to use (default: `gemini-2.0-flash`).
- `--transport {stdio,sse}`: Transport protocol to use (default: `sse`).
- `--host HOST`: Host address for the SSE server (default: `127.0.0.1`).
- `--port PORT`: Port for the SSE server (default: `8080`).

Example queries:
PS: if the agent isn't using the MCP tool on its own, you can tell it to explicitly: "What is the last word of the third codeblock of the auth file? Use the MCP tool available."
Files/context you reference in your queries are passed along as hints for analysis: the server will mention these files to Gemini but will NOT automatically read or include their contents. Instead, Gemini decides which files to read using its tools, based on the query context. This approach lets Gemini read only the files that are actually needed and prevents the context from being bloated with irrelevant file content.
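As a rough sketch of that idea (hypothetical helper and names; the actual implementation in `kontxt_server.py` may differ):

```python
def build_gemini_prompt(query: str, mentioned_files: list[str]) -> str:
    """Hypothetical helper: forward file mentions as hints, not contents.

    Files are only named in the prompt; Gemini is expected to call its
    read_files tool for whichever ones it actually needs.
    """
    hint = ""
    if mentioned_files:
        hint = (
            "The user referenced these files (read them with your tools "
            "only if relevant): " + ", ".join(mentioned_files) + "\n\n"
        )
    return hint + query
```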
The server tracks token usage across its different operations. This information is logged while the server runs, helping you monitor API usage and optimize your queries.
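Purely as an illustration (hypothetical class, not the server's actual code), per-operation accounting could look like this:

```python
from collections import defaultdict


class TokenTracker:
    """Hypothetical sketch of per-operation token accounting."""

    def __init__(self) -> None:
        self.usage: defaultdict[str, int] = defaultdict(int)

    def record(self, operation: str, tokens: int) -> None:
        # Accumulate tokens per operation and log the running total.
        self.usage[operation] += tokens
        print(f"[tokens] {operation}: +{tokens} (total: {self.usage[operation]})")


tracker = TokenTracker()
tracker.record("list_repository_structure", 180)
tracker.record("read_files", 2400)
```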
PS: want the tool to improve? PRs are welcome.