# BigQuery MCP Server

A Model Context Protocol (MCP) server for accessing Google BigQuery. This server enables Large Language Models (LLMs) to understand BigQuery dataset structures and execute SQL queries.
## Tools

The server exposes the following tools:

- `query`: execute a SQL query
- `list_all_datasets`: list all datasets in the project
- `list_all_tables_with_dataset`: list all tables in a given dataset
- `get_table_information`: get schema and metadata for a table
- `dry_run_query`: dry-run a query without executing it
## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/bigquery-mcp-server.git
cd bigquery-mcp-server

# Install dependencies
bun install

# Build the server
bun run build

# Install the binary somewhere on your PATH
cp dist/bigquery-mcp-server /path/to/your_place
```
## Docker

You can also run the server in a Docker container:

```bash
# Build the Docker image
docker build -t bigquery-mcp-server .

# Run the container
docker run -it --rm \
  bigquery-mcp-server \
  --project-id=your-project-id
```

Or using Docker Compose:

```bash
# Edit docker-compose.yml to set your project ID and other options
# Then run:
docker-compose up
```
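As a starting point, a minimal `docker-compose.yml` might look like the sketch below. The service name and the `build`/`volumes` keys are assumptions about the repository's actual file; the flags mirror the command-line options documented in this README:

```yaml
# Hypothetical docker-compose.yml sketch - adjust to the repository's actual file.
services:
  bigquery-mcp-server:
    build: .
    command:
      - --project-id=your-project-id
      - --location=asia-northeast1
      - --max-results=1000
      - --max-bytes-billed=500000000000
    volumes:
      # Mount a service account key if not using Application Default Credentials
      - ./service-account-key.json:/secrets/key.json:ro
    environment:
      GOOGLE_APPLICATION_CREDENTIALS: /secrets/key.json
```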
## Configuration

To use this server with an MCP-enabled LLM, add it to your MCP configuration:

```json
{
  "mcpServers": {
    "BigQuery": {
      "command": "/path/to/dist/bigquery-mcp-server",
      "args": [
        "--project-id",
        "your-project-id",
        "--location",
        "asia-northeast1",
        "--max-results",
        "1000",
        "--max-bytes-billed",
        "500000000000"
      ],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/service-account-key.json"
      }
    }
  }
}
```
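The `--max-bytes-billed` flag caps how many bytes a single query may process, which bounds the worst-case cost of any one query. A quick check that the value in the example equals 500 GB:

```javascript
// 500000000000 bytes expressed in decimal gigabytes (BigQuery bills per byte)
const maxBytesBilled = 500000000000;
const gigabytes = maxBytesBilled / 1e9;
console.log(`${gigabytes} GB`);
```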
You can also use Application Default Credentials instead of a service account key file:

```json
{
  "mcpServers": {
    "BigQuery": {
      "command": "/path/to/dist/bigquery-mcp-server",
      "args": [
        "--project-id",
        "your-project-id",
        "--location",
        "asia-northeast1",
        "--max-results",
        "1000",
        "--max-bytes-billed",
        "500000000000"
      ]
    }
  }
}
```
## Authentication with Application Default Credentials

To authenticate using Application Default Credentials:

1. Install the Google Cloud SDK if you haven't already:

   ```bash
   # For macOS
   brew install --cask google-cloud-sdk

   # For other platforms, see: https://cloud.google.com/sdk/docs/install
   ```

2. Run the authentication command:

   ```bash
   gcloud auth application-default login
   ```

3. Follow the prompts to log in with the Google account that has access to your BigQuery project.

The credentials are saved on your local machine and picked up automatically by the BigQuery MCP server.
## Debugging

You can use the MCP Inspector for testing and debugging:

```bash
npx @modelcontextprotocol/inspector dist/bigquery-mcp-server --project-id={{your_own_project}}
```
## Running the Server

The included `run-server.sh` script makes it easy to start the server with common configurations:

```bash
# Make the script executable
chmod +x run-server.sh

# Run with Application Default Credentials
./run-server.sh --project-id=your-project-id

# Run with a service account key file
./run-server.sh \
  --project-id=your-project-id \
  --location=asia-northeast1 \
  --key-file=/path/to/service-account-key.json \
  --max-results=1000 \
  --max-bytes-billed=500000000000
```
You can also run the compiled binary directly:

```bash
# Run with Application Default Credentials
./dist/bigquery-mcp-server --project-id=your-project-id

# Run with a service account key file
./dist/bigquery-mcp-server \
  --project-id=your-project-id \
  --location=asia-northeast1 \
  --key-file=/path/to/service-account-key.json \
  --max-results=1000 \
  --max-bytes-billed=500000000000
```
## Example Client

An example Node.js client is included in the `examples` directory:

```bash
# Make the example executable
chmod +x examples/sample-query.js

# Edit the example to set your project ID
# Then run it
cd examples
./sample-query.js
```
## Command Line Options

- `--project-id`: Google Cloud project ID (required)
- `--location`: BigQuery location (default: `asia-northeast1`)
- `--key-file`: Path to service account key file (optional)
- `--max-results`: Maximum rows to return (default: 1000)
- `--max-bytes-billed`: Maximum bytes to process (default: 500000000000, i.e. 500 GB)

## Required Permissions

The service account or user credentials should have one of the following:

- `roles/bigquery.user` (recommended)

Or both of these:

- `roles/bigquery.dataViewer` (for reading table data)
- `roles/bigquery.jobUser` (for executing queries)

## Tool Parameters

### query

```json
{
  "query": "SELECT * FROM `project.dataset.table` LIMIT 10",
  "maxResults": 100
}
```

### list_all_datasets

No parameters required.

### list_all_tables_with_dataset

```json
{
  "datasetId": "your_dataset"
}
```

### get_table_information

```json
{
  "datasetId": "your_dataset",
  "tableId": "your_table",
  "partition": "20250101"
}
```

### dry_run_query

```json
{
  "query": "SELECT * FROM `project.dataset.table` WHERE date = '2025-01-01'"
}
```
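Under the hood, an MCP client invokes these tools with JSON-RPC `tools/call` requests over stdio. As a rough sketch (the envelope shape comes from the MCP specification, not this repository), calling the `query` tool looks like:

```javascript
// Sketch of the JSON-RPC envelope an MCP client sends to invoke the
// `query` tool; the `arguments` object mirrors the parameters above.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "query",
    arguments: {
      query: "SELECT * FROM `project.dataset.table` LIMIT 10",
      maxResults: 100,
    },
  },
};

// The stdio transport sends each message as a single line of JSON.
process.stdout.write(JSON.stringify(request) + "\n");
```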
## Error Handling

The server provides detailed error messages for common failure cases.
## Project Structure

The server is organized into the following structure:

```
src/
├── index.ts             # Entry point
├── server.ts            # BigQueryMcpServer class
├── types.ts             # Type definitions
├── tools/               # Tool implementations
│   ├── query.ts         # query tool
│   ├── list-datasets.ts # list_all_datasets tool
│   ├── list-tables.ts   # list_all_tables_with_dataset tool
│   ├── table-info.ts    # get_table_information tool
│   └── dry-run.ts       # dry_run_query tool
└── utils/               # Utility functions
    ├── args-parser.ts   # Command line argument parser
    └── query-utils.ts   # Query validation and response formatting
```
## License

MIT