This repository is no longer maintained.
The functionality of this tool is now available in mcp-omnisearch, which combines multiple MCP tools in one unified package.
Please use mcp-omnisearch instead.
A Model Context Protocol (MCP) server for integrating Perplexity's AI API with LLMs. This server provides advanced chat completion capabilities with specialized prompt templates for various use cases.
This server requires configuration through your MCP client. Here are examples for different environments:
Add this to your Cline MCP settings:
```json
{
  "mcpServers": {
    "mcp-perplexity-search": {
      "command": "npx",
      "args": ["-y", "mcp-perplexity-search"],
      "env": {
        "PERPLEXITY_API_KEY": "your-perplexity-api-key"
      }
    }
  }
}
```
For WSL environments, add this to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "mcp-perplexity-search": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "source ~/.nvm/nvm.sh && PERPLEXITY_API_KEY=your-perplexity-api-key /home/username/.nvm/versions/node/v20.12.1/bin/npx mcp-perplexity-search"
      ]
    }
  }
}
```
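If you want to sanity-check the setup outside an MCP client, you can launch the server directly with the same command the configs above use (illustrative; as an MCP server it communicates over stdio, so it will sit waiting for a client rather than print output):

```bash
export PERPLEXITY_API_KEY=your-perplexity-api-key
npx -y mcp-perplexity-search
```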
The server requires the following environment variable:

- `PERPLEXITY_API_KEY`: Your Perplexity API key (required)

The server implements a single MCP tool with configurable parameters:
Generate chat completions using the Perplexity API with support for specialized prompt templates.
Parameters:
- `messages` (array, required): Array of message objects with:
  - `role` (string): 'system', 'user', or 'assistant'
  - `content` (string): The message content
- `prompt_template` (string, optional): Predefined template to use:
  - `technical_docs`: Technical documentation with code examples
  - `security_practices`: Security implementation guidelines
  - `code_review`: Code analysis and improvements
  - `api_docs`: API documentation in JSON format
- `custom_template` (object, optional): Custom prompt template with:
  - `system` (string): System message for assistant behaviour
  - `format` (string): Output format preference
  - `include_sources` (boolean): Whether to include sources
- `format` (string, optional): 'text', 'markdown', or 'json' (default: 'text')
- `include_sources` (boolean, optional): Include source URLs (default: false)
- `model` (string, optional): Perplexity model to use (default: 'sonar')
- `temperature` (number, optional): Output randomness (0-1, default: 0.7)
- `max_tokens` (number, optional): Maximum response length (default: 1024)
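As an illustration, a call to this tool might pass arguments like the following (the values are invented for the example; the parameter names and defaults are taken from the list above):

```json
{
  "messages": [
    { "role": "system", "content": "You are a concise technical assistant." },
    { "role": "user", "content": "How do I read a file asynchronously in Node.js?" }
  ],
  "prompt_template": "technical_docs",
  "format": "markdown",
  "include_sources": true,
  "model": "sonar",
  "temperature": 0.7,
  "max_tokens": 1024
}
```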
To set up and run locally:

```bash
pnpm install # install dependencies
pnpm build   # build the project
pnpm dev     # run in development mode
```
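During development it can be handy to exercise the tool without a full MCP client. A minimal sketch using the official TypeScript SDK (`@modelcontextprotocol/sdk`) might look like the following; the SDK imports and client calls are standard, but since this README does not state the tool's registered name, the sketch discovers it at runtime:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Spawn the server the same way the configs above do.
const transport = new StdioClientTransport({
  command: 'npx',
  args: ['-y', 'mcp-perplexity-search'],
  env: { PERPLEXITY_API_KEY: process.env.PERPLEXITY_API_KEY ?? '' },
});

const client = new Client(
  { name: 'dev-harness', version: '0.0.1' },
  { capabilities: {} },
);
await client.connect(transport);

// Discover the tool's name, since it isn't specified in this README.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Call the (single) tool with the parameters documented above.
const result = await client.callTool({
  name: tools[0].name,
  arguments: {
    messages: [{ role: 'user', content: 'Summarise the MCP specification.' }],
    format: 'markdown',
  },
});
console.log(JSON.stringify(result.content, null, 2));

await client.close();
```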
The project uses changesets for version management. To publish:
```bash
pnpm changeset         # record a changeset describing your changes
pnpm changeset version # apply version bumps from pending changesets
pnpm release           # publish the package (project release script)
```
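For reference, `pnpm changeset` records the pending release as a small markdown file under `.changeset/`. Assuming the package name matches the npm name, such a file might look like:

```md
---
'mcp-perplexity-search': patch
---

Describe the change here so it lands in the changelog.
```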
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see the LICENSE file for details.