Any Chat Completions MCP Server
Integrate Claude with any OpenAI SDK-compatible chat completion API. Use multiple LLMs and agents at once: OpenAI, Perplexity, Groq, xAI, PyroPrompts, and more.
🚀 Quick Start
Add this to your Claude Desktop configuration file:
{
  "mcpServers": {
    "chat-openai": {
      "command": "npx",
      "args": ["@pyroprompts/any-chat-completions-mcp"],
      "env": {
        "AI_CHAT_KEY": "your-openai-api-key",
        "AI_CHAT_NAME": "OpenAI",
        "AI_CHAT_MODEL": "gpt-4o",
        "AI_CHAT_BASE_URL": "https://api.openai.com/v1"
      }
    }
  }
}
Alternatively, see Installation Methods below for automatic setup via Smithery.
What is MCP?
The Model Context Protocol (MCP) is an open standard that enables AI assistants like Claude to securely connect with external data sources and tools. This MCP server acts as a bridge, allowing Claude to communicate with any OpenAI-compatible chat completion API.
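Conceptually, the bridge reads its provider settings from the AI_CHAT_* environment variables and forwards Claude's prompt as a standard chat completion request. The sketch below illustrates that idea; the function and type names are mine for illustration, not the server's actual internals:

```typescript
// Illustrative sketch: assembling an OpenAI-compatible chat completion
// request from the AI_CHAT_* settings this server is configured with.
interface ProviderConfig {
  key: string;      // AI_CHAT_KEY
  name: string;     // AI_CHAT_NAME
  model: string;    // AI_CHAT_MODEL
  baseUrl: string;  // AI_CHAT_BASE_URL
}

function buildChatRequest(cfg: ProviderConfig, prompt: string) {
  return {
    url: `${cfg.baseUrl}/chat/completions`,
    headers: {
      "Authorization": `Bearer ${cfg.key}`,
      "Content-Type": "application/json",
    },
    body: {
      model: cfg.model,
      messages: [{ role: "user", content: prompt }],
    },
  };
}

// Example: the Quick Start OpenAI configuration
const req = buildChatRequest(
  {
    key: "your-openai-api-key",
    name: "OpenAI",
    model: "gpt-4o",
    baseUrl: "https://api.openai.com/v1",
  },
  "Hello!"
);
// req.url is "https://api.openai.com/v1/chat/completions"
```

Because every provider in the list above speaks this same request shape, one small server can front all of them.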
Key Features
- Universal Compatibility - Works with any OpenAI SDK-compatible API
- Multiple LLM Support - Use OpenAI, Perplexity, Groq, xAI, and PyroPrompts simultaneously
- Easy Configuration - Simple environment variable setup
- Claude Desktop Integration - Seamless integration with Claude Desktop
- LibreChat Support - Also works with LibreChat
- Simple Setup - Easy NPX installation with configuration files
Supported Providers
OpenAI
GPT-4, GPT-3.5, and other OpenAI models
Perplexity
Sonar and other Perplexity models
Groq
Fast inference with Llama, Mixtral, and more
xAI
Grok and other xAI models
PyroPrompts
Ash and other PyroPrompts models
Any Compatible API
Works with any OpenAI SDK-compatible endpoint
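Because the server only assumes the OpenAI request shape, AI_CHAT_BASE_URL can also point at a self-hosted endpoint. For example, a local Ollama instance exposes an OpenAI-compatible API; the URL, model name, and placeholder key below are illustrative, so adjust them to your setup:

```json
{
  "mcpServers": {
    "chat-local": {
      "command": "npx",
      "args": ["@pyroprompts/any-chat-completions-mcp"],
      "env": {
        "AI_CHAT_KEY": "not-needed-for-local",
        "AI_CHAT_NAME": "Local",
        "AI_CHAT_MODEL": "llama3",
        "AI_CHAT_BASE_URL": "http://localhost:11434/v1"
      }
    }
  }
}
```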
Installation Methods
Method 1: Manual NPX Installation (Recommended)
Add this configuration to your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "chat-openai": {
      "command": "npx",
      "args": [
        "@pyroprompts/any-chat-completions-mcp"
      ],
      "env": {
        "AI_CHAT_KEY": "your-openai-api-key",
        "AI_CHAT_NAME": "OpenAI",
        "AI_CHAT_MODEL": "gpt-4o",
        "AI_CHAT_BASE_URL": "https://api.openai.com/v1"
      }
    }
  }
}
Method 2: Smithery
Alternatively, you can use Smithery for automatic configuration:
npx -y @smithery/cli install any-chat-completions-mcp-server --client claude
Multiple Provider Setup
You can configure multiple LLM providers simultaneously by adding multiple server configurations. This allows you to use different models for different tasks within the same Claude conversation:
{
  "mcpServers": {
    "chat-openai": {
      "command": "npx",
      "args": ["@pyroprompts/any-chat-completions-mcp"],
      "env": {
        "AI_CHAT_KEY": "your-openai-key",
        "AI_CHAT_NAME": "OpenAI",
        "AI_CHAT_MODEL": "gpt-4o",
        "AI_CHAT_BASE_URL": "https://api.openai.com/v1"
      }
    },
    "chat-perplexity": {
      "command": "npx",
      "args": ["@pyroprompts/any-chat-completions-mcp"],
      "env": {
        "AI_CHAT_KEY": "your-perplexity-key",
        "AI_CHAT_NAME": "Perplexity",
        "AI_CHAT_MODEL": "sonar",
        "AI_CHAT_BASE_URL": "https://api.perplexity.ai"
      }
    },
    "chat-groq": {
      "command": "npx",
      "args": ["@pyroprompts/any-chat-completions-mcp"],
      "env": {
        "AI_CHAT_KEY": "your-groq-key",
        "AI_CHAT_NAME": "Groq",
        "AI_CHAT_MODEL": "llama-3.1-70b-versatile",
        "AI_CHAT_BASE_URL": "https://api.groq.com/openai/v1"
      }
    }
  }
}
How It Works
Once configured, you'll see chat tools for each provider in Claude Desktop. You can then:
- Ask Claude to consult other LLMs - "Ask OpenAI what it thinks about this code"
- Compare responses - Get different perspectives from multiple models
- Use specialized models - Use Perplexity for research, Groq for speed, etc.
- Leverage unique capabilities - Each model has different strengths
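Each configured server entry surfaces as its own chat tool, derived from its AI_CHAT_NAME, which is how several providers coexist in one conversation. The naming function below is a hypothetical illustration of that pattern, not the server's exact code:

```typescript
// Hypothetical illustration: deriving a distinct tool name per provider
// so multiple AI_CHAT_NAME values can be registered side by side.
function toolNameFor(providerName: string): string {
  // Lowercase and hyphenate, e.g. "OpenAI" -> "chat-with-openai"
  return `chat-with-${providerName.toLowerCase().replace(/\s+/g, "-")}`;
}

const toolNames = ["OpenAI", "Perplexity", "Groq"].map(toolNameFor);
// ["chat-with-openai", "chat-with-perplexity", "chat-with-groq"]
```

Whatever the exact naming scheme, the point is that the tool Claude sees is keyed to the provider name you configured, so "Ask OpenAI..." and "Ask Perplexity..." route to different backends.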
Use Cases
Research & Analysis
Use Perplexity for real-time web search and research while using Claude for analysis and synthesis.
Code Review
Get multiple perspectives on code by asking different models to review the same code snippet.
Creative Writing
Compare creative outputs from different models to get diverse writing styles and approaches.
Technical Documentation
Use specialized models for different aspects - one for technical accuracy, another for clarity.
GitHub Repository
The complete source code, documentation, and examples are available on GitHub:
Need Help with MCP Server Development?
If you need assistance developing custom MCP servers, integrating AI tools, or want custom development work, I offer consulting services to help you build powerful AI integrations.
LibreChat Integration
This MCP server also works with LibreChat. Add this configuration to your LibreChat MCP settings:
chat-perplexity:
  type: stdio
  command: npx
  args:
    - -y
    - "@pyroprompts/any-chat-completions-mcp"
  env:
    AI_CHAT_KEY: "your-perplexity-key"
    AI_CHAT_NAME: Perplexity
    AI_CHAT_MODEL: sonar
    AI_CHAT_BASE_URL: "https://api.perplexity.ai"
    PATH: "/usr/local/bin:/usr/bin:/bin"
Development
Want to contribute or modify the server? Here's how to get started with development:
git clone https://github.com/ferrants/any-chat-completions-mcp.git
cd any-chat-completions-mcp
npm install
npm run build
For development with auto-rebuild:
npm run watch
Debugging
Since MCP servers communicate over stdio, debugging can be challenging. Use the MCP Inspector for debugging:
npm run inspector
Getting Started
- Choose your installation method: Manual NPX configuration (recommended) or Smithery
- Get API keys for the providers you want to use
- Configure Claude Desktop with your provider settings
- Restart Claude Desktop to load the new MCP server
- Start chatting with multiple LLMs through Claude!
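A common stumble in step 3 is a provider entry missing one of the four required variables, which only surfaces after the restart. A small illustrative checker (the function name and shape are mine, not part of the project) shows what to verify before restarting:

```typescript
// Illustrative config check: each mcpServers entry should define all
// four AI_CHAT_* environment variables the server expects.
const REQUIRED = [
  "AI_CHAT_KEY",
  "AI_CHAT_NAME",
  "AI_CHAT_MODEL",
  "AI_CHAT_BASE_URL",
];

function missingVars(env: Record<string, string>): string[] {
  return REQUIRED.filter((key) => !env[key]);
}

// Example: an entry that forgot its base URL
const issues = missingVars({
  AI_CHAT_KEY: "your-key",
  AI_CHAT_NAME: "OpenAI",
  AI_CHAT_MODEL: "gpt-4o",
});
// issues is ["AI_CHAT_BASE_URL"]
```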
💡 Pro Tip
Try asking Claude: "Ask OpenAI to review this code, then ask Perplexity to research best practices for this pattern, and finally give me your own analysis combining both perspectives."
This MCP server opens up powerful possibilities for multi-model AI workflows. Check out the GitHub repository for detailed documentation, examples, and contribution guidelines.