TokenMix Research Lab · 2026-04-25

MCP Servers List 2026: Complete Directory of 70+ Production Servers

This is the complete directory of production-ready Model Context Protocol (MCP) servers as of April 2026. More than 500 community servers exist in the modelcontextprotocol/servers registry; this guide covers the roughly 70 that matter most for real agent workflows, organized by category. For each, you get what it does, how to install it, and when to use it over alternatives.

How to Use This Directory

  1. Find the category matching your need
  2. Check the listed servers' capabilities
  3. Install via the provided npm or Docker command
  4. Configure your MCP client (Claude Desktop, Cursor, Cline, etc.); a minimal example follows
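After step 3 installs a server binary, step 4 is a short JSON entry in your client's config file (shown here for Claude Desktop's claude_desktop_config.json; the repository path is a placeholder):

{
  "mcpServers": {
    "git": {
      "command": "mcp-server-git",
      "args": ["--repository", "/path/to/repo"]
    }
  }
}

A full multi-server version of this appears in the Configuration Pattern section below.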

All of these servers work with any MCP-compatible client, and therefore with whatever model sits behind it: Claude Opus 4.7, GPT-5.5 via the OpenAI Agents SDK, DeepSeek V4-Pro, Kimi K2.6, and others through aggregators like TokenMix.ai.

Category 1 — Developer Tools

@modelcontextprotocol/server-git — Git operations (commit, diff, log, branch)

npm install -g @modelcontextprotocol/server-git

@modelcontextprotocol/server-github — GitHub API (issues, PRs, repos, search)

npm install -g @modelcontextprotocol/server-github
# Requires GITHUB_TOKEN

@gitlab/mcp-server — GitLab API (MRs, issues, pipelines). See the GitLab MCP guide.

@modelcontextprotocol/server-filesystem — file read/write access

npm install -g @modelcontextprotocol/server-filesystem

@modelcontextprotocol/server-sequential-thinking — structured reasoning tool

Category 2 — Web Scraping and Search

@mendable/firecrawl-mcp — web scraping with JS rendering. See Firecrawl MCP guide.

@tavily/mcp-server — AI-optimized search

npm install -g @tavily/mcp-server

@jinaai/mcp-reader — simple URL-to-markdown

@brave/mcp-server — Brave search API

@serpapi/mcp-server — Google search via SerpAPI

Category 3 — Frontend Development

@shadcn/mcp-server — shadcn/ui components. See shadcn MCP guide.

@radix-ui/mcp-server — Radix primitives

@tailwind/mcp-server — Tailwind config and class suggestions

@figma/mcp-server (community) — Figma file access for design-to-code

Category 4 — Databases

@modelcontextprotocol/server-postgres — PostgreSQL read/write queries

npm install -g @modelcontextprotocol/server-postgres

@modelcontextprotocol/server-sqlite — SQLite files

@mongodb/mcp-server — MongoDB operations

@redis/mcp-server — Redis key-value access

@supabase/mcp-server — Supabase managed Postgres + auth

Category 5 — Cloud Infrastructure

@aws/mcp-server — AWS API access (EC2, S3, Lambda, RDS)

@gcp/mcp-server — Google Cloud APIs

@azure/mcp-server — Azure resource management

@cloudflare/mcp-server — Workers, KV, R2, D1 management

@digitalocean/mcp-server — Droplet management

@vercel/mcp-server — deployment and edge config

Category 6 — Communication and Productivity

@slack/mcp-server — Slack messaging and channel access

@discord/mcp-server — Discord messaging

@telegram/mcp-server — Telegram bot integration

@notion/mcp-server — Notion pages and databases

@linear/mcp-server — Linear issues and projects

@jira/mcp-server — Jira issues

@airtable/mcp-server — Airtable records

@gmail/mcp-server — Gmail read/send

@calendar/mcp-server — Google Calendar events

Category 7 — Content and Media

@youtube/mcp-server — YouTube video search and transcripts

@spotify/mcp-server — Spotify library and playlists

@wikipedia/mcp-server — Wikipedia article access

@arxiv/mcp-server — arXiv paper search

@unsplash/mcp-server — Unsplash image search

Category 8 — Monitoring and Observability

@datadog/mcp-server — Datadog metrics and logs

@grafana/mcp-server — Grafana dashboard access

@sentry/mcp-server — Sentry error tracking

@prometheus/mcp-server — Prometheus queries

@new-relic/mcp-server — New Relic APM

Category 9 — Vector and RAG

@qdrant/mcp-server — Qdrant vector DB operations

@pinecone/mcp-server — Pinecone vector DB

@weaviate/mcp-server — Weaviate operations

@chroma/mcp-server — Chroma vector DB

@llamaindex/mcp-server — LlamaIndex RAG operations

Category 10 — AI and Model Access

@openai/mcp-server — call OpenAI models as tools

@anthropic/mcp-server — call Anthropic models as tools

@tokenmix/mcp-server — unified access to 300+ models via TokenMix.ai OpenAI-compatible endpoint. Useful when you want an agent to call different LLMs as sub-tools.

@huggingface/mcp-server — HuggingFace model hub access

@replicate/mcp-server — Replicate model hosting

Category 11 — Specialized

@stripe/mcp-server — Stripe payments and customers

@hubspot/mcp-server — HubSpot CRM

@salesforce/mcp-server — Salesforce records

@snowflake/mcp-server — Snowflake data warehouse queries

@bigquery/mcp-server — Google BigQuery queries

@docker/mcp-server — Docker container management

@kubernetes/mcp-server — Kubernetes cluster operations

@terraform/mcp-server — Terraform plan/apply

Category 12 — Local / Personal

@filesystem/mcp-server — local filesystem access (pre-scoped paths)

@memory/mcp-server — persistent memory for agents

@time/mcp-server — time and timezone utilities

@weather/mcp-server — weather API access

@browser/mcp-server — local headless browser

How to Pick Which Servers to Install

Start minimal. Three rules:

1. Only install what your current work needs. More servers = larger tool registry for the LLM = slower tool selection and more context consumption. Add servers as specific use cases emerge.

2. Prefer official over community servers. Security and maintenance matter. @modelcontextprotocol/server-* prefixed servers are officially maintained. Community servers vary in quality.

3. One server per tool category. Don't run both Firecrawl and Tavily simultaneously unless you genuinely use both — the LLM gets confused when tools overlap.

Configuration Pattern

A complete example Claude Desktop config using five servers:

{
  "mcpServers": {
    "git": {
      "command": "mcp-server-git",
      "args": ["--repository", "/path/to/repo"]
    },
    "github": {
      "command": "mcp-server-github",
      "env": { "GITHUB_TOKEN": "your-token" }
    },
    "firecrawl": {
      "command": "firecrawl-mcp",
      "env": { "FIRECRAWL_API_KEY": "your-key" }
    },
    "postgres": {
      "command": "mcp-server-postgres",
      "env": { "DATABASE_URL": "postgresql://..." }
    },
    "shadcn": {
      "command": "shadcn-mcp"
    }
  }
}

Restart Claude Desktop — all five servers become available as tools.

Server Quality Indicators

When choosing between similar servers, check:

GitHub stars: >500 suggests active use; <50 is risky for production.

Last commit date: >90 days inactive = maintenance risk.

Issue response: do maintainers close/respond to issues within weeks? If not, you'll be debugging alone.

License: most MCP servers are MIT or Apache 2.0. Custom licenses deserve scrutiny.

Documentation: clear install + usage docs indicate quality. Thin README = probable thin implementation.
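Four of these signals are scriptable via the public GitHub REST API (documentation quality you still judge by hand, and open-issue count is only a rough proxy for responsiveness). A quick sketch; OWNER/REPO is a placeholder for the server repo you're evaluating:

// Fetch quality signals for a candidate server's repository
const res = await fetch("https://api.github.com/repos/OWNER/REPO");
const repo = await res.json();

// Days since the last push (86_400_000 ms per day)
const daysSincePush =
  (Date.now() - new Date(repo.pushed_at).getTime()) / 86_400_000;

console.log({
  stars: repo.stargazers_count,             // >500 good, <50 risky
  daysSincePush: Math.round(daysSincePush), // >90 = maintenance risk
  license: repo.license?.spdx_id,           // expect MIT or Apache-2.0
  openIssues: repo.open_issues_count
});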

Building Your Own MCP Server

When no existing server fits, write your own. The pattern is simple:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

// Declare the tools capability so clients know to call tools/list
const server = new Server(
  { name: "my-custom", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Handlers are registered by request schema, not by method-name string
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "my_tool",
    description: "What it does",
    inputSchema: { type: "object", properties: {}, required: [] }
  }]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // request.params.name and request.params.arguments carry the call
  return { content: [{ type: "text", text: "result" }] };
});

await server.connect(new StdioServerTransport());

Publish to npm or keep internal. Configure your client to use it.
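If you keep the server internal, the client config points straight at your built entry file; the path below is a placeholder:

"my-custom": {
  "command": "node",
  "args": ["/path/to/my-custom/build/index.js"]
}

This entry goes inside the same mcpServers block shown in the Configuration Pattern section.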

Routing Multiple MCP Servers Across Multiple LLMs

If your team uses multiple LLMs (Claude for coding, GPT-5.5 for research, DeepSeek V4 for high-volume tasks), your MCP server setup transfers across all of them. Through an aggregator like TokenMix.ai, the same tool definitions serve every model: define tools once at the MCP layer and swap the underlying LLM freely.

This is the pragmatic 2026 pattern: MCP for tool definitions, an aggregator for model access, and agents that compose both into flexible workflows.
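A minimal sketch of that pattern, assuming the MCP TypeScript client SDK and an OpenAI-compatible aggregator endpoint (the TokenMix.ai base URL and model name here are placeholder assumptions, not confirmed values):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to one MCP server over stdio (the git server from Category 1)
const transport = new StdioClientTransport({
  command: "mcp-server-git",
  args: ["--repository", "/path/to/repo"]
});
const mcp = new Client({ name: "router-demo", version: "1.0.0" }, { capabilities: {} });
await mcp.connect(transport);

// MCP tool definitions map directly onto the OpenAI function-calling shape
const { tools } = await mcp.listTools();
const openaiTools = tools.map((t) => ({
  type: "function",
  function: { name: t.name, description: t.description, parameters: t.inputSchema }
}));

// Placeholder endpoint and model: any OpenAI-compatible API accepts the same payload
const res = await fetch("https://api.tokenmix.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.TOKENMIX_API_KEY}`
  },
  body: JSON.stringify({
    model: "deepseek-v4",
    messages: [{ role: "user", content: "Summarize the recent commits." }],
    tools: openaiTools
  })
});

Swap the model string and the same tools travel with the request unchanged; that is the point of keeping tool definitions at the MCP layer.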

Maintenance Reality Check

The MCP ecosystem moves fast; expect meaningful churn in your server list every 3-6 months. MCP is stable as a protocol, but individual servers vary in maturity.

FAQ

Is there a single "installer" for multiple MCP servers?

Not officially. Some community tools (mcp-installer, mcp-manager) try to simplify multi-server setup, but none are standard.

How many MCP servers can I run simultaneously?

Practically, 5-15 is common; 30+ noticeably slows the LLM's tool selection. Each server runs as a separate process, so memory adds up: at a typical 20-100 MB per server, ten servers consume roughly 200 MB to 1 GB before the client itself.

Do MCP servers work with local LLMs like Ollama?

Yes, provided your local inference runtime supports MCP-compatible tool calling. Check your specific runtime (Ollama, LM Studio, llama.cpp) for tool use support.

Can I share MCP server configs across my team?

Yes, via shared config files checked into the team's dotfiles repo or via tooling like mcp-config that supports team-level sharing.

What's the security model?

Each MCP server has its own auth model — tokens, API keys, filesystem scopes. Configure minimal privilege. Never commit credentials to version control.
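For example, scope the filesystem server to the single directory it needs (the official server takes allowed paths as positional arguments) and keep the config file itself out of version control, since it holds raw tokens; the path and token below are placeholders:

{
  "mcpServers": {
    "filesystem": {
      "command": "mcp-server-filesystem",
      "args": ["/home/me/projects/current-app"]
    },
    "github": {
      "command": "mcp-server-github",
      "env": { "GITHUB_TOKEN": "your-token" }
    }
  }
}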

Are there commercial MCP servers?

Mostly free/open-source. A few commercial offerings exist (enterprise database connectors, managed versions). Most production teams use open-source servers and self-host.

Where can I find new MCP servers?

GitHub modelcontextprotocol/servers — official registry.

How do MCP servers compare to LangChain tools or OpenAI Agents SDK tools?

MCP is LLM-agnostic and cross-client. LangChain tools and OpenAI Agents SDK tools are framework-specific. MCP's value is: build once, use across any client (Claude Desktop, Cursor, Claude Code, LangChain-with-MCP-adapter, OpenAI Agents SDK with MCP). Routing through TokenMix.ai further lets the same tool work across any LLM.


By TokenMix Research Lab · Updated 2026-04-24

Sources: Model Context Protocol specification, Official MCP servers GitHub, MCP SDK documentation, TokenMix.ai MCP integration