Guide

Complete Guide to MCP Servers: What They Are and How to Use Them

A practical guide to Model Context Protocol (MCP) servers: what MCP is, how to set up servers in Cursor and Claude Code, top server categories, and common pitfalls.

By AI Coding Tools Directory · 2026-02-28 · 12 min read
Last reviewed: 2026-02-28
AI Coding Tools Directory Editorial Team

The AI Coding Tools Directory editorial team researches and reviews AI-powered development tools to help developers find the best solutions for their workflows.

Model Context Protocol (MCP) is an open standard that gives AI coding agents access to external tools and data -- GitHub repositories, databases, Notion pages, Figma designs, and more -- through a uniform protocol. Instead of pasting context by hand, your agent discovers and calls MCP server tools at runtime. This guide explains what MCP is, how to configure servers in Cursor and Claude Code, and which servers to use for different workflows.


TL;DR

  • MCP is a JSON-RPC 2.0 protocol (by Anthropic) that lets AI tools connect to external systems through servers that expose tools, resources, and prompts.
  • Supported by Cursor, Claude Code, OpenAI Codex, Windsurf, GitHub Copilot (VS Code), and Continue.
  • Add servers to .cursor/mcp.json or your tool's config with command/args (local) or URL (remote); restart the tool to activate.
  • Top servers: GitHub (repos/PRs), Postgres/MongoDB (databases), Notion/Linear (docs/tickets), Figma (design), Supabase (backend), and Sentry (observability).
  • Store API keys in environment variables, prefer read-only access, and manually approve tool calls to avoid unintended mutations.

Quick Answer

MCP is an open standard (by Anthropic) that lets AI applications talk to external systems through a uniform protocol. MCP servers are processes that expose tools (functions) and resources (data) to AI agents. You add servers to your AI tool's config; the agent discovers and calls them at runtime.


  1. Pick a server — Choose from GitHub, Postgres, Notion, Figma, Supabase, and 15+ others.
  2. Add to config — Put the server in .cursor/mcp.json (Cursor) or your tool's MCP config with command and args or a url.
  3. Restart — Restart the AI tool so it picks up the new server.
  4. Use it — Ask the agent to "search my GitHub issues" or "query the database schema" and it will use the MCP tools.

What is MCP?

MCP defines a JSON-RPC 2.0 protocol for AI applications to connect to external data and tools. A host (Cursor, Claude Code, Codex) runs an MCP client that talks to one or more MCP servers. Each server exposes:

  • Tools — Callable functions (e.g., create_issue, execute_sql)
  • Resources — Readable data (e.g., database schemas, file contents)
  • Prompts — Reusable prompt templates

Two common transport modes:

  • STDIO — The host spawns the server as a local process; communication over stdin/stdout.
  • HTTP/SSE — The server runs remotely; the client connects via URL (often with OAuth).
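A single config can mix both transports. A sketch using the real @modelcontextprotocol/server-filesystem package (stdio) alongside the Sentry hosted server (HTTP); the /path/to/project argument is a placeholder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "sentry": {
      "url": "https://mcp.sentry.dev/mcp"
    }
  }
}
```

The host spawns the filesystem entry as a local process and connects to the sentry entry over the network; exact keys vary slightly by client.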

Practical Setup Walkthrough

Cursor

  1. Create .cursor/mcp.json in your project root (or ~/.cursor/mcp.json for global config).
  2. Add a server. Example for GitHub:
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token"
      }
    }
  }
}
  3. Restart Cursor. In Settings > Tools & MCP you should see the server listed.
  4. In a chat, try: "Search my repo for uses of fetchUser."

Claude Code / Claude Desktop

Claude Desktop reads MCP servers from:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

Use the same mcpServers structure and restart Claude after editing. Claude Code (the terminal agent) can also register servers with the claude mcp add command or via a project-level .mcp.json file.
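As a minimal sketch, a claude_desktop_config.json reusing the GitHub server from the Cursor example (the token value is a placeholder):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token"
      }
    }
  }
}
```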

VS Code with GitHub Copilot

Some clients support code --add-mcp '...' or a GUI for adding servers. Check your client's MCP documentation.
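As one sketch, recent VS Code versions read workspace servers from .vscode/mcp.json, which uses a servers key rather than mcpServers (the schema has evolved, so verify against your version's documentation):

```json
{
  "servers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```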

Remote servers (Supabase, Sentry, Vercel)

Many SaaS MCP servers use a URL and OAuth instead of a local process:

{
  "mcpServers": {
    "supabase": { "url": "https://mcp.supabase.com/mcp" },
    "sentry": { "url": "https://mcp.sentry.dev/mcp" },
    "vercel": { "url": "https://mcp.vercel.com" }
  }
}

On first use, the client will open a browser for OAuth login.

Top Server Categories

  • Code and repos — GitHub (repos, PRs, issues)
  • Databases — Postgres, MongoDB
  • Product and docs — Notion, Linear
  • Design and deployment — Figma, Vercel
  • Infrastructure and observability — Supabase, Sentry

See the full MCP directory for all covered servers.

Tool Compatibility

Tool | MCP support | Config location
Cursor | Yes | .cursor/mcp.json
Claude Code | Yes | .mcp.json (project) or claude_desktop_config.json (Desktop)
OpenAI Codex | Yes | Codex MCP settings
Windsurf | Yes | Windsurf MCP config
GitHub Copilot | Yes (VS Code) | Via VS Code/Copilot MCP
Continue | Yes | Continue config

Configuration format is similar across tools: mcpServers object with server name, command/args for stdio or url for remote. Check each tool's docs for exact paths and options.

Common Pitfalls

  1. Config not loaded — Ensure the config file path is correct and restart the AI tool.
  2. Missing env vars — Tokens in env must be set; avoid hardcoding secrets in committed files.
  3. Wrong transport — Use command/args for stdio servers, url for HTTP. Mixing them causes failures.
  4. Production access — Do not connect MCP servers to production databases or critical systems; use dev/staging.
  5. Too many servers — Each server adds tools to the context. Enable only what you need for the current task.
  6. Approval disabled — If your client supports it, keep manual approval for tool calls to avoid unintended mutations.

Security Recommendations

  • Store API keys and tokens in environment variables, not in config files.
  • Prefer read-only modes and scoped access (e.g., project-specific Supabase, repo-specific GitHub tokens).
  • Use OAuth-hosted servers when available instead of long-lived tokens.
  • Review tool call parameters before approving; agents can be prompted to run harmful commands.
  • Isolate sensitive data; do not give agents access to PII or production databases.
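Some clients can expand environment-variable references inside the config, so the committed file never contains a raw token. The ${GITHUB_TOKEN} syntax below assumes a client that supports ${VAR} expansion (for example, Claude Code's project .mcp.json); the exact syntax varies by client, so check its docs:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```

With this layout the real token lives only in your shell environment, and the config file can be committed safely.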

Next Steps


Related guides: AI coding agents explained | How to use Cursor | Directory


Frequently Asked Questions

What is MCP?
MCP (Model Context Protocol) is an open standard that lets AI applications connect to external tools, databases, and services. MCP servers expose tools (like search_code, query_database) that AI coding agents can call during development.
Which AI tools support MCP?
Cursor, Claude Code, OpenAI Codex, Windsurf, GitHub Copilot (VS Code), and many others support MCP. Configuration goes in .cursor/mcp.json (Cursor), Claude Desktop config, or your tool's MCP settings.
Do I need to install each MCP server separately?
Yes. Add each server to your mcpServers config with its command (e.g., npx) and args. Some servers run locally (stdio), others use a remote URL. Restart your AI tool after adding servers.
Are MCP servers secure?
Security depends on the server. Use read-only modes when possible, avoid connecting to production data, store tokens in env vars (not config files), and manually approve tool calls when your client supports it.
What are the most useful MCP servers?
GitHub (repos, PRs, issues), Postgres/MongoDB (database context), Notion/Linear (docs and tickets), Supabase (backend as a service), Figma (design to code), and Playwright (browser automation) are widely used. See our [MCP directory](/mcp-servers) for the full list.