Add MCP Servers in Codex, Claude Code, and Gemini (Notion + Sentry Examples)

If your AI assistant can’t see your real project context (docs, tickets, dashboards, errors), it will guess.
MCP (Model Context Protocol) fixes that by letting tools like Codex, Claude Code, and Gemini CLI connect to MCP servers that expose real tools + data.

This post explains:

  • What MCP is (simple + practical)
  • Why MCP is useful for coding and debugging
  • Pros/cons and security concerns
  • A real end-to-end use case (Notion + Sentry)
  • Step-by-step setup in:
      • OpenAI Codex (CLI / IDE)
      • Claude Code
      • Gemini CLI
  • Copy-paste examples for Notion MCP and Sentry MCP (plus a few bonus servers)

What is MCP (Model Context Protocol)?

MCP (Model Context Protocol) is an open standard that connects an AI tool (the “client”) to external capabilities (“servers”) so the model can use tools and fetch live context instead of hallucinating.

Think of it like USB-C for AI integrations:
- Your AI app (Codex/Claude/Gemini) becomes a “host”
- An MCP server exposes:
    - Tools (actions like “search issues”, “create page”, “list errors”)
    - Resources (readable context like docs/pages)
    - Sometimes prompts and updates (depending on the client/server)

MCP servers can run:
- Locally (STDIO): a process on your machine (good for local scripts)
- Remotely (Streamable HTTP): a hosted server you connect to by URL (good for SaaS tools like Notion/Sentry)
- SSE transport exists too, but many clients have deprecated it in favor of Streamable HTTP
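As a concrete sketch, here is how the two transports look in Codex’s config.toml (covered in detail below). The server names, URL, and package here are placeholders, not real endpoints:

```toml
# Remote (Streamable HTTP): point the client at a hosted URL
[mcp_servers.my_remote_server]
url = "https://example.com/mcp"

# Local (STDIO): the client launches a process and speaks MCP over stdin/stdout
[mcp_servers.my_local_server]
command = "npx"
args = ["-y", "some-mcp-package"]
```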


Why MCP is useful (especially for developers)

Without MCP:
- AI can only use what’s in the chat + your local files
- It can’t reliably fetch “what actually happened” in production
- It can’t safely take real actions in your tools

With MCP:
- Your AI can pull the latest truth from systems like Notion and Sentry
- You can ask it to do real workflows:
    - “Find the runbook in Notion and apply it to this incident”
    - “Pull top Sentry errors from last 24h and summarize root causes”
    - “Create a bug ticket, link the stack trace, propose a patch”

Claude Code’s docs explicitly call out workflows like issue trackers + monitoring + databases once MCP servers are connected.
Codex supports MCP in both the CLI and the IDE, and the two share the same config.


MCP in one picture (mental model)

(Codex / Claude / Gemini)  <--MCP-->  (MCP Server)  --->  Notion / Sentry / GitHub / DB
        client/host                      tools/resources              real systems

Pros and Cons of MCP

Pros

  • Less hallucination: answers come from live data (Sentry issues, Notion docs)
  • Reusable integrations: same server can work across multiple AI tools (standard protocol)
  • Automation: turn “chat” into “actions” (create pages, query errors, open PRs)
  • Human-in-the-loop friendly: many servers are designed around safe, confirmed actions

Cons / risks

  • Security / prompt injection risk: a server can return untrusted content; only connect servers you trust
  • Permission blast radius: MCP can read/write depending on auth scopes
  • Reliability: network/server downtime affects your assistant
  • Latency & cost: remote calls can slow loops; some servers require API keys or paid tiers
  • Data governance: be careful with sensitive data, regulated logs, customer PII

Rule of thumb: treat MCP servers like adding a new dependency with production access.


Real use case: “Auto-triage production errors using Sentry + Notion”

Here’s a realistic workflow you can run in any MCP-enabled coding agent:

  1. Sentry MCP: fetch top errors in last 24h, group by release, extract stack traces
  2. Notion MCP: pull the incident runbook + service ownership page
  3. Agent generates:
    • root-cause hypothesis
    • impacted endpoints/users
    • suggested code patch
    • Notion incident update draft
  4. Developer reviews + merges

Sentry’s MCP server is explicitly optimized for debugging workflows and human-in-the-loop agents.

Notion MCP is designed to securely read/write workspace data via MCP clients.

Example prompts you can use after setup:

  • “List the most common errors in the last 24 hours and summarize patterns.”
  • “For the top error, pull the stack trace and identify the likely code path.”
  • “Find the relevant Notion runbook and propose the mitigation steps.”
  • “Draft an incident update for Notion: impact, cause, mitigation, next steps.”

Setup: Add MCP servers in Codex, Claude Code, and Gemini CLI

Below are copy-paste setups for Notion MCP and Sentry MCP, plus a couple useful extras.


1) OpenAI Codex: Add MCP servers (CLI + config.toml)

Codex stores MCP config in ~/.codex/config.toml. CLI and IDE share it.
Codex supports STDIO servers and Streamable HTTP servers (and OAuth login for servers that support it).

A. Add Notion MCP in Codex

Option 1 — CLI

codex mcp add notion --url https://mcp.notion.com/mcp
codex mcp list

Option 2 — config.toml

[mcp_servers.notion]
url = "https://mcp.notion.com/mcp"

Notion provides the public MCP endpoint for manual connections.

Auth note: if the server uses OAuth, Codex supports OAuth login via its MCP flow.
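If your Codex build doesn’t handle the remote URL directly, a common fallback is the mcp-remote STDIO wrapper (the same approach Notion recommends for Gemini, shown later in this post). Sketched as a config.toml entry, assuming Codex’s command/args keys for STDIO servers:

```toml
[mcp_servers.notion]
command = "npx"
args = ["-y", "mcp-remote", "https://mcp.notion.com/mcp"]
```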


B. Add Sentry MCP in Codex

Remote server (recommended when supported by your client)

codex mcp add sentry --url https://mcp.sentry.dev/mcp
codex mcp list

Sentry’s MCP server is a remote server on the Sentry platform.

If OAuth login is required, Codex indicates you can authenticate for OAuth-enabled servers using its MCP login flow.
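As an alternative to OAuth, Sentry’s STDIO mode (detailed in the Gemini section below) should also work from Codex’s config.toml. This sketch assumes Codex supports an env table for STDIO servers; the token value is a placeholder for your Sentry User Auth Token:

```toml
[mcp_servers.sentry]
command = "npx"
args = ["-y", "@sentry/mcp-server@latest"]
env = { SENTRY_ACCESS_TOKEN = "your_sentry_user_token" }
```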


C. Bonus: Add Context7 docs MCP in Codex (great for up-to-date docs)

Codex docs show Context7 as an example MCP server:

codex mcp add context7 -- npx -y @upstash/context7-mcp

2) Claude Code: Add MCP servers (fastest setup)

Claude Code provides a clean CLI for adding MCP servers (HTTP, SSE, STDIO).
It also supports authenticating with OAuth for remote servers using the /mcp menu.

A. Add Notion MCP in Claude Code

claude mcp add --transport http notion https://mcp.notion.com/mcp
claude mcp list

Then inside Claude Code, run:

/mcp

Notion’s endpoint is the hosted Notion MCP server.


B. Add Sentry MCP in Claude Code

Claude’s docs include a practical Sentry example:

claude mcp add --transport http sentry https://mcp.sentry.dev/mcp

Then authenticate inside Claude Code:

/mcp

C. Example prompts to validate your setup (Claude Code)

Try:

  • What are the most common errors in the last 24 hours? (Sentry)
  • Search my Notion workspace for the API rate limit runbook. (Notion)

3) Gemini CLI: Add MCP servers via .gemini/settings.json

Gemini CLI reads MCP server configuration from .gemini/settings.json.
Firebase Studio’s docs also confirm that Gemini CLI uses that path and show example mcpServers configurations (including headers/env).

A. Add Notion MCP in Gemini CLI (using Notion’s recommended configs)

Notion’s docs include a Gemini-friendly approach using a local STDIO wrapper (mcp-remote) that points at the hosted Notion server.

Create/edit .gemini/settings.json:

{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.notion.com/mcp"]
    }
  }
}

This is taken directly from Notion’s “STDIO (Local Server)” option for connecting tools that may prefer local transport.


B. Add Sentry MCP in Gemini CLI (STDIO mode with access token)

Sentry’s repo documents a STDIO mode you can run via npx, authenticated using a Sentry User Auth Token, plus optional env vars for host and OpenAI API key.

1) Export your token

export SENTRY_ACCESS_TOKEN="your_sentry_user_token"

2) Configure .gemini/settings.json

{
  "mcpServers": {
    "sentry": {
      "command": "npx",
      "args": ["-y", "@sentry/mcp-server@latest", "--access-token", "$SENTRY_ACCESS_TOKEN"],
      "env": {
        "SENTRY_ACCESS_TOKEN": "$SENTRY_ACCESS_TOKEN"
      }
    }
  }
}

Sentry’s docs in the repo show the npx @sentry/mcp-server@latest --access-token=… launch pattern and environment variables (including an optional OPENAI_API_KEY for some AI-powered search tools).

Tip: If you’re on self-hosted Sentry, Sentry’s MCP server supports a --host override / env var approach.
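A hedged sketch of that override in .gemini/settings.json, based on the --host flag mentioned above (the hostname is a placeholder; check Sentry’s MCP README for the exact flag and env-var names):

```json
{
  "mcpServers": {
    "sentry": {
      "command": "npx",
      "args": [
        "-y",
        "@sentry/mcp-server@latest",
        "--access-token", "$SENTRY_ACCESS_TOKEN",
        "--host=sentry.example.com"
      ]
    }
  }
}
```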


C. Bonus: Add GitHub MCP (headers example)

Firebase’s docs show how to configure a remote MCP server with an Authorization header (example is GitHub).

{
  "mcpServers": {
    "github": {
      "url": "https://api.githubcopilot.com/mcp/",
      "headers": {
        "Authorization": "Bearer <ACCESS_TOKEN>"
      }
    }
  }
}

Best practices (so MCP doesn’t become a security hole)

  • Connect only trusted servers (treat them like privileged dependencies)
  • Use least privilege scopes (read-only when possible)
  • Prefer remote HTTP servers for SaaS; prefer STDIO for local scripts
  • Keep secrets in env vars, not hardcoded config
  • Watch for prompt injection from external content (issues, tickets, wiki pages)
  • Add guardrails in prompts
    • Do not execute destructive actions unless I confirm.
    • Summarize findings before creating/updating anything.

FAQ

Is MCP only for coding tools?

No. Any AI tool that implements an MCP client can use MCP servers to access tools and context (docs, tickets, databases, observability).

What’s the difference between an MCP server and a plugin?

MCP is the protocol; “plugins” are product-specific packaging. MCP servers expose standardized tools/resources. Clients decide how to install/manage them.

Which is better: remote MCP or local (stdio) MCP?

  • Remote: best for SaaS tools (Notion/Sentry hosted endpoints)
  • Local: best for custom internal tools, scripts, or self-hosted connectors

Can MCP servers write data (dangerous)?

Yes—many can read/write depending on auth scopes. That’s why docs warn to only add trusted servers.


Tags: MCP, Model Context Protocol, MCP server, Codex MCP, Claude Code MCP, Gemini MCP, Notion MCP, Sentry MCP, AI tools integration

