How to Build and Register Plugins in an MCP Server (OpenAI & YouTube API Examples)
🔍 Introduction
In a Model Context Protocol (MCP) system, plugins are the building blocks of functionality. Each plugin receives a structured context, performs a specific task (e.g., summarizing text or extracting metadata), and returns structured output that is merged back into the context.
By building plugins as modular, context-driven functions, you gain:
- 🔌 Plug-and-play flexibility
- ♻️ Reusability across workflows
- 🔎 Traceability with full context logging
In this guide, we’ll walk through:
- Designing the plugin interface
- Registering plugins dynamically
- Implementing real plugins using:
- 📺 YouTube Data API
- 🧠 OpenAI GPT-4
🧠 MCP Plugin Architecture
Every plugin follows a simple convention:
```python
def my_plugin(context: dict) -> dict:
    # Perform action
    return {"some_output": value}
```
Plugins should be:
- Pure functions: input → output
- Context-driven: read from context["input"] (a sample context shape follows this list)
- Schema-respecting: validate required keys
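For orientation, here is roughly what a context object looks like as it moves through a workflow. The keys shown are just the ones used later in this guide (input, workflow, output, plus an errors list the runner may add); treat it as an illustrative shape, not a fixed schema:

```python
# Illustrative context shape (keys mirror the examples later in this guide)
context = {
    "input": {"text": "Raw text or other parameters the plugins read"},
    "workflow": ["gpt_summarize"],  # ordered list of registered tool names
    "output": {},                   # plugins merge their results here
    # "errors": []                  # added by the runner when a step fails
}
```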
We register each plugin into a global registry for discoverability.
⚙️ Plugin Registry System
```python
# plugin_registry.py
TOOL_REGISTRY = {}

def register_tool(name):
    def decorator(func):
        TOOL_REGISTRY[name] = func
        return func
    return decorator
```
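To see what the decorator actually does, here is a quick example. The echo plugin is made up purely for illustration, but registering and looking it up works the same way as the real plugins below:

```python
from plugin_registry import TOOL_REGISTRY, register_tool

@register_tool("echo")
def echo_plugin(context):
    # Simply echoes the input back as output
    return {"echo": context["input"]}

print(TOOL_REGISTRY)  # {'echo': <function echo_plugin at 0x...>}
print(TOOL_REGISTRY["echo"]({"input": {"msg": "hi"}}))  # {'echo': {'msg': 'hi'}}
```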
📺 Example 1: YouTube Metadata Extractor
We’ll use the YouTube Data API v3 to fetch metadata from a video URL.
Prerequisites
```bash
pip install google-api-python-client
```
youtube_plugin.py
```python
from plugin_registry import register_tool
from googleapiclient.discovery import build
import re

YOUTUBE_API_KEY = "your_api_key_here"

def extract_video_id(url):
    # Handles both youtube.com/watch?v=<id> and youtu.be/<id> URLs
    match = re.search(r"(?:v=|youtu\.be/)([a-zA-Z0-9_-]{11})", url)
    return match.group(1) if match else None

@register_tool("youtube_metadata")
def youtube_metadata_plugin(context):
    video_url = context["input"].get("video_url", "")
    video_id = extract_video_id(video_url)
    if not video_id:
        return {"error": "Invalid YouTube URL"}

    youtube = build("youtube", "v3", developerKey=YOUTUBE_API_KEY)
    response = youtube.videos().list(part="snippet", id=video_id).execute()

    items = response.get("items", [])
    if not items:
        return {"error": "Video not found"}

    snippet = items[0]["snippet"]
    return {
        "title": snippet["title"],
        "description": snippet["description"],
        "tags": snippet.get("tags", []),
        "channel": snippet["channelTitle"],
    }
```
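A quick way to sanity-check the plugin on its own, assuming you have replaced YOUTUBE_API_KEY with a real key from the Google Cloud console:

```python
from youtube_plugin import youtube_metadata_plugin

context = {"input": {"video_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ"}}
print(youtube_metadata_plugin(context))
# Expect a dict with "title", "description", "tags", and "channel",
# or {"error": ...} if the URL or key is invalid.
```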
🧠 Example 2: OpenAI GPT Summarizer Plugin
This plugin summarizes text using OpenAI’s GPT model.
Prerequisites
```bash
pip install openai
```
openai_plugin.py
```python
# Uses the openai>=1.0 client interface
from openai import OpenAI

from plugin_registry import register_tool

client = OpenAI(api_key="your_openai_api_key")

@register_tool("gpt_summarize")
def gpt_summarize_plugin(context):
    text = context["input"].get("text", "")
    if not text:
        return {"error": "No text provided"}

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful summarizer."},
            {"role": "user", "content": f"Summarize the following:\n{text}"},
        ],
        temperature=0.5,
    )
    return {"summary": response.choices[0].message.content.strip()}
```
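Hard-coding the key is fine for a demo, but in practice you will likely want to read it from the environment. A minimal sketch, assuming the key is exported as OPENAI_API_KEY:

```python
import os
from openai import OpenAI

# The client also reads OPENAI_API_KEY automatically if no key is passed explicitly
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```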
🏃 MCP Runner: Bringing It All Together
mcp_runner.py
```python
from plugin_registry import TOOL_REGISTRY

def mcp_runner(context):
    for step in context.get("workflow", []):
        tool = TOOL_REGISTRY.get(step)
        if not tool:
            context.setdefault("errors", []).append(f"Tool '{step}' not found")
            continue
        result = tool(context)
        # Merge each tool's result into the shared output section of the context
        context.setdefault("output", {}).update(result)
    return context
```
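Because unknown steps are recorded in an errors list rather than raised, a workflow keeps running past a missing tool. A quick check of that behavior:

```python
from mcp_runner import mcp_runner

ctx = {"input": {}, "workflow": ["does_not_exist"], "output": {}}
print(mcp_runner(ctx)["errors"])
# ["Tool 'does_not_exist' not found"]
```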
🧪 Test the Workflow
example_context.json
```json
{
  "input": {
    "video_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "text": "The quick brown fox jumps over the lazy dog."
  },
  "workflow": ["youtube_metadata", "gpt_summarize"],
  "output": {}
}
```
main.py
```python
import json

# Importing the plugin modules runs their @register_tool decorators,
# which populates TOOL_REGISTRY before the runner is called.
import youtube_plugin
import openai_plugin
from mcp_runner import mcp_runner

with open("example_context.json") as f:
    context = json.load(f)

result = mcp_runner(context)
print(json.dumps(result, indent=2))
```
🧭 Best Practices for MCP Plugins
| Practice | Why it matters |
|---|---|
| Validate required inputs | Prevents runtime errors and gives clearer error messages |
| Return structured output | Enables chaining and composability |
| Add an error fallback in output | Lets workflows handle failures gracefully |
| Use semantic names | Helps users understand a plugin's purpose |
| Isolate API logic | Easier testing and mocking |
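The first three practices can be folded into a small helper. This is only a sketch (require_inputs and gpt_summarize_safe are not part of the code above), but it shows the validate-then-fallback pattern:

```python
from plugin_registry import register_tool
from openai_plugin import gpt_summarize_plugin

def require_inputs(context, *keys):
    # Returns the input keys that are missing or empty
    return [k for k in keys if not context.get("input", {}).get(k)]

@register_tool("gpt_summarize_safe")
def gpt_summarize_safe(context):
    # Validate required inputs up front and return a structured error fallback
    missing = require_inputs(context, "text")
    if missing:
        return {"error": f"Missing required input(s): {', '.join(missing)}"}
    return gpt_summarize_plugin(context)
```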
🌐 Extended Use Cases
- Combine GPT and YouTube for auto-reel generation
- Use Google Maps + GPT for travel route explanation
- Use Stripe + GPT for automated invoicing
Once your plugin is registered, it can be reused across any workflow in your MCP system.
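For instance, a summarize-only workflow can reuse the registered gpt_summarize tool without touching the plugin code; only the context changes (a sketch, assuming the modules above are importable):

```python
from mcp_runner import mcp_runner
import openai_plugin  # importing registers "gpt_summarize"

context = {
    "input": {"text": "Model Context Protocol plugins are modular, context-driven tools."},
    "workflow": ["gpt_summarize"],
    "output": {},
}
print(mcp_runner(context)["output"].get("summary"))
```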
Conclusion
MCP plugins turn your AI project into a modular, extensible powerhouse. With just a few lines of Python, you can integrate real-world APIs like OpenAI and YouTube, plug them into a unified runner, and scale workflows with ease.
Keywords: MCP server plugin, OpenAI plugin Python, YouTube API plugin, build MCP tool, register tool decorator Python, model context protocol plugin example, gpt summarizer, ai workflow plugin, plugin chaining MCP, python ai plugins