Overview
The Content.one MCP Remote Server implements MCP over HTTP using
the @modelcontextprotocol/sdk. It functions as a
secure gateway between MCP clients and Content.one instance APIs
so tools and LLMs can fetch instance-specific context when
generating content or answering prompts.
- Stateless authentication via Bearer tokens (per-request).
- Extensible tools surface instance resources: content, media, models, labels, settings, etc.
- Integrates with monitoring (Sentry) and supports streaming JSON‑RPC responses.
API Endpoint
POST /mcp — main MCP entrypoint. Accepts JSON‑RPC
2.0 requests (MCP messages) and returns JSON‑RPC 2.0 responses.
Headers
- Authorization: Bearer <SESSION_TOKEN> (required)
- Content-Type: application/json
- X-Instance-Zuid: <INSTANCE_ZUID> (optional — enables instance-scoped tools)
Body
A valid JSON‑RPC 2.0 request object. Example:
{
"jsonrpc": "2.0",
"method": "tools/list",
"params": {},
"id": 1
}
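A request like the one above can be assembled and sent with a small helper. The sketch below is illustrative only: `buildRequest`, `callMcp`, and the base URL are assumptions, not part of the server's documented API (it relies on `fetch` being available, i.e. Node 18+).

```typescript
// Sketch of a JSON-RPC 2.0 call to the /mcp endpoint.
// buildRequest and callMcp are illustrative helpers, not a documented SDK.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  method: string;
  params: Record<string, unknown>;
  id: number;
};

// Assemble a JSON-RPC 2.0 request envelope.
function buildRequest(
  method: string,
  params: Record<string, unknown> = {},
  id = 1
): JsonRpcRequest {
  return { jsonrpc: "2.0", method, params, id };
}

// POST the request with the headers the server expects.
async function callMcp(
  baseUrl: string,
  token: string,
  req: JsonRpcRequest,
  instanceZuid?: string
): Promise<unknown> {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
  // Optional header that enables instance-scoped tools.
  if (instanceZuid) headers["X-Instance-Zuid"] = instanceZuid;
  const res = await fetch(`${baseUrl}/mcp`, {
    method: "POST",
    headers,
    body: JSON.stringify(req),
  });
  return res.json();
}
```

For example, `callMcp("https://mcp.content.one", token, buildRequest("tools/list"))` would issue the `tools/list` request shown above.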
Authentication & Sessions
Authentication is stateless: clients present a session token each request. The server verifies the token (via Content.one session verification or an auth adapter) and returns an error for invalid tokens.
Tip: protect session tokens in transit (HTTPS) and limit token scope/TTL where possible.
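A per-request check might look like the following sketch. Here `verifySession` is a stand-in for the real Content.one session verification (or auth adapter) call; only the header-parsing and stateless-guard shape is the point.

```typescript
// Sketch of stateless per-request Bearer authentication.
// verifySession is a placeholder for the real Content.one verification call.

// Extract the token from an Authorization header, or null if malformed.
function parseBearer(header: string | undefined): string | null {
  if (!header) return null;
  const match = /^Bearer\s+(\S+)$/.exec(header);
  return match ? match[1] : null;
}

// Hypothetical adapter: resolves true when the session token is valid.
async function verifySession(token: string): Promise<boolean> {
  // In the real server this would call Content.one session verification.
  return token.length > 0;
}

// Guard run on every /mcp request; no session state is kept between calls.
async function authenticate(header: string | undefined): Promise<string> {
  const token = parseBearer(header);
  if (!token || !(await verifySession(token))) {
    throw new Error("401 Unauthorized: invalid or missing session token");
  }
  return token;
}
```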
Available Tools
Accounts
- get-instances
- get-instance
- get-instance-users
Auth
- verify-session
Media
- get-bins
- get-bin
- get-groups
- get-files
Instances / Content
- get-audit-logs / get-audit-log
- get-fields / get-field
- get-items / search-content-item
- get-item-versions
- get-models / get-model
- get-labels / get-settings
- get-stylesheets
New tools can be added; design them to be narrow and predictable so LLMs can rely on structured outputs.
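To illustrate that guideline, a tool can be modeled as a name, a narrow input schema, and a handler that returns structured output. This is a simplified sketch: the real server registers tools through @modelcontextprotocol/sdk, and the `get-labels` input/output shapes below are assumptions, not the actual implementation.

```typescript
// Simplified sketch of a narrow, predictable tool definition.
// The real server registers tools via @modelcontextprotocol/sdk; the
// input/output shapes here are illustrative assumptions.
type ToolDef<In, Out> = {
  name: string;
  description: string;
  handler: (input: In) => Promise<Out>;
};

type GetLabelsInput = { instanceZuid: string };
type Label = { zuid: string; name: string };

// One narrow tool: a fixed input and a structured output an LLM can rely on.
const getLabels: ToolDef<GetLabelsInput, { labels: Label[] }> = {
  name: "get-labels",
  description: "List labels for a Content.one instance",
  handler: async ({ instanceZuid }) => {
    // A real implementation would call the instance API; stubbed here.
    void instanceZuid;
    return { labels: [{ zuid: "label-1", name: "Draft" }] };
  },
};
```

Keeping each tool's output this predictable is what lets a model chain tool calls without guessing at response shapes.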
Content.one MCP Remote Client
A dedicated MCP client for Content.one that uses the MCP Server: it accepts a prompt plus an optional system instruction and forwards them to a Gemini model.
Text generation (example request)
POST /client
{
"prompt": "Write a release note for the updated homepage",
"systemInstruction": "You are a concise technical writer",
"temperature": 0.7
}
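A caller can build that body with a small helper that drops unset optional fields; a sketch, where `buildClientBody` and its validation rule are illustrative assumptions rather than part of the documented client:

```typescript
// Sketch of building a /client request body; the field names mirror the
// example above, and the validation rule is an illustrative assumption.
type ClientRequest = {
  prompt: string;
  systemInstruction?: string;
  temperature?: number;
};

// Omit unset optional fields so the JSON body stays minimal.
function buildClientBody(
  prompt: string,
  opts: { systemInstruction?: string; temperature?: number } = {}
): ClientRequest {
  if (!prompt.trim()) throw new Error("prompt must be non-empty");
  const body: ClientRequest = { prompt };
  if (opts.systemInstruction !== undefined) {
    body.systemInstruction = opts.systemInstruction;
  }
  if (opts.temperature !== undefined) {
    body.temperature = opts.temperature;
  }
  return body;
}
```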
Image generation
Image generation requests require a prompt that indicates an
image should be generated. The system uses
gemini-2.5-flash-image.
Gemini Models
Text generation: gemini-2.5-flash
Image generation: gemini-2.5-flash-image
Use a lower temperature for more deterministic outputs and a higher temperature for more creative output.
Examples
Listing tools (JSON-RPC)
curl -X POST https://mcp.content.one/mcp \
-H "Authorization: Bearer $SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"tools/list","params":{},"id":1}'
Text generation (client)
curl -X POST https://mcp.content.one/client \
-H "Authorization: Bearer $SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{"prompt":"Generate a 3-bullet summary...","temperature":0.6}'
Security & Best Practices
- Always use TLS (HTTPS) to protect tokens and responses in transit.
- Validate and scope session tokens server-side; apply least privilege.
- Sanitize and limit tool outputs before feeding them into LLM prompts.
- Rate-limit MCP calls and monitor usage.
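The rate-limiting advice above can be sketched as an in-memory token bucket keyed by session token. The capacity and refill values are arbitrary examples, and a production deployment would typically use shared storage (e.g. Redis) rather than per-process state.

```typescript
// Sketch of a per-token token-bucket rate limiter for /mcp calls.
// Capacity and refill rate are arbitrary example values.
class TokenBucket {
  private tokens: number;
  private last: number;
  private capacity: number;
  private refillPerSec: number;

  constructor(capacity: number, refillPerSec: number, now: number = Date.now()) {
    this.capacity = capacity;       // maximum burst size
    this.refillPerSec = refillPerSec; // tokens restored per second
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the call is allowed, false if it should be rate-limited.
  allow(now: number = Date.now()): boolean {
    const elapsed = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// One bucket per session token (in-memory; not shared across processes).
const buckets = new Map<string, TokenBucket>();

function allowCall(sessionToken: string, now: number = Date.now()): boolean {
  let bucket = buckets.get(sessionToken);
  if (!bucket) {
    bucket = new TokenBucket(10, 1, now); // 10-call burst, 1 call/sec refill
    buckets.set(sessionToken, bucket);
  }
  return bucket.allow(now);
}
```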