MCP server

Survey Coder Pro runs a hosted Model Context Protocol server. Connect it to Claude.ai, Cursor, or any MCP-aware client and code surveys conversationally — no glue code required.

https://api.surveycoder.io/mcp

Authentication uses the same API key as the rest of the API, sent in the x-api-key header. The server speaks SSE-over-HTTP and works with any MCP transport that supports remote servers.

Tool               What it does
code_responses     One-call coding — verbatims in, codes out. Mirrors POST /v1/code.
list_projects      List projects in your org.
get_question       Read a question's codebook and coded responses.
generate_codebook  Generate a codebook from a sample of responses without coding everything.
import_codebook    Push a codebook into an existing question.
estimate_cost      Free estimate of how many credits a job will cost.
wait_for_job       Long-poll an async job until it finishes.
read_usage         Current credit balance and recent usage.

The MCP server is read-mostly by default — destructive operations (delete project, revoke key) are not exposed. Use the REST API or dashboard for those.

To connect from Claude.ai:

  1. Open Claude.ai and go to Settings → Connectors.
  2. Click Add custom connector (or Add remote MCP server).
  3. Name it Survey Coder Pro and paste the URL https://api.surveycoder.io/mcp.
  4. When prompted, paste your API key — Claude will send it as x-api-key on every tool call.
  5. Start a new chat and ask: “Code these 8 verbatims about laundry detergent…”
Connecting programmatically with the MCP TypeScript SDK looks like this:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';

// Authenticate with your API key via the x-api-key header.
const transport = new SSEClientTransport(
  new URL('https://api.surveycoder.io/mcp'),
  { requestInit: { headers: { 'x-api-key': process.env.SCP_API_KEY! } } },
);

const client = new Client(
  { name: 'my-app', version: '1.0.0' },
  { capabilities: {} },
);

await client.connect(transport);

// Discover the available tools, then call one.
const tools = await client.listTools();
const result = await client.callTool({
  name: 'code_responses',
  arguments: {
    question: 'Which brand of laundry detergent do you prefer and why?',
    responses: ['Tide because it removes stains better', '...'],
  },
});
```
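Tool results come back as an array of content parts rather than a single string. A small helper (hypothetical, assuming the standard MCP text-part shape) flattens them for display or logging:

```typescript
// Minimal shape of an MCP tool result's content parts.
type ContentPart = { type: string; text?: string };

// Join all text parts of a tool result into one string.
// Non-text parts (images, embedded resources) are skipped.
function extractText(result: { content: ContentPart[] }): string {
  return result.content
    .filter((p) => p.type === 'text' && typeof p.text === 'string')
    .map((p) => p.text as string)
    .join('\n');
}
```

For example, `console.log(extractText(result))` after the `callTool` call above prints the coded output as plain text.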

A typical chat with the MCP server connected:

You: Code these 8 verbatims about toothpaste preference: [pastes list]

Claude: I’ll use code_responses (calls the tool). Here’s the codebook — 5 codes across 2 categories. The dominant theme is whitening (4 of 8), followed by mint flavor (3 of 8). One verbatim mentioned sensitivity. Want me to drill into any category?

The model can chain calls — for example, estimate_cost before code_responses, or wait_for_job after kicking off a large batch.
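The same estimate-then-code chaining works from your own code. A sketch, assuming the estimate comes back as JSON text in the first content part (the payload shapes here are illustrative, not documented contracts):

```typescript
// Hypothetical shape of an estimate_cost result payload.
type Estimate = { credits: number };

// Decide whether a job fits a credit budget.
function withinBudget(estimate: Estimate, budgetCredits: number): boolean {
  return estimate.credits <= budgetCredits;
}

// Sketch: estimate first, then code only if affordable.
// `client` is an already-connected MCP Client as in the snippet above.
async function codeIfAffordable(
  client: { callTool: (req: { name: string; arguments: object }) => Promise<any> },
  question: string,
  responses: string[],
  budgetCredits: number,
) {
  const est = await client.callTool({
    name: 'estimate_cost',
    arguments: { question, responses },
  });
  // Assumption: the estimate arrives as JSON in the first text part.
  const estimate: Estimate = JSON.parse(est.content[0].text);
  if (!withinBudget(estimate, budgetCredits)) {
    throw new Error(`Job would cost ${estimate.credits} credits, over budget`);
  }
  return client.callTool({ name: 'code_responses', arguments: { question, responses } });
}
```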

The MCP server returns the same structured errors as the REST API. The full error code and doc_url come through in the tool response so the model can correct itself or escalate to you.
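When handling errors yourself, a helper like the following can pull the structured error out of a failed tool result. This is a sketch: it assumes the error arrives as JSON in a text part and that failures carry the MCP-standard `isError` flag; the `ScpError` shape is an assumption, not a documented contract.

```typescript
// Assumed shape of a structured error surfaced in a tool response.
type ScpError = { error: { code: string; message: string; doc_url?: string } };

// Extract the structured error from a failed tool result, if present.
// MCP marks tool-level failures with `isError: true` on the result.
function parseToolError(
  result: { isError?: boolean; content: { type: string; text?: string }[] },
): ScpError['error'] | null {
  if (!result.isError) return null;
  const text = result.content.find((p) => p.type === 'text')?.text;
  if (!text) return null;
  try {
    return (JSON.parse(text) as ScpError).error ?? null;
  } catch {
    return null; // Not JSON — leave the raw text for the model to interpret.
  }
}
```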

API reference

Every MCP tool maps to a REST endpoint. See the spec.

Coding pipeline

What happens under the hood. Read more.