
Intermediate · 1-2 weeks

Building APIs & Backends with AI Agents

Design and build robust APIs and backend services with AI coding agents, from REST to GraphQL.

Last reviewed Feb 27, 2026

Overview

This cookbook covers building REST and GraphQL APIs with AI coding agents, from schema design and endpoint implementation to testing, security review, and deployment. AI agents have transformed backend development: tasks that used to take days now take hours.

Covers: API design, implementation, testing, and deployment
Target audience: Backend developers, full-stack developers


What You'll Need

| Tool | Purpose |
| --- | --- |
| Claude Code CLI | AI coding agent in the terminal |
| Cursor | AI-powered IDE |
| Supabase | PostgreSQL + Auth + Storage |
| Postgres MCP | Direct DB access from AI |
| GitHub MCP | PR management from AI |
| Node.js or Python environment | Runtime |

Claude Code API Development Workflow

1. Create CLAUDE.md with Project Architecture

# Project: [API NAME]

## Tech Stack
- Runtime: Node.js 20 + TypeScript
- Framework: Fastify (or Express)
- Database: PostgreSQL via Supabase
- Auth: JWT + Supabase Auth
- Validation: Zod
- Testing: Vitest + Supertest

## Architecture
src/
  routes/       # Route handlers
  services/     # Business logic
  repositories/ # Database queries
  middleware/   # Auth, rate limiting, logging
  schemas/      # Zod validation schemas
  types/        # TypeScript interfaces

## Code Style
- TypeScript strict mode, no any
- Repository pattern for all DB access
- All endpoints must have OpenAPI JSDoc
- Error responses: { error: string, code: string }

## Bash Commands
- Dev: npm run dev
- Test: npm run test
- Migrate: npm run db:migrate
- Generate types: npm run db:types

## API Conventions
- REST: noun-based routes, plural
- HTTP methods: GET/POST/PUT/DELETE only
- Versioning: /api/v1/
- Auth: Bearer token in Authorization header
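
The Bearer-token convention above is easy to enforce with a small helper. A dependency-free sketch; the function name is illustrative, not from any codebase:

```typescript
// Parse "Authorization: Bearer <token>" per the API convention above.
// Returns the token, or null if the header is missing or malformed.
export function bearerToken(authHeader: string | undefined): string | null {
  if (!authHeader) return null;
  const [scheme, token, ...rest] = authHeader.split(" ");
  if (scheme !== "Bearer" || !token || rest.length > 0) return null;
  return token;
}
```

Auth middleware can call this once per request and reject with a 401 before any handler runs.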

2. Plan Mode — Design Schema and Endpoints

Press Shift+Tab twice in Claude Code:

I'm building a [TYPE] API. Here's the domain:
[DESCRIBE YOUR PRODUCT AND DATA]

In Plan Mode:
1. Design the complete database schema (tables, columns, relationships, indexes)
2. List all REST endpoints with HTTP method, path, request body, and response
3. Identify auth requirements per endpoint
4. Flag any performance considerations

Do not write code yet.

3. Implement Incrementally

Always follow this order — never skip ahead:

  1. Model — TypeScript interfaces and Zod schemas
  2. Migration — SQL migration file
  3. Repository — Database query functions
  4. Endpoints — Route handlers
  5. Auth middleware — JWT validation
  6. Validation middleware — Request body validation
  7. Error handling — Centralized error handler
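
The layering can be sketched in a few lines. In the real stack the model would be a Zod schema and the repository would wrap Supabase queries; plain types keep this sketch dependency-free, and all names are illustrative:

```typescript
// 1. Model
export type User = { id: string; email: string; name: string };

// 3. Repository: the only layer allowed to touch the database
export interface UserRepository {
  findById(id: string): Promise<User | null>;
}

// 4. Endpoint: the handler depends on the repository interface, not on SQL,
// so it can be unit-tested with an in-memory fake.
export async function getUserHandler(
  repo: UserRepository,
  id: string,
): Promise<{ status: number; body: unknown }> {
  const user = await repo.findById(id);
  if (!user) {
    // Error shape follows the CLAUDE.md convention: { error, code }
    return { status: 404, body: { error: "User not found", code: "NOT_FOUND" } };
  }
  return { status: 200, body: user };
}
```

Because each layer only sees the one below it through an interface, the AI agent can implement (and you can review) each step in isolation.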

4. Generate Tests

For the [ENDPOINT] endpoint I just built:
1. Write unit tests for the service layer (mock the repository)
2. Write integration tests with Supertest (use test DB)
3. Write edge case tests: invalid input, auth failures, not found
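
The first prompt should produce service-layer tests like the following. This sketch uses a hand-rolled mock and no framework so it stands alone; with Vitest the mock would be built from `vi.fn()`. Service and repository names are hypothetical:

```typescript
type User = { id: string; email: string };

interface UserRepository {
  findByEmail(email: string): Promise<User | null>;
  insert(email: string): Promise<User>;
}

// Service under test (hypothetical): creating a user with a taken email fails.
async function createUser(repo: UserRepository, email: string): Promise<User> {
  if (await repo.findByEmail(email)) throw new Error("EMAIL_TAKEN");
  return repo.insert(email);
}

// Hand-rolled mock repository; records inserts so the test can assert on them.
function mockRepo(existing: User[]) {
  const inserts: string[] = [];
  const repo: UserRepository = {
    findByEmail: async (email) => existing.find((u) => u.email === email) ?? null,
    insert: async (email) => {
      inserts.push(email);
      return { id: String(inserts.length), email };
    },
  };
  return { repo, inserts };
}

// Unit test: new email is inserted, duplicate email is rejected.
export async function testCreateUser(): Promise<void> {
  const { repo, inserts } = mockRepo([{ id: "1", email: "taken@example.com" }]);
  await createUser(repo, "new@example.com");
  if (inserts[0] !== "new@example.com") throw new Error("expected an insert");
  const rejected = await createUser(repo, "taken@example.com").then(() => false, () => true);
  if (!rejected) throw new Error("expected EMAIL_TAKEN");
}
```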

5. Code Review Prompts

Run these after each feature:

Review this code for:
- Security vulnerabilities (SQL injection, XSS, IDOR)
- Performance issues (N+1 queries, missing indexes)
- REST convention violations
- Missing input validation
- Error handling gaps

6. Deploy with Readiness Report

Before I deploy, generate a production readiness checklist for this API:
- Environment variables needed
- Database migrations to run
- Rate limiting configuration
- CORS settings
- Health check endpoint
- Monitoring setup
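
The health check item on that list is small enough to sketch directly. A framework-agnostic version (wire it to a route like GET /api/v1/health in your router); `checkDb` is any probe, e.g. a `SELECT 1` against the pool:

```typescript
// Health check handler: 200 when the DB probe succeeds, 503 otherwise so
// load balancers can pull the instance out of rotation.
export async function healthHandler(
  checkDb: () => Promise<boolean>,
): Promise<{ status: number; body: { status: string; uptime_s: number } }> {
  const dbOk = await checkDb().catch(() => false);
  return {
    status: dbOk ? 200 : 503,
    body: {
      status: dbOk ? "ok" : "degraded",
      uptime_s: Math.round(process.uptime()),
    },
  };
}
```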

Cursor API Development Workflow

Cursor's Agent mode is ideal for cross-cutting backend features. Use a three-phase loop:

  1. Ask (planning) — Describe the feature in chat, get an implementation plan
  2. Agent (execution) — Switch to Agent mode, let Cursor implement across files
  3. Developer review — Read the diff carefully, test, then commit

Practical Examples

| Task | Time with AI | Manual estimate |
| --- | --- | --- |
| Add rate limiting to 23 endpoints | 25 min | 3–4 hours |
| Generate GraphQL schema from REST models | 40 min | 1–2 days |
| Add request logging middleware | 10 min | 45 min |
| Generate OpenAPI spec from routes | 20 min | 4–8 hours |

Cross-cutting feature prompt pattern:
I need to add [FEATURE] to all endpoints in src/routes/.
Requirements:
- [REQUIREMENT 1]
- [REQUIREMENT 2]

Start with a plan showing every file you'll touch.
Do not write code until I approve the plan.

MCP: Extending AI Agents

The Model Context Protocol (MCP) lets AI agents connect to external tools — databases, GitHub, internal APIs — instead of just editing files.

Adding MCP Servers to Claude Code

Add servers to .mcp.json in the project root (or register them with claude mcp add):

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres",
               "postgresql://localhost/mydb"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_TOKEN"}
    }
  }
}

With Postgres MCP connected, Claude can:

  • Read your live schema before writing migrations
  • Query data to understand relationships
  • Run test queries before generating code

2025 MCP Spec Updates

  • Sampling with tools — Servers can now request AI completions mid-flow
  • OAuth support — MCP servers can use OAuth for third-party auth
  • Parallel tool calls — Agents can call multiple tools simultaneously

MCP spec →

OpenAPI → MCP Conversion

Convert any existing REST API into MCP tools automatically using FastMCP:

from fastmcp import FastMCP
import httpx

# Fetch your OpenAPI spec
spec = httpx.get("https://api.yourservice.com/openapi.json").json()

# Create an MCP server; tools are auto-generated from every endpoint in the spec
mcp = FastMCP.from_openapi(
    openapi_spec=spec,
    client=httpx.AsyncClient(base_url="https://api.yourservice.com"),
    name="Your API",
)

# Run the server
if __name__ == "__main__":
    mcp.run()

This makes every API endpoint callable by any MCP-compatible AI agent in under 100 lines.

Other OpenAPI → MCP tools:

| Tool | Description |
| --- | --- |
| FastMCP | Python-first, auto-converts OpenAPI |
| Gentoro | Enterprise API orchestration |
| Speakeasy | Generate SDKs + MCP from OpenAPI |
| Apollo MCP Server | GraphQL → MCP bridge |

Spec-Driven Development

The highest-leverage AI backend workflow: write a complete spec first, then have AI implement it.

SPEC → PLAN → TASKS → IMPLEMENT

Step 1: Write SPEC.md

# API Spec: [NAME]

## Endpoints
POST /api/v1/users — Create user
GET /api/v1/users/:id — Get user by ID
PUT /api/v1/users/:id — Update user
DELETE /api/v1/users/:id — Delete user

## Data Models
User: { id, email, name, created_at }

## Auth
JWT Bearer token required on all endpoints except POST /users

## Constraints
- Email must be unique
- Name max 100 chars

Step 2: Generate PLAN.md — Paste SPEC.md and ask Claude to generate a step-by-step implementation plan

Step 3: Generate TASKS.md — Break PLAN.md into atomic 30-minute tasks with acceptance criteria

Step 4: Implement — Execute tasks one by one, either manually or with sub-agents

Sub-Agent Patterns for Parallel Implementation

Run three Claude Code sub-agents in parallel, each on its own branch (for example via git worktree):

agent-1: Implement User CRUD endpoints (tasks 1-5 in TASKS.md)
agent-2: Implement Auth middleware (tasks 6-8 in TASKS.md)
agent-3: Generate test suite (tasks 9-12 in TASKS.md)

Merge results with: git merge agent-1 && git merge agent-2 && git merge agent-3

Data Pipeline & Automation

CrewAI for Multi-Agent ETL

CrewAI enables multi-agent data pipelines:

from crewai import Agent, Task, Crew, Process

# The tools (api_fetch_tool, pandas_tool, schema_check_tool) and the tasks
# passed to the crew are assumed to be defined elsewhere in the pipeline module.
ingestor = Agent(role="Data Ingestor",
                 goal="Fetch raw data from source APIs",
                 backstory="Pulls raw records from upstream services.",
                 tools=[api_fetch_tool])

cleaner = Agent(role="Data Cleaner",
                goal="Normalize and deduplicate records",
                backstory="Cleans raw records before validation.",
                tools=[pandas_tool])

validator = Agent(role="Data Validator",
                  goal="Ensure schema compliance and flag anomalies",
                  backstory="Gatekeeper for downstream consumers.",
                  tools=[schema_check_tool])

crew = Crew(
    agents=[ingestor, cleaner, validator],
    tasks=[ingest_task, clean_task, validate_task],
    process=Process.sequential
)

result = crew.kickoff()

n8n for Workflow Automation

n8n provides visual workflow automation with AI agent nodes:

  • Trigger on webhooks, schedules, or database changes
  • AI Agent node: connect to Claude, GPT-4, or local models
  • Built-in nodes for 400+ services
  • Self-hostable for full data control

Make.com AI Capabilities

Make.com (formerly Integromat) added AI modules:

  • OpenAI module: generate text, analyze images, embed documents
  • AI Router: classify incoming data and route to different flows
  • Vector store integration for RAG pipelines

Supabase + PostgreSQL via MCP

With Supabase MCP connected:

Hey Claude, look at the users table schema and write a migration
that adds a stripe_customer_id column and creates an index on it.
Then generate the corresponding TypeScript repository method.

Claude reads the real schema, writes a correct migration, and generates typed code — no copy-pasting schemas manually.
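
The kind of output to expect from that prompt, sketched below. The table and column names come from the prompt itself; the file layout and the loosely typed db client are illustrative:

```typescript
// Migration Claude would write after reading the live users table:
export const migrationSql = `
  ALTER TABLE users ADD COLUMN stripe_customer_id text;
  CREATE INDEX idx_users_stripe_customer_id ON users (stripe_customer_id);
`;

type DbClient = {
  query: (sql: string, params: unknown[]) => Promise<{ rows: Record<string, unknown>[] }>;
};

// Matching repository method, so callers never write SQL themselves.
export async function findUserByStripeCustomerId(
  db: DbClient,
  stripeCustomerId: string,
): Promise<Record<string, unknown> | null> {
  const { rows } = await db.query(
    "SELECT * FROM users WHERE stripe_customer_id = $1",
    [stripeCustomerId],
  );
  return rows[0] ?? null;
}
```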


Building AI-Powered Features

Vercel AI SDK (Chatbot in ~50 lines)

import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: anthropic('claude-3-5-sonnet-20241022'),
    system: 'You are a helpful assistant.',
    messages,
  });

  return result.toDataStreamResponse();
}

Vercel AI SDK docs →

LlamaIndex RAG Pipeline with ChromaDB

from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore
import chromadb

# Load documents
docs = SimpleDirectoryReader("./data").load_data()

# Connect ChromaDB
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("knowledge_base")

# Build index; the vector store is wired in through a StorageContext
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("What is our refund policy?")
print(response)

LlamaIndex docs →

LangChain vs LlamaIndex

| Feature | LangChain | LlamaIndex |
| --- | --- | --- |
| Primary use case | Chains and agents | Data ingestion and RAG |
| Learning curve | Moderate | Low |
| RAG performance | Good | Excellent |
| Agent capabilities | Excellent | Good |
| Community size | Very large | Large |
| Best for | Complex agent flows | Document QA and search |

5 Core Agentic Design Patterns

| Pattern | Description | When to Use |
| --- | --- | --- |
| Tool Use | Agent calls external tools (APIs, DBs) | Any time agent needs real-world data |
| Reflection | Agent critiques and revises its own output | Code review, content quality |
| Planning | Agent creates a task plan before executing | Multi-step tasks |
| Multi-Agent | Specialized agents collaborate | Parallel workstreams |
| ReAct | Reason + Act loop: think, act, observe, repeat | Complex research tasks |
Anthropic agentic patterns →
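
The ReAct pattern reduces to a small control loop. In this dependency-free sketch the `model` is any function that returns either a tool call or a final answer; a real agent would call an LLM with the transcript as context:

```typescript
type Step = { tool: string; input: string } | { answer: string };

export async function reactLoop(
  model: (transcript: string) => Promise<Step>,
  tools: Record<string, (input: string) => Promise<string>>,
  question: string,
  maxSteps = 5,
): Promise<string> {
  let transcript = `Question: ${question}`;
  for (let i = 0; i < maxSteps; i++) {
    const step = await model(transcript); // Reason: decide the next action
    if ("answer" in step) return step.answer;
    const observation = await tools[step.tool](step.input); // Act
    // Observe: append the result, then loop back to reasoning
    transcript += `\nAction: ${step.tool}(${step.input})\nObservation: ${observation}`;
  }
  throw new Error("ReAct loop exceeded maxSteps without an answer");
}
```

The `maxSteps` cap matters in practice: without it, a confused model can loop on tool calls indefinitely.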

Common Pitfalls

  1. Putting database and API calls in route handlers — Always use a service/repository layer. Direct calls in route handlers make code untestable.
  2. Not writing OpenAPI specs — AI agents (and human developers) need machine-readable API contracts. Generate specs with fastify-swagger or tsoa.
  3. Skipping test generation — Ask Claude to generate tests immediately after each endpoint. Tests written after the fact are tests that never get written.
  4. Not using MCP for database access — Without Postgres MCP, Claude guesses at your schema. With it, Claude reads the real thing and generates correct migrations every time.
  5. Ignoring context window limits — After 30+ minutes in Claude Code, run /clear and re-paste CLAUDE.md. Stale context causes regressions.
  6. No rate limiting — Every public API endpoint needs rate limiting from day one. One prompt: "Add rate limiting to all routes using the token bucket algorithm, 100 req/min per IP."
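
That rate-limiting prompt should yield something like the token bucket below. A sketch only: the in-memory Map works for a single process, while production code would back the buckets with a shared store such as Redis so limits hold across instances:

```typescript
// Token bucket: starts full, refills continuously, each request costs 1 token.
class TokenBucket {
  private tokens: number;
  private last: number;
  constructor(
    private capacity: number,
    private refillPerMs: number,
    now = Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now;
  }
  tryConsume(now = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at capacity
    this.tokens = Math.min(this.capacity, this.tokens + (now - this.last) * this.refillPerMs);
    this.last = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

const buckets = new Map<string, TokenBucket>();

// 100 requests/min per IP, per the prompt above
export function allowRequest(ip: string, now = Date.now()): boolean {
  let bucket = buckets.get(ip);
  if (!bucket) {
    bucket = new TokenBucket(100, 100 / 60_000, now); // refill 100 tokens per minute
    buckets.set(ip, bucket);
  }
  return bucket.tryConsume(now);
}
```

Middleware then rejects with 429 (and a Retry-After header) whenever allowRequest returns false.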