OpenAI Agents SDK Cold Email: Production Outbound Agents with FoxReach
The OpenAI Agents SDK is the clean-room 2025 rewrite of multi-agent orchestration for the OpenAI stack. Native MCP, typed handoffs, tracing in the OpenAI dashboard, and guardrails as first-class primitives. Wire it to FoxReach in under 10 minutes and ship production cold email pipelines today.
Usama Navid
Founder, FoxReach
Why OpenAI Agents SDK
OpenAI shipped the Agents SDK in 2025 as a ground-up replacement for the Assistants API. It is lighter than LangChain - no abstraction layer between your code and the OpenAI Responses API - while adding first-class support for handoffs, guardrails, and MCP. Tracing is automatic in the OpenAI dashboard.
For cold email, the Agents SDK shines on two things: typed handoffs between specialist agents, and guardrails that reject outputs containing suppression-listed domains or pattern-violating content before they hit FoxReach. Both reduce the class of agent-driven mistakes that require human cleanup.
Prerequisites
Python 3.10+
openai-agents SDK requires 3.10 or newer.
An OpenAI API key
Default models are gpt-4o, gpt-4o-mini. o-series supported for complex reasoning steps.
A FoxReach API key
Free plan works.
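Before wiring anything up, a tiny preflight helper can confirm both keys are set. This is a hypothetical convenience, not part of either SDK; the env var names match the ones used in the code below.

```python
import os

# Credentials this tutorial assumes are exported in the environment.
REQUIRED_KEYS = ("OPENAI_API_KEY", "FOXREACH_API_KEY")

def missing_keys(env=os.environ) -> list:
    """Return the names of any credentials that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```

Call it at startup and bail early: `if missing_keys(): raise SystemExit(f"missing: {missing_keys()}")`.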
Register FoxReach MCP
The Agents SDK has a built-in MCP client. One import, one config block, all 23 FoxReach tools are available to the agent.
pip install openai-agents
import os
from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp
async def main():
    foxreach_mcp = MCPServerStreamableHttp(
        params={
            "url": "https://api.foxreach.io/mcp",
            "headers": {
                "Authorization": f"Bearer {os.environ['FOXREACH_API_KEY']}",
            },
        },
        name="foxreach",
    )
    async with foxreach_mcp:
        agent = Agent(
            name="Outbound Manager",
            instructions="You manage cold email campaigns. Use the foxreach MCP tools.",
            mcp_servers=[foxreach_mcp],
            model="gpt-4o-mini",
        )
        result = await Runner.run(
            agent,
            "Create a draft campaign 'Tier 2 Agencies' with a 3-step sequence. 3 days between each.",
        )
        print(result.final_output)

import asyncio
asyncio.run(main())
Build the pipeline
Split the workflow into three specialist agents - Researcher, Copywriter, Sender - with typed handoffs between them. Each agent is tight on its own job; handoffs carry structured output.
Research agent
from pydantic import BaseModel
from agents import Agent
class ResearchResult(BaseModel):
    name: str
    company: str
    recent_signal: str
    source_url: str

researcher = Agent(
    name="Researcher",
    instructions="Find one specific, recent, citable fact about the lead. Return structured ResearchResult.",
    model="gpt-4o-mini",
    output_type=ResearchResult,
)
Copywriter agent
class EmailDraft(BaseModel):
    subject: str
    body: str

copywriter = Agent(
    name="Copywriter",
    instructions="Write a 70-word cold email using the research. Open with the research fact. Soft ask at the end.",
    model="gpt-4o",
    output_type=EmailDraft,
)
Sender agent with handoffs
sender = Agent(
    name="Sender",
    instructions="Take the EmailDraft + lead info. Call foxreach MCP tools to create a draft campaign with one sequence step. Return the campaign_id.",
    mcp_servers=[foxreach_mcp],
    model="gpt-4o-mini",
    handoffs=[],  # Terminal agent; no further handoffs.
)

# Orchestrator hands off sequentially:
orchestrator = Agent(
    name="Orchestrator",
    instructions="Research the lead, hand off to the copywriter, then hand off to the sender.",
    model="gpt-4o-mini",
    handoffs=[researcher, copywriter, sender],
)

result = await Runner.run(
    orchestrator,
    "Lead: Alex Rivera, Head of Growth at Nuvoform, alex@nuvoform.ai",
    max_turns=10,
)
print(result.final_output)
Tracing with the OpenAI dashboard
Every Runner.run() streams a trace to the OpenAI dashboard by default. The trace shows each agent turn, each handoff, each MCP tool call, and the inputs and outputs at every step. When you are debugging a weird lead or a mis-routed handoff in production, this is close to invaluable. For teams with data-sensitivity requirements, enable tracing redaction at the Runner level or disable tracing entirely per environment.
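One way to make the per-environment switch concrete is a small helper that decides which env vars to export before the agent process starts. This is a sketch; the `OPENAI_AGENTS_DISABLE_TRACING` flag name is my reading of the SDK's tracing controls, so verify it (and the run-level RunConfig switches) against the current openai-agents docs.

```python
def tracing_env(deploy_env: str) -> dict:
    """Env vars to export before launching the agent process.

    Assumption: openai-agents honors OPENAI_AGENTS_DISABLE_TRACING;
    check the SDK docs for the current flag names. RunConfig also
    exposes per-run switches if you need finer control.
    """
    if deploy_env == "dev":
        return {}  # keep full traces while iterating locally
    # Outside dev, keep lead PII and email bodies off the dashboard.
    return {"OPENAI_AGENTS_DISABLE_TRACING": "1"}
```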
Guardrails for cold email
Guardrails run before or after agent output. For cold email, two rules pay off consistently: reject drafts that fabricate numbers, and reject drafts that target suppression-listed domains. Both prevent agent-driven mistakes that otherwise require manual cleanup.
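The suppression-list rule reduces to a plain predicate. Here is a sketch of the check a guardrail would call before any tokens are spent; the domain set and helper name are illustrative, and in practice the list would come from your CRM or FoxReach suppression data.

```python
# Illustrative suppression list; in production this comes from your CRM.
SUPPRESSED = {"competitor.com", "optout.io"}

def domain_suppressed(email: str, suppression=SUPPRESSED) -> bool:
    """True when the lead's email domain is on the suppression list."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in suppression
```

Wired into the SDK, this would sit behind the input-guardrail counterpart of the output guardrail shown next, tripping the tripwire before the copywriter ever runs.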
from agents import output_guardrail, GuardrailFunctionOutput

@output_guardrail
async def no_fabricated_numbers(ctx, agent, output: EmailDraft) -> GuardrailFunctionOutput:
    # Reject obvious statistics claims without citations.
    suspicious = ["%", "x increase", "million users", "faster than"]
    if any(s in output.body.lower() for s in suspicious):
        return GuardrailFunctionOutput(
            output_info={"reason": "unsourced claim detected"},
            tripwire_triggered=True,
        )
    return GuardrailFunctionOutput(output_info=None, tripwire_triggered=False)

copywriter = Agent(
    name="Copywriter",
    instructions="Write a 70-word cold email. No statistics, no percentages.",
    model="gpt-4o",
    output_type=EmailDraft,
    output_guardrails=[no_fabricated_numbers],
)
Common pitfalls
Letting the sender agent hallucinate lead data
Agent handoffs pass context, not structured objects. If the copywriter produces a natural-language summary, the sender can misread email addresses. Use Pydantic output_type on every handoff to force structured data.
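A minimal illustration of why typed payloads matter: a plain-dict handoff silently accepts a malformed address, while a typed one fails fast at construction. This uses a stdlib dataclass as a stand-in for the Pydantic models above; the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class LeadHandoff:
    """Typed payload the sender receives; validation runs at construction."""
    email: str
    draft_subject: str

    def __post_init__(self):
        if "@" not in self.email:
            raise ValueError(f"malformed lead email: {self.email!r}")
```

A `{"email": "alex at nuvoform"}` dict sails through to the sender; `LeadHandoff(email="alex at nuvoform", ...)` raises immediately.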
Missing tracing on production agents
The OpenAI Agents SDK streams full traces to the OpenAI dashboard by default, which is great - but sensitive data (email bodies, lead PII) is visible to anyone with dashboard access. Use the tracing disable/redact options for production teams.
Running handoffs in a loop without termination
Agent A hands off to B, B hands off back to A. Without a max_turns limit, this can loop forever. Set Runner.run(..., max_turns=10) as a safety net.
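The mechanism behind that safety net is just a counter. The SDK enforces max_turns for you and raises its own exception when the cap is hit; this plain-Python sketch shows the same pattern for any orchestration loop you write yourself.

```python
class TurnBudget:
    """Counts agent turns and raises once the cap is exceeded."""

    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.turns = 0

    def tick(self) -> int:
        self.turns += 1
        if self.turns > self.max_turns:
            raise RuntimeError(
                f"exceeded {self.max_turns} turns; likely a handoff loop"
            )
        return self.turns
```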
Mixing Assistants API with Agents SDK
The older Assistants API and the newer Agents SDK are different products. Agents SDK is the modern path - stay there unless you have a specific Assistants API dependency.
Frequently asked questions
Is the OpenAI Agents SDK the same as the Assistants API?
No. The Agents SDK (openai-agents) is a newer Python package released in 2025 for building production multi-agent workflows. The Assistants API (beta) is the older product with persistent threads and file search. Agents SDK is the path forward - it supports MCP natively and integrates with the OpenAI Responses API.
Does the OpenAI Agents SDK support MCP directly?
Yes. Register FoxReach's MCP server via MCPServerStreamableHttp and every tool appears to the agent. No adapter library needed.
How do agent handoffs work for cold email?
Handoffs let one agent transfer control to another with structured context. A researcher agent finishes, hands off a typed ResearchResult to the copywriter. The copywriter writes the email, hands off a typed EmailDraft to the sender. The sender calls FoxReach. Each agent has a tight prompt and a narrow tool set; handoffs carry the typed output forward.
Can I use the Agents SDK with models other than GPT?
The SDK is OpenAI-focused, but can route through LiteLLM or similar proxies to hit Anthropic, Gemini, and open-weights models. Native support is best for GPT-4o, GPT-4o-mini, and o-series. For Claude-native agents, use the Claude Agent SDK directly.
How does pricing compare to LangChain + MCP?
The SDK itself is free; you pay OpenAI token costs. For the same workflow, LangChain + MCP with a cheaper provider (Claude Haiku, Gemini Flash) can cost 40-60% less at scale. The Agents SDK wins on developer experience and tracing integration if you are OpenAI-primary already.
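The comparison is just token arithmetic. A sketch with illustrative (not current) prices; look up per-model pricing before relying on any of these numbers.

```python
def monthly_token_cost(emails: int, tokens_per_email: int, usd_per_mtok: float) -> float:
    """Rough monthly spend: total tokens / 1M, times price per 1M tokens."""
    return emails * tokens_per_email / 1_000_000 * usd_per_mtok

# Illustrative: 50k emails/month at ~2k tokens each,
# at a hypothetical $5 vs $2 per 1M tokens.
openai_cost = monthly_token_cost(50_000, 2_000, 5.0)   # 500.0
cheaper_cost = monthly_token_cost(50_000, 2_000, 2.0)  # 200.0
```

At those assumed rates the cheaper provider lands at 60% less, in line with the range quoted above.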
Ship your OpenAI Agents SDK pipeline today
Free plan, no credit card. API key in under 60 seconds.