
LangGraph Cold Email: Multi-Step Outreach Agents with FoxReach

Build stateful cold email agents with explicit graph nodes for research, draft, send, wait-for-reply, and classify. LangGraph earns its complexity when your agent needs loops, webhook-driven waits, or human approval gates - the exact shape of a real cold outbound workflow.

Usama Navid

Founder, FoxReach

Why LangGraph for cold email

Plain LangChain agents are great when the flow is single-shot - "research a lead, draft an email, send it." Cold email in production is rarely single-shot. You draft. You send. You wait three days. A reply arrives. You classify it. You branch to hot-reply handling, cold follow-up, or unsubscribe. The flow has loops, waits, and explicit branches.

LangGraph models that flow explicitly as a state machine: nodes are steps, edges are transitions, state carries the lead + campaign context between nodes. You get checkpointing for long waits, conditional edges for branching, and interrupt points for human approval. FoxReach webhooks trigger the wait nodes that move the graph forward.

Prerequisites

  • Python 3.10+

    LangGraph 0.2+ and LangChain 0.3+.

  • A FoxReach account + API key

    Free plan works for graph development. Generate from Settings → API Keys.

  • A LangGraph checkpointer

    Postgres recommended for production. In-memory MemorySaver works for development.

  • A webhook receiver

    FoxReach fires reply_received, bounce_received, unsubscribed events. Point them at a public HTTPS endpoint (Vercel, Cloudflare Workers, Railway).

The graph shape

A minimal production cold email graph has six nodes:

  1. research - fetch context for the lead
  2. draft - LLM writes the personalized email
  3. send - call FoxReach to ship via campaign + sequence
  4. wait_for_reply - pause until FoxReach webhook fires, or N days elapse
  5. classify_reply - LLM labels intent (hot / cold / unsubscribe / OOO)
  6. branch - conditional edges to reply drafting, follow-up scheduling, or terminal state

Every node reads and mutates shared state. The state dict carries the lead, the campaign, the latest message, the classification, and a retry counter.

Install and setup

shell
pip install langgraph langchain langchain-openai foxreach

Define the state

state.py (Python)
from typing import TypedDict, Literal, Optional

ReplyIntent = Literal["hot", "cold", "unsubscribe", "ooo", "unknown"]

class CampaignState(TypedDict):
    lead_id: str
    lead_email: str
    lead_name: str
    lead_company: str
    research_notes: Optional[str]
    draft_subject: Optional[str]
    draft_body: Optional[str]
    foxreach_campaign_id: Optional[str]
    last_reply: Optional[str]
    reply_intent: Optional[ReplyIntent]
    retries: int
    status: Literal["researching", "drafting", "sent", "waiting", "replied", "done"]

Define the nodes

nodes.py (Python)
from foxreach import FoxReach
from langchain_openai import ChatOpenAI
import os

from state import CampaignState

client = FoxReach(api_key=os.environ["FOXREACH_API_KEY"])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
copywriter_llm = ChatOpenAI(model="gpt-4o", temperature=0.3)

def research_node(state: CampaignState) -> dict:
    # Call your research tool here (Tavily, Serper, etc.)
    notes = f"Found recent signal about {state['lead_company']}..."
    return {"research_notes": notes, "status": "drafting"}

def draft_node(state: CampaignState) -> dict:
    prompt = (
        f"Write a 70-word cold email to {state['lead_name']} at {state['lead_company']}. "
        f"Research: {state['research_notes']}\n"
        "Output 'Subject: ...' on the first line, then a blank line, then the body."
    )
    result = copywriter_llm.invoke(prompt)
    # Expect "Subject: ...\n\n<body>" - parse accordingly.
    subject, body = result.content.split("\n\n", 1)
    return {"draft_subject": subject.removeprefix("Subject: ").strip(), "draft_body": body.strip()}

def send_node(state: CampaignState) -> dict:
    campaign = client.campaigns.create(
        name=f"{state['lead_company']} - Auto",
        status="draft",
    )
    client.sequences.create(
        campaign_id=campaign.id,
        subject=state["draft_subject"],
        body=state["draft_body"],
    )
    client.leads.create(
        campaign_id=campaign.id,
        email=state["lead_email"],
        first_name=state["lead_name"],
        company=state["lead_company"],
    )
    client.campaigns.start(campaign.id)
    return {"foxreach_campaign_id": campaign.id, "status": "waiting"}

def classify_reply_node(state: CampaignState) -> dict:
    prompt = f"Classify this cold email reply as hot, cold, unsubscribe, ooo, or unknown. Reply:\n{state['last_reply']}"
    intent = llm.invoke(prompt).content.strip().lower()
    if intent not in ("hot", "cold", "unsubscribe", "ooo", "unknown"):
        intent = "unknown"  # guard against off-script model output
    return {"reply_intent": intent, "status": "replied"}

def route_reply(state: CampaignState) -> str:
    if state["reply_intent"] == "hot":
        return "draft_reply"
    if state["reply_intent"] == "unsubscribe":
        return "done"
    if state["retries"] >= 3:
        return "done"
    return "draft_follow_up"

Wire the graph

graph.py (Python)
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

# draft_reply_node and draft_follow_up_node are defined alongside the other nodes.
from nodes import (
    research_node,
    draft_node,
    send_node,
    classify_reply_node,
    route_reply,
    draft_reply_node,
    draft_follow_up_node,
)

workflow = StateGraph(CampaignState)

workflow.add_node("research", research_node)
workflow.add_node("draft", draft_node)
workflow.add_node("send", send_node)
workflow.add_node("classify_reply", classify_reply_node)
workflow.add_node("draft_reply", draft_reply_node) # hot reply handler
workflow.add_node("draft_follow_up", draft_follow_up_node) # cold reply handler

workflow.set_entry_point("research")
workflow.add_edge("research", "draft")
workflow.add_edge("draft", "send")
# send → wait state handled externally by the webhook receiver
# When the webhook fires, it resumes the graph and routes to classify_reply.
workflow.add_conditional_edges(
    "classify_reply",
    route_reply,
    {
        "draft_reply": "draft_reply",
        "draft_follow_up": "draft_follow_up",
        "done": END,
    },
)
workflow.add_edge("draft_reply", "send")
workflow.add_edge("draft_follow_up", "send")

graph = workflow.compile(checkpointer=MemorySaver())

Webhook-driven wait nodes

The send node completes; the graph pauses. A FoxReach webhook for reply_received fires at your HTTPS endpoint. Your receiver looks up the graph by lead_id, updates state with the reply body, and resumes. LangGraph's checkpointer keeps the graph state consistent across the pause.

webhook.py (FastAPI, Python)
from fastapi import FastAPI, Request

from graph import graph  # the compiled graph from graph.py

app = FastAPI()

@app.post("/foxreach/webhook")
async def foxreach_webhook(req: Request):
    payload = await req.json()
    if payload["event"] != "reply_received":
        return {"ok": True}
    lead_id = payload["lead_id"]
    reply_text = payload["message"]["body"]

    # Resume the graph for this lead. The thread_id binds state to the lead:
    # write the reply into checkpointed state, then resume from the pause.
    config = {"configurable": {"thread_id": lead_id}}
    graph.update_state(config, {"last_reply": reply_text})
    result = graph.invoke(None, config=config)
    return {"ok": True, "next_status": result["status"]}

Human-in-the-loop gates

For regulated industries or named-account motion, gate the send node with an explicit human approval. LangGraph's interrupt_before pauses the graph before the node runs. Your approval UI (Slack, email, custom) surfaces the draft. An approve call resumes; a reject call routes to a revise node that regenerates and re-gates.

hitl_graph.py (Python)
graph = workflow.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["send"],  # Pause before send runs.
)

# Run until the interrupt:
result = graph.invoke(initial_state, config={"configurable": {"thread_id": lead_id}})
# result contains the drafted subject + body. Post it to Slack for approval.

# When the human approves:
graph.invoke(None, config={"configurable": {"thread_id": lead_id}})
# LangGraph resumes from the send node and continues.

Observability with LangSmith

Every graph run traces to LangSmith automatically once you set LANGCHAIN_TRACING_V2=true. Each node appears as a span with inputs, outputs, latency, and errors, and the full graph run shows as a tree - useful for debugging a lead stuck in a classify loop. Filter by lead_id in LangSmith to pull one lead's full journey.
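Enabling tracing is a two-variable setup; the API key below is a placeholder and the project name is an assumed example:

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="lsv2_..."            # placeholder: your LangSmith API key
export LANGCHAIN_PROJECT="foxreach-outbound"   # optional: group runs under one project
```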

Common pitfalls

Infinite loops in classify → send branches

If your reply classifier routes back to send on ambiguous replies, the graph can loop forever. Add a retry counter to state (the retries field above) and a conditional edge that terminates after three retries per lead.
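A minimal sketch of that guard, using plain dicts for state; follow_up_node and route_after_classify are illustrative names, not nodes from the graph above:

```python
MAX_RETRIES = 3

def follow_up_node(state: dict) -> dict:
    # Each pass through the follow-up branch burns one retry from the budget.
    return {"retries": state.get("retries", 0) + 1}

def route_after_classify(state: dict) -> str:
    # Hard stop first: once the budget is spent, terminate whatever the intent.
    if state.get("retries", 0) >= MAX_RETRIES:
        return "done"
    if state.get("reply_intent") == "unsubscribe":
        return "done"
    if state.get("reply_intent") == "hot":
        return "draft_reply"
    return "draft_follow_up"
```

Here the budget is a hard stop even for hot replies; reorder the checks if hot replies should always get a response.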

State growing unbounded

Each node can append to state. Over a long campaign sequence, state bloats to thousands of tokens and slows every LLM call. Serialize bulky state to FoxReach as campaign metadata and keep only the deltas in graph state.
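One way to sketch that split, with external_store standing in for FoxReach campaign metadata and summarize as a hypothetical cap on what enters graph state (this redefines research_node as a slimmed variant):

```python
# Nodes return only small deltas; bulky output lives off-graph.
external_store: dict[str, str] = {}  # stand-in for FoxReach campaign metadata

def summarize(text: str, limit: int = 200) -> str:
    """Cap long research output before it enters graph state."""
    return text if len(text) <= limit else text[:limit] + "..."

def research_node(state: dict) -> dict:
    full_notes = "signal " * 500                      # imagine a long research dump
    external_store[state["lead_id"]] = full_notes     # full copy stored externally
    return {"research_notes": summarize(full_notes)}  # only the capped delta
```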

Losing state across webhook-driven pauses

A graph paused waiting for a reply webhook needs persistence. Use LangGraph checkpointers (Postgres, Redis, SQLite) - without a checkpointer the graph resets when the process restarts.
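Swapping MemorySaver for Postgres is a compile-time change; this sketch assumes the langgraph-checkpoint-postgres package and a reachable DSN (the connection string is an example):

```python
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://user:pass@localhost:5432/outbound"  # assumed DSN

# Checkpoints land in Postgres, so a graph paused on a webhook
# survives process restarts and deploys.
with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # create checkpoint tables on first run
    graph = workflow.compile(checkpointer=checkpointer)
```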

Over-engineering the graph

Three nodes often beat eight. Start with research → draft → send, and add classify / wait / branch nodes only when the outcomes demand them. A 12-node graph is a red flag: it usually means someone modeled more steps than the workflow actually has.

Frequently asked questions

When should I use LangGraph instead of plain LangChain for cold email?

Use LangGraph when you need loops (retry classify on ambiguous replies), conditional branches (hot vs cold routing), long-running waits (webhook-triggered next steps), or explicit human approval gates. Plain LangChain agents handle single-shot workflows (research → draft → send) better. If your flowchart has no loops and no waits, stay in LangChain.

How do I persist graph state across FoxReach webhook triggers?

Use a LangGraph checkpointer - PostgresSaver, MemorySaver (dev only), or a custom one backed by your database. The checkpointer stores state between node executions. When a FoxReach webhook fires (reply_received, bounce_received, unsubscribed), your webhook handler looks up the graph by lead_id, loads state from the checkpointer, and resumes from the wait node.

Can I use LangGraph with the FoxReach MCP server instead of the SDK?

Yes. langchain-mcp-adapters turns the MCP server into LangChain tools, which LangGraph nodes can invoke via ToolNode. State flows through the graph the same way; only the tool-call layer changes. The SDK is lower latency (direct Python calls versus MCP protocol overhead), while MCP gives you tool auto-discovery when FoxReach ships new capabilities.

How does human-in-the-loop work with LangGraph + FoxReach?

LangGraph supports interrupt_before and interrupt_after on specific nodes. Compile the graph with interrupt_before=["send"] and it pauses before that node runs. Your approval UI (Slack, email, custom) surfaces the draft. When an operator approves, resume the graph - LangGraph runs the send node and moves on. If rejected, route to a revise node instead.

Does LangGraph work with Claude instead of GPT?

Yes. Any LangChain-compatible LLM works in LangGraph nodes - ChatAnthropic, ChatOpenAI, ChatGoogleGenerativeAI, Ollama, etc. The graph structure is LLM-agnostic, so you can route different nodes to different models: Claude for the copywriter (better tone), GPT-4o-mini for the classifier (cheaper at volume).
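As a sketch, the two module-level models from nodes.py could be split across providers; the model names are assumed examples, and both clients read their API keys from the environment:

```python
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

# Stronger model for copy, cheaper model for high-volume classification.
copywriter_llm = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0.3)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
```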


Ship your LangGraph outbound agent today

Free plan, no credit card. API key in under 60 seconds.