Beginner

Project Setup

In this first lesson, you will walk through the architecture of a multi-agent system, learn the key LangGraph concepts, set up the project structure, and install all dependencies. By the end, you will have a working development environment ready to build agents.

Architecture Overview

A multi-agent workflow is a system where multiple AI agents, each with specialized capabilities, collaborate to solve complex tasks. Instead of one monolithic agent trying to do everything, you split responsibilities across focused agents coordinated by a supervisor.

User Request
    |
    v
[Supervisor Agent]
    |
    +------ Routes to ------+------------------+
    |                        |                  |
    v                        v                  v
[Research Agent]     [Coder Agent]      [Analyst Agent]
  - Web search        - Code execution    - Data analysis
  - Summarization     - File I/O          - Visualization
  - Fact-checking     - Debugging         - Report generation
    |                        |                  |
    +--------+---------------+------------------+
             |
             v
    [Aggregated Result]
             |
             v
    [Human Approval Gate]  (optional)
             |
             v
       Final Response

Why Multi-Agent?

Single agents hit practical limits quickly. Here is how a multi-agent architecture addresses them:

  • Specialization: Each agent has a focused system prompt and toolset. A research agent does not need code execution tools cluttering its context.
  • Reliability: If one agent fails, the supervisor can retry with a different agent or strategy. The system degrades gracefully.
  • Scalability: Add new agents without changing existing ones. Need a translator agent? Plug it in and update the supervisor routing.
  • Cost control: Route simple tasks to cheaper models. Only use GPT-4o for complex reasoning; use GPT-4o-mini for tool calls and summarization.
  • Observability: Trace individual agent steps. Debug the research agent without wading through unrelated code execution logs.
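To make the cost-control point concrete, here is a minimal sketch of task-based model routing. The marker words, helper name, and thresholds are illustrative assumptions, not part of the course code; in later lessons the supervisor makes this decision itself.

```python
# Illustrative sketch: route routine tasks to a cheaper model and reserve the
# stronger model for complex reasoning. The keyword heuristic is a placeholder;
# a real supervisor would let an LLM or classifier make this call.
def pick_model(task: str) -> str:
    complex_markers = ("analyze", "design", "debug", "prove")
    if any(word in task.lower() for word in complex_markers):
        return "gpt-4o"       # stronger model for complex reasoning
    return "gpt-4o-mini"      # cheaper default for tool calls and summaries
```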

LangGraph Fundamentals

LangGraph is a framework for building stateful, multi-step agent workflows as directed graphs. Here are the core concepts you need:

StateGraph

A StateGraph defines the flow of your workflow. Nodes are functions that process state, and edges define transitions between nodes.

# Core LangGraph concepts
from langgraph.graph import StateGraph, START, END
from typing import TypedDict, Annotated
from operator import add

# 1. Define the state schema - what data flows through the graph
class AgentState(TypedDict):
    messages: Annotated[list, add]     # Chat messages accumulate
    next_agent: str                     # Which agent to call next
    task: str                           # Current task description
    results: dict                       # Collected results from agents

# 2. Create the graph
graph = StateGraph(AgentState)

# 3. Add nodes (each is a function that takes state and returns updates)
graph.add_node("supervisor", supervisor_fn)
graph.add_node("researcher", researcher_fn)
graph.add_node("coder", coder_fn)

# 4. Add edges (how nodes connect)
graph.add_edge(START, "supervisor")
graph.add_conditional_edges("supervisor", route_to_agent)
graph.add_edge("researcher", "supervisor")
graph.add_edge("coder", "supervisor")

# 5. Compile and run
app = graph.compile()
result = app.invoke({"messages": [("user", "Research Python async patterns")]})
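It helps to see what the `add` reducer in `Annotated[list, add]` actually does: when a node returns a partial update for `messages`, LangGraph merges it into the existing state with the reducer instead of overwriting. The snippet below demonstrates the merge in plain Python:

```python
# The `add` reducer is just list concatenation: LangGraph calls it with the
# existing value and the node's update to produce the new state value.
from operator import add

existing = [("user", "hi")]
update = [("assistant", "hello")]
merged = add(existing, update)   # equivalent to existing + update
assert merged == [("user", "hi"), ("assistant", "hello")]
```

This is why each agent node can simply return its new messages; accumulation happens in the graph, not in the node.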

Key Patterns

  • Supervisor Pattern: A central agent decides which worker to call next. Best for dynamic task routing.
  • Sequential Pattern: Agents run in a fixed order like a pipeline. Best for predictable workflows.
  • Hierarchical Pattern: Supervisors managing supervisors. Best for complex organizations of agents.
💡
We will use the Supervisor Pattern. It is the most flexible and the most commonly used in production. The supervisor sees all agent outputs and dynamically decides the next step, including when to stop.

Project Structure

Create the following directory structure. You will build out every file over the course of these lessons.

multi-agent-workflow/
├── .env
├── .env.example
├── requirements.txt
├── main.py                    # Entry point - run the workflow
├── agents/
│   ├── __init__.py
│   ├── state.py               # Shared state schema
│   ├── supervisor.py          # Supervisor agent logic
│   ├── researcher.py          # Research agent with web search
│   ├── coder.py               # Coder agent with code execution
│   └── analyst.py             # Analyst agent with data tools
├── tools/
│   ├── __init__.py
│   ├── search.py              # Web search tool (Tavily)
│   ├── code_executor.py       # Sandboxed Python execution
│   ├── file_io.py             # File read/write tools
│   └── api_client.py          # HTTP API call tool
├── graph/
│   ├── __init__.py
│   ├── workflow.py            # LangGraph StateGraph definition
│   ├── routing.py             # Conditional edge logic
│   └── human_review.py        # Human-in-the-loop nodes
├── monitoring/
│   ├── __init__.py
│   ├── tracing.py             # LangSmith integration
│   ├── cost_tracker.py        # Token usage and cost tracking
│   └── error_handler.py       # Structured error handling
└── tests/
    ├── test_agents.py
    ├── test_tools.py
    └── test_workflow.py

Run these commands to create the structure:

# Create project directory
mkdir -p multi-agent-workflow/{agents,tools,graph,monitoring,tests}

# Create __init__.py files
touch multi-agent-workflow/agents/__init__.py
touch multi-agent-workflow/tools/__init__.py
touch multi-agent-workflow/graph/__init__.py
touch multi-agent-workflow/monitoring/__init__.py

# Create placeholder files
touch multi-agent-workflow/main.py
touch multi-agent-workflow/.env.example

Install Dependencies

Create requirements.txt with all required packages:

# requirements.txt
langgraph==0.2.60
langchain==0.3.13
langchain-openai==0.3.0
langchain-community==0.3.13
langsmith==0.2.10
openai==1.58.1
tavily-python==0.5.0
python-dotenv==1.0.1
pydantic==2.10.4
httpx==0.28.1
rich==13.9.4

Then create a virtual environment and install the packages:

# Create virtual environment and install
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
pip install -r requirements.txt

Environment Configuration

Create .env.example and copy it to .env:

# .env.example - Copy to .env and fill in your values
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini

# Optional: Tavily for web search (free tier: 1000 searches/month)
TAVILY_API_KEY=tvly-your-key-here

# Optional: LangSmith for tracing (free tier: 5000 traces/month)
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=lsv2-your-key-here
LANGCHAIN_PROJECT=multi-agent-workflow
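A small sanity check at startup catches missing keys early. This is a hedged sketch; the helper name and variable lists are illustrative, not part of the course code.

```python
# Sketch: report required environment variables that are not set, so the
# workflow can fail fast with a clear message instead of a cryptic API error.
import os

REQUIRED = ["OPENAI_API_KEY"]
OPTIONAL = ["TAVILY_API_KEY", "LANGCHAIN_API_KEY"]  # features degrade gracefully


def check_env() -> list[str]:
    """Return the names of required variables that are missing."""
    return [name for name in REQUIRED if not os.getenv(name)]
```

Call it after `load_dotenv()` and raise or print a helpful error if the returned list is non-empty.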

Verify the Setup

Create a quick verification script to confirm everything works:

# test_setup.py
"""Verify all dependencies and API connections."""
import os
from dotenv import load_dotenv

load_dotenv()


def test_langgraph():
    """Verify LangGraph is installed and working."""
    from langgraph.graph import StateGraph, START, END
    from typing import TypedDict

    class TestState(TypedDict):
        message: str

    def echo(state: TestState) -> dict:
        return {"message": f"Echo: {state['message']}"}

    graph = StateGraph(TestState)
    graph.add_node("echo", echo)
    graph.add_edge(START, "echo")
    graph.add_edge("echo", END)

    app = graph.compile()
    result = app.invoke({"message": "Hello, LangGraph!"})
    assert result["message"] == "Echo: Hello, LangGraph!"
    print("LangGraph OK - graph compiled and executed successfully")


def test_openai():
    """Verify OpenAI API key works."""
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        messages=[{"role": "user", "content": "Say 'hello' and nothing else."}],
        max_tokens=10
    )
    print(f"OpenAI OK - response: {response.choices[0].message.content}")


def test_langchain_openai():
    """Verify LangChain-OpenAI integration."""
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"), temperature=0)
    response = llm.invoke("Say 'agents ready' and nothing else.")
    print(f"LangChain-OpenAI OK - response: {response.content}")


if __name__ == "__main__":
    test_langgraph()
    test_openai()
    test_langchain_openai()
    print("\nAll setup tests passed! You are ready to build agents.")

Run the verification script:

# Run the verification
python test_setup.py

# Expected output:
# LangGraph OK - graph compiled and executed successfully
# OpenAI OK - response: hello
# LangChain-OpenAI OK - response: agents ready
#
# All setup tests passed! You are ready to build agents.
📝
Checkpoint: At this point you should have all dependencies installed and the OpenAI API working. If you have not added the optional Tavily or LangSmith keys yet, that is fine; we will wire those services in during later lessons. The critical path only requires OpenAI + LangGraph.

Key Takeaways

  • Multi-agent systems split complex tasks across specialized agents coordinated by a supervisor.
  • LangGraph uses a StateGraph with nodes (functions) and edges (transitions) to orchestrate agent workflows.
  • The Supervisor Pattern is the most flexible: a central agent dynamically routes tasks to workers.
  • The project is modular: agents, tools, graph logic, and monitoring are in separate packages.

What Is Next

In the next lesson, you will build your first ReAct agent: a complete single agent with tools, memory, and error handling. It is the fundamental building block on which every agent in the system is based.