Installation & Setup
Get LangChain installed, configure your API keys, set up a clean project structure, and build your very first LLM chain.
Installing LangChain
LangChain uses a modular package structure. Install only the packages you need:
```shell
# Core LangChain package
pip install langchain langchain-core

# Provider packages (install the ones you need)
pip install langchain-openai        # OpenAI / GPT models
pip install langchain-anthropic     # Anthropic / Claude models
pip install langchain-google-genai  # Google / Gemini models
pip install langchain-community     # Community integrations

# Common extras
pip install langchain-chroma        # ChromaDB vector store
pip install langsmith               # Observability & tracing

# Or install everything at once
pip install langchain langchain-openai langchain-anthropic langchain-community
```
Tip: create and activate a virtual environment before installing — `python -m venv .venv && source .venv/bin/activate` (Linux/macOS) or `python -m venv .venv && .venv\Scripts\activate` (Windows).

Setting Up API Keys
LangChain needs API keys to communicate with LLM providers. Create a .env file in your project root:
```
# OpenAI
OPENAI_API_KEY=sk-proj-your-key-here

# Anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Google
GOOGLE_API_KEY=your-google-key-here

# LangSmith (optional, for tracing)
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=lsv2_your-key-here
```
Load environment variables in your Python code:
```python
from dotenv import load_dotenv

load_dotenv()  # Loads the .env file into environment variables

# Or set them directly in Python
import os
os.environ["OPENAI_API_KEY"] = "sk-proj-your-key-here"
```
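However the key is set, it helps to fail fast with a clear message when it is missing, rather than getting an authentication error deep inside a chain. A minimal standard-library sketch — the `require_env` helper name is ours, not part of LangChain:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Add it to your .env file or export it in your shell."
        )
    return value

os.environ["OPENAI_API_KEY"] = "sk-proj-demo"  # placeholder for demonstration
key = require_env("OPENAI_API_KEY")
print(f"Loaded key ending in ...{key[-4:]}")  # log only the tail, never the full key
```

Calling this once at startup turns a vague provider error into an actionable one.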
Project Structure
A clean LangChain project structure helps keep your code organized as it grows:
```
my-langchain-app/
├── .env              # API keys (never commit this!)
├── .gitignore        # Include .env
├── requirements.txt  # Dependencies
├── app/
│   ├── __init__.py
│   ├── chains.py     # Chain definitions
│   ├── prompts.py    # Prompt templates
│   ├── models.py     # Model configuration
│   ├── tools.py      # Custom tools
│   └── agents.py     # Agent definitions
├── data/
│   └── documents/    # Documents for RAG
└── tests/
    └── test_chains.py
```
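If you want to bootstrap this layout in one step, a short standard-library script can create it. The `scaffold` function and the `.gitignore` contents below are our own convenience, not part of LangChain:

```python
from pathlib import Path

def scaffold(root: str) -> None:
    """Create the project layout shown above under `root`."""
    base = Path(root)
    for d in ["app", "data/documents", "tests"]:
        (base / d).mkdir(parents=True, exist_ok=True)
    for f in [
        ".env", "requirements.txt",
        "app/__init__.py", "app/chains.py", "app/prompts.py",
        "app/models.py", "app/tools.py", "app/agents.py",
        "tests/test_chains.py",
    ]:
        (base / f).touch(exist_ok=True)
    # Make sure secrets and local clutter are ignored by git
    (base / ".gitignore").write_text(".env\n.venv/\n__pycache__/\n")

scaffold("my-langchain-app")
```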
Your First Chain — Hello World
Let's build the simplest possible LangChain application: a chain that takes a topic and generates a joke.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a comedian who tells short, clever jokes."),
    ("human", "Tell me a joke about {topic}"),
])

# 2. Initialize the model
model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# 3. Create an output parser
parser = StrOutputParser()

# 4. Chain them together with the pipe operator (LCEL)
chain = prompt | model | parser

# 5. Run the chain
result = chain.invoke({"topic": "programming"})
print(result)
# "Why do programmers prefer dark mode? Because light attracts bugs!"
```
The | (pipe) operator chains components together: the prompt produces formatted messages, the model generates a response, and the parser extracts the text string. This composition style is called LCEL — LangChain Expression Language.

LCEL Basics
LangChain Expression Language (LCEL) is the modern way to compose chains. Every component is a Runnable with standard methods:
```python
# invoke() - run with a single input
result = chain.invoke({"topic": "cats"})

# batch() - run with multiple inputs
results = chain.batch([
    {"topic": "cats"},
    {"topic": "dogs"},
    {"topic": "AI"},
])

# stream() - get output token by token
for chunk in chain.stream({"topic": "space"}):
    print(chunk, end="", flush=True)

# ainvoke() - async version (must be awaited inside an async function)
result = await chain.ainvoke({"topic": "robots"})
```
Debugging with Verbose Mode
When things go wrong, enable verbose mode to see what LangChain is doing internally:
```python
from langchain.globals import set_debug, set_verbose

set_debug(True)    # Full debug output (older versions: langchain.debug = True)
set_verbose(True)  # Summary output (older versions: langchain.verbose = True)

# Or set via environment variable
import os
os.environ["LANGCHAIN_VERBOSE"] = "true"

# Now run your chain - you'll see detailed logs
result = chain.invoke({"topic": "debugging"})
```
Verify Your Installation
Run this quick test to make sure everything is working:
```python
import langchain_core
import langchain

print(f"langchain-core: {langchain_core.__version__}")
print(f"langchain: {langchain.__version__}")

# Quick smoke test (no API key required)
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Say hello to {name}")
print(prompt.invoke({"name": "LangChain"}))
# messages=[HumanMessage(content='Say hello to LangChain')]
```
What's Next?
Now that LangChain is installed and working, the next lesson dives into LLMs and Chat Models — how to configure different providers, stream responses, and set up fallbacks.