Chains
LCEL (LangChain Expression Language) lets you compose components into powerful pipelines using the pipe operator. Build sequential, parallel, and conditional chains with built-in streaming and batch support.
LCEL — The Pipe Operator
The | operator chains Runnables together. Output from one step becomes input to the next:
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Summarize: {text}")
model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Chain with pipe operator
chain = prompt | model | parser  # dict → messages → AIMessage → str

result = chain.invoke({"text": "LangChain is a framework for..."})
```
RunnableSequence
Under the hood, | creates a RunnableSequence. You can also construct one explicitly:
```python
from langchain_core.runnables import RunnableSequence

# These two are equivalent:
chain_pipe = prompt | model | parser
chain_explicit = RunnableSequence(first=prompt, middle=[model], last=parser)

# Both produce the same result
result = chain_pipe.invoke({"text": "Hello world"})
```
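The mechanics behind the pipe operator can be sketched in plain Python. This is a conceptual sketch only, not LangChain's actual implementation: each runnable overloads `__or__` to return a new runnable whose `invoke` feeds the output of the left step into the right step.

```python
# Conceptual sketch of pipe composition — NOT LangChain's real code.
class MiniRunnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # a | b builds a runnable that invokes a, then b on a's output
        return MiniRunnable(lambda value: other.invoke(self.invoke(value)))

strip = MiniRunnable(str.strip)
upper = MiniRunnable(str.upper)
exclaim = MiniRunnable(lambda s: s + "!")

chain = strip | upper | exclaim
print(chain.invoke("  hello  "))  # HELLO!
```

Because every composition is itself a runnable, chains nest freely — which is exactly why `prompt | model | parser` can be piped into further steps.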
RunnableParallel
Run multiple chains in parallel and combine their outputs into a dictionary:
```python
from langchain_core.runnables import RunnableParallel

# Two different analysis chains
sentiment_chain = ChatPromptTemplate.from_template(
    "What is the sentiment of: {text}"
) | model | parser

summary_chain = ChatPromptTemplate.from_template(
    "Summarize in one sentence: {text}"
) | model | parser

# Run both in parallel
parallel = RunnableParallel(
    sentiment=sentiment_chain,
    summary=summary_chain,
)

result = parallel.invoke({"text": "LangChain makes LLM apps easy to build!"})
print(result["sentiment"])  # "Positive"
print(result["summary"])    # "LangChain simplifies LLM app development."
```
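Conceptually, the parallel step hands the same input to every branch and merges the branch outputs into one dict. A minimal stdlib sketch of that fan-out pattern (an illustration of the idea, not LangChain's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def invoke_parallel(branches: dict, value):
    """Run each branch on the same input; collect results by branch name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in branches.items()}
        return {name: fut.result() for name, fut in futures.items()}

result = invoke_parallel({"upper": str.upper, "length": len}, "hello")
print(result)  # {'upper': 'HELLO', 'length': 5}
```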
RunnableLambda
Wrap any Python function as a Runnable to use it in a chain:
```python
from langchain_core.runnables import RunnableLambda

# Custom processing functions
def clean_text(text: str) -> str:
    return text.strip().lower()

def add_metadata(result: str) -> dict:
    return {"output": result, "length": len(result)}

# Use lambdas in a chain
chain = (
    RunnableLambda(clean_text)
    | prompt
    | model
    | parser
    | RunnableLambda(add_metadata)
)

result = chain.invoke(" HELLO WORLD ")
print(result)  # {"output": "...", "length": 42}
```
RunnablePassthrough
Pass the input through unchanged, often used to forward data alongside chain results:
```python
from langchain_core.runnables import RunnablePassthrough, RunnableParallel

# Pass the original question alongside the answer
chain = RunnableParallel(
    question=RunnablePassthrough(),  # Forward input as-is
    answer=prompt | model | parser,  # Generate answer
)

result = chain.invoke({"text": "What is LCEL?"})
print(result["question"])  # {"text": "What is LCEL?"}
print(result["answer"])    # "LCEL is LangChain Expression Language..."

# assign() adds new keys while keeping existing ones
chain = RunnablePassthrough.assign(
    answer=prompt | model | parser
)
# Input: {"text": "..."} → Output: {"text": "...", "answer": "..."}
```
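The semantics of passthrough and assign are easy to state in plain Python — a conceptual sketch, not the library's code: passthrough returns its input untouched, and assign merges newly computed keys into a copy of the input dict.

```python
def passthrough(value):
    # RunnablePassthrough: forward the input unchanged
    return value

def assign(value: dict, **computations) -> dict:
    # RunnablePassthrough.assign: keep existing keys, add computed ones
    return {**value, **{key: fn(value) for key, fn in computations.items()}}

out = assign(
    {"text": "What is LCEL?"},
    answer=lambda d: f"Echo: {d['text']}",
)
print(out)  # {'text': 'What is LCEL?', 'answer': 'Echo: What is LCEL?'}
```

Note that `assign` requires dict input, which is why `RunnablePassthrough.assign` is typically used at the start of a chain where the input is still a dict.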
Streaming Chains
LCEL chains support streaming out of the box. Tokens flow through the chain as they are generated:
```python
chain = prompt | model | parser

# Stream tokens as they arrive
for chunk in chain.stream({"text": "Explain quantum computing"}):
    print(chunk, end="", flush=True)

# Async streaming
async for chunk in chain.astream({"text": "Explain quantum computing"}):
    print(chunk, end="", flush=True)

# Stream events (detailed logging of each step)
async for event in chain.astream_events({"text": "Hello"}, version="v2"):
    print(event["event"], event.get("data", {}))
```
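The reason streaming works through a whole chain is that each step can transform chunks as they arrive rather than waiting for complete input. A generator pipeline captures the idea (a stdlib sketch with a stand-in for the model, not LangChain's mechanism):

```python
def fake_model(prompt_text):
    # Stand-in for a streaming LLM: yields tokens one at a time
    for token in ["Quantum", " computing", " is", " neat."]:
        yield token

def upper_parser(token_stream):
    # A downstream step that transforms each chunk as it arrives
    for token in token_stream:
        yield token.upper()

chunks = list(upper_parser(fake_model("Explain quantum computing")))
print("".join(chunks))  # QUANTUM COMPUTING IS NEAT.
```

Because the parser consumes tokens lazily, the first chunk reaches the caller before the model has finished generating — the same property LCEL's `stream` gives you end to end.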
Batch Processing
Process multiple inputs concurrently:
```python
# Process multiple inputs in parallel
results = chain.batch([
    {"text": "First document..."},
    {"text": "Second document..."},
    {"text": "Third document..."},
], config={"max_concurrency": 3})

# Each result corresponds to its input
for r in results:
    print(r)
```
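Two properties matter here: results come back in input order, and `max_concurrency` caps how many inputs run at once. A minimal stdlib sketch of that contract (illustrative only):

```python
from concurrent.futures import ThreadPoolExecutor

def batch(fn, inputs, max_concurrency=3):
    # pool.map preserves input order; max_workers caps concurrency
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        return list(pool.map(fn, inputs))

results = batch(lambda d: d["text"].upper(), [{"text": "a"}, {"text": "b"}])
print(results)  # ['A', 'B']
```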
Error Handling in Chains
Handle errors gracefully with fallbacks and retry logic:
```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables import RunnableLambda

# Fallback chain
main_chain = prompt | ChatOpenAI(model="gpt-4o") | parser
fallback_chain = prompt | ChatAnthropic(model="claude-sonnet-4-20250514") | parser
chain_with_fallback = main_chain.with_fallbacks([fallback_chain])

# Retry with exponential backoff
chain_with_retry = chain.with_retry(
    stop_after_attempt=3,
    wait_exponential_jitter=True,
)

# Custom error handling: exception_key adds the raised exception
# to the dict the fallback receives as input
def handle_error(inputs: dict) -> str:
    return f"Sorry, an error occurred: {inputs['exception']}"

safe_chain = chain.with_fallbacks(
    [RunnableLambda(handle_error)],
    exception_key="exception",
)
```
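The control flow behind fallbacks and retry is simple to state: try each candidate in order until one succeeds, and re-run a failing step with exponentially growing waits. A stdlib sketch of both policies (a conceptual illustration, not LangChain's internals):

```python
import random
import time

def with_fallbacks(primary, fallbacks):
    def run(value):
        for candidate in [primary, *fallbacks]:
            try:
                return candidate(value)
            except Exception:
                continue  # try the next candidate
        raise RuntimeError("all runnables failed")
    return run

def with_retry(fn, stop_after_attempt=3, base_delay=0.01):
    def run(value):
        for attempt in range(stop_after_attempt):
            try:
                return fn(value)
            except Exception:
                if attempt == stop_after_attempt - 1:
                    raise
                # exponential backoff with jitter
                time.sleep(base_delay * (2 ** attempt) * random.random())
    return run

# A step that fails twice, then succeeds
outcomes = iter([ValueError("boom"), ValueError("boom"), "ok"])
def sometimes_fails(_):
    item = next(outcomes)
    if isinstance(item, Exception):
        raise item
    return item

retried = with_retry(sometimes_fails, stop_after_attempt=3)("input")
print(retried)  # ok

fallback_result = with_fallbacks(lambda x: 1 / 0, [lambda x: "fallback"])("in")
print(fallback_result)  # fallback
```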
Complex Chain Pattern — Multi-Step Analysis
Here is a real-world example combining multiple LCEL features:
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Step 1: Analyze text in parallel
analysis = RunnableParallel(
    text=RunnablePassthrough(),
    summary=ChatPromptTemplate.from_template(
        "Summarize in 2 sentences: {text}"
    ) | model | parser,
    topics=ChatPromptTemplate.from_template(
        "List 3 key topics from: {text}"
    ) | model | parser,
)

# Step 2: Generate final report from analysis
report_prompt = ChatPromptTemplate.from_template(
    """Based on this analysis:
Summary: {summary}
Topics: {topics}

Write a one-paragraph report about the original text."""
)

# Full pipeline: parallel analysis → report generation
full_chain = analysis | report_prompt | model | parser

result = full_chain.invoke({"text": "LangChain is a framework..."})
print(result)
```
Legacy chain classes (LLMChain, SequentialChain) are deprecated; always use LCEL for new code.
What's Next?
The next lesson covers Memory — how to give your chains conversation history so they remember previous interactions.
Lilly Tech Systems