LangChain Quickstart

Build LLM-powered applications

🔗 What is LangChain?

LangChain is a framework for developing applications powered by language models. It provides tools for:

⛓️ Chains: Combine LLM calls in sequence

🤖 Agents: LLMs that use tools

🧠 Memory: Persist conversation state

📚 RAG: Retrieval augmented generation

📦 Installation

```shell
# Core package
pip install langchain

# LLM providers
pip install langchain-openai     # OpenAI
pip install langchain-anthropic  # Claude

# Vector stores
pip install langchain-chroma     # Chroma DB
pip install langchain-pinecone   # Pinecone

# Community integrations
pip install langchain-community
```

🚀 Basic LLM Call

OpenAI

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
response = llm.invoke("What is Python?")
print(response.content)
```

Anthropic Claude

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-sonnet-4-20250514")
response = llm.invoke("What is Python?")
print(response.content)
```

📝 Prompt Templates

Create reusable prompts with variables.

```python
from langchain_core.prompts import ChatPromptTemplate

# Create template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that speaks like {style}."),
    ("user", "{question}")
])

# Fill in variables
messages = prompt.invoke({
    "style": "a pirate",
    "question": "What is Python?"
})

# Send to LLM
response = llm.invoke(messages)
```
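Conceptually, a chat template is just a list of (role, text) messages with named placeholders that get substituted at invoke time. A minimal pure-Python sketch of that fill step (illustrative only, not LangChain's actual implementation):

```python
# Minimal sketch of prompt-template filling; illustrative only,
# not LangChain's real ChatPromptTemplate implementation.
def fill_template(messages, variables):
    """Substitute {name} placeholders in each (role, text) pair."""
    return [(role, text.format(**variables)) for role, text in messages]

template = [
    ("system", "You are a helpful assistant that speaks like {style}."),
    ("user", "{question}"),
]
filled = fill_template(template, {"style": "a pirate", "question": "What is Python?"})
print(filled[0][1])  # → You are a helpful assistant that speaks like a pirate.
```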

โ›“๏ธ Chains (LCEL)

LangChain Expression Language (LCEL) lets you compose components with the | operator.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Define components
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatOpenAI(model="gpt-4o")
parser = StrOutputParser()

# Create chain with pipe operator
chain = prompt | llm | parser

# Run chain
result = chain.invoke({"topic": "programming"})
print(result)
```

LCEL Benefits: Streaming, async, batching, and tracing all work automatically with LCEL chains.
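The pipe operator is ordinary Python operator overloading: each component exposes an invoke method, and | wraps two components into a new one that feeds the first's output into the second. A toy sketch of the idea (illustrative only, not LangChain's actual Runnable class):

```python
# Toy sketch of pipe-style composition; illustrative only,
# not LangChain's actual Runnable implementation.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Feed this step's output into the next step
        return Step(lambda x: other.invoke(self.invoke(x)))

prompt = Step(lambda d: f"Tell me a joke about {d['topic']}")
fake_llm = Step(lambda text: f"LLM response to: {text}")
chain = prompt | fake_llm
print(chain.invoke({"topic": "programming"}))
# → LLM response to: Tell me a joke about programming
```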

📚 RAG (Retrieval Augmented Generation)

Load documents, embed them, and retrieve relevant context for your LLM.

1. Load Documents

```python
from langchain_community.document_loaders import TextLoader, WebBaseLoader

# Load from file
loader = TextLoader("document.txt")
docs = loader.load()

# Load from web
loader = WebBaseLoader("https://example.com/page")
docs = loader.load()
```

2. Split into Chunks

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200
)
chunks = splitter.split_documents(docs)
```
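The core idea of overlapping chunks can be sketched in a few lines of plain Python. This is a naive fixed-size splitter, not the recursive, separator-aware algorithm LangChain actually uses (which prefers to break on paragraphs and sentences first):

```python
# Naive fixed-size chunking with overlap; illustrative only.
# LangChain's RecursiveCharacterTextSplitter additionally tries to
# break on separators (paragraphs, sentences) before cutting mid-text.
def split_text(text, chunk_size, chunk_overlap):
    step = chunk_size - chunk_overlap  # how far the window advances each chunk
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # → ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

The overlap means the tail of each chunk is repeated at the head of the next, so a sentence cut at a chunk boundary still appears intact in at least one chunk.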

3. Create Vector Store

```python
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma

embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(chunks, embeddings)
```
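Conceptually, the vector store maps each chunk to an embedding vector, and retrieval returns the chunks whose vectors are most similar to the query's. A toy nearest-neighbor sketch over hand-made 2-D vectors (illustrative only; real stores like Chroma index high-dimensional, model-generated embeddings):

```python
import math

# Toy nearest-neighbor retrieval over hand-made vectors; illustrative only.
def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    "Python is a programming language": [0.9, 0.1],
    "Bananas are yellow": [0.1, 0.9],
}

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query vector, best first
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.8, 0.2]))  # → ['Python is a programming language']
```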

4. Create RAG Chain

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Retriever
retriever = vectorstore.as_retriever()

# RAG prompt
prompt = ChatPromptTemplate.from_template("""
Answer based on the context:

Context: {context}

Question: {question}
""")

# RAG chain: the retriever fills {context}, the raw input fills {question}
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# Ask question
answer = rag_chain.invoke("What does the document say about X?")
```

🧠 Conversation Memory

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

# A chain whose prompt reserves a slot for prior messages
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("history"),
    ("user", "{question}")
])
chain = prompt | llm

# Store for chat histories
store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

# Wrap chain with memory
chain_with_memory = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="question",
    history_messages_key="history"
)

# Use with session
response = chain_with_memory.invoke(
    {"question": "My name is Bob"},
    config={"configurable": {"session_id": "user123"}}
)
```

🤖 Agents with Tools

Let the LLM decide which tools to use to complete a task.

```python
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

# Create agent
tools = [multiply, add]
agent = create_react_agent(llm, tools)

# Run agent
result = agent.invoke({
    "messages": [{"role": "user", "content": "What is 5 * 3 + 2?"}]
})
```

Note: For agents, install LangGraph: pip install langgraph
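The agent loop itself is simple to picture: on each turn the model either requests a tool call (which the loop executes and feeds back as an observation) or emits a final answer. A toy sketch with a scripted stand-in for the LLM (illustrative only, not LangGraph's actual ReAct implementation):

```python
# Toy agent loop with a scripted stand-in for the LLM; illustrative only,
# not LangGraph's actual ReAct agent.
def multiply(a, b):
    return a * b

def add(a, b):
    return a + b

tools = {"multiply": multiply, "add": add}

# Scripted "model": each turn is either a tool call or a final answer.
script = [
    {"tool": "multiply", "args": (5, 3)},   # 5 * 3 = 15
    {"tool": "add", "args": (15, 2)},       # 15 + 2 = 17
    {"answer": "5 * 3 + 2 = 17"},
]

def run_agent(turns):
    observations = []
    for turn in turns:
        if "tool" in turn:
            # Execute the requested tool and record the observation
            observations.append(tools[turn["tool"]](*turn["args"]))
        else:
            return turn["answer"], observations

answer, obs = run_agent(script)
print(answer)  # → 5 * 3 + 2 = 17
```

In the real agent, a tool-calling LLM plays the role of the script, deciding at runtime which tool to invoke based on the conversation so far.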

📋 Quick Reference

| Task | Import |
|------|--------|
| OpenAI LLM | `from langchain_openai import ChatOpenAI` |
| Claude LLM | `from langchain_anthropic import ChatAnthropic` |
| Prompts | `from langchain_core.prompts import ChatPromptTemplate` |
| Output Parser | `from langchain_core.output_parsers import StrOutputParser` |
| Embeddings | `from langchain_openai import OpenAIEmbeddings` |
| Chroma DB | `from langchain_chroma import Chroma` |