TokenMix Research Lab · 2026-04-25

LangChain JS: Complete Getting Started Guide (2026)
LangChain.js v1 is the TypeScript/JavaScript sibling of LangChain Python — the same LLM framework ecosystem for building RAG, agents, and complex chains, adapted for Node.js environments. Requires Node.js 20+ (Node 18 reached end of life in April 2025). Core install: npm install langchain @langchain/openai @langchain/core @langchain/langgraph. The v1 migration moved legacy features to @langchain/classic and introduced ContentBlock types for strong typing. This guide covers setup, first chain, agent with LangGraph, provider routing, and v1 migration concerns. All verified against LangChain.js v1 documentation as of April 2026.
Table of Contents
- What LangChain.js Is
- Installation
- First Application: Simple Chain
- Building an Agent with LangGraph
- v1 Migration Notes
- Supported LLM Providers and Model Routing
- RAG with LangChain.js
- Observability Integration
- Common Patterns
- Known Limitations
- FAQ
What LangChain.js Is
LangChain.js is the TypeScript/JavaScript implementation of the LangChain framework. Key capabilities:
- Unified LLM interface across OpenAI, Anthropic, Google, and more
- Chain composition — link LLM calls with preprocessing, tool use, output parsing
- Agent orchestration via LangGraph (graph-based agent workflows)
- RAG primitives — document loaders, text splitters, embeddings, vector stores
- Native streaming support
As of April 2026, v1 is the current stable line. Previous v0.x is maintained but not recommended for new projects.
Installation
Node.js 20+ required. Check your version:
node --version # must be v20.x or higher
If below 20, upgrade via nvm, asdf, or your OS package manager.
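If you want to guard the runtime programmatically (for example at the top of a CLI entry point), a small check on process.version works. This helper is an illustration, not part of LangChain:

```typescript
// Parse the major version out of a Node version string like "v20.11.0".
// Hypothetical helper for illustration; not a LangChain API.
function nodeMajor(version: string): number {
  return Number.parseInt(version.replace(/^v/, "").split(".")[0], 10);
}

// Fail fast when the runtime is too old for LangChain.js v1.
function assertNodeVersion(minMajor = 20): void {
  if (nodeMajor(process.version) < minMajor) {
    throw new Error(
      `LangChain.js v1 requires Node.js ${minMajor}+, found ${process.version}`
    );
  }
}

// Call at startup: assertNodeVersion();
```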
Standard install (LangChain + LangGraph + OpenAI provider):
npm install langchain @langchain/openai @langchain/core @langchain/langgraph
With additional providers:
npm install @langchain/anthropic # Claude support
npm install @langchain/google-genai # Gemini support
npm install @langchain/community # community integrations
Package manager alternatives:
pnpm add langchain @langchain/openai @langchain/core @langchain/langgraph
yarn add langchain @langchain/openai @langchain/core @langchain/langgraph
First Application: Simple Chain
Minimal setup with OpenAI:
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const model = new ChatOpenAI({
model: "gpt-5.4",
temperature: 0,
apiKey: process.env.OPENAI_API_KEY,
});
const prompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant."],
["user", "{input}"],
]);
const chain = prompt.pipe(model);
const response = await chain.invoke({
input: "Explain LangChain in one paragraph",
});
console.log(response.content);
What's happening:
- ChatOpenAI wraps the OpenAI API with LangChain's unified interface
- ChatPromptTemplate creates a reusable prompt structure with variables
- .pipe() composes the prompt and model into a chain
- .invoke() runs the chain with input variables
Result: same API shape across providers — swap ChatOpenAI for ChatAnthropic and everything else stays the same.
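That interchangeability comes from every chat model exposing the same invoke contract. A stripped-down sketch of the idea in plain TypeScript (simplified types for illustration, not the real LangChain classes):

```typescript
// Minimal stand-in for LangChain's chat model contract (illustrative only).
interface ChatModelLike {
  invoke(messages: { role: string; content: string }[]): Promise<{ content: string }>;
}

// Two fake "providers" satisfying the same interface.
const fakeOpenAI: ChatModelLike = {
  async invoke(messages) {
    return { content: `openai:${messages[messages.length - 1].content}` };
  },
};

const fakeAnthropic: ChatModelLike = {
  async invoke(messages) {
    return { content: `anthropic:${messages[messages.length - 1].content}` };
  },
};

// Caller code is provider-agnostic: swap the model, keep the chain.
async function runChain(model: ChatModelLike, input: string) {
  return model.invoke([{ role: "user", content: input }]);
}
```

Because the chain only depends on the shared interface, switching providers is a one-line change at construction time.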
Building an Agent with LangGraph
For multi-step agents with tools, use LangGraph (ships as @langchain/langgraph):
import { StateGraph, END, START } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
// Define a tool
const getWeather = tool(
async ({ city }: { city: string }) => `Weather in ${city}: sunny, 72°F`,
{
name: "get_weather",
description: "Get current weather for a city",
schema: z.object({ city: z.string() }),
}
);
// Create model with tool
const model = new ChatOpenAI({ model: "gpt-5.4" }).bindTools([getWeather]);
// Build graph
const graph = new StateGraph({
channels: { messages: { reducer: (x, y) => x.concat(y), default: () => [] } },
})
.addNode("agent", async (state) => {
const response = await model.invoke(state.messages);
return { messages: [response] };
})
.addEdge(START, "agent")
.addEdge("agent", END)
.compile();
const result = await graph.invoke({
messages: [{ role: "user", content: "What's the weather in SF?" }],
});
LangGraph scales to complex multi-agent, multi-step workflows. See CrewAI to LangGraph migration guide for deep-dive patterns.
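The messages channel above uses a concat reducer: each node returns only its new messages, and LangGraph merges them into the running state. In isolation the reducer behaves like this (plain TypeScript mirroring the (x, y) => x.concat(y) line above):

```typescript
type Message = { role: string; content: string };

// Same shape as the reducer declared in the StateGraph channels above.
const messagesReducer = (x: Message[], y: Message[]): Message[] => x.concat(y);

// Simulate two rounds of state updates, as two graph nodes would produce.
let state: Message[] = [];
state = messagesReducer(state, [{ role: "user", content: "What's the weather in SF?" }]);
state = messagesReducer(state, [{ role: "assistant", content: "Sunny, 72°F" }]);
// state now holds both messages in order
```

This is why nodes return { messages: [response] } rather than the whole history: the reducer does the accumulation.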
v1 Migration Notes
Key changes from v0.x to v1:
1. Node.js 20 required. Node 18 is not supported.
2. Legacy functionality in @langchain/classic. If you were using older abstractions not focused on standard interfaces and agents, they're now in a separate package:
npm install @langchain/classic
3. ContentBlock types for strong typing. New TypeScript types under ContentBlock for message content. Enables better type inference on multimodal inputs.
4. Optional v1 output format:
# Via env var
LC_OUTPUT_VERSION=v1
// Or in code
outputVersion: "v1"
Migration effort: for most apps, the migration is:
- Upgrade Node.js to 20+
- Update all @langchain/* packages to v1
- If using legacy features, install @langchain/classic and migrate imports
- Test; most APIs are backward compatible
Typical migration time: a few hours for small-medium projects.
Supported LLM Providers and Model Routing
LangChain.js supports many providers via @langchain/* packages:
- @langchain/openai — GPT-5.5, GPT-5.4, GPT-5.4-mini, etc.
- @langchain/anthropic — Claude Opus 4.7, Sonnet 4.6, Haiku 4.5
- @langchain/google-genai — Gemini 3.1 Pro, 2.5 Flash variants
- @langchain/aws — AWS Bedrock models
- @langchain/mistralai — Mistral family
- @langchain/ollama — local Ollama models
For multi-provider routing with one API key, use ChatOpenAI with a custom base URL pointing at an aggregator:
import { ChatOpenAI } from "@langchain/openai";
const claude = new ChatOpenAI({
model: "claude-opus-4-7",
apiKey: process.env.TOKENMIX_API_KEY,
configuration: {
baseURL: "https://api.tokenmix.ai/v1",
},
});
const deepseek = new ChatOpenAI({
model: "deepseek-v4-pro",
apiKey: process.env.TOKENMIX_API_KEY,
configuration: {
baseURL: "https://api.tokenmix.ai/v1",
},
});
Through TokenMix.ai, you access Claude Opus 4.7, GPT-5.5, DeepSeek V4-Pro, Kimi K2.6, Gemini 3.1 Pro, and 300+ other models via one API key. Simpler than installing @langchain/anthropic, @langchain/google-genai, etc. separately.
RAG with LangChain.js
Standard RAG pipeline:
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
// Split documents
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const splits = await splitter.splitDocuments(documents);
// Embed and store
const embeddings = new OpenAIEmbeddings({
model: "text-embedding-3-small",
});
const vectorStore = await MemoryVectorStore.fromDocuments(splits, embeddings);
// Retrieve
const results = await vectorStore.similaritySearch("user query", 4);
For production, swap MemoryVectorStore for Qdrant, Pinecone, Weaviate, or Chroma via @langchain/qdrant etc.
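chunkSize and chunkOverlap control how much context neighboring chunks share. A toy fixed-size splitter (not the actual RecursiveCharacterTextSplitter algorithm, which tries paragraph, sentence, and word boundaries before cutting) shows the effect of the two parameters:

```typescript
// Naive character splitter for illustration only; RecursiveCharacterTextSplitter
// additionally prefers natural separators before falling back to hard cuts.
function naiveSplit(text: string, chunkSize: number, chunkOverlap: number): string[] {
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap; // each chunk starts this far after the last
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const chunks = naiveSplit("a".repeat(2500), 1000, 200);
// 2500 chars with chunkSize 1000 and overlap 200 → chunks starting at 0, 800, 1600
```

The 200-character overlap means a sentence falling on a chunk boundary still appears whole in at least one chunk, which improves retrieval quality.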
Observability Integration
LangSmith (LangChain's own tracing) integration:
export LANGCHAIN_API_KEY=your-key
export LANGCHAIN_TRACING_V2=true
That's it — every chain execution automatically traces to LangSmith.
For Langfuse, Helicone, or other observability platforms, each has LangChain.js integration packages. See the specific platform's docs.
Common Patterns
Streaming responses:
const stream = await chain.stream({ input: "Write a story" });
for await (const chunk of stream) {
process.stdout.write(chunk.content);
}
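.stream() returns an async iterable of chunks, so the consumption pattern is a plain for await loop. A self-contained sketch with a fake stream standing in for the chain (real chunks are message chunk objects with a content field):

```typescript
// Fake token stream playing the role of chain.stream(); illustration only.
async function* fakeStream(text: string): AsyncGenerator<{ content: string }> {
  for (const word of text.split(" ")) {
    yield { content: word + " " };
  }
}

async function collect(): Promise<string> {
  let out = "";
  for await (const chunk of fakeStream("Once upon a time")) {
    out += chunk.content; // in the real pattern: process.stdout.write(chunk.content)
  }
  return out;
}
```

The same loop works unchanged against a real chain; only the source of the async iterable differs.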
Structured output:
import { z } from "zod";
const Structured = z.object({
name: z.string(),
age: z.number(),
});
const modelWithStructure = model.withStructuredOutput(Structured);
const result = await modelWithStructure.invoke("Extract: John is 30.");
// result.name === "John", result.age === 30
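Conceptually, withStructuredOutput asks the model for JSON conforming to the schema and validates the result before returning it. A hand-rolled stand-in for that validation step (not zod and not the real implementation, just the contract made concrete):

```typescript
type Person = { name: string; age: number };

// Simplified validator playing the role of the zod schema above.
function parsePerson(raw: string): Person {
  const data = JSON.parse(raw);
  if (typeof data.name !== "string" || typeof data.age !== "number") {
    throw new Error("Model output does not match schema");
  }
  return { name: data.name, age: data.age };
}

// A model configured for structured output would return JSON like this:
const result = parsePerson('{"name": "John", "age": 30}');
// result.name === "John", result.age === 30
```

The value of the pattern is that downstream code receives a typed object or an error, never free-form text it has to parse itself.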
Tool use:
See LangGraph section above.
Error handling:
try {
const result = await chain.invoke(input);
} catch (error) {
if (error.name === "RateLimitError") {
// Retry logic
}
}
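For transient failures such as rate limits, a generic retry wrapper with exponential backoff can sit around any chain call. This is a hand-rolled sketch, not a LangChain API:

```typescript
// Generic retry with exponential backoff; illustration, not a LangChain API.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown = undefined;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        // Back off: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// Usage: const result = await withRetry(() => chain.invoke(input));
```

In practice you would retry only on retryable errors (rate limits, timeouts) and rethrow everything else immediately.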
Known Limitations
1. Node.js 20+ requirement. Older environments (some serverless runtimes) may not support it.
2. Browser support limited. LangChain.js is Node-focused. Some features don't work in browser (filesystem access, etc.).
3. v1 migration friction. If on v0.x with legacy features, migration to v1 takes work.
4. Streaming can be tricky with agents. LangGraph streaming has nuances; test carefully.
5. Bundle size considerations. LangChain + providers can add 10+ MB to bundle. Relevant for edge/serverless deployments with size limits.
6. TypeScript inference imperfect. Some complex generic chains lose type information. Expect occasional any.
FAQ
Is LangChain.js a fork of Python LangChain?
No, it's a parallel implementation sharing the same concepts. APIs are similar but idiomatic TypeScript.
Should I use LangChain.js or Vercel AI SDK?
LangChain.js: more comprehensive, includes LangGraph for complex agents, better for RAG.
Vercel AI SDK: simpler, optimized for chat UI, smaller bundle, better for Next.js apps.
Can I use LangChain.js on Cloudflare Workers?
Limited. Some dependencies require Node-specific APIs. Cloudflare has a Workers-compatible subset, but full LangChain.js is easier on full Node.
How does LangChain.js v1 compare to LangGraph JS?
LangChain.js provides primitives (models, prompts, chains). LangGraph builds agent orchestration on top. Use together for complex agents, just LangChain.js for simple chains.
What's the latest major feature?
LangGraph maturation. v1 cleaned up the legacy vs modern API split, making LangGraph the canonical agent pattern.
Does LangChain.js support multimodal models?
Yes. Pass image URLs in message content; models that support vision (GPT-5.5, Claude Opus 4.7, Qwen2.5-VL-72B) process them correctly.
Can I use ContentBlocks with any model?
ContentBlocks are the v1 standardized format. All v1-compatible providers support them. Legacy providers may require migration.
How do I handle rate limits gracefully?
Use built-in retry logic or wrap models with custom retry adapters. For production, route through TokenMix.ai with automatic failover: if Anthropic returns 529 overload errors, requests are automatically rerouted to GPT-5.5 or DeepSeek V4-Pro.
Is Python LangChain better?
Feature parity is high. Python has slightly more ecosystem (older, more integrations). TypeScript has better type safety and fits JavaScript stacks naturally.
Where can I find production LangChain.js apps?
LangChain's official showcase, GitHub search for langchain-ts, Vercel templates, and community projects.
Related Articles
- Ultimate LLM Comparison Hub 2026: Every Major Model Benchmarked
- DeepSeek R1-0528-Qwen3-8B & Chat V3 Free: Usage Guide (2026)
- qwen2.5-vl-72b-instruct: Vision Model Developer Guide (2026)
- UI-TARS-2: ByteDance's Autonomous GUI Agent Walkthrough (2026)
- text-embedding-3-small: $0.02/MTok, 1536 Dims, MTEB 62.26 Guide
Author: TokenMix Research Lab | Last Updated: April 25, 2026 | Data Sources: LangChain v1 Migration Guide (JS), langchain npm package, LangChain.js Reference, Microsoft LangChain.js for Beginners, LangChain.js GitHub, TokenMix.ai LangChain integration