Introducing the ChatRoutes SDK: Build Branching AI Conversations in Minutes
Launch announcement for the ChatRoutes TypeScript and Python SDKs - enabling developers to build multi-path AI conversations with GPT-5, Claude Opus 4.1, and more.
The Problem Every Developer Faces
You're building an AI-powered feature. You craft the perfect prompt, send it to GPT-5, and... the response is good, but not quite right. Maybe it's too verbose. Maybe you wonder if Claude Opus 4.1 would handle it better. Or perhaps you want to try a different temperature setting.
What do you do? Start a new chat? Lose your conversation history? Manually track different variations in separate threads?
There has to be a better way.
Introducing ChatRoutes SDK
Today, we're excited to announce the ChatRoutes SDK for TypeScript/JavaScript and Python - the first developer toolkit purpose-built for branching AI conversations.
With ChatRoutes, you can:
- 🌳 Branch conversations at any point to explore multiple paths
- 🤖 Switch between AI models (GPT-5, Claude Opus 4.1, Claude Sonnet 4) mid-conversation
- 🔄 Stream responses in real-time with automatic retry logic
- 📊 Compare results across different models, temperatures, and prompts
- 🎯 Maintain context across branches or start fresh - your choice
All with just a few lines of code.
Quick Start
Installation
# TypeScript/JavaScript
npm install chatroutes
# Python
pip install chatroutes
Your First Branching Conversation
Here's how to create a conversation, explore two different approaches, and pick the best one:
TypeScript
import { ChatRoutes } from 'chatroutes';

const client = new ChatRoutes({
  apiKey: process.env.CHATROUTES_API_KEY
});

// Start a conversation
const conversation = await client.conversations.create({
  title: "Code Review Assistant"
});

// Send a message
const response = await client.messages.create({
  conversationId: conversation.id,
  content: "Review this function for performance issues",
  model: "gpt-5",
  temperature: 1.0
});

console.log(response.content);

// Try a different approach with branching
const branch = await client.branches.create({
  conversationId: conversation.id,
  branchFromMessageId: response.id,
  title: "Claude's Perspective",
  contextMode: "FULL" // Include all history
});

// Get Claude's take on the same code
const claudeResponse = await client.messages.create({
  conversationId: conversation.id,
  branchId: branch.id,
  content: "Review this function for performance issues",
  model: "claude-opus-4.1",
  temperature: 1.0
});

console.log(claudeResponse.content);
Python
import os

from chatroutes import ChatRoutes

client = ChatRoutes(
    api_key=os.environ["CHATROUTES_API_KEY"]
)

# Start a conversation
conversation = client.conversations.create(
    title="Code Review Assistant"
)

# Send a message
response = client.messages.create(
    conversation_id=conversation.id,
    content="Review this function for performance issues",
    model="gpt-5",
    temperature=1.0
)

print(response.content)

# Try a different approach with branching
branch = client.branches.create(
    conversation_id=conversation.id,
    branch_from_message_id=response.id,
    title="Claude's Perspective",
    context_mode="FULL"  # Include all history
)

# Get Claude's take on the same code
claude_response = client.messages.create(
    conversation_id=conversation.id,
    branch_id=branch.id,
    content="Review this function for performance issues",
    model="claude-opus-4.1",
    temperature=1.0
)

print(claude_response.content)
In just a few lines, you've created a conversation, gotten GPT-5's analysis, branched to try Claude Opus 4.1, and compared both responses. No conversation history lost. No manual tracking. No complexity.
Key Features
🌳 Intelligent Conversation Branching
Branch at any point to explore alternative paths. Each branch maintains its own history while preserving the parent conversation.
// Three context modes to control history
const branch = await client.branches.create({
  conversationId: conv.id,
  branchFromMessageId: message.id,
  contextMode: "FULL"       // All history up to branch point
  // contextMode: "PARTIAL" // Only last exchange (saves tokens)
  // contextMode: "NONE"    // Fresh start, no context
});
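To make the three modes concrete, here is a small sketch of how each mode might decide which prior messages a new branch inherits. This is purely illustrative (the function name and history representation are our assumptions, not the ChatRoutes implementation):

```python
def select_branch_context(history, branch_point_index, context_mode):
    """Return the messages a branch would inherit under each context mode.

    history: list of messages, oldest first.
    branch_point_index: index of the message being branched from.
    """
    up_to_branch = history[:branch_point_index + 1]
    if context_mode == "FULL":
        return up_to_branch        # all history up to the branch point
    if context_mode == "PARTIAL":
        return up_to_branch[-2:]   # only the last exchange (saves tokens)
    if context_mode == "NONE":
        return []                  # fresh start, no context
    raise ValueError(f"unknown context mode: {context_mode}")
```

The trade-off is tokens versus continuity: `FULL` keeps every detail in scope, `PARTIAL` keeps the branch cheap while preserving the immediate exchange, and `NONE` is useful when you want the same conversation record but an unbiased fresh take.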
🤖 Multi-Model Support
Seamlessly switch between leading AI models:
- GPT-5 - OpenAI's latest flagship model
- Claude Opus 4.1 - Anthropic's most capable model
- Claude Sonnet 4 - Fast, balanced performance
# Compare models on the same prompt
models = ["gpt-5", "claude-opus-4.1", "claude-sonnet-4"]

for model in models:
    branch = client.branches.create(
        conversation_id=conv.id,
        title=f"{model} Analysis"
    )
    response = client.messages.create(
        conversation_id=conv.id,
        branch_id=branch.id,
        model=model,
        content="Analyze this dataset"
    )
🔄 Real-Time Streaming
Stream responses as they're generated with built-in error handling:
const stream = await client.messages.stream({
  conversationId: conv.id,
  content: "Explain quantum computing",
  model: "claude-opus-4.1"
});

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}
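The pattern on the consumer side is the same in any language: each chunk carries a small `delta` of text, and you display or accumulate the deltas as they arrive. The sketch below simulates that loop in plain Python; `fake_stream` stands in for the SDK stream and the chunk shape is an assumption based on the example above:

```python
def fake_stream(text, chunk_size=8):
    """Yield chunks with a `delta` field, like a streamed response."""
    for i in range(0, len(text), chunk_size):
        yield {"delta": text[i:i + chunk_size]}

def consume(stream):
    """Accumulate deltas as they arrive (a real UI would render each one)."""
    parts = []
    for chunk in stream:
        parts.append(chunk["delta"])
    return "".join(parts)
```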
🛡️ Auto-Retry & Error Handling
Built-in retry logic handles transient failures automatically:
# Automatically retries on rate limits, timeouts, and network errors
response = client.messages.create(
    conversation_id=conv.id,
    content="Complex analysis task",
    max_retries=3,  # Configurable
    timeout=30      # Per-request timeout
)
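If you are curious what a retry loop like this typically does under the hood, the sketch below shows the standard shape: retry only on transient error types, with exponential backoff between attempts. The error types and backoff schedule here are illustrative assumptions, not the SDK's internals:

```python
import time

def with_retries(call, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Invoke call(), retrying transient failures with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except (TimeoutError, ConnectionError):
            if attempt == max_retries:
                raise  # out of retries: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

The key property is that non-transient errors (bad requests, auth failures) are not retried, so failures that retrying cannot fix surface immediately.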
Real-World Use Cases
1. Research & Analysis
Explore different research angles simultaneously:
const researchConv = await client.conversations.create({
  title: "Market Research: AI Tools"
});

// Main research path
const overview = await client.messages.create({
  conversationId: researchConv.id,
  content: "What are the top AI development tools in 2025?",
  model: "gpt-5"
});

// Branch 1: Deep dive into cost analysis
const costBranch = await client.branches.create({
  conversationId: researchConv.id,
  branchFromMessageId: overview.id,
  title: "Cost Analysis"
});

await client.messages.create({
  conversationId: researchConv.id,
  branchId: costBranch.id,
  content: "Focus on pricing models and ROI"
});

// Branch 2: Technical capabilities
const techBranch = await client.branches.create({
  conversationId: researchConv.id,
  branchFromMessageId: overview.id,
  title: "Technical Deep Dive"
});

await client.messages.create({
  conversationId: researchConv.id,
  branchId: techBranch.id,
  content: "Compare technical capabilities and integration options"
});
2. Creative Writing
Experiment with different narrative styles:
# Start with a story premise
story = client.conversations.create(title="Sci-Fi Story")

premise = client.messages.create(
    conversation_id=story.id,
    content="A developer discovers their code is sentient",
    model="claude-opus-4.1"
)

# Try different tones
tones = ["thriller", "comedy", "philosophical"]

for tone in tones:
    branch = client.branches.create(
        conversation_id=story.id,
        branch_from_message_id=premise.id,
        title=f"{tone.capitalize()} Version"
    )
    client.messages.create(
        conversation_id=story.id,
        branch_id=branch.id,
        content=f"Continue this story with a {tone} tone"
    )
3. Code Review & Debugging
Get multiple perspectives on code quality:
const codeReview = await client.conversations.create({
  title: "API Endpoint Review"
});

// Initial review
const code = `
async function getUserData(userId: string) {
  const user = await db.query('SELECT * FROM users WHERE id = ' + userId);
  return user;
}
`;

const initialReview = await client.messages.create({
  conversationId: codeReview.id,
  content: `Review this code:\n\n${code}`,
  model: "gpt-5"
});

// Branch: Security-focused review
const securityBranch = await client.branches.create({
  conversationId: codeReview.id,
  branchFromMessageId: initialReview.id,
  title: "Security Analysis"
});

await client.messages.create({
  conversationId: codeReview.id,
  branchId: securityBranch.id,
  content: "Focus specifically on security vulnerabilities"
});

// Branch: Performance optimization
const perfBranch = await client.branches.create({
  conversationId: codeReview.id,
  branchFromMessageId: initialReview.id,
  title: "Performance Review"
});

await client.messages.create({
  conversationId: codeReview.id,
  branchId: perfBranch.id,
  content: "Suggest performance optimizations"
});
Getting Started
1. Sign Up for Free
Get started with 100,000 free tokens per month - no credit card required:
👉 https://chatroutes.com/register
2. Get Your API Key
After signing up, create an API key from your dashboard:
# Add to your environment
export CHATROUTES_API_KEY="your-api-key-here"
3. Install the SDK
# TypeScript/JavaScript
npm install chatroutes
# Python
pip install chatroutes
4. Explore the Documentation
- TypeScript SDK: github.com/chatroutes/chatroutes-ts
- Python SDK: github.com/chatroutes/chatroutes-python
- API Reference: docs.chatroutes.com
- Examples: github.com/chatroutes/examples
Pricing
ChatRoutes offers flexible pricing to match your needs:
| Plan | Tokens/Month | API Keys | Price |
|------|--------------|----------|-------|
| Free | 100,000 | 1 | $0 |
| Starter | 500,000 | 5 | $29/mo |
| Pro | 5,000,000 | 20 | $99/mo |
| Enterprise | Custom | Unlimited | Contact us |
All plans include:
- ✅ Full branching capabilities
- ✅ All AI models (GPT-5, Claude Opus 4.1, Claude Sonnet 4)
- ✅ Streaming responses
- ✅ Real-time analytics
- ✅ 99.9% uptime SLA
What's Next?
The ChatRoutes SDK is just the beginning. We're working on:
- 🔗 Webhook support for async conversation events
- 📊 Advanced analytics for conversation insights
- 🌐 More AI models including specialized models
- 🔐 Enterprise SSO and team collaboration features
- 🎨 Pre-built UI components for React and Vue
Join the Community
We're building ChatRoutes in the open and would love your feedback:
- GitHub: Star our repos and contribute
- Discord: Join our developer community
- Twitter: Follow @chatroutes for updates
- Blog: Stay tuned for tutorials and best practices
Start Building Today
The best way to explore multiple AI solutions isn't to choose one model or one approach - it's to try them all, branch your conversations, and pick the best path forward.
With the ChatRoutes SDK, you can do exactly that in just a few lines of code.
Get started now: chatroutes.com/register
Happy branching! 🌳
Have questions? Reach out to us at support@chatroutes.com or check out our documentation.