Getting Started
Integrate RyanChat's powerful multi-LLM API into your application in minutes.
Quick Start
1. Get Your API Key
Sign up for a developer account and create an API key in your dashboard.
2. Make Your First Request
Use your API key to make requests to the chat completions endpoint.
curl https://dev.ryanchat.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.1",
    "messages": [
      {
        "role": "user",
        "content": "Hello! How are you?"
      }
    ]
  }'
# For vision-enabled models with uploaded images:
curl https://dev.ryanchat.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.1",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://dev.ryanchat.com/v1/files/550e8400-e29b-41d4-a716-446655440000.jpg"
            }
          }
        ]
      }
    ]
  }'
3. Handle the Response
The API returns a standardized response format compatible with OpenAI's API.
{
  "id": "chatcmpl-1234567890",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5.1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 20,
    "total_tokens": 33
  }
}
API Endpoints
/v1/chat/completions
Chat Completions
Generate chat completions using any supported model.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | The model to use for completion (e.g., "gpt-5.1", "claude-sonnet-4-5", "gemini-2.5-pro") |
| messages | array | Required | A list of messages comprising the conversation. For vision-enabled models, include image URLs from file uploads. |
| max_tokens | integer | Optional | Maximum number of tokens to generate |
| temperature | number | Optional | Controls randomness in the response (0.0 to 2.0) |
| response_format | object | Optional | Specifies the response format for structured output |
Example Response
{
  "id": "chatcmpl-1234567890",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5.1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 20,
    "total_tokens": 33
  }
}
Using Files with Vision Models
Vision-enabled models like GPT-5.1, Claude Sonnet 4.5, and Gemini 2.5 Pro can analyze uploaded images. Use the file upload endpoint to upload images, then reference them in your messages using the returned file URL. Some models also support document analysis for PDFs and other file types.
{
  "model": "gpt-5.1",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What's in this image?"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://dev.ryanchat.com/v1/files/550e8400-e29b-41d4-a716-446655440000.jpg"
          }
        }
      ]
    }
  ]
}
/v1/files/upload
Upload Files
Upload files for AI processing. Files can be used with vision-enabled models or document analysis.
| Parameter | Type | Required | Description |
|---|---|---|---|
| files | file[] | Required | One or more files to upload. Supported formats: Images (JPEG, PNG, GIF, WebP), Documents (PDF, DOC, DOCX, XLS, XLSX, TXT), and other common file types. |
Example Request
curl -X POST https://dev.ryanchat.com/v1/files/upload \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "files=@document.pdf" \
  -F "files=@image.jpg"
Example Response
{
  "success": true,
  "files": [
    {
      "id": "550e8400-e29b-41d4-a716-446655440000",
      "filename": "550e8400-e29b-41d4-a716-446655440000.pdf",
      "originalName": "document.pdf",
      "mimetype": "application/pdf",
      "size": 245760,
      "url": "/api/v1/files/550e8400-e29b-41d4-a716-446655440000.pdf",
      "uploadDate": "2025-11-27T04:30:00.000Z"
    },
    {
      "id": "550e8400-e29b-41d4-a716-446655440001",
      "filename": "550e8400-e29b-41d4-a716-446655440001.jpg",
      "originalName": "image.jpg",
      "mimetype": "image/jpeg",
      "size": 512000,
      "url": "/api/v1/files/550e8400-e29b-41d4-a716-446655440001.jpg",
      "uploadDate": "2025-11-27T04:30:00.000Z"
    }
  ],
  "message": "Successfully uploaded 2 file(s)"
}
/v1/files/:filename
Serve File
Retrieve an uploaded file by its filename.
| Parameter | Type | Required | Description |
|---|---|---|---|
| filename | string | Required | The filename returned from the upload endpoint (UUID-based filename). |
Example Request
curl https://dev.ryanchat.com/v1/files/550e8400-e29b-41d4-a716-446655440000.pdf \
  -H "Authorization: Bearer YOUR_API_KEY"
Response
Returns the file content with appropriate Content-Type headers. Files are cached for 1 year.
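End to end, the upload-then-analyze flow from the sections above can be sketched in Python with requests (the helper names are illustrative; the multipart field name "files" and the response shapes follow the examples above):

```python
import requests

API_BASE = "https://dev.ryanchat.com/v1"

def build_vision_message(file_url: str, prompt: str) -> dict:
    """Pair a text prompt with an uploaded image URL in one user message."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": file_url}},
        ],
    }

def upload_and_describe(api_key: str, image_path: str) -> str:
    """Upload an image, then ask a vision-enabled model about it."""
    headers = {"Authorization": f"Bearer {api_key}"}
    # Step 1: upload via multipart/form-data (field name "files")
    with open(image_path, "rb") as fh:
        upload = requests.post(f"{API_BASE}/files/upload",
                               headers=headers, files=[("files", fh)])
    upload.raise_for_status()
    filename = upload.json()["files"][0]["filename"]
    # Step 2: reference the served file URL in a chat completion
    resp = requests.post(
        f"{API_BASE}/chat/completions",
        headers=headers,
        json={
            "model": "gpt-5.1",
            "messages": [build_vision_message(f"{API_BASE}/files/{filename}",
                                              "What is in this image?")],
        },
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```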
/v1/chat/completions (Structured Output)
Chat Completions with Structured Output
Generate chat completions with guaranteed JSON response format using JSON Schema.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | The model to use for completion |
| messages | array | Required | A list of messages comprising the conversation |
| response_format | object | Required | Defines the structure of the response |
Structured Output Example
Use JSON Schema to define the exact structure you need for the response. The AI will strictly follow the schema you provide.
{
  "model": "gpt-5.1",
  "messages": [
    {
      "role": "user",
      "content": "Extract information about a person from this text: John Smith is 35 years old and works as a software engineer in San Francisco."
    }
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "person_info",
      "schema": {
        "type": "object",
        "properties": {
          "name": {
            "type": "string",
            "description": "The person's full name"
          },
          "age": {
            "type": "integer",
            "description": "The person's age in years"
          },
          "occupation": {
            "type": "string",
            "description": "The person's job title"
          },
          "location": {
            "type": "string",
            "description": "The city where the person lives"
          }
        },
        "required": ["name", "age", "occupation", "location"],
        "additionalProperties": false
      },
      "strict": true
    }
  }
}
Guaranteed Response Format
{
  "id": "chatcmpl-structured-1234567890",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5.1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "{\"name\": \"John Smith\", \"age\": 35, \"occupation\": \"software engineer\", \"location\": \"San Francisco\"}"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 45,
    "completion_tokens": 25,
    "total_tokens": 70
  }
}
The content field contains valid JSON that strictly conforms to your schema. Parse it directly in your application without additional validation.
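For example, the assistant message above can be parsed directly with the standard library (the response dict below is abridged to the fields used):

```python
import json

# Abridged structured-output response, matching the example above
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "{\"name\": \"John Smith\", \"age\": 35, "
                           "\"occupation\": \"software engineer\", "
                           "\"location\": \"San Francisco\"}",
            }
        }
    ]
}

# strict json_schema mode guarantees the content parses and matches the schema
person = json.loads(response["choices"][0]["message"]["content"])
print(person["name"], person["age"])
```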
/v1/chat/completions (JSON Mode)
Chat Completions with JSON Mode
Generate chat completions that respond with valid JSON (no schema required).
Simple JSON Response
{
  "model": "gpt-5.1",
  "messages": [
    {
      "role": "user",
      "content": "Return a JSON object with current weather information for San Francisco"
    }
  ],
  "response_format": {
    "type": "json_object"
  }
}
Use "type": "json_object" for simple JSON responses without a specific schema. The AI will respond with valid JSON but without structural guarantees.
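In Python, a JSON-mode call and parse can be sketched with requests as follows (the helper names are illustrative; the request and response shapes follow the examples above):

```python
import json

import requests

API_BASE = "https://dev.ryanchat.com/v1"

def json_mode_payload(model: str, prompt: str) -> dict:
    """Build a chat completion request that asks for a JSON object response."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},
    }

def ask_json(api_key: str, prompt: str, model: str = "gpt-5.1") -> dict:
    resp = requests.post(
        f"{API_BASE}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=json_mode_payload(model, prompt),
    )
    resp.raise_for_status()
    # JSON mode guarantees valid JSON but not a particular structure,
    # so validate the keys you rely on after parsing.
    return json.loads(resp.json()["choices"][0]["message"]["content"])
```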
/v1/models
List Models
Retrieve a list of all available models.
Example Response
{
  "models": [
    {
      "id": "gpt-5.1",
      "name": "GPT-5.1",
      "provider": "openai",
      "description": "Most advanced OpenAI model with enhanced reasoning",
      "capabilities": ["text", "code", "reasoning", "vision"],
      "contextLength": "200000",
      "available": true
    }
  ],
  "providers": {
    "openai": true,
    "anthropic": true,
    "google": false,
    "deepseek": true,
    "kimi": false,
    "grok": true
  },
  "defaultModelId": "gpt-5.1"
}
/v1/usage
Usage Statistics
Get your API usage statistics and billing information.
| Parameter | Type | Required | Description |
|---|---|---|---|
| period | string | Optional | Time window such as "7d", "30d", "90d", "1y" (default: "30d"). |
Example Response
{
  "totalConversations": 42,
  "totalMessages": 310,
  "userMessages": 150,
  "assistantMessages": 160,
  "messagesToday": 12,
  "tokenUsage": 123456,
  "inputTokens": 0,
  "outputTokens": 0,
  "costSavings": 27,
  "modelBreakdown": [
    { "model": "gpt-5.1", "count": 200, "percentage": 65 },
    { "model": "claude-sonnet-4-5", "count": 110, "percentage": 35 }
  ]
}
/v1/providers
Supported Providers
Dynamically list all LLM providers currently enabled for your account.
Example Response
{
  "providers": {
    "openai": true,
    "anthropic": true,
    "google": false,
    "deepseek": true,
    "kimi": false,
    "grok": true
  }
}
/v1/providers/{provider}/models
Provider Models
Retrieve only the models for a specific provider (e.g. openai, anthropic, deepseek).
| Parameter | Type | Required | Description |
|---|---|---|---|
| provider | string | Required | Path parameter. One of "openai", "anthropic", "google", "deepseek", "kimi", "grok". |
Example Response
{
  "provider": "openai",
  "count": 3,
  "models": [
    { "id": "gpt-5.1", "name": "GPT-5.1", "provider": "openai" },
    { "id": "gpt-5", "name": "GPT-5", "provider": "openai" },
    { "id": "gpt-4.1", "name": "GPT-4.1", "provider": "openai" }
  ]
}
/v1/models/recommendation
Smart Model Recommendation
Let RyanChat select the best model for you based on cost, speed, or a balanced strategy.
| Parameter | Type | Required | Description |
|---|---|---|---|
| strategy | string | Optional | One of "cheapest", "fastest", or "balanced" (default: "balanced"). |
| provider | string | Optional | Provider to constrain the recommendation (e.g. "openai"). |
Example Response
{
  "strategy": "cheapest",
  "provider": "deepseek",
  "model": {
    "id": "deepseek-chat",
    "name": "DeepSeek Chat",
    "provider": "deepseek",
    "tier": "budget",
    "customPricing": {
      "inputCost": 0.15,
      "outputCost": 0.15,
      "currency": "USD"
    },
    "capabilities": ["text", "code", "reasoning"],
    "maxContextLength": 16000
  }
}
Supported Models
Access the most advanced AI models from leading providers through a single API. We continuously add new models and can enable additional ones upon request.
Supported Chat Models
OpenAI Models
GPT-5.1 (Latest): Most advanced OpenAI model with enhanced reasoning
GPT-5 (Popular): Next-generation OpenAI model with advanced capabilities
GPT-4.1 (Reliable): Proven OpenAI model with excellent reasoning
O1 Models (Reasoning): Specialized models for complex reasoning tasks
Supported Image Generation Models
Image models are not currently enabled on this environment. Contact support if you need image generation capabilities enabled for your account.
Need a Specific Model?
Don't see the model you need? We can enable additional models from any supported provider upon request. Contact us to add new models to your account.
Rate Limits & Billing
Rate Limits
- 10,000 requests per minute
- 1,000,000 requests per day
- 10,000,000 tokens per day
- Unlimited for Enterprise
Higher limits available upon request for enterprise customers.
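If you do hit these limits, a common client-side pattern is exponential backoff with jitter. A minimal sketch, assuming the API signals rate limiting with HTTP 429 (an assumption, not confirmed by this page):

```python
import random
import time

def backoff_delays(max_retries: int = 5, base: float = 0.5, cap: float = 30.0):
    """Yield exponentially growing, jittered sleep intervals."""
    for attempt in range(max_retries):
        yield min(cap, base * 2 ** attempt) * (0.5 + random.random() / 2)

def post_with_retry(do_post, max_retries: int = 5):
    """Retry do_post() while it reports a rate limit.

    do_post is a zero-argument callable returning a response object with a
    status_code attribute (e.g. a functools.partial around requests.post).
    If retries are exhausted, the last response is returned as-is.
    """
    resp = do_post()
    for delay in backoff_delays(max_retries):
        if resp.status_code != 429:  # assumes 429 Too Many Requests on rate limit
            break
        time.sleep(delay)
        resp = do_post()
    return resp
```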
Billing & Pricing
- Flat 20% margin over underlying LLM provider costs. Example: if the upstream model costs $1.00 per 1M tokens, RyanChat bills $1.20 per 1M tokens.
- Prepaid balance system
- Pay only for tokens used
- No hidden fees
Volume Discounts: For high-volume or enterprise usage, we can negotiate custom margins and limits. Contact us for enterprise pricing.
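The flat-margin pricing above reduces to a one-line calculation (function name is illustrative):

```python
def billed_usd(upstream_per_million_usd: float, tokens: int,
               margin: float = 0.20) -> float:
    """Cost in USD for a token count at the flat 20% margin described above."""
    return upstream_per_million_usd * (1 + margin) * tokens / 1_000_000

# The documented example: $1.00 per 1M tokens upstream -> $1.20 per 1M billed
print(round(billed_usd(1.00, 1_000_000), 2))
```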
Real-time Usage Monitoring
Comprehensive dashboards, real-time alerts, and detailed usage analytics to help you optimize your AI spending and performance.
SDKs & Libraries
Python (REST Example)
Use the RyanChat REST API directly from Python today using requests or httpx. Official SDKs will be added here once released.
import requests

resp = requests.post(
    "https://dev.ryanchat.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "gpt-5.1",
        "messages": [{"role": "user", "content": "Hello from Python!"}],
    },
)
print(resp.json())
Node.js (Axios / Fetch)
Call the RyanChat API from Node.js using axios, fetch, or any HTTP client. Official Node.js SDK is planned but not yet released.
import axios from "axios";

const resp = await axios.post(
  "https://dev.ryanchat.com/v1/chat/completions",
  {
    model: "gpt-5.1",
    messages: [{ role: "user", content: "Hello from Node!" }],
  },
  {
    headers: { Authorization: "Bearer YOUR_API_KEY" },
  }
);
console.log(resp.data);
REST API
Direct HTTP API access with comprehensive documentation, multiple authentication methods, and works with any programming language.
curl -X POST https://api.ryanchat.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.1", "messages": [{"role": "user", "content": "Hello!"}]}'
More SDKs Coming Soon
Advanced Multi-Provider Patterns
Leverage RyanChat's multi-LLM infrastructure for advanced use cases that go beyond simple completions. These patterns combine multiple providers and models to achieve superior results.
Hallucination Killer
Run the same prompt through multiple models from different providers (e.g., GPT-5, Claude Opus, Gemini Pro), then compare and validate their responses. Identify inconsistencies and hallucinations by cross-referencing outputs. Perfect for fact-checking, research, and high-stakes decisions.
Consensus GPT
Query 3–5 different models in parallel, aggregate their responses, and synthesize a consensus answer. Use voting, weighted averaging, or a final "referee" model to combine insights. Reduces bias from any single provider and produces more robust, well-rounded responses.
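A simple voting-based consensus can be sketched client-side (the consensus helper is illustrative, not a RyanChat API; answers are normalized to lowercase before voting):

```python
from collections import Counter

def consensus(call, models, prompt):
    """Query each model and return the majority answer plus the full tally.

    call(model, prompt) -> str is whatever wrapper you use around
    /v1/chat/completions.
    """
    answers = [call(model, prompt) for model in models]
    tally = Counter(answer.strip().lower() for answer in answers)
    winner, _count = tally.most_common(1)[0]
    return winner, tally
```

Exact-string voting suits short factual answers; for free-form text, a final "referee" model comparing the candidate responses works better.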
Smart Failover
Automatically switch to a backup model if your primary choice is unavailable, rate-limited, or returns an error. Define fallback chains (e.g., GPT-5 → Claude Opus → Gemini Pro) to ensure 99.9% uptime. RyanChat handles retries and model selection transparently.
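A client-side fallback chain can be sketched as follows (helper name and model IDs are illustrative; the source says RyanChat also handles retries transparently, so this is only needed if you want explicit control):

```python
def complete_with_failover(call, fallback_chain):
    """Try each model in order until one succeeds; re-raise the last error.

    call(model) performs the completion (e.g. a requests.post wrapper) and
    raises on rate limits, outages, or API errors.
    """
    last_error = None
    for model in fallback_chain:
        try:
            return model, call(model)
        except Exception as exc:  # rate limit, outage, provider error, ...
            last_error = exc
    raise last_error

# Usage: complete_with_failover(my_call, ["gpt-5.1", "claude-sonnet-4-5", "gemini-2.5-pro"])
```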
Cost-Aware Routing
Use the /v1/models/recommendation?strategy=cheapest endpoint to automatically route requests to the most cost-effective model that meets your requirements. Balance quality and price by setting constraints (e.g., "cheapest model with 100k+ context").
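Wiring the recommendation endpoint into routing code is straightforward (helper names are illustrative; the response shape follows the Smart Model Recommendation example above):

```python
import requests

API_BASE = "https://dev.ryanchat.com/v1"

def extract_model_id(recommendation: dict) -> str:
    """Pull the routable model ID out of a recommendation response."""
    return recommendation["model"]["id"]

def cheapest_model(api_key: str) -> str:
    """Ask RyanChat which model is currently cheapest and return its ID."""
    resp = requests.get(
        f"{API_BASE}/models/recommendation",
        params={"strategy": "cheapest"},
        headers={"Authorization": f"Bearer {api_key}"},
    )
    resp.raise_for_status()
    return extract_model_id(resp.json())
```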
Hybrid Flows (Multi-Stage Processing)
Chain multiple models together for complex workflows. For example: use a fast, cheap model (DeepSeek) for initial drafts, then refine with a professional model (GPT-5 or Claude Opus) for final polish. Or use a reasoning-focused model (O1) for planning, then a code-specialized model (Codex) for implementation.
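The draft-then-polish flow can be sketched as a small helper (function name, default model IDs, and the polish prompt are all illustrative):

```python
def hybrid_complete(call, prompt, draft_model="deepseek-chat",
                    polish_model="gpt-5.1"):
    """Draft with a cheap model, then refine with a stronger one.

    call(model, prompt) -> str is your completion wrapper around
    /v1/chat/completions.
    """
    draft = call(draft_model, prompt)
    return call(polish_model, f"Improve this draft, keeping its meaning:\n\n{draft}")
```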
Ready to Build Advanced AI Workflows?
Start experimenting with multi-provider patterns today. No minimum commitment, pay only for what you use.