Endpoint: POST /chat/completions
Use this endpoint for most model API integrations. Send a model ID and an array of messages; YouRouter returns an OpenAI-compatible completion response. For models that support vision, this endpoint can also accept image inputs in messages[].content.
Request
curl https://api.yourouter.ai/v1/chat/completions \
-H "Authorization: Bearer $YOUROUTER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": "Reply with exactly: connected"
}
]
}'
OpenAI SDK
import os
from openai import OpenAI
client = OpenAI(
api_key=os.environ["YOUROUTER_API_KEY"],
base_url="https://api.yourouter.ai/v1",
)
completion = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Reply with exactly: connected"}],
)
print(completion.choices[0].message.content)
| Header | Required | Description |
|---|---|---|
| Authorization | Yes | Bearer <YOUROUTER_API_KEY> |
| Content-Type | Yes | Use application/json. |
| vendor | No | Use auto or omit for automatic routing. Use values like openai, anthropic, or google to pin a provider. |
Body Parameters

| Parameter | Description |
|---|---|
| model | The model ID to call, such as gpt-4o, claude-sonnet-4-20250514, or gemini-2.5-flash. |
| messages | Ordered conversation messages. Each message includes a role and content. |
| stream | If true, the response is returned as server-sent event chunks. |
| temperature | Controls randomness when supported by the selected model. |
| max_tokens | Limits the number of output tokens when supported by the selected model. |
| tools | Tool definitions for models that support function calling or tool use. |
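As a minimal sketch, the optional parameters slot into the request body alongside model and messages. The values below are illustrative, not recommendations:

```python
import json

# Build a Chat Completions request body with optional parameters.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "Summarize model routing in one sentence."}
    ],
    "temperature": 0.2,  # lower values make output more deterministic
    "max_tokens": 128,   # cap on generated tokens, when the model supports it
}

body = json.dumps(payload)
print(body)
```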
For vision models, send content as an array of blocks. Use a text block for the instruction and an image_url block for the image.
curl https://api.yourouter.ai/v1/chat/completions \
-H "Authorization: Bearer $YOUROUTER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image in one sentence."
},
{
"type": "image_url",
"image_url": {
"url": "https://example.com/image.jpg"
}
}
]
}
]
}'
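The same multimodal message can be assembled in Python before serializing the request; this sketch mirrors the curl payload above:

```python
import json

# A user message whose content is an array of blocks: one text
# instruction plus one image_url block.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image in one sentence."},
        {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
    ],
}

payload = {"model": "gpt-4o", "messages": [message]}
print(json.dumps(payload, indent=2))
```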
For private images, use a base64 data URL:
{
"type": "image_url",
"image_url": {
"url": "data:image/jpeg;base64,<BASE64_IMAGE>"
}
}
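Constructing the data URL from local image bytes is straightforward; in this sketch, image_bytes stands in for the contents of a real JPEG file:

```python
import base64

# Encode raw image bytes and wrap them in a data URL for the
# image_url block.
image_bytes = b"\xff\xd8\xff\xe0 fake-jpeg-bytes"
b64 = base64.b64encode(image_bytes).decode("ascii")
block = {
    "type": "image_url",
    "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
}
```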
If the target model and upstream provider support OpenAI-compatible file content blocks, you can pass a PDF in messages[].content. The file_data value should be the base64-encoded raw PDF bytes.
curl https://api.yourouter.ai/v1/chat/completions \
-H "Authorization: Bearer $YOUROUTER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": [
{
"type": "file",
"file": {
"filename": "document.pdf",
"file_data": "<BASE64_PDF>"
}
},
{
"type": "text",
"text": "Extract a summary and key conclusions from this PDF."
}
]
}
]
}'
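The file block can likewise be built in Python. Per the description above, file_data carries the base64-encoded raw PDF bytes; pdf_bytes below stands in for a real file's contents:

```python
import base64

# Build the file content block from raw PDF bytes.
pdf_bytes = b"%PDF-1.4 fake-pdf-bytes"
file_block = {
    "type": "file",
    "file": {
        "filename": "document.pdf",
        "file_data": base64.b64encode(pdf_bytes).decode("ascii"),
    },
}
```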
PDF input is not supported by every model. For more examples, including Gemini-native and Claude-native PDF formats, see the Multimodal guide.
Response
Successful responses follow the OpenAI Chat Completions shape.
{
"id": "chatcmpl_example",
"object": "chat.completion",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "connected"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 1,
"total_tokens": 13
}
}
Read the assistant text from:
choices[0].message.content
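A short sketch of pulling the assistant text and token usage out of the sample response above:

```python
import json

# The sample response from this section, as a raw JSON string.
raw = (
    '{"id": "chatcmpl_example", "object": "chat.completion",'
    ' "choices": [{"index": 0, "message": {"role": "assistant",'
    ' "content": "connected"}, "finish_reason": "stop"}],'
    ' "usage": {"prompt_tokens": 12, "completion_tokens": 1,'
    ' "total_tokens": 13}}'
)

resp = json.loads(raw)
text = resp["choices"][0]["message"]["content"]
total_tokens = resp["usage"]["total_tokens"]
print(text)          # connected
print(total_tokens)  # 13
```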
Provider Routing
Automatic routing is the default:
curl https://api.yourouter.ai/v1/chat/completions \
-H "Authorization: Bearer $YOUROUTER_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
Pin a provider with vendor:
curl https://api.yourouter.ai/v1/chat/completions \
-H "Authorization: Bearer $YOUROUTER_API_KEY" \
-H "Content-Type: application/json" \
-H "vendor: openai" \
-d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
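The same pinned request can be prepared in Python; this sketch only constructs the request without sending it:

```python
import json
import urllib.request

# Standard headers plus the vendor header to pin a provider.
headers = {
    "Authorization": "Bearer <YOUROUTER_API_KEY>",
    "Content-Type": "application/json",
    "vendor": "openai",  # omit, or set to "auto", for automatic routing
}
payload = {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}

req = urllib.request.Request(
    "https://api.yourouter.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers=headers,
    method="POST",
)
```

With the OpenAI SDK, the same header can typically be supplied once at client construction via the default_headers argument.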
Streaming
Set stream to true to receive incremental server-sent events.
curl https://api.yourouter.ai/v1/chat/completions \
-H "Authorization: Bearer $YOUROUTER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"stream": true,
"messages": [
{
"role": "user",
"content": "Explain model routing in two sentences."
}
]
}'
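Each event line carries a JSON chunk whose choices[0].delta holds an incremental piece of the reply, and the stream ends with a [DONE] sentinel. A minimal sketch of accumulating the text, using synthetic lines in place of a real HTTP response body:

```python
import json

# Synthetic SSE lines imitating the streamed chunk format.
sse_lines = [
    'data: {"choices":[{"delta":{"content":"Routing "}}]}',
    'data: {"choices":[{"delta":{"content":"works."}}]}',
    "data: [DONE]",
]

text = ""
for line in sse_lines:
    if not line.startswith("data: "):
        continue  # skip comments and blank keep-alive lines
    data = line[len("data: "):]
    if data == "[DONE]":
        break
    chunk = json.loads(data)
    delta = chunk["choices"][0]["delta"]
    text += delta.get("content", "")

print(text)  # Routing works.
```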
Common Errors
| Status | Cause | What to check |
|---|---|---|
| 400 | Invalid request body or unsupported parameter | Confirm model, messages, and JSON formatting. |
| 401 | Missing or invalid API key | Confirm the Authorization header. |
| 429 | Provider rate limit or concurrency limit | Retry with backoff or use automatic routing. |
| 500 | Gateway or upstream provider error | Retry safely and preserve request IDs for support. |
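For the retryable statuses, exponential backoff with jitter is a common pattern. This is a sketch, not a prescribed client; send_request is a hypothetical stand-in for your HTTP call, returning a (status, body) pair:

```python
import random
import time

def with_backoff(send_request, max_attempts=5, base_delay=1.0):
    """Retry on 429/500 with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        status, body = send_request()
        if status not in (429, 500):
            return status, body
        # Double the delay each attempt, cap it, and add jitter.
        time.sleep(min(base_delay * 2 ** attempt, 30.0) * (1 + random.random()))
    return status, body
```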
For model IDs and provider pinning examples, see Models and the Router guide.