Endpoint: POST /chat/completions

Use this endpoint for most model API integrations. Send a model ID and an array of messages; YouRouter returns an OpenAI-compatible completion response. For models that support vision, this endpoint also accepts image inputs in messages[].content.

Request

curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Reply with exactly: connected"
      }
    ]
  }'

OpenAI SDK

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["YOUROUTER_API_KEY"],
    base_url="https://api.yourouter.ai/v1",
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Reply with exactly: connected"}],
)

print(completion.choices[0].message.content)

Headers

Header         Required  Description
Authorization  Yes       Bearer <YOUROUTER_API_KEY>
Content-Type   Yes       Use application/json.
vendor         No        Use auto or omit for automatic routing. Use values like openai, anthropic, or google to pin a provider.

Body Parameters

model (string, required)
The model ID to call, such as gpt-4o, claude-sonnet-4-20250514, or gemini-2.5-flash.

messages (array, required)
Ordered conversation messages. Each message includes a role and content.

stream (boolean, default: false)
If true, the response is returned as server-sent event chunks.

temperature (number)
Controls randomness when supported by the selected model.

max_tokens (integer)
Limits the number of output tokens when supported by the selected model.

tools (array)
Tool definitions for models that support function calling or tool use.
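As a sketch, a tools array in the OpenAI function-calling format looks like the following. The get_weather function and its parameter schema are illustrative examples, not part of YouRouter.

```python
# A sketch of one tool definition in the OpenAI function-calling format.
# "get_weather" and its parameter schema are hypothetical examples.
def build_weather_tool():
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name, e.g. Berlin"}
                },
                "required": ["city"],
            },
        },
    }

# Pass this list as the tools body parameter of the request.
tools = [build_weather_tool()]
```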

Multimodal Image Input

For vision models, send content as an array of blocks. Use a text block for the instruction and an image_url block for the image.
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://example.com/image.jpg"
            }
          }
        ]
      }
    ]
  }'
For private images, use a base64 data URL:
{
  "type": "image_url",
  "image_url": {
    "url": "data:image/jpeg;base64,<BASE64_IMAGE>"
  }
}
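The data URL can be built from a local file with the standard library. This is a minimal sketch; the helper names are illustrative:

```python
import base64
from pathlib import Path

def image_data_url(path, mime="image/jpeg"):
    """Base64-encode a local image file into a data URL."""
    encoded = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

def image_block(path, mime="image/jpeg"):
    """Wrap the data URL in an image_url content block."""
    return {"type": "image_url", "image_url": {"url": image_data_url(path, mime)}}
```

The returned block slots directly into the content array shown above.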

PDF File Input

If the target model and upstream provider support OpenAI-compatible file content blocks, you can pass a PDF in messages[].content. The file_data value should be the base64-encoded raw PDF bytes.
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "file",
            "file": {
              "filename": "document.pdf",
              "file_data": "<BASE64_PDF>"
            }
          },
          {
            "type": "text",
            "text": "Extract a summary and key conclusions from this PDF."
          }
        ]
      }
    ]
  }'
PDF input is not supported by every model. For more examples, including Gemini-native and Claude-native PDF formats, see the provider-native examples in the Multimodal guide.
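The content array above can be built in Python with the standard library. A minimal sketch, with an illustrative helper name:

```python
import base64

def pdf_content_blocks(pdf_bytes, filename, instruction):
    """Build a content array: a file block (base64 PDF) plus a text block.

    pdf_bytes are the raw PDF bytes; they are base64-encoded into file_data.
    """
    return [
        {
            "type": "file",
            "file": {
                "filename": filename,
                "file_data": base64.b64encode(pdf_bytes).decode("ascii"),
            },
        },
        {"type": "text", "text": instruction},
    ]
```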

Response

Successful responses follow the OpenAI Chat Completions shape.
{
  "id": "chatcmpl_example",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "connected"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 1,
    "total_tokens": 13
  }
}
Read the assistant text from:
choices[0].message.content

Provider Routing

Automatic routing is the default:
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
Pin a provider with vendor:
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -H "vendor: openai" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
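The same pinned request can be assembled in Python. This sketch only builds the request pieces; the pinned_request helper name is illustrative, and the result can be POSTed to the endpoint with any HTTP client:

```python
import os

def pinned_request(vendor, model, user_text):
    """Headers and JSON body for a provider-pinned chat completion request.

    Illustrative helper: POST the body with these headers to
    https://api.yourouter.ai/v1/chat/completions.
    """
    headers = {
        "Authorization": "Bearer " + os.environ.get("YOUROUTER_API_KEY", ""),
        "Content-Type": "application/json",
        "vendor": vendor,  # "openai", "anthropic", "google", or "auto"
    }
    body = {"model": model, "messages": [{"role": "user", "content": user_text}]}
    return headers, body
```

If you use the OpenAI Python SDK, recent versions also accept a per-request extra_headers argument, so passing extra_headers={"vendor": "openai"} to client.chat.completions.create(...) achieves the same pinning.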

Streaming

Set stream to true to receive incremental server-sent events.
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "stream": true,
    "messages": [
      {
        "role": "user",
        "content": "Explain model routing in two sentences."
      }
    ]
  }'
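Each streamed event carries a chat.completion.chunk payload whose choices[0].delta.content holds a text fragment; concatenating the fragments reproduces the full reply. A minimal sketch of that accumulation (the function name is illustrative, and the terminal [DONE] sentinel is assumed to be filtered out by the SSE reader):

```python
def accumulate_stream(chunks):
    """Join delta fragments from parsed chat.completion.chunk payloads.

    chunks is an iterable of dicts in the OpenAI streaming chunk shape.
    Role-only and empty deltas are skipped.
    """
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0].get("delta", {})
        content = delta.get("content")
        if content is not None:
            parts.append(content)
    return "".join(parts)
```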

Common Errors

Status  Cause                                          What to check
400     Invalid request body or unsupported parameter  Confirm model, messages, and JSON formatting.
401     Missing or invalid API key                     Confirm the Authorization header.
429     Provider rate limit or concurrency limit       Retry with backoff or use automatic routing.
500     Gateway or upstream provider error             Retry safely and preserve request IDs for support.
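For 429 and 500 responses, a sketch of retry with exponential backoff and jitter. The names are illustrative; adapt the error inspection to whatever exception type your HTTP client raises:

```python
import random
import time

def call_with_backoff(make_request, max_attempts=5, base_delay=1.0):
    """Retry a request on 429/500 failures with exponential backoff.

    make_request is any zero-argument callable that raises an exception
    carrying a status_code attribute on failure (an assumption for this
    sketch). Other statuses, and the final failed attempt, re-raise.
    """
    for attempt in range(max_attempts):
        try:
            return make_request()
        except Exception as exc:
            status = getattr(exc, "status_code", None)
            if status not in (429, 500) or attempt == max_attempts - 1:
                raise
            # Exponential backoff plus jitter to avoid synchronized retries.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```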
For model IDs and provider pinning examples, see Models and the Router guide.