Use this guide to connect an application to YouRouter, make a test request, and verify the response shape. YouRouter exposes OpenAI-compatible endpoints at https://api.yourouter.ai/v1, so most existing OpenAI SDK integrations only need a base URL and API key change.
Need an API key? Create one in the YouRouter Dashboard.

Integration Basics

| Item | Value |
| --- | --- |
| Base URL | https://api.yourouter.ai/v1 |
| Auth header | Authorization: Bearer <YOUROUTER_API_KEY> |
| Content type | Content-Type: application/json |
| Model field | Send the target model ID in model |
| Multimodal input | Send text and image blocks in messages[].content |
| Default routing | Omit vendor, or send vendor: auto |
| Pinned routing | Send vendor: openai, vendor: anthropic, vendor: google, or another supported provider |

1. Set Your API Key

Store your API key in an environment variable before running the examples below.
export YOUROUTER_API_KEY="your-api-key-here"
Never expose your API key in browser-side code, mobile apps, public repositories, or client logs.
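Before constructing a client, it can help to fail fast with a clear message when the variable is missing. This is an illustrative sketch; require_api_key is not part of any SDK.

```python
import os

def require_api_key(var: str = "YOUROUTER_API_KEY") -> str:
    """Return the API key from the environment, failing fast if it is unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set. Export it before running the examples, "
            f'e.g. export {var}="your-api-key-here".'
        )
    return key
```

Calling this once at startup turns a confusing 401 later into an immediate, actionable error.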

2. Make a Test Request

The fastest integration smoke test is a direct HTTP request to the Chat Completions endpoint. A successful response includes choices[0].message.content. The example below uses cURL; any standard HTTP client works the same way, and step 3 shows the equivalent SDK-based call.
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Reply with exactly: connected"
      }
    ]
  }'
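To verify the response shape programmatically, you can pull the generated text out of the JSON body. extract_reply below is an illustrative helper, not part of any SDK; it reads the same choices[0].message.content path the cURL test returns.

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant text out of an OpenAI-style chat completion body."""
    choices = response.get("choices") or []
    if not choices:
        raise ValueError(f"no choices in response: {response!r}")
    return choices[0]["message"]["content"]

# A successful smoke test returns a body shaped like this:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "connected"}}
    ]
}
print(extract_reply(sample))  # connected
```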

3. Migrate Existing OpenAI SDK Code

If your app already uses the OpenAI SDK, keep your request body shape and update two fields: api_key and base_url.
pip install openai
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["YOUROUTER_API_KEY"],
    base_url="https://api.yourouter.ai/v1",
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Reply with exactly: connected",
        }
    ],
)

print(completion.choices[0].message.content)
For Go, Java, PHP, and Rust, use the same custom base URL pattern when your OpenAI-compatible SDK supports it. If the SDK does not expose a base URL option, call the /v1/chat/completions endpoint directly with the HTTP examples above.

4. Choose a Model and Route

Choose a model by setting the model field. You can call many model families through the same API shape, such as OpenAI GPT models, Claude, Gemini, DeepSeek, Grok, Doubao, and Kimi. For vision and other multimodal inputs, see the Multimodal guide.
If an example model is not enabled for your account, replace it with any available model ID from the YouRouter Dashboard.
By default, YouRouter uses automatic routing: omit the vendor header, or set vendor: auto, and YouRouter chooses the best available provider for the requested model. Pin a request to a provider only when your integration depends on a specific upstream behavior, model variant, account, or compliance path.
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -H "vendor: openai" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello from a pinned provider."}]
  }'
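The same routing choice can be expressed in code. routing_headers below is an illustrative helper (not an SDK function) that returns no header for automatic routing and a vendor header for a pinned provider; with the OpenAI Python SDK, the resulting dict can be passed per request via the extra_headers parameter of client.chat.completions.create.

```python
def routing_headers(vendor: str = "auto") -> dict:
    """Build the optional vendor routing header.

    With "auto" (the default) no header is sent, so YouRouter picks the
    provider; any other value pins the request to that provider.
    """
    return {} if vendor == "auto" else {"vendor": vendor}

print(routing_headers())          # {}
print(routing_headers("openai"))  # {'vendor': 'openai'}
```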
See Models for model API examples and the Router guide for provider IDs, automatic failover behavior, and production recommendations.

5. Handle Responses and Errors

For OpenAI-compatible endpoints, successful responses follow the OpenAI response format. Read generated text from choices[0].message.content.
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "connected"
      }
    }
  ]
}
Common integration status codes:
| Status | Meaning | Integration action |
| --- | --- | --- |
| 200 | Request succeeded | Parse the response body |
| 401 | Missing or invalid API key | Check the Authorization header |
| 429 | Upstream rate limit | Retry with backoff or adjust routing |
| 500 | Provider or gateway error | Retry safely and log the request ID |
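For 429 and 500 responses, the retry-with-backoff advice can be sketched as a small wrapper. This is a minimal sketch, assuming a requests-style response object with a status_code attribute; call_with_backoff is illustrative, not a YouRouter API.

```python
import random
import time

def call_with_backoff(send, max_attempts: int = 4, base_delay: float = 0.5):
    """Retry send() on 429/500, sleeping base_delay * 2**attempt plus jitter.

    `send` is any zero-argument callable returning an object with a
    `status_code` attribute. Other statuses are returned immediately.
    """
    for attempt in range(max_attempts):
        response = send()
        if response.status_code not in (429, 500):
            return response
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return response  # exhausted retries; caller logs the request ID
```

Jitter spreads out retries from concurrent clients so they do not hammer the upstream provider in lockstep.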

6. Stream Responses

For chat UIs and agents, set stream to true to receive incremental chunks as the model generates text.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain automatic routing in two sentences."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
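The loop above can also accumulate the full reply. collect_stream below is an illustrative helper that models the SDK's chunk.choices[0].delta.content shape with plain dicts, so the assembly logic is easy to see and test.

```python
def collect_stream(chunks) -> str:
    """Join streamed delta fragments into the complete reply text."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)

fake_chunks = [
    {"choices": [{"delta": {"content": "Routing "}}]},
    {"choices": [{"delta": {"content": "works."}}]},
    {"choices": [{"delta": {}}]},  # the final chunk often carries no content
]
print(collect_stream(fake_chunks))  # Routing works.
```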
See Create Chat Completion for the full request shape.

Next Steps

API Reference: Review endpoints, parameters, and response formats.
Models: See how to pass model IDs and switch models through the API.
Multimodal: Send image inputs and call provider-native multimodal APIs.
Router Guide: Learn when to use automatic routing or pin a provider.
Chat Completions: Build conversations, streaming UIs, tools, and multimodal flows.