Use this guide to connect an application to YouRouter, make a test request, and verify the response shape. YouRouter exposes OpenAI-compatible endpoints at https://api.yourouter.ai/v1, so most existing OpenAI SDK integrations only need a base URL and API key change.
Integration Basics
Item              Value
Base URL          https://api.yourouter.ai/v1
Auth header       Authorization: Bearer <YOUROUTER_API_KEY>
Content type      Content-Type: application/json
Model field       Send the target model ID in model
Multimodal input  Send text and image blocks in messages[].content
Default routing   Omit vendor, or send vendor: auto
Pinned routing    Send vendor: openai, vendor: anthropic, vendor: google, or another supported provider
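Taken together, these items describe one HTTP request. A minimal sketch in Python of how they fit together (the helper name is an illustration, not part of any YouRouter SDK; the vendor header is optional and omitted for automatic routing):

```python
import os

def build_request(model, messages, vendor=None):
    """Return (url, headers, body) for a YouRouter chat completion call."""
    headers = {
        "Authorization": f"Bearer {os.environ['YOUROUTER_API_KEY']}",
        "Content-Type": "application/json",
    }
    if vendor is not None:          # omit for automatic routing
        headers["vendor"] = vendor  # e.g. "openai", "anthropic", "google"
    body = {"model": model, "messages": messages}
    return "https://api.yourouter.ai/v1/chat/completions", headers, body
```

The same tuple works for any HTTP client; the sections below show the equivalent request in several languages.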
1. Set Your API Key
Store your API key in an environment variable before running the examples below.
# macOS / Linux
export YOUROUTER_API_KEY="your-api-key-here"

# Windows (cmd)
set YOUROUTER_API_KEY=your-api-key-here

# .env file
YOUROUTER_API_KEY=your-api-key-here
Never expose your API key in browser-side code, mobile apps, public repositories, or client logs.
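To catch configuration mistakes early, application code can fail fast when the variable is unset instead of sending a request that will come back 401. A small sketch (the helper is illustrative, not part of any YouRouter SDK):

```python
import os

def require_api_key(name="YOUROUTER_API_KEY"):
    """Read the API key from the environment, failing loudly if unset."""
    key = os.environ.get(name, "").strip()
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it before running these examples"
        )
    return key
```

Reading the key at startup, rather than embedding it in source, also keeps it out of version control and client-side bundles.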
2. Make a Test Request
The fastest integration smoke test is a direct HTTP request to the Chat Completions endpoint. A successful response includes choices[0].message.content. Use cURL or any standard HTTP client; the examples below cover Python, Node.js, Go, Java, PHP, and Rust.
cURL
Python requests
Node.js 18+ fetch
Go
Java 11+
PHP
Rust
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Reply with exactly: connected"
      }
    ]
  }'
import os

import requests

response = requests.post(
    "https://api.yourouter.ai/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['YOUROUTER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": "Reply with exactly: connected",
            }
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
// Save as quickstart.mjs and run with Node.js 18+.
const response = await fetch('https://api.yourouter.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.YOUROUTER_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        content: 'Reply with exactly: connected',
      },
    ],
  }),
});

if (!response.ok) {
  throw new Error(await response.text());
}

const data = await response.json();
console.log(data.choices[0].message.content);
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model": "gpt-4o",
		"messages": []map[string]string{
			{"role": "user", "content": "Reply with exactly: connected"},
		},
	})
	req, err := http.NewRequest(
		"POST",
		"https://api.yourouter.ai/v1/chat/completions",
		bytes.NewReader(payload),
	)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("YOUROUTER_API_KEY"))
	req.Header.Set("Content-Type", "application/json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	if res.StatusCode >= 400 {
		body, _ := io.ReadAll(res.Body)
		panic(fmt.Sprintf("request failed: %s\n%s", res.Status, body))
	}
	var result struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println(result.Choices[0].Message.Content)
}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Quickstart {
    public static void main(String[] args) throws Exception {
        String body = String.join("\n",
            "{",
            "  \"model\": \"gpt-4o\",",
            "  \"messages\": [",
            "    {",
            "      \"role\": \"user\",",
            "      \"content\": \"Reply with exactly: connected\"",
            "    }",
            "  ]",
            "}"
        );
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.yourouter.ai/v1/chat/completions"))
            .header("Authorization", "Bearer " + System.getenv("YOUROUTER_API_KEY"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient().send(
            request,
            HttpResponse.BodyHandlers.ofString()
        );
        if (response.statusCode() >= 400) {
            throw new RuntimeException(response.body());
        }
        System.out.println(response.body());
    }
}
<?php
$apiKey = getenv('YOUROUTER_API_KEY');
$payload = json_encode([
    'model' => 'gpt-4o',
    'messages' => [
        [
            'role' => 'user',
            'content' => 'Reply with exactly: connected',
        ],
    ],
]);
$ch = curl_init('https://api.yourouter.ai/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => [
        'Authorization: Bearer ' . $apiKey,
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS => $payload,
]);
$response = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
if ($response === false) {
    throw new RuntimeException(curl_error($ch));
}
curl_close($ch);
if ($status >= 400) {
    throw new RuntimeException($response);
}
$data = json_decode($response, true);
echo $data['choices'][0]['message']['content'] . PHP_EOL;
# Add the dependencies first:
cargo add reqwest --features blocking,json
cargo add serde_json

use reqwest::blocking::Client;
use serde_json::json;
use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = env::var("YOUROUTER_API_KEY")?;
    let response: serde_json::Value = Client::new()
        .post("https://api.yourouter.ai/v1/chat/completions")
        .bearer_auth(api_key)
        .json(&json!({
            "model": "gpt-4o",
            "messages": [
                {
                    "role": "user",
                    "content": "Reply with exactly: connected"
                }
            ]
        }))
        .send()?
        .error_for_status()?
        .json()?;
    if let Some(content) = response["choices"][0]["message"]["content"].as_str() {
        println!("{content}");
    }
    Ok(())
}
3. Migrate Existing OpenAI SDK Code
If your app already uses the OpenAI SDK, keep your request body shape and update two settings: the API key and the base URL.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["YOUROUTER_API_KEY"],
    base_url="https://api.yourouter.ai/v1",
)
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Reply with exactly: connected",
        }
    ],
)
print(completion.choices[0].message.content)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.YOUROUTER_API_KEY,
  baseURL: 'https://api.yourouter.ai/v1',
});

const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: 'Reply with exactly: connected',
    },
  ],
});

console.log(completion.choices[0].message.content);
For Go, Java, PHP, and Rust, use the same custom base URL pattern when your OpenAI-compatible SDK supports it. If the SDK does not expose a base URL option, call the /v1/chat/completions endpoint directly with the HTTP examples above.
4. Choose a Model and Route
Choose a model by setting the model field. You can call many model families through the same API shape, such as OpenAI GPT models, Claude, Gemini, DeepSeek, Grok, Doubao, and Kimi. For vision and other multimodal inputs, see the Multimodal guide.
If an example model is not enabled for your account, replace it with any available model ID from the YouRouter Dashboard.
By default, YouRouter uses automatic routing. Omit the vendor header, or set vendor: auto, when you want YouRouter to choose the best available provider for the requested model.
Pin a request to a provider only when your integration depends on a specific upstream behavior, model variant, account, or compliance path.
cURL
Python SDK
Node.js SDK
curl https://api.yourouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -H "vendor: openai" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello from a pinned provider."}]
  }'
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from a pinned provider."}],
    extra_headers={"vendor": "openai"},
)
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello from a pinned provider.' }],
}, {
  headers: { vendor: 'openai' },
});
See Models for model API examples and the Router guide for provider IDs, automatic failover behavior, and production recommendations.
5. Handle Responses and Errors
For OpenAI-compatible endpoints, successful responses follow the OpenAI response format. Read generated text from choices[0].message.content.
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "connected"
      }
    }
  ]
}
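When reading this shape in application code, a defensive accessor avoids KeyError or IndexError crashes if an error body or unexpected shape comes back. A sketch (the helper name is illustrative):

```python
def extract_content(response_body):
    """Return choices[0].message.content, or None if the shape is unexpected."""
    try:
        return response_body["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return None
```

Returning None lets the caller decide whether to log the raw body, retry, or surface an error to the user.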
Common integration status codes:
Status  Meaning                     Integration action
200     Request succeeded           Parse the response body
401     Missing or invalid API key  Check the Authorization header
429     Upstream rate limit         Retry with backoff or adjust routing
500     Provider or gateway error   Retry safely and log the request ID
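The 429 and 500 rows call for retries with backoff. A minimal sketch of that pattern, written generically over any callable that returns an object with a status_code attribute (attempt counts, delays, and the retryable-status set are illustrative choices, not YouRouter recommendations):

```python
import time

RETRYABLE = {429, 500, 502, 503}

def with_backoff(send, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call send() until it returns a non-retryable status.

    Retryable statuses (429/5xx) are retried with exponential
    backoff: base_delay, 2*base_delay, 4*base_delay, ...
    """
    for attempt in range(attempts):
        response = send()
        if response.status_code not in RETRYABLE:
            return response
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))
    return response  # last retryable response; caller decides how to fail
```

In practice, send would wrap requests.post or the SDK call in a lambda. Note that 401 is deliberately not retried: a missing or invalid key will not fix itself, so it is returned immediately for the caller to handle.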
6. Stream Responses
For chat UIs and agents, set stream to true to receive incremental chunks as the model generates text.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain automatic routing in two sentences."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
See Create Chat Completion for the full request shape.
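Under the hood, OpenAI-compatible streams arrive as server-sent events: lines of the form data: {json}, terminated by data: [DONE]. Assuming YouRouter follows that standard framing, clients without an SDK can extract the text deltas with a sketch like this:

```python
import json

def iter_deltas(lines):
    """Yield content deltas from OpenAI-style SSE lines ("data: {...}")."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta
```

The lines iterable can come from, for example, response.iter_lines(decode_unicode=True) on a requests call made with stream=True.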
Next Steps
API Reference Review endpoints, parameters, and response formats.
Models See how to pass model IDs and switch models through the API.
Multimodal Send image inputs and call provider-native multimodal APIs.
Router Guide Learn when to use automatic routing or pin a provider.
Chat Completions Build conversations, streaming UIs, tools, and multimodal flows.