Endpoint: POST /chat/completions

Send a list of messages and receive a model-generated response.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.yourouter.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Parameters

model (string, required)
The model ID to use for the completion.

messages (array, required)
Array of chat messages describing the conversation so far.

stream (boolean, default: false)
If true, results are returned as server-sent events.
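Since the messages parameter carries the whole conversation so far, a multi-turn request simply appends each exchange to the array. A minimal sketch of that shape, using the standard chat roles (the system prompt text here is an illustrative placeholder):

```python
# Hypothetical multi-turn history: the messages array holds every prior
# turn, and the model replies to the final user message in context.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4."},
    {"role": "user", "content": "And doubled?"},
]

# Every entry needs both a role and a content field.
assert all({"role", "content"} <= set(m) for m in messages)
```

Passing this list as the messages argument to client.chat.completions.create lets the model see the full exchange when generating its next reply.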

Streaming

Set the stream parameter to true to receive incremental updates. Each server-sent event contains a partial message chunk until the response is complete.
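A sketch of consuming the stream with the OpenAI Python SDK, assuming the same placeholder base_url and model as the example above; the YOUROUTER_API_KEY environment variable is a hypothetical name used here so the live call only runs when a key is configured:

```python
import os

def join_chunks(parts):
    """Reassemble the full reply from streamed delta strings.

    Some chunks (e.g. the final one) carry no content and arrive as
    None, so those are skipped.
    """
    return "".join(p for p in parts if p)

# Live streaming call, guarded so the sketch is runnable without a key.
# The base_url and model name are the placeholders from this page.
if os.environ.get("YOUROUTER_API_KEY"):
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["YOUROUTER_API_KEY"],
        base_url="https://api.yourouter.ai/v1",
    )
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,  # one server-sent event per partial message chunk
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta is not None:
            print(delta, end="", flush=True)
```

Printing with end="" and flush=True renders the tokens as they arrive; collecting the deltas in a list and calling join_chunks yields the complete message text.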