
Chat Completions

HTTP endpoint

POST https://api.operator.io/chat/completions
This endpoint follows the OpenAI Chat Completions API format.

Authentication

Provide your key as a Bearer token:
Authorization: Bearer $OPERATOR_API_KEY
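In Python, the header can be built from the same OPERATOR_API_KEY environment variable — a minimal sketch, not an official client:

```python
import os

def auth_headers(api_key=None) -> dict:
    """Build the auth and content-type headers for the API.

    Falls back to the OPERATOR_API_KEY environment variable;
    raises KeyError if no key is supplied and the variable is unset.
    """
    key = api_key or os.environ["OPERATOR_API_KEY"]
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```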

Request body

The API follows the OpenAI Chat Completions schema. Common fields:
model (string, required)
Chat model ID. Use “Operator”.

messages (array, required)
Ordered list of chat messages, each with a role (system, user, or assistant) and content.

temperature (number)
Sampling temperature between 0 and 2. Higher values make output more random.

max_tokens (integer)
Maximum number of tokens to generate in the completion.

stream (boolean)
When true, streams partial message deltas as server-sent events.
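With stream enabled, each server-sent event carries one JSON chunk on a `data:` line. The exact framing below — chunks with a `delta` object, terminated by `data: [DONE]` — is an assumption based on the OpenAI convention this endpoint follows; a minimal parser sketch:

```python
import json

def iter_stream_content(lines):
    """Yield content deltas from an SSE stream of chat-completion chunks.

    `lines` is any iterable of decoded text lines (e.g. a streaming
    response body). Assumes OpenAI-style framing: 'data: {json}' lines
    ending with 'data: [DONE]'.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]
```

Joining the yielded deltas reconstructs the full assistant message as it arrives.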

Example request body

{
  "model": "Operator",
  "messages": [
    {
      "role": "system",
      "content": "You are an adaptive search agent that finds accurate, up-to-date information across multiple sources. Return clear, actionable answers with code examples when relevant."
    },
    {
      "role": "user",
      "content": "How do I configure ESLint flat config with TypeScript and React in 2024? The old .eslintrc format is deprecated."
    }
  ],
  "temperature": 0.2
}
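The same request can be issued from Python with only the standard library — a sketch, not an official client; the endpoint and headers match the ones documented above, and the key is a placeholder:

```python
import json
import urllib.request

API_URL = "https://api.operator.io/chat/completions"

def build_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Prepare the POST request; send it with urllib.request.urlopen(req)."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

payload = {
    "model": "Operator",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.2,
}
req = build_request(payload, "sk-placeholder")  # hypothetical key
# send with: resp = urllib.request.urlopen(req); data = json.load(resp)
```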

Response

The response matches the OpenAI Chat Completions format.
id (string, required)
Unique identifier for the completion.

choices (array, required)
List of completion choices. Each choice includes a message with the model’s reply and a finish_reason.

usage (object)
Token usage for this request, including prompt_tokens, completion_tokens, and total_tokens.
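Given the fields above, pulling the reply text and token count out of a decoded response looks like this — a sketch against a hand-written sample; since usage is not marked required, the helper treats it as optional:

```python
def extract_reply(response: dict) -> tuple:
    """Return (assistant message content, total tokens used).

    Falls back to 0 tokens when the optional usage object is absent.
    """
    message = response["choices"][0]["message"]
    total = response.get("usage", {}).get("total_tokens", 0)
    return message["content"], total

# Hand-written sample in the documented shape (values are illustrative).
sample = {
    "id": "chatcmpl-123",
    "choices": [
        {
            "message": {"role": "assistant", "content": "Use eslint.config.js."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 52, "completion_tokens": 9, "total_tokens": 61},
}
text, tokens = extract_reply(sample)
```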