POST /v1/chat/completions

Chat completions
curl --request POST \
  --url https://api.operator.io/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "opr-1",
  "messages": [
    {
      "role": "user",
      "content": "Hello from Operator!"
    }
  ]
}
'
{
  "id": "chatcmpl-123",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 120,
    "completion_tokens": 80,
    "total_tokens": 200
  }
}

Chat Completions

This endpoint is compatible with the OpenAI Chat Completions API format. Use the opr-1 model and pass a list of messages, each with a role (system, user, assistant, or tool) and content. The interactive panel above is generated from the OpenAPI schema and includes valid defaults for a minimal request.
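The request body described below can be assembled in plain Python before sending. A minimal sketch, with defaults mirroring the schema on this page (the helper name is illustrative, not part of any SDK):

```python
import json

def build_chat_request(user_content, model="opr-1", temperature=0,
                       max_tokens=2000, stream=False):
    """Build a body for POST /v1/chat/completions.

    Defaults follow the documented schema: model opr-1, temperature 0,
    max_tokens 2000, stream false.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "stream": stream,
    }

body = build_chat_request("Hello from Operator!")
print(json.dumps(body, indent=2))
```

The resulting dict serializes to the same JSON shown in the curl example above and can be passed to any HTTP client.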

Authorizations

Authorization (string, header, required)

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
messages (object[], required)

Ordered list of chat messages in the conversation.

model (string, default: "opr-1")

Chat model ID. Example: "opr-1"

temperature (number, default: 0)

Sampling temperature between 0 and 2. Higher values make output more random. Required range: 0 <= x <= 2. Example: 0

max_tokens (integer, default: 2000)

Maximum number of tokens to generate in the completion. Example: 2000
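A client-side check of these parameter bounds before sending a request can be sketched as follows (the validator is a hypothetical helper, not part of the API; the server enforces the same range):

```python
def validate_sampling_params(temperature=0, max_tokens=2000):
    """Validate sampling parameters against the documented schema."""
    # temperature must lie in the required range 0 <= x <= 2
    if not (0 <= temperature <= 2):
        raise ValueError(f"temperature must be between 0 and 2, got {temperature}")
    # max_tokens must be a positive integer
    if not isinstance(max_tokens, int) or max_tokens <= 0:
        raise ValueError(f"max_tokens must be a positive integer, got {max_tokens}")
    return {"temperature": temperature, "max_tokens": max_tokens}
```

Failing fast in the client avoids a round trip that would be rejected by the server anyway.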

stream (boolean, default: false)

When true, streams partial message deltas as server-sent events. Example: false
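Assuming the stream follows the OpenAI-compatible SSE convention of `data: <json>` events terminated by `data: [DONE]` (an assumption based on the endpoint's stated compatibility, not spelled out on this page), the partial deltas could be collected like this:

```python
import json

def parse_sse_deltas(lines):
    """Concatenate assistant text from OpenAI-style SSE event lines.

    Assumes each event is a 'data: <json>' line carrying a
    choices[0].delta.content fragment, and that the stream ends
    with 'data: [DONE]'.
    """
    text = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, blank keep-alives, etc.
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)
```

In practice `lines` would be the decoded lines of the streaming HTTP response body.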

Response

Successful chat completion.

id (string, required)

Unique identifier for the completion.

choices (object[], required)

List of completion choices.

usage (object)

Token usage for this request.
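Reading these fields from a decoded response can be sketched as follows, using a payload with the documented shape (the values here are illustrative):

```python
import json

# A response body with the documented shape: id, choices, and usage.
raw = '''
{
  "id": "chatcmpl-123",
  "choices": [
    {
      "message": {"role": "assistant", "content": "Hello! How can I help?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 120, "completion_tokens": 80, "total_tokens": 200}
}
'''

resp = json.loads(raw)
content = resp["choices"][0]["message"]["content"]
finish_reason = resp["choices"][0]["finish_reason"]
usage = resp["usage"]

# total_tokens is the sum of prompt and completion tokens
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(finish_reason)
```

A `finish_reason` of "stop" indicates the model completed its message naturally rather than hitting the `max_tokens` limit.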