/chat/completions
Creates a chat completion for the given conversation messages. This endpoint is compatible with the OpenAI Chat API.
Endpoint
POST /v1/chat/completions
Headers
| Name | Type | Description |
|---|---|---|
| Authorization | string | Required. Your API key prefixed with "Bearer " |
Request Body
{
  // ID of the model to use. Required.
  "model": string,
  // The messages to generate chat completions for. Required.
  "messages": [
    {
      "role": string, // The role of the message sender: "system", "user", or "assistant"
      "content": string // The content of the message
    }
  ],
  // Whether to stream the response as server-sent events. Optional.
  "stream": boolean,
  // The maximum number of tokens to generate. Optional.
  "max_tokens": number,
  // Sampling temperature between 0 and 2. Optional.
  "temperature": number,
  // Nucleus sampling probability mass; an alternative to temperature. Optional.
  "top_p": number
}
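As a concrete illustration of the request body above, the following sketch builds and prepares a request with Python's standard library. The model name and message values are placeholders taken from the curl example; a real client would substitute its own key and payload.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # replace with your real API key

# "model" and "messages" are required; the sampling fields are optional.
body = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

req = urllib.request.Request(
    "https://api.octora.io/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to actually send
```

Sending is left commented out so the snippet can be inspected without network access.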
Response
Returns a chat completion object with the model’s response:
{
  "id": string,
  "object": "chat.completion",
  "created": number,
  "model": string,
  "choices": [
    {
      "index": number,
      "message": {
        "role": "assistant",
        "content": string
      },
      "finish_reason": string
    }
  ],
  "usage": {
    "prompt_tokens": number,
    "completion_tokens": number,
    "total_tokens": number
  }
}
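To show where the reply text lives in the object above, the sketch below parses a sample completion. The field values here are illustrative, not real API output.

```python
import json

# A sample completion object shaped like the schema above
# (all values are made up for illustration).
raw = """{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello! How can I help?"},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 7, "total_tokens": 19}
}"""

completion = json.loads(raw)
# The assistant's reply is at choices[0].message.content.
reply = completion["choices"][0]["message"]["content"]
total = completion["usage"]["total_tokens"]
```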
When stream is true, the response is sent as a series of server-sent events in the same format.
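A streaming client consumes those events line by line. The sketch below assumes the common OpenAI-style convention for such streams — each event is a "data: <json>" line carrying a delta chunk, with a final "data: [DONE]" sentinel; verify the exact chunk shape against actual responses before relying on it.

```python
import json

def iter_content(lines):
    """Yield content fragments from server-sent event lines.

    Assumes OpenAI-style streaming: each event is a "data: <json>"
    line whose chunk carries choices[0].delta, and the stream ends
    with "data: [DONE]".
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]

# Illustrative stream (values are made up):
sample = [
    'data: {"choices":[{"index":0,"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"lo"}}]}',
    'data: [DONE]',
]
text = "".join(iter_content(sample))
```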
Error Codes
400 - Invalid request (missing required fields or invalid model)
401 - Invalid API key
429 - Rate limit exceeded or insufficient credit
500 - Server error
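A client can map these codes to actions: 400 and 401 need a corrected request or key, while 429 and 500 are typically worth retrying with backoff. The helper below is a sketch of that mapping, not part of the API itself.

```python
def classify_error(status):
    """Map the documented status codes to a suggested client action.

    A sketch only: real clients would add exponential backoff and a
    retry cap for the "retry-later" cases.
    """
    if status == 400:
        return "fix-request"   # missing required fields or invalid model
    if status == 401:
        return "fix-api-key"   # invalid API key
    if status == 429:
        return "retry-later"   # rate limit exceeded or insufficient credit
    if status >= 500:
        return "retry-later"   # server error
    return "ok" if 200 <= status < 300 else "unknown"
```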
Example
curl -X POST https://api.octora.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ]
  }'