Use CometAPI's POST /v1/chat/completions endpoint to send multi-message conversations and get LLM replies, with control over streaming, temperature, and max_tokens.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

completion = client.chat.completions.create(
    model="gpt-5.4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(completion.choices[0].message)
```

Example response:

```json
{
  "id": "chatcmpl-DNA27oKtBUL8TmbGpBM3B3zhWgYfZ",
  "object": "chat.completion",
  "created": 1774412483,
  "model": "gpt-4.1-nano-2025-04-14",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Four",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 29,
    "completion_tokens": 2,
    "total_tokens": 31,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default",
  "system_fingerprint": "fp_490a4ad033"
}
```
The endpoint is OpenAI-compatible: point the SDK's base_url to https://api.cometapi.com/v1. Note that reasoning_effort only applies to reasoning models (o-series, GPT-5.1+), and some models may not support logprobs or n > 1. For certain models (e.g. o1-pro), use the responses endpoint instead.

| Role | Description |
|---|---|
| system | Sets the assistant's behavior and personality. Placed at the start of the conversation. |
| developer | Replaces system for newer models (o1+). Provides instructions the model must follow regardless of user input. |
| user | Messages from the end user. |
| assistant | Previous model responses, used to maintain conversation history. |
| tool | The result of a tool/function call. Must include a tool_call_id that matches the original tool call. |
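As a sketch of how these roles combine in one request, the messages array below mixes a system instruction, a user turn, an earlier assistant turn that requested a tool call, and the matching tool result. The get_weather tool, the call ID, and the weather payload are made up for illustration.

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    # Earlier assistant turn that requested a tool call
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",  # hypothetical ID returned by the model
                "type": "function",
                "function": {"name": "get_weather", "arguments": "{\"location\": \"Paris\"}"},
            }
        ],
    },
    # Tool result; tool_call_id must match the id above
    {"role": "tool", "tool_call_id": "call_abc123", "content": "{\"temp_c\": 18}"},
]
```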
Prefer developer over system for instruction messages. Both work, but developer gives stronger instruction-following behavior.

Pass an array of content parts in content to send multimodal messages:
```json
{
  "role": "user",
  "content": [
    {"type": "text", "text": "Describe this image"},
    {
      "type": "image_url",
      "image_url": {
        "url": "https://example.com/image.png",
        "detail": "high"
      }
    }
  ]
}
```
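A minimal sketch of sending that multimodal message through the same SDK client; the model name gpt-4.1 is an assumption and stands in for any vision-capable model available through CometAPI.

```python
# Sketch: text + image in a single user message (assumes a vision-capable model)
completion = client.chat.completions.create(
    model="gpt-4.1",  # assumption: replace with a vision-capable model you have access to
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/image.png", "detail": "high"},
                },
            ],
        }
    ],
)
print(completion.choices[0].message.content)
```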
detail controls the depth of image analysis:

- low: faster, uses fewer tokens (fixed cost)
- high: detailed analysis, consumes more tokens
- auto: the model decides (default)

When stream is set to true, the response is delivered as Server-Sent Events (SSE). Each event carries a chat.completion.chunk object with incremental content:
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}
data: [DONE]
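A minimal sketch of consuming the stream with the same SDK; it also sets stream_options.include_usage (described just below) so token usage arrives on the final chunk.

```python
# Sketch: print tokens as they stream in, then read usage from the final chunk
stream = client.chat.completions.create(
    model="gpt-5.4",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
    if chunk.usage:  # only populated on the last chunk before [DONE]
        print("\n", chunk.usage)
```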
To receive token usage while streaming, set stream_options.include_usage to true; the usage data appears on the final chunk before [DONE].

For structured output, set response_format:
```json
{
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "result",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "answer": {"type": "string"},
          "confidence": {"type": "number"}
        },
        "required": ["answer", "confidence"],
        "additionalProperties": false
      }
    }
  }
}
```
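A sketch of requesting this structured output with the SDK and parsing the JSON reply; the question is arbitrary and the model name comes from the earlier example.

```python
import json

completion = client.chat.completions.create(
    model="gpt-5.4",
    messages=[{"role": "user", "content": "How many moons does Mars have?"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "result",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "answer": {"type": "string"},
                    "confidence": {"type": "number"},
                },
                "required": ["answer", "confidence"],
                "additionalProperties": False,
            },
        },
    },
)
result = json.loads(completion.choices[0].message.content)
print(result["answer"], result["confidence"])
```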
Structured Outputs mode (json_schema) guarantees that the output matches your schema exactly. JSON Object mode (json_object) only guarantees valid JSON; the structure is not enforced.

To enable function calling, declare the available functions in tools:

```json
{
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a city",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City name"}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}
finish_reason: "tool_calls" dan array message.tool_calls akan berisi nama fungsi serta argumennya. Anda kemudian mengeksekusi fungsi tersebut dan mengirimkan hasilnya kembali sebagai pesan tool dengan tool_call_id yang sesuai.
Response fields

| Field | Description |
|---|---|
| id | Unique completion identifier (e.g. chatcmpl-abc123). |
| object | Always chat.completion. |
| model | The model that produced the response (may include a version suffix). |
| choices | Array of completion choices (usually 1 unless n > 1). |
| choices[].message | The assistant's response message, with role, content, and optional tool_calls. |
| choices[].finish_reason | Why the model stopped: stop, length, tool_calls, or content_filter. |
| usage | Token consumption breakdown: prompt_tokens, completion_tokens, total_tokens, plus detailed sub-counts. |
| system_fingerprint | Backend configuration fingerprint for reproducibility debugging. |
Parameter support across providers

| Parameter | OpenAI GPT | Claude (via compat) | Gemini (via compat) |
|---|---|---|---|
| temperature | 0–2 | 0–1 | 0–2 |
| top_p | 0–1 | 0–1 | 0–1 |
| n | 1–128 | 1 only | 1–8 |
| stop | Up to 4 | Up to 4 | Up to 5 |
| tools | ✅ | ✅ | ✅ |
| response_format | ✅ | ✅ (json_schema) | ✅ |
| logprobs | ✅ | ❌ | ❌ |
| reasoning_effort | o-series, GPT-5.1+ | ❌ | ❌ (use thinking for native Gemini) |
max_tokens vs max_completion_tokens
- max_tokens: the legacy parameter. Works with most models but is deprecated for newer OpenAI models.
- max_completion_tokens: the recommended parameter for GPT-4.1, the GPT-5 series, and o-series models. Required for reasoning models because it covers both output tokens and reasoning tokens.

system vs developer role

- system: the traditional instruction role. Works with all models.
- developer: introduced with the o1 models. Gives stronger instruction adherence on newer models. Falls back to system behavior on older models.

Use developer for new projects targeting GPT-4.1+ or o-series models.

When you hit 429 Too Many Requests, apply exponential backoff:
```python
import time
import random
from openai import OpenAI, RateLimitError

client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

def chat_with_retry(messages, max_retries=3):
    for i in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-5.4",
                messages=messages,
            )
        except RateLimitError:
            if i < max_retries - 1:
                wait_time = (2 ** i) + random.random()
                time.sleep(wait_time)
            else:
                raise
```
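For example, the helper can then replace direct calls to the client:

```python
completion = chat_with_retry([{"role": "user", "content": "Hello!"}])
print(completion.choices[0].message.content)
```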
To keep a multi-turn conversation going, include the previous exchanges in messages:
```python
messages = [
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a high-level programming language..."},
    {"role": "user", "content": "What are its main advantages?"},
]
```
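A minimal sketch of continuing the conversation: send the history, append the reply, then append the next user turn before calling again.

```python
completion = client.chat.completions.create(model="gpt-5.4", messages=messages)
messages.append(completion.choices[0].message)  # keep the assistant's answer in the history
messages.append({"role": "user", "content": "Show me a short code example."})
followup = client.chat.completions.create(model="gpt-5.4", messages=messages)
```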
What does each finish_reason value mean?

| Value | Meaning |
|---|---|
| stop | Finished naturally or hit a stop sequence. |
| length | Hit the max_tokens or max_completion_tokens limit. |
| tool_calls | The model made one or more tool/function calls. |
| content_filter | Output was filtered due to content policy. |
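A small sketch of reacting to finish_reason after a call; the retry limit of 2048 tokens is an arbitrary assumption.

```python
choice = completion.choices[0]
if choice.finish_reason == "length":
    # Output was truncated; retry with a larger cap or shorten the prompt
    completion = client.chat.completions.create(
        model="gpt-5.4",
        messages=messages,
        max_completion_tokens=2048,  # assumption: pick a limit that fits your budget
    )
elif choice.finish_reason == "content_filter":
    print("Response was filtered by content policy.")
```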
- Set max_completion_tokens to limit output length.
- Use a smaller model (gpt-5.4-mini or gpt-5.4-nano) for simpler tasks.
- Monitor token consumption via the usage field.

Authorization: Bearer token authentication. Use your CometAPI key.
model: Model ID to use for this request. See the Models page for current options. Example: "gpt-4.1"

messages: A list of messages forming the conversation. Each message has a role (system, user, assistant, or developer) and content (a text string or a multimodal content array).

stream: If true, partial response tokens are delivered incrementally via server-sent events (SSE). The stream ends with a data: [DONE] message.

temperature: Sampling temperature between 0 and 2. Higher values (e.g., 0.8) produce more random output; lower values (e.g., 0.2) make output more focused and deterministic. Recommended to adjust this or top_p, but not both. Range: 0 <= x <= 2.

top_p: Nucleus sampling parameter. The model considers only the tokens whose cumulative probability reaches top_p. For example, 0.1 means only the top 10% probability tokens are considered. Recommended to adjust this or temperature, but not both. Range: 0 <= x <= 1.

n: Number of completion choices to generate for each input message. Defaults to 1.

stop: Up to 4 sequences where the API will stop generating further tokens. Can be a string or an array of strings.

max_tokens: Maximum number of tokens to generate in the completion. The total of input + output tokens is capped by the model's context length.

presence_penalty: Number between -2.0 and 2.0. Positive values penalize tokens based on whether they have already appeared, encouraging the model to explore new topics. Range: -2 <= x <= 2.

frequency_penalty: Number between -2.0 and 2.0. Positive values penalize tokens proportionally to how often they have appeared, reducing verbatim repetition. Range: -2 <= x <= 2.

logit_bias: A JSON object mapping token IDs to bias values from -100 to 100. The bias is added to the model's logits before sampling. Values between -1 and 1 subtly adjust likelihood; -100 or 100 effectively ban or force selection of a token.

user: A unique identifier for your end-user. Helps with abuse detection and monitoring.

max_completion_tokens: An upper bound for the number of tokens to generate, including visible output tokens and reasoning tokens. Use this instead of max_tokens for GPT-4.1+, GPT-5 series, and o-series models.

response_format: Specifies the output format. Use {"type": "json_object"} for JSON mode, or {"type": "json_schema", "json_schema": {...}} for strict structured output.

tools: A list of tools the model may call. Currently supports function type tools.

tool_choice: Controls how the model selects tools. auto (default): model decides. none: no tools. required: must call a tool.

logprobs: Whether to return log probabilities of the output tokens.

top_logprobs: Number of most likely tokens to return at each position (0-20). Requires logprobs to be true. Range: 0 <= x <= 20.

reasoning_effort: Controls the reasoning effort for o-series and GPT-5.1+ models. Options: low, medium, high.

stream_options: Options for streaming. Only valid when stream is true.

service_tier: Specifies the processing tier. Options: auto, default, flex, priority.

Successful chat completion response:

id: Unique completion identifier. Example: "chatcmpl-abc123"

object: Always chat.completion.

created: Unix timestamp of creation. Example: 1774412483

model: The model used (may include a version suffix). Example: "gpt-5.4-2025-07-16"

choices: Array of completion choices.

service_tier: The processing tier used. Example: "default"

system_fingerprint: Backend configuration fingerprint. Example: "fp_490a4ad033"