POST /v1/messages
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.cometapi.com",
    api_key="<COMETAPI_KEY>",
)

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="You are a helpful assistant.",
    messages=[
        {"role": "user", "content": "Hello, world"}
    ],
)

print(message.content[0].text)
{
  "id": "<string>",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "<string>",
      "thinking": "<string>",
      "signature": "<string>",
      "id": "<string>",
      "name": "<string>",
      "input": {}
    }
  ],
  "model": "<string>",
  "stop_reason": "end_turn",
  "stop_sequence": "<string>",
  "usage": {
    "input_tokens": 123,
    "output_tokens": 123,
    "cache_creation_input_tokens": 123,
    "cache_read_input_tokens": 123,
    "cache_creation": {
      "ephemeral_5m_input_tokens": 123,
      "ephemeral_1h_input_tokens": 123
    }
  }
}

Overview

CometAPI supports the Anthropic Messages API natively, giving you direct access to Claude models with all Anthropic-specific features. Use this endpoint for Claude-exclusive capabilities such as extended thinking, prompt caching, and effort control.

Quick Start

Use the official Anthropic SDK; just point the base URL at CometAPI:
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.cometapi.com",
    api_key="<COMETAPI_KEY>",
)

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)
Both the x-api-key and Authorization: Bearer headers are supported for authentication. The official Anthropic SDK uses x-api-key by default.
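For clients that don't use the SDK, the two header styles look like this. `build_auth_headers` is a hypothetical helper written for illustration only, not part of any SDK:

```python
def build_auth_headers(api_key: str, scheme: str = "x-api-key") -> dict:
    """Return request headers for the /v1/messages endpoint in either auth style."""
    headers = {
        "content-type": "application/json",
        "anthropic-version": "2023-06-01",
    }
    if scheme == "x-api-key":
        headers["x-api-key"] = api_key  # SDK default
    elif scheme == "bearer":
        headers["Authorization"] = f"Bearer {api_key}"
    else:
        raise ValueError(f"unknown auth scheme: {scheme!r}")
    return headers

print(build_auth_headers("<COMETAPI_KEY>", "bearer")["Authorization"])
```

Either dict can be passed to any HTTP client; the key itself is identical in both forms.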

Extended Thinking

Enable Claude's step-by-step reasoning with the thinking parameter. The response includes thinking content blocks that show Claude's internal reasoning before the final answer.
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000,
    },
    messages=[
        {"role": "user", "content": "Prove that there are infinitely many primes."}
    ],
)

for block in message.content:
    if block.type == "thinking":
        print(f"Thinking: {block.thinking[:200]}...")
    elif block.type == "text":
        print(f"Answer: {block.text}")
Thinking requires a minimum budget_tokens of 1,024. Thinking tokens count toward your max_tokens limit, so set max_tokens high enough to accommodate both the thinking and the response.
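Because thinking tokens draw from the same max_tokens budget, it can help to validate the two values together before sending a request. `thinking_params` is an illustrative client-side helper, not an SDK function:

```python
MIN_THINKING_BUDGET = 1024  # minimum budget_tokens accepted by the API

def thinking_params(budget_tokens: int, max_tokens: int) -> dict:
    """Build a thinking config, enforcing the documented constraints."""
    if budget_tokens < MIN_THINKING_BUDGET:
        raise ValueError(f"budget_tokens must be >= {MIN_THINKING_BUDGET}")
    if budget_tokens >= max_tokens:
        # thinking counts toward max_tokens, so leave room for the answer
        raise ValueError("budget_tokens must be smaller than max_tokens")
    return {"type": "enabled", "budget_tokens": budget_tokens}

print(thinking_params(10_000, 16_000))  # {'type': 'enabled', 'budget_tokens': 10000}
```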

Prompt Caching

Cache large system prompts or conversation prefixes to reduce latency and cost on subsequent requests. Add cache_control to the content blocks you want cached:
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are an expert code reviewer. [Long detailed instructions...]",
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Review this code..."}],
)
Cache usage is reported in the usage field of the response:
  • cache_creation_input_tokens: tokens written to the cache (billed at a higher rate)
  • cache_read_input_tokens: tokens read from the cache (billed at a lower rate)
Prompt caching requires a minimum of 1,024 tokens in the cached content block. Shorter content will not be cached.
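The two cache fields can be folded into a rough token-equivalent cost estimate. The multipliers below (1.25x for cache writes, 0.1x for cache reads) are illustrative assumptions only; check CometAPI's pricing for the actual rates:

```python
def effective_input_tokens(usage: dict,
                           write_mult: float = 1.25,  # assumed cache-write surcharge
                           read_mult: float = 0.10) -> float:
    """Token-equivalent input cost derived from a response's usage dict."""
    return (usage.get("input_tokens", 0)
            + usage.get("cache_creation_input_tokens", 0) * write_mult
            + usage.get("cache_read_input_tokens", 0) * read_mult)

usage = {"input_tokens": 19, "cache_creation_input_tokens": 2000,
         "cache_read_input_tokens": 0}
print(effective_input_tokens(usage))  # 19 + 2000 * 1.25 = 2519.0
```

On the next request the same 2,000 tokens would appear under cache_read_input_tokens instead, dropping the estimate to 19 + 200 = 219.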

Streaming

Stream responses using Server-Sent Events (SSE) by setting stream: true. Events arrive in the following order:
  1. message_start: initial message metadata and usage
  2. content_block_start: marks the start of each content block
  3. content_block_delta: incremental text chunks (text_delta)
  4. content_block_stop: marks the end of each content block
  5. message_delta: final stop_reason and complete usage
  6. message_stop: signals the end of the stream
with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="")
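The event ordering above can be exercised without a network call by dispatching over mocked event dicts. The shapes below mimic the SSE payloads; only the text deltas are accumulated:

```python
def collect_text(events) -> str:
    """Accumulate text_delta chunks from a sequence of SSE-shaped event dicts."""
    parts = []
    for event in events:
        if (event["type"] == "content_block_delta"
                and event["delta"]["type"] == "text_delta"):
            parts.append(event["delta"]["text"])
        elif event["type"] == "message_stop":
            break  # end of stream
    return "".join(parts)

# Mocked events in the documented order.
mock_events = [
    {"type": "message_start"},
    {"type": "content_block_start"},
    {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Hel"}},
    {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "lo"}},
    {"type": "content_block_stop"},
    {"type": "message_delta"},
    {"type": "message_stop"},
]
print(collect_text(mock_events))  # Hello
```

The SDK's text_stream helper shown above performs essentially this filtering for you.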

Effort Control

Control how much effort Claude puts into generating a response with output_config.effort:
message = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=4096,
    messages=[
        {"role": "user", "content": "Summarize this briefly."}
    ],
    output_config={"effort": "low"},  # "low", "medium", or "high"
)

Server Tools

Claude supports server-side tools that run on Anthropic's infrastructure:
Fetch and analyze content from a URL:
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Analyze the content at https://arxiv.org/abs/1512.03385"}
    ],
    tools=[
        {"type": "web_fetch_20250910", "name": "web_fetch", "max_uses": 5}
    ],
)

Example Response

A typical response from the CometAPI Anthropic endpoint:
{
  "id": "msg_bdrk_01UjHdmSztrL7QYYm7CKBDFB",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello!"
    }
  ],
  "model": "claude-sonnet-4-6",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 19,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0,
    "cache_creation": {
      "ephemeral_5m_input_tokens": 0,
      "ephemeral_1h_input_tokens": 0
    },
    "output_tokens": 4
  }
}
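A response like the one above can be unpacked with plain dict access. The total_tokens value below is a derived convenience, not a field the API returns:

```python
response = {
    "content": [{"type": "text", "text": "Hello!"}],
    "stop_reason": "end_turn",
    "usage": {"input_tokens": 19, "output_tokens": 4,
              "cache_read_input_tokens": 0},
}

# Join only the text blocks; thinking/tool_use blocks are skipped.
answer = "".join(b["text"] for b in response["content"] if b["type"] == "text")
total_tokens = (response["usage"]["input_tokens"]
                + response["usage"]["output_tokens"])
print(answer, total_tokens)  # Hello! 23
```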

Key Differences from the OpenAI-Compatible Endpoint

Feature           | Anthropic Messages (/v1/messages)      | OpenAI-Compatible (/v1/chat/completions)
Extended thinking | thinking parameter with budget_tokens  | Not available
Prompt caching    | cache_control on content blocks        | Not available
Effort control    | output_config.effort                   | Not available
Web fetch/search  | Server tools (web_fetch, web_search)   | Not available
Auth header       | x-api-key or Bearer                    | Bearer only
Response format   | Anthropic format (content blocks)      | OpenAI format (choices, message)
Models            | Claude only                            | Multi-provider (GPT, Claude, Gemini, etc.)

Authorization

x-api-key
string
header
required

Your CometAPI key passed via the x-api-key header. Authorization: Bearer <key> is also supported.

Header

anthropic-version
string
default:2023-06-01

The Anthropic API version to use. Defaults to 2023-06-01.

Example:

"2023-06-01"

anthropic-beta
string

Comma-separated list of beta features to enable. Examples: max-tokens-3-5-sonnet-2024-07-15, pdfs-2024-09-25, output-128k-2025-02-19.

Body

application/json
model
string
required

The Claude model to use. See the Models page for current Claude model IDs.

Example:

"claude-sonnet-4-6"

messages
object[]
required

The conversation messages. Must alternate between user and assistant roles. Each message's content can be a string or an array of content blocks (text, image, document, tool_use, tool_result). There is a limit of 100,000 messages per request.
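The alternation rule can be checked client-side before a request is sent. `validate_roles` is an illustrative helper, not part of the SDK:

```python
def validate_roles(messages: list) -> None:
    """Raise if messages don't start with 'user' and alternate user/assistant."""
    for i, msg in enumerate(messages):
        expected = "user" if i % 2 == 0 else "assistant"
        if msg["role"] != expected:
            raise ValueError(
                f"message {i}: expected role {expected!r}, got {msg['role']!r}")

validate_roles([
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "Bye"},
])  # passes silently; a second consecutive user turn would raise
```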

max_tokens
integer
required

The maximum number of tokens to generate. The model may stop before reaching this limit. When using thinking, the thinking tokens count towards this limit.

Required range: x >= 1
Example:

1024

system

System prompt providing context and instructions to Claude. Can be a plain string or an array of content blocks (useful for prompt caching).

temperature
number
default:1

Controls randomness in the response. Range: 0.0–1.0. Use lower values for analytical tasks and higher values for creative tasks. Defaults to 1.0.

Required range: 0 <= x <= 1
top_p
number

Nucleus sampling threshold. Only tokens with cumulative probability up to this value are considered. Range: 0.0–1.0. Use either temperature or top_p, not both.

Required range: 0 <= x <= 1
top_k
integer

Only sample from the top K most probable tokens. Recommended for advanced use cases only.

Required range: x >= 0
stream
boolean
default:false

If true, stream the response incrementally using Server-Sent Events (SSE). Events include message_start, content_block_start, content_block_delta, content_block_stop, message_delta, and message_stop.

stop_sequences
string[]

Custom strings that cause the model to stop generating when encountered. The stop sequence is not included in the response.

thinking
object

Enable extended thinking — Claude's step-by-step reasoning process. When enabled, the response includes thinking content blocks before the answer. Requires a minimum budget_tokens of 1,024.

tools
object[]

Tools the model may use. Supports client-defined functions, web search (web_search_20250305), web fetch (web_fetch_20250910), code execution (code_execution_20250522), and more.

tool_choice
object

Controls how the model uses tools.

metadata
object

Request metadata for tracking and analytics.

output_config
object

Configuration for output behavior.

service_tier
enum<string>

The service tier to use. auto tries priority capacity first, standard_only uses only standard capacity.

Available options:
auto,
standard_only

Response

200 - application/json

Successful response. When stream is true, the response is a stream of SSE events.

id
string

Unique identifier for this message (e.g., msg_01XFDUDYJgAACzvnptvVoYEL).

type
enum<string>

Always message.

Available options:
message
role
enum<string>

Always assistant.

Available options:
assistant
content
object[]

The response content blocks. May include text, thinking, tool_use, and other block types.

model
string

The specific model version that generated this response (e.g., claude-sonnet-4-6).

stop_reason
enum<string>

Why the model stopped generating.

Available options:
end_turn,
max_tokens,
stop_sequence,
tool_use,
pause_turn
stop_sequence
string | null

The stop sequence that caused the model to stop, if applicable.

usage
object

Token usage statistics.