POST /v1/messages

Overview

CometAPI supports the Anthropic Messages API natively, giving you direct access to Claude models with all Anthropic-specific features. Use this endpoint for Claude-exclusive capabilities such as extended thinking, prompt caching, and effort control.

Quick start

Use the official Anthropic SDK; just set the base URL to CometAPI:
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.cometapi.com",
    api_key="<COMETAPI_KEY>",
)

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)
Both x-api-key and Authorization: Bearer headers are supported for authentication. The official Anthropic SDKs use x-api-key by default.
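For reference, the same call over raw HTTP with Bearer authentication might look like this (a sketch using the requests library; the JSON body mirrors the SDK call above):
import requests

# Raw-HTTP sketch: Authorization: Bearer instead of x-api-key.
response = requests.post(
    "https://api.cometapi.com/v1/messages",
    headers={
        "Authorization": "Bearer <COMETAPI_KEY>",  # x-api-key works here too
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-sonnet-4-6",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["content"][0]["text"])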

Extended Thinking

Enable Claude's step-by-step reasoning with the thinking parameter. The response includes thinking content blocks that show Claude's internal reasoning before the final answer.
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000,
    },
    messages=[
        {"role": "user", "content": "Prove that there are infinitely many primes."}
    ],
)

for block in message.content:
    if block.type == "thinking":
        print(f"Thinking: {block.thinking[:200]}...")
    elif block.type == "text":
        print(f"Answer: {block.text}")
Thinking requires a minimum budget_tokens of 1,024. Thinking tokens count towards your max_tokens limit, so set max_tokens high enough to hold both the thinking and the response.

Prompt Caching

Cache large system prompts or conversation prefixes to reduce latency and cost on subsequent requests. Add cache_control to the content blocks that should be cached:
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are an expert code reviewer. [Long detailed instructions...]",
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Review this code..."}],
)
Cache usage is reported in the usage field of the response:
  • cache_creation_input_tokens: tokens written to the cache (billed at a higher rate)
  • cache_read_input_tokens: tokens read from the cache (billed at a lower rate)
Prompt caching requires at least 1,024 tokens in the cached content block. Shorter content is not cached.
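You can confirm a cache write or a subsequent cache hit directly on the SDK response (a small sketch reusing the message object from the call above):
# Inspect cache accounting on the response's usage object.
usage = message.usage
print("written to cache:", usage.cache_creation_input_tokens)
print("read from cache:", usage.cache_read_input_tokens)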

Streaming

Stream responses with Server-Sent Events (SSE) by setting stream: true. Events arrive in this order:
  1. message_start: contains the message metadata and initial usage
  2. content_block_start: marks the start of each content block
  3. content_block_delta: incremental text fragments (text_delta)
  4. content_block_stop: marks the end of each content block
  5. message_delta: final stop_reason and complete usage
  6. message_stop: signals the end of the stream
with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="")
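To handle the raw events from the list above yourself, pass stream=True instead of using the stream helper (a sketch; the event shapes follow the SSE event order described above):
# Iterate raw SSE events rather than the aggregated text_stream helper.
stream = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for event in stream:
    if event.type == "content_block_delta" and event.delta.type == "text_delta":
        print(event.delta.text, end="")
    elif event.type == "message_delta":
        print(f"\nstop_reason: {event.delta.stop_reason}")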

Effort control

Control how much effort Claude spends on generating a response with output_config.effort:
message = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=4096,
    messages=[
        {"role": "user", "content": "Summarize this briefly."}
    ],
    output_config={"effort": "low"},  # "low", "medium", or "high"
)

Server tools

Claude supports server-side tools that run on Anthropic's infrastructure:
Fetch content from URLs and analyze it:
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Analyze the content at https://arxiv.org/abs/1512.03385"}
    ],
    tools=[
        {"type": "web_fetch_20250910", "name": "web_fetch", "max_uses": 5}
    ],
)
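Web search follows the same pattern; the sketch below assumes the web_search_20250305 tool type listed under the tools parameter:
# Server-side web search, defined the same way as web fetch.
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What are the latest developments in battery technology?"}
    ],
    tools=[
        {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}
    ],
)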

Response example

A typical response from CometAPI's Anthropic endpoint:
{
  "id": "msg_bdrk_01UjHdmSztrL7QYYm7CKBDFB",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello!"
    }
  ],
  "model": "claude-sonnet-4-6",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 19,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0,
    "cache_creation": {
      "ephemeral_5m_input_tokens": 0,
      "ephemeral_1h_input_tokens": 0
    },
    "output_tokens": 4
  }
}

Key differences from the OpenAI-compatible endpoint

Feature           | Anthropic Messages (/v1/messages)     | OpenAI-Compatible (/v1/chat/completions)
Extended thinking | thinking parameter with budget_tokens | Not available
Prompt caching    | cache_control on content blocks       | Not available
Effort control    | output_config.effort                  | Not available
Web fetch/search  | Server tools (web_fetch, web_search)  | Not available
Auth header       | x-api-key or Bearer                   | Bearer only
Response format   | Anthropic format (content blocks)     | OpenAI format (choices, message)
Models            | Claude only                           | Multi-provider (GPT, Claude, Gemini, etc.)

Authorizations

x-api-key
string
header
required

Your CometAPI key passed via the x-api-key header. Authorization: Bearer <key> is also supported.

Headers

anthropic-version
string
default:2023-06-01

The Anthropic API version to use. Defaults to 2023-06-01.

Example:

"2023-06-01"

anthropic-beta
string

Comma-separated list of beta features to enable. Examples: max-tokens-3-5-sonnet-2024-07-15, pdfs-2024-09-25, output-128k-2025-02-19.
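
With the Python SDK, beta headers can be passed per request (a sketch using the SDK's extra_headers option; the flag shown is one of the examples above):
# Enable a beta feature for a single request via the anthropic-beta header.
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={"anthropic-beta": "output-128k-2025-02-19"},
)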

Body

application/json
model
string
required

The Claude model to use. See the Models page for current Claude model IDs.

Example:

"claude-sonnet-4-6"

messages
object[]
required

The conversation messages. Must alternate between user and assistant roles. Each message's content can be a string or an array of content blocks (text, image, document, tool_use, tool_result). There is a limit of 100,000 messages per request.
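
As an illustration of block-style content, a user message can mix an image block with a text block (a sketch; the base64 payload is a placeholder):
# Content as an array of blocks: one image plus one text block.
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": "<BASE64_IMAGE_DATA>",  # placeholder
                    },
                },
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)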

max_tokens
integer
required

The maximum number of tokens to generate. The model may stop before reaching this limit. When using thinking, the thinking tokens count towards this limit.

Required range: x >= 1
Example:

1024

system

System prompt providing context and instructions to Claude. Can be a plain string or an array of content blocks (useful for prompt caching).

temperature
number
default:1

Controls randomness in the response. Range: 0.0–1.0. Use lower values for analytical tasks and higher values for creative tasks. Defaults to 1.0.

Required range: 0 <= x <= 1
top_p
number

Nucleus sampling threshold. Only tokens with cumulative probability up to this value are considered. Range: 0.0–1.0. Use either temperature or top_p, not both.

Required range: 0 <= x <= 1
top_k
integer

Only sample from the top K most probable tokens. Recommended for advanced use cases only.

Required range: x >= 0
stream
boolean
default:false

If true, stream the response incrementally using Server-Sent Events (SSE). Events include message_start, content_block_start, content_block_delta, content_block_stop, message_delta, and message_stop.

stop_sequences
string[]

Custom strings that cause the model to stop generating when encountered. The stop sequence is not included in the response.
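
For example (a small sketch; the marker string is arbitrary):
# Stop generating as soon as the model emits the custom marker.
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    stop_sequences=["END_OF_ANSWER"],  # hypothetical marker
    messages=[{"role": "user", "content": "Answer briefly, then write END_OF_ANSWER."}],
)
print(message.stop_reason)    # "stop_sequence" if the marker was hit
print(message.stop_sequence)  # the matched marker itself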

thinking
object

Enable extended thinking — Claude's step-by-step reasoning process. When enabled, the response includes thinking content blocks before the answer. Requires a minimum budget_tokens of 1,024.

tools
object[]

Tools the model may use. Supports client-defined functions, web search (web_search_20250305), web fetch (web_fetch_20250910), code execution (code_execution_20250522), and more.

tool_choice
object

Controls how the model uses tools: {"type": "auto"} (the default), {"type": "any"} to require that some tool is used, or {"type": "tool", "name": "..."} to force a specific tool.
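
A client-defined tool combined with a forced tool_choice might look like this (a sketch; the get_weather tool and its schema are hypothetical):
# Define a client-side tool and force the model to call it.
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    tools=[
        {
            "name": "get_weather",  # hypothetical client-side tool
            "description": "Get the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    tool_choice={"type": "tool", "name": "get_weather"},
    messages=[{"role": "user", "content": "What's the weather in Amsterdam?"}],
)
for block in message.content:
    if block.type == "tool_use":
        print(block.name, block.input)  # run the tool client-side with this input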

metadata
object

Request metadata for tracking and analytics.

output_config
object

Configuration for output behavior, such as effort ("low", "medium", or "high"; see Effort control above).

service_tier
enum<string>

The service tier to use. auto tries priority capacity first, standard_only uses only standard capacity.

Available options:
auto,
standard_only

Response

200 - application/json

Successful response. When stream is true, the response is a stream of SSE events.

id
string

Unique identifier for this message (e.g., msg_01XFDUDYJgAACzvnptvVoYEL).

type
enum<string>

Always message.

Available options:
message
role
enum<string>

Always assistant.

Available options:
assistant
content
object[]

The response content blocks. May include text, thinking, tool_use, and other block types.

model
string

The specific model version that generated this response (e.g., claude-sonnet-4-6).

stop_reason
enum<string>

Why the model stopped generating.

Available options:
end_turn,
max_tokens,
stop_sequence,
tool_use,
pause_turn
stop_sequence
string | null

The stop sequence that caused the model to stop, if applicable.

usage
object

Token usage statistics.