POST /v1beta/models/{model}:{operator}
from google import genai

client = genai.Client(
    api_key="<COMETAPI_KEY>",
    http_options={"api_version": "v1beta", "base_url": "https://api.cometapi.com"},
)
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Explain how AI works in a few words",
)
print(response.text)
{
  "candidates": [
    {
      "content": {
        "role": "<string>",
        "parts": [
          {
            "text": "<string>",
            "functionCall": {
              "name": "<string>",
              "args": {}
            },
            "inlineData": {
              "mimeType": "<string>",
              "data": "<string>"
            },
            "thought": true
          }
        ]
      },
      "finishReason": "STOP",
      "safetyRatings": [
        {
          "category": "<string>",
          "probability": "<string>",
          "blocked": true
        }
      ],
      "citationMetadata": {
        "citationSources": [
          {
            "startIndex": 123,
            "endIndex": 123,
            "uri": "<string>",
            "license": "<string>"
          }
        ]
      },
      "tokenCount": 123,
      "avgLogprobs": 123,
      "groundingMetadata": {
        "groundingChunks": [
          {
            "web": {
              "uri": "<string>",
              "title": "<string>"
            }
          }
        ],
        "groundingSupports": [
          {
            "groundingChunkIndices": [
              123
            ],
            "confidenceScores": [
              123
            ],
            "segment": {
              "startIndex": 123,
              "endIndex": 123,
              "text": "<string>"
            }
          }
        ],
        "webSearchQueries": [
          "<string>"
        ]
      },
      "index": 123
    }
  ],
  "promptFeedback": {
    "blockReason": "SAFETY",
    "safetyRatings": [
      {
        "category": "<string>",
        "probability": "<string>",
        "blocked": true
      }
    ]
  },
  "usageMetadata": {
    "promptTokenCount": 123,
    "candidatesTokenCount": 123,
    "totalTokenCount": 123,
    "trafficType": "<string>",
    "thoughtsTokenCount": 123,
    "promptTokensDetails": [
      {
        "modality": "<string>",
        "tokenCount": 123
      }
    ],
    "candidatesTokensDetails": [
      {
        "modality": "<string>",
        "tokenCount": 123
      }
    ]
  },
  "modelVersion": "<string>",
  "createTime": "<string>",
  "responseId": "<string>"
}

Overview

CometAPI supports the native Gemini API format, giving you full access to Gemini-specific features such as thinking control, Google Search grounding, native image-generation modalities, and more. Use this endpoint when you need capabilities that are not available through the OpenAI-compatible chat endpoint.

Quick Start

Swap the base URL and API key in any Gemini SDK or HTTP client:

Setting  | Google default                    | CometAPI
Base URL | generativelanguage.googleapis.com | api.cometapi.com
API Key  | $GEMINI_API_KEY                   | $COMETAPI_KEY

Both the x-goog-api-key and Authorization: Bearer headers are supported for authentication.
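As an illustration, a minimal helper that builds either header style for a raw HTTP request. The header names come from this page; the helper function itself is hypothetical, not part of any SDK:

```python
def auth_headers(api_key: str, style: str = "goog") -> dict:
    """Return request headers for the chosen auth style ('goog' or 'bearer')."""
    if style == "goog":
        return {"x-goog-api-key": api_key, "Content-Type": "application/json"}
    if style == "bearer":
        return {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    raise ValueError(f"unknown auth style: {style}")

print(auth_headers("<COMETAPI_KEY>", "goog"))
print(auth_headers("<COMETAPI_KEY>", "bearer"))
```

Either dictionary can be passed as the headers argument of any HTTP client; the server treats the two styles as equivalent.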

Thinking (Reasoning)

Gemini models can perform internal reasoning before generating an answer. How you control it depends on the model generation.
Gemini 3 models use thinkingLevel to control reasoning depth. Available levels: MINIMAL, LOW, MEDIUM, HIGH.
curl "https://api.cometapi.com/v1beta/models/gemini-3.1-pro-preview:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "Explain quantum physics simply."}]}],
    "generationConfig": {
      "thinkingConfig": {"thinkingLevel": "LOW"}
    }
  }'
Using thinkingLevel with Gemini 2.5 models (or thinkingBudget with Gemini 3 models) can cause errors. Use the parameter that matches your model generation.
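A small sketch of how client code might pick the right parameter automatically. Detecting the generation from the model-ID prefix is an assumption made here for illustration; it is not part of the API:

```python
def thinking_config(model: str, level: str = "LOW", budget: int = 1024) -> dict:
    """Build a generationConfig fragment with the thinking parameter that
    matches the model generation (assumes IDs like 'gemini-3.1-pro-preview'
    or 'gemini-2.5-flash' encode the generation in their prefix)."""
    if model.startswith("gemini-3"):
        return {"thinkingConfig": {"thinkingLevel": level}}
    return {"thinkingConfig": {"thinkingBudget": budget}}

print(thinking_config("gemini-3.1-pro-preview"))
print(thinking_config("gemini-2.5-flash"))
```

The returned dict can be merged into the request's generationConfig, so a single code path can serve both model families.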

Streaming

Use streamGenerateContent?alt=sse as the operator to receive Server-Sent Events while the model generates content. Each SSE event contains a data: line carrying a JSON GenerateContentResponse object.
curl "https://api.cometapi.com/v1beta/models/gemini-2.5-flash:streamGenerateContent?alt=sse" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  --no-buffer \
  -d '{
    "contents": [{"parts": [{"text": "Write a short poem about the stars"}]}]
  }'
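On the client side, each event's data: line can be decoded into a response object. A minimal parsing sketch (the sample event below is hypothetical):

```python
import json

def parse_sse(raw: str):
    """Yield one GenerateContentResponse dict per SSE 'data:' line."""
    for line in raw.splitlines():
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload:
                yield json.loads(payload)

sample = 'data: {"candidates": [{"content": {"parts": [{"text": "Twinkle"}]}}]}\n\n'
for chunk in parse_sse(sample):
    print(chunk["candidates"][0]["content"]["parts"][0]["text"])  # → Twinkle
```

Concatenating the text parts of successive chunks reconstructs the full answer as it streams in.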

System Instructions

Steer the model's behavior across the entire conversation with systemInstruction:
curl "https://api.cometapi.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "What is 2+2?"}]}],
    "systemInstruction": {
      "parts": [{"text": "You are a math tutor. Always show your work."}]
    }
  }'

JSON Mode

Force structured JSON output with responseMimeType. Optionally provide a responseSchema for strict schema validation:
curl "https://api.cometapi.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "List 3 planets with their distances from the sun"}]}],
    "generationConfig": {
      "responseMimeType": "application/json"
    }
  }'
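When you also want schema validation, add a responseSchema to generationConfig. A sketch of what that fragment could look like for the planets prompt, using the OpenAPI-style type subset the Gemini API accepts (the field names here are illustrative):

```python
# Hypothetical schema for the "3 planets" prompt above.
planet_schema = {
    "type": "ARRAY",
    "items": {
        "type": "OBJECT",
        "properties": {
            "name": {"type": "STRING"},
            "distance_au": {"type": "NUMBER"},
        },
        "required": ["name", "distance_au"],
    },
}

generation_config = {
    "responseMimeType": "application/json",
    "responseSchema": planet_schema,
}
print(generation_config["responseMimeType"])  # → application/json
```

With a schema in place, the model's output is constrained to parseable JSON matching the declared shape, so the client can json.loads it without defensive parsing.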

Google Search Grounding

Enable real-time web search by adding a googleSearch tool:
curl "https://api.cometapi.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "Who won Euro 2024?"}]}],
    "tools": [{"google_search": {}}]
  }'
The response includes groundingMetadata with source URLs and confidence scores.
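The grounding sources can be pulled out of a candidate with a few dictionary lookups. A sketch that walks the groundingChunks structure shown in the response schema on this page (the sample candidate is hypothetical):

```python
def extract_sources(candidate: dict) -> list:
    """Collect (title, uri) pairs from a candidate's groundingMetadata, if any."""
    meta = candidate.get("groundingMetadata", {})
    sources = []
    for chunk in meta.get("groundingChunks", []):
        web = chunk.get("web", {})
        sources.append((web.get("title"), web.get("uri")))
    return sources

sample_candidate = {
    "groundingMetadata": {
        "groundingChunks": [
            {"web": {"uri": "https://example.com/euro2024", "title": "example.com"}}
        ]
    }
}
print(extract_sources(sample_candidate))  # → [('example.com', 'https://example.com/euro2024')]
```

Candidates without grounding simply yield an empty list, so the helper is safe to call on every response.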

Response Example

A typical response from CometAPI's Gemini endpoint:
{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [{"text": "Hello"}]
      },
      "finishReason": "STOP",
      "avgLogprobs": -0.0023
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 5,
    "candidatesTokenCount": 1,
    "totalTokenCount": 30,
    "trafficType": "ON_DEMAND",
    "thoughtsTokenCount": 24,
    "promptTokensDetails": [{"modality": "TEXT", "tokenCount": 5}],
    "candidatesTokensDetails": [{"modality": "TEXT", "tokenCount": 1}]
  },
  "modelVersion": "gemini-2.5-flash",
  "createTime": "2026-03-25T04:21:43.756483Z",
  "responseId": "CeynaY3LDtvG4_UP0qaCuQY"
}
The thoughtsTokenCount field in usageMetadata shows how many tokens the model spent on internal reasoning, even when the thinking output itself is not included in the response.
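In the example above the counts add up: 5 prompt + 1 candidate + 24 thinking = 30 total tokens. A sketch of a sanity check you might run when reconciling usage (assuming this additive relationship holds for your responses):

```python
usage = {
    "promptTokenCount": 5,
    "candidatesTokenCount": 1,
    "totalTokenCount": 30,
    "thoughtsTokenCount": 24,
}

# Thinking tokens fill the gap between the visible counts and the total.
visible = usage["promptTokenCount"] + usage["candidatesTokenCount"]
hidden = usage["totalTokenCount"] - visible
print(hidden)  # → 24, matching thoughtsTokenCount
```

A mismatch here would suggest the response includes token categories beyond the three shown, so treat the check as diagnostic rather than an invariant.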

Key Differences from the OpenAI-Compatible Endpoint

Feature                   | Gemini Native (/v1beta/models/...)               | OpenAI-Compatible (/v1/chat/completions)
Thinking control          | thinkingConfig with thinkingLevel / thinkingBudget | Not available
Google Search grounding   | tools: [{"google_search": {}}]                   | Not available
Google Maps grounding     | tools: [{"googleMaps": {}}]                      | Not available
Image generation modality | responseModalities: ["IMAGE"]                    | Not available
Auth header               | x-goog-api-key or Bearer                         | Bearer only
Response format           | Gemini native (candidates, parts)                | OpenAI format (choices, message)

Authorizations

x-goog-api-key
string
header
required

Your CometAPI key passed via the x-goog-api-key header. Bearer token authentication (Authorization: Bearer <key>) is also supported.

Path Parameters

model
string
required

The Gemini model ID to use. See the Models page for current Gemini model IDs.

Example:

"gemini-2.5-flash"

operator
enum<string>
required

The operation to perform. Use generateContent for synchronous responses, or streamGenerateContent?alt=sse for Server-Sent Events streaming.

Available options:
generateContent,
streamGenerateContent?alt=sse
Example:

"generateContent"
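Putting the two path parameters together, a request URL for this endpoint can be sketched as a tiny builder (the helper function is illustrative, not part of any SDK):

```python
def endpoint_url(model: str, operator: str,
                 base: str = "https://api.cometapi.com") -> str:
    """Build a URL for the path pattern /v1beta/models/{model}:{operator}."""
    return f"{base}/v1beta/models/{model}:{operator}"

print(endpoint_url("gemini-2.5-flash", "generateContent"))
# → https://api.cometapi.com/v1beta/models/gemini-2.5-flash:generateContent
print(endpoint_url("gemini-2.5-flash", "streamGenerateContent?alt=sse"))
```

Note that the colon separating model and operator is part of the path itself, not a query-string delimiter.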

Body

application/json
contents
object[]
required

The conversation history and current input. For single-turn queries, provide a single item. For multi-turn conversations, include all previous turns.

systemInstruction
object

System instructions that guide the model's behavior across the entire conversation. Text only.

tools
object[]

Tools the model may use to generate responses. Supports function declarations, Google Search, Google Maps, and code execution.

toolConfig
object

Configuration for tool usage, such as function calling mode.

safetySettings
object[]

Safety filter settings. Override default thresholds for specific harm categories.

generationConfig
object

Configuration for model generation behavior including temperature, output length, and response format.

cachedContent
string

The name of cached content to use as context. Format: cachedContents/{id}. See the Gemini context caching documentation for details.

Response

200 - application/json

Successful response. For streaming requests, the response is a stream of SSE events, each containing a GenerateContentResponse JSON object prefixed with data:.

candidates
object[]

The generated response candidates.

promptFeedback
object

Feedback on the prompt, including safety blocking information.

usageMetadata
object

Token usage statistics for the request.

modelVersion
string

The model version that generated this response.

createTime
string

The timestamp when this response was created (ISO 8601 format).

responseId
string

Unique identifier for this response.