POST /v1beta/models/{model}:{operator}
from google import genai

client = genai.Client(
    api_key="<COMETAPI_KEY>",
    http_options={"api_version": "v1beta", "base_url": "https://api.cometapi.com"},
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Explain how AI works in a few words",
)

print(response.text)
{
  "candidates": [
    {
      "content": {
        "role": "<string>",
        "parts": [
          {
            "text": "<string>",
            "functionCall": {
              "name": "<string>",
              "args": {}
            },
            "inlineData": {
              "mimeType": "<string>",
              "data": "<string>"
            },
            "thought": true
          }
        ]
      },
      "finishReason": "STOP",
      "safetyRatings": [
        {
          "category": "<string>",
          "probability": "<string>",
          "blocked": true
        }
      ],
      "citationMetadata": {
        "citationSources": [
          {
            "startIndex": 123,
            "endIndex": 123,
            "uri": "<string>",
            "license": "<string>"
          }
        ]
      },
      "tokenCount": 123,
      "avgLogprobs": 123,
      "groundingMetadata": {
        "groundingChunks": [
          {
            "web": {
              "uri": "<string>",
              "title": "<string>"
            }
          }
        ],
        "groundingSupports": [
          {
            "groundingChunkIndices": [
              123
            ],
            "confidenceScores": [
              123
            ],
            "segment": {
              "startIndex": 123,
              "endIndex": 123,
              "text": "<string>"
            }
          }
        ],
        "webSearchQueries": [
          "<string>"
        ]
      },
      "index": 123
    }
  ],
  "promptFeedback": {
    "blockReason": "SAFETY",
    "safetyRatings": [
      {
        "category": "<string>",
        "probability": "<string>",
        "blocked": true
      }
    ]
  },
  "usageMetadata": {
    "promptTokenCount": 123,
    "candidatesTokenCount": 123,
    "totalTokenCount": 123,
    "trafficType": "<string>",
    "thoughtsTokenCount": 123,
    "promptTokensDetails": [
      {
        "modality": "<string>",
        "tokenCount": 123
      }
    ],
    "candidatesTokensDetails": [
      {
        "modality": "<string>",
        "tokenCount": 123
      }
    ]
  },
  "modelVersion": "<string>",
  "createTime": "<string>",
  "responseId": "<string>"
}

Summary

CometAPI is compatible with the native Gemini API format, giving you full access to Gemini-specific features such as thinking control, grounding with Google Search, native image-generation modalities, and more. Use this endpoint when you need capabilities that are not available through the OpenAI-compatible chat endpoint.

Quickstart

Replace the base URL and API key in any Gemini SDK or HTTP client:

Setting | Google default | CometAPI
Base URL | generativelanguage.googleapis.com | api.cometapi.com
API key | $GEMINI_API_KEY | $COMETAPI_KEY

Both the x-goog-api-key and Authorization: Bearer headers are supported for authentication.
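The mapping above can be sketched as a small helper that builds the request URL and headers. The helper name and structure are illustrative only, not part of any SDK:

```python
# Minimal sketch: build the CometAPI Gemini endpoint URL and auth headers.
# build_request is a hypothetical helper, not part of the google-genai SDK.

BASE_URL = "https://api.cometapi.com"

def build_request(model: str, operator: str, api_key: str, use_bearer: bool = False):
    """Return (url, headers) for a native Gemini call routed through CometAPI."""
    url = f"{BASE_URL}/v1beta/models/{model}:{operator}"
    if use_bearer:
        headers = {"Authorization": f"Bearer {api_key}"}
    else:
        headers = {"x-goog-api-key": api_key}
    headers["Content-Type"] = "application/json"
    return url, headers

url, headers = build_request("gemini-2.5-flash", "generateContent", "sk-test")
print(url)  # https://api.cometapi.com/v1beta/models/gemini-2.5-flash:generateContent
```

Either header style produces the same routing; only the authentication header changes.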

Thinking (Reasoning)

Gemini models can perform internal reasoning before generating a response. How this is controlled depends on the model generation.
Gemini 3 models use thinkingLevel to control reasoning depth. Available levels: MINIMAL, LOW, MEDIUM, HIGH.
curl "https://api.cometapi.com/v1beta/models/gemini-3.1-pro-preview:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "Explain quantum physics simply."}]}],
    "generationConfig": {
      "thinkingConfig": {"thinkingLevel": "LOW"}
    }
  }'
Using thinkingLevel with Gemini 2.5 models (or thinkingBudget with Gemini 3 models) can cause errors. Use the correct parameter for your model version.
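The rule above can be captured in a small illustrative helper (not part of the Gemini SDK) that picks the parameter by model family:

```python
# Hypothetical helper: choose the right thinkingConfig key per model generation.
# Gemini 3 models take thinkingLevel; Gemini 2.5 models take thinkingBudget.

def thinking_config(model: str, value):
    """Return a generationConfig fragment with the correct thinking parameter."""
    if model.startswith("gemini-3"):
        return {"thinkingConfig": {"thinkingLevel": value}}
    if model.startswith("gemini-2.5"):
        return {"thinkingConfig": {"thinkingBudget": value}}
    raise ValueError(f"Unknown model family: {model}")

print(thinking_config("gemini-3.1-pro-preview", "LOW"))
# {'thinkingConfig': {'thinkingLevel': 'LOW'}}
```

The prefix check is a simplification; match against your actual model IDs.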

Streaming

Use streamGenerateContent?alt=sse as the operator to receive Server-Sent Events as the model generates content. Each SSE event contains a data: line with a GenerateContentResponse JSON object.
curl "https://api.cometapi.com/v1beta/models/gemini-2.5-flash:streamGenerateContent?alt=sse" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  --no-buffer \
  -d '{
    "contents": [{"parts": [{"text": "Write a short poem about the stars"}]}]
  }'
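On the client side, each data: line can be decoded independently. A minimal parsing sketch, assuming the candidates/parts shape shown in the response schema:

```python
import json

# Sketch: accumulate text from an SSE body where each event is a
# "data: " line carrying a GenerateContentResponse JSON object.

def extract_text(sse_body: str) -> str:
    """Concatenate the text parts from every data: event in an SSE body."""
    out = []
    for line in sse_body.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank separators and comment lines
        chunk = json.loads(line[len("data: "):])
        for part in chunk["candidates"][0]["content"]["parts"]:
            out.append(part.get("text", ""))
    return "".join(out)

sample = (
    'data: {"candidates":[{"content":{"parts":[{"text":"Twinkle"}]}}]}\n'
    'data: {"candidates":[{"content":{"parts":[{"text":", twinkle"}]}}]}\n'
)
print(extract_text(sample))  # Twinkle, twinkle
```

A production client would read the stream incrementally rather than buffering the whole body.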

System instructions

Guide the model's behavior across the entire conversation with systemInstruction:
curl "https://api.cometapi.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "What is 2+2?"}]}],
    "systemInstruction": {
      "parts": [{"text": "You are a math tutor. Always show your work."}]
    }
  }'

JSON mode

Force structured JSON output with responseMimeType. Optionally, provide a responseSchema for strict schema validation:
curl "https://api.cometapi.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "List 3 planets with their distances from the sun"}]}],
    "generationConfig": {
      "responseMimeType": "application/json"
    }
  }'
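A sketch of a request body that also includes a responseSchema. The schema shape follows the OpenAPI-style subset the Gemini API accepts; the planet fields here are made up for illustration:

```python
import json

# Sketch: a generateContent body combining responseMimeType with a
# responseSchema. Field names mirror the generationConfig keys shown above;
# the "name"/"distance_km" properties are illustrative.

body = {
    "contents": [{"parts": [{"text": "List 3 planets with their distances from the sun"}]}],
    "generationConfig": {
        "responseMimeType": "application/json",
        "responseSchema": {
            "type": "ARRAY",
            "items": {
                "type": "OBJECT",
                "properties": {
                    "name": {"type": "STRING"},
                    "distance_km": {"type": "NUMBER"},
                },
                "required": ["name", "distance_km"],
            },
        },
    },
}

print(json.dumps(body, indent=2))
```

With a schema attached, the model's output is constrained to a JSON array of objects with exactly those keys.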

Google Search Grounding

Enable real-time web search by adding a googleSearch tool:
curl "https://api.cometapi.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $COMETAPI_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "Who won the euro 2024?"}]}],
    "tools": [{"google_search": {}}]
  }'
The response includes groundingMetadata with source URLs and confidence scores.
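A small illustrative helper (not from any SDK) that joins groundingSupports segments to their source URLs through the groundingChunkIndices, using the metadata shape shown in the response schema:

```python
# Sketch: map each grounded text segment to the web sources that support it.

def cite_sources(grounding_metadata: dict):
    """Return (segment_text, [source_urls]) pairs from groundingMetadata."""
    chunks = grounding_metadata.get("groundingChunks", [])
    cited = []
    for support in grounding_metadata.get("groundingSupports", []):
        urls = [chunks[i]["web"]["uri"] for i in support.get("groundingChunkIndices", [])]
        cited.append((support["segment"]["text"], urls))
    return cited

# Hand-written sample in the same shape as the schema above.
meta = {
    "groundingChunks": [{"web": {"uri": "https://example.com/euro2024", "title": "Euro 2024"}}],
    "groundingSupports": [{
        "groundingChunkIndices": [0],
        "segment": {"startIndex": 0, "endIndex": 19, "text": "Spain won Euro 2024"},
    }],
}
print(cite_sources(meta))
# [('Spain won Euro 2024', ['https://example.com/euro2024'])]
```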

Response example

A typical response from the CometAPI Gemini endpoint:
{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [{"text": "Hello"}]
      },
      "finishReason": "STOP",
      "avgLogprobs": -0.0023
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 5,
    "candidatesTokenCount": 1,
    "totalTokenCount": 30,
    "trafficType": "ON_DEMAND",
    "thoughtsTokenCount": 24,
    "promptTokensDetails": [{"modality": "TEXT", "tokenCount": 5}],
    "candidatesTokensDetails": [{"modality": "TEXT", "tokenCount": 1}]
  },
  "modelVersion": "gemini-2.5-flash",
  "createTime": "2026-03-25T04:21:43.756483Z",
  "responseId": "CeynaY3LDtvG4_UP0qaCuQY"
}
The thoughtsTokenCount field in usageMetadata shows how many tokens the model spent on internal reasoning, even when the thinking output is not included in the response.
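In this sample the counts decompose exactly: prompt (5) + candidates (1) + thoughts (24) = total (30). Whether every response decomposes this way is an assumption drawn from this single sample, but it makes a useful sanity check when reconciling billing:

```python
# Sanity check on the usageMetadata from the sample response above.
usage = {
    "promptTokenCount": 5,
    "candidatesTokenCount": 1,
    "totalTokenCount": 30,
    "thoughtsTokenCount": 24,
}

# Thinking tokens count toward the total even though the reasoning
# text itself is not returned in the response body.
accounted = (usage["promptTokenCount"]
             + usage["candidatesTokenCount"]
             + usage.get("thoughtsTokenCount", 0))
assert accounted == usage["totalTokenCount"]  # 5 + 1 + 24 == 30
print(accounted)  # 30
```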

Key differences from the OpenAI-compatible endpoint

Feature | Native Gemini (/v1beta/models/...) | OpenAI-compatible (/v1/chat/completions)
Thinking control | thinkingConfig with thinkingLevel / thinkingBudget | Not available
Google Search grounding | tools: [{"google_search": {}}] | Not available
Google Maps grounding | tools: [{"googleMaps": {}}] | Not available
Image generation modality | responseModalities: ["IMAGE"] | Not available
Auth header | x-goog-api-key or Bearer | Bearer only
Response format | Native Gemini format (candidates, parts) | OpenAI format (choices, message)

Authorizations

x-goog-api-key
string
header
required

Your CometAPI key passed via the x-goog-api-key header. Bearer token authentication (Authorization: Bearer <key>) is also supported.

Path parameters

model
string
required

The Gemini model ID to use. See the Models page for current Gemini model IDs.

Example:

"gemini-2.5-flash"

operator
enum<string>
required

The operation to perform. Use generateContent for synchronous responses, or streamGenerateContent?alt=sse for Server-Sent Events streaming.

Available options:
generateContent,
streamGenerateContent?alt=sse
Example:

"generateContent"

Body

application/json
contents
object[]
required

The conversation history and current input. For single-turn queries, provide a single item. For multi-turn conversations, include all previous turns.

systemInstruction
object

System instructions that guide the model's behavior across the entire conversation. Text only.

tools
object[]

Tools the model may use to generate responses. Supports function declarations, Google Search, Google Maps, and code execution.

toolConfig
object

Configuration for tool usage, such as function calling mode.

safetySettings
object[]

Safety filter settings. Override default thresholds for specific harm categories.

generationConfig
object

Configuration for model generation behavior including temperature, output length, and response format.

cachedContent
string

The name of cached content to use as context. Format: cachedContents/{id}. See the Gemini context caching documentation for details.

Response

200 - application/json

Successful response. For streaming requests, the response is a stream of SSE events, each containing a GenerateContentResponse JSON object prefixed with data:.

candidates
object[]

The generated response candidates.

promptFeedback
object

Feedback on the prompt, including safety blocking information.

usageMetadata
object

Token usage statistics for the request.

modelVersion
string

The model version that generated this response.

createTime
string

The timestamp when this response was created (ISO 8601 format).

responseId
string

Unique identifier for this response.