POST /v1/audio/transcriptions
Python (OpenAI SDK)
from openai import OpenAI

# Point the OpenAI SDK at the CometAPI endpoint
client = OpenAI(
    api_key="<COMETAPI_KEY>",
    base_url="https://api.cometapi.com/v1"
)

# Open the audio file in binary mode; the context manager closes it after upload
with open("audio.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file
    )
print(transcription.text)
{
  "text": "Hello, welcome to CometAPI."
}

Authorization

Authorization
string
header
Required

Bearer token authentication. Use your CometAPI key.

Body

multipart/form-data
file
file
Required

The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.
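As a quick client-side guard, the extension can be checked against this list before uploading; a minimal sketch (the helper name and set are illustrative, not part of the API):

```python
from pathlib import Path

# Formats accepted by the endpoint, per the list above
SUPPORTED_FORMATS = {"flac", "mp3", "mp4", "mpeg", "mpga", "m4a", "ogg", "wav", "webm"}

def is_supported(path: str) -> bool:
    """Return True if the file extension matches a supported audio format."""
    return Path(path).suffix.lstrip(".").lower() in SUPPORTED_FORMATS
```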

model
string
Default: whisper-1
Required

The speech-to-text model to use. Choose a current speech model from the Models page.

language
string

The language of the input audio in ISO-639-1 format (e.g., en, zh, ja). Supplying the language improves accuracy and latency.

prompt
string

Optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.

response_format
enum<string>
Default: json

The output format for the transcription.

Available options:
json,
text,
srt,
verbose_json,
vtt
temperature
number
Default: 0

Sampling temperature between 0 and 1. Higher values produce more random output; lower values are more focused. When set to 0, the model auto-adjusts temperature using log probability.

Required range: 0 <= x <= 1
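A client can enforce this range before sending the request; a minimal sketch (the validator is a hypothetical helper, not part of the API):

```python
def validate_temperature(t: float) -> float:
    """Raise if t falls outside the documented 0 <= x <= 1 range."""
    if not 0 <= t <= 1:
        raise ValueError(f"temperature must be between 0 and 1, got {t}")
    return t
```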

Response

200 - application/json

The transcription result.

text
string
Required

The transcribed text.
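Putting the optional body fields together, a request that sets language, prompt, response_format, and temperature might be sketched as follows. The wrapper function and its defaults are illustrative; only the parameter names come from the body fields above, and omitted optional fields are simply left out of the call:

```python
def transcribe(client, path, *, language=None, prompt=None,
               response_format="json", temperature=0):
    """Upload an audio file with the optional body fields documented above.

    `client` is an OpenAI-SDK-style client configured with the CometAPI
    base_url; optional fields set to None are not sent at all.
    """
    extra = {k: v for k, v in {"language": language, "prompt": prompt}.items()
             if v is not None}
    with open(path, "rb") as f:
        return client.audio.transcriptions.create(
            model="whisper-1",
            file=f,
            response_format=response_format,
            temperature=temperature,
            **extra,
        )

# Usage (assumes a valid key):
# client = OpenAI(api_key="<COMETAPI_KEY>", base_url="https://api.cometapi.com/v1")
# transcribe(client, "audio.mp3", language="en", response_format="srt")
```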