Add adaptation for the orbiting reconnaissance scenario
Binary files not shown.
@@ -72,7 +72,7 @@ class Speech(SyncAPIResource):
 
 model:
 One of the available [TTS models](https://platform.openai.com/docs/models#tts):
-`tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.
+`tts-1`, `tts-1-hd`, `gpt-4o-mini-tts`, or `gpt-4o-mini-tts-2025-12-15`.
 
 voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
 `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer`, and
@@ -168,7 +168,7 @@ class AsyncSpeech(AsyncAPIResource):
 
 model:
 One of the available [TTS models](https://platform.openai.com/docs/models#tts):
-`tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.
+`tts-1`, `tts-1-hd`, `gpt-4o-mini-tts`, or `gpt-4o-mini-tts-2025-12-15`.
 
 voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
 `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer`, and
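The two hunks above only add the dated `gpt-4o-mini-tts-2025-12-15` snapshot to the documented model list. A minimal usage sketch against this SDK (output path and voice choice are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Synthesize speech with the newly documented dated snapshot.
with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts-2025-12-15",
    voice="coral",
    input="The quick brown fox jumped over the lazy dog.",
) as response:
    response.stream_to_file("speech.mp3")
```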
@@ -91,8 +91,9 @@ class Transcriptions(SyncAPIResource):
 flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
 
 model: ID of the model to use. The options are `gpt-4o-transcribe`,
-`gpt-4o-mini-transcribe`, and `whisper-1` (which is powered by our open source
-Whisper V2 model).
+`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
+(which is powered by our open source Whisper V2 model), and
+`gpt-4o-transcribe-diarize`.
 
 chunking_strategy: Controls how the audio is cut into chunks. When set to `"auto"`, the server
 first normalizes loudness and then uses voice activity detection (VAD) to choose
@@ -102,8 +103,9 @@ class Transcriptions(SyncAPIResource):
 include: Additional information to include in the transcription response. `logprobs` will
 return the log probabilities of the tokens in the response to understand the
 model's confidence in the transcription. `logprobs` only works with
-response_format set to `json` and only with the models `gpt-4o-transcribe` and
-`gpt-4o-mini-transcribe`.
+response_format set to `json` and only with the models `gpt-4o-transcribe`,
+`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
+not supported when using `gpt-4o-transcribe-diarize`.
 
 language: The language of the input audio. Supplying the input language in
 [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) (e.g. `en`)
@@ -239,8 +241,9 @@ class Transcriptions(SyncAPIResource):
 flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
 
 model: ID of the model to use. The options are `gpt-4o-transcribe`,
-`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
-Whisper V2 model), and `gpt-4o-transcribe-diarize`.
+`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
+(which is powered by our open source Whisper V2 model), and
+`gpt-4o-transcribe-diarize`.
 
 stream: If set to true, the model response data will be streamed to the client as it is
 generated using
@@ -261,9 +264,9 @@ class Transcriptions(SyncAPIResource):
 include: Additional information to include in the transcription response. `logprobs` will
 return the log probabilities of the tokens in the response to understand the
 model's confidence in the transcription. `logprobs` only works with
-response_format set to `json` and only with the models `gpt-4o-transcribe` and
-`gpt-4o-mini-transcribe`. This field is not supported when using
-`gpt-4o-transcribe-diarize`.
+response_format set to `json` and only with the models `gpt-4o-transcribe`,
+`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
+not supported when using `gpt-4o-transcribe-diarize`.
 
 known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
 `known_speaker_references[]`. Each entry should be a short identifier (for
@@ -346,8 +349,9 @@ class Transcriptions(SyncAPIResource):
 flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
 
 model: ID of the model to use. The options are `gpt-4o-transcribe`,
-`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
-Whisper V2 model), and `gpt-4o-transcribe-diarize`.
+`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
+(which is powered by our open source Whisper V2 model), and
+`gpt-4o-transcribe-diarize`.
 
 stream: If set to true, the model response data will be streamed to the client as it is
 generated using
@@ -368,9 +372,9 @@ class Transcriptions(SyncAPIResource):
 include: Additional information to include in the transcription response. `logprobs` will
 return the log probabilities of the tokens in the response to understand the
 model's confidence in the transcription. `logprobs` only works with
-response_format set to `json` and only with the models `gpt-4o-transcribe` and
-`gpt-4o-mini-transcribe`. This field is not supported when using
-`gpt-4o-transcribe-diarize`.
+response_format set to `json` and only with the models `gpt-4o-transcribe`,
+`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
+not supported when using `gpt-4o-transcribe-diarize`.
 
 known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
 `known_speaker_references[]`. Each entry should be a short identifier (for
@@ -535,8 +539,9 @@ class AsyncTranscriptions(AsyncAPIResource):
 flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
 
 model: ID of the model to use. The options are `gpt-4o-transcribe`,
-`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
-Whisper V2 model), and `gpt-4o-transcribe-diarize`.
+`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
+(which is powered by our open source Whisper V2 model), and
+`gpt-4o-transcribe-diarize`.
 
 chunking_strategy: Controls how the audio is cut into chunks. When set to `"auto"`, the server
 first normalizes loudness and then uses voice activity detection (VAD) to choose
@@ -548,9 +553,9 @@ class AsyncTranscriptions(AsyncAPIResource):
 include: Additional information to include in the transcription response. `logprobs` will
 return the log probabilities of the tokens in the response to understand the
 model's confidence in the transcription. `logprobs` only works with
-response_format set to `json` and only with the models `gpt-4o-transcribe` and
-`gpt-4o-mini-transcribe`. This field is not supported when using
-`gpt-4o-transcribe-diarize`.
+response_format set to `json` and only with the models `gpt-4o-transcribe`,
+`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
+not supported when using `gpt-4o-transcribe-diarize`.
 
 known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
 `known_speaker_references[]`. Each entry should be a short identifier (for
@@ -679,8 +684,9 @@ class AsyncTranscriptions(AsyncAPIResource):
 flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
 
 model: ID of the model to use. The options are `gpt-4o-transcribe`,
-`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
-Whisper V2 model), and `gpt-4o-transcribe-diarize`.
+`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
+(which is powered by our open source Whisper V2 model), and
+`gpt-4o-transcribe-diarize`.
 
 stream: If set to true, the model response data will be streamed to the client as it is
 generated using
@@ -701,9 +707,9 @@ class AsyncTranscriptions(AsyncAPIResource):
 include: Additional information to include in the transcription response. `logprobs` will
 return the log probabilities of the tokens in the response to understand the
 model's confidence in the transcription. `logprobs` only works with
-response_format set to `json` and only with the models `gpt-4o-transcribe` and
-`gpt-4o-mini-transcribe`. This field is not supported when using
-`gpt-4o-transcribe-diarize`.
+response_format set to `json` and only with the models `gpt-4o-transcribe`,
+`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
+not supported when using `gpt-4o-transcribe-diarize`.
 
 known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
 `known_speaker_references[]`. Each entry should be a short identifier (for
@@ -786,8 +792,9 @@ class AsyncTranscriptions(AsyncAPIResource):
 flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
 
 model: ID of the model to use. The options are `gpt-4o-transcribe`,
-`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
-Whisper V2 model), and `gpt-4o-transcribe-diarize`.
+`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
+(which is powered by our open source Whisper V2 model), and
+`gpt-4o-transcribe-diarize`.
 
 stream: If set to true, the model response data will be streamed to the client as it is
 generated using
@@ -808,9 +815,9 @@ class AsyncTranscriptions(AsyncAPIResource):
 include: Additional information to include in the transcription response. `logprobs` will
 return the log probabilities of the tokens in the response to understand the
 model's confidence in the transcription. `logprobs` only works with
-response_format set to `json` and only with the models `gpt-4o-transcribe` and
-`gpt-4o-mini-transcribe`. This field is not supported when using
-`gpt-4o-transcribe-diarize`.
+response_format set to `json` and only with the models `gpt-4o-transcribe`,
+`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
+not supported when using `gpt-4o-transcribe-diarize`.
 
 known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
 `known_speaker_references[]`. Each entry should be a short identifier (for
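The transcription hunks all make the same documentation change: `gpt-4o-mini-transcribe-2025-12-15` joins the model list, and the `logprobs` caveats are restated. A sketch of a request that exercises both notes (the audio file name is illustrative):

```python
from openai import OpenAI

client = OpenAI()

with open("meeting.m4a", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-mini-transcribe-2025-12-15",
        file=audio_file,
        # Per the docstring: `logprobs` requires `response_format="json"`
        # and is not supported by `gpt-4o-transcribe-diarize`.
        response_format="json",
        include=["logprobs"],
    )
print(transcript.text)
```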
Binary files not shown.
@@ -98,9 +98,9 @@ class Assistants(SyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -108,6 +108,7 @@ class Assistants(SyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -312,9 +313,9 @@ class Assistants(SyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -322,6 +323,7 @@ class Assistants(SyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -565,9 +567,9 @@ class AsyncAssistants(AsyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -575,6 +577,7 @@ class AsyncAssistants(AsyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -779,9 +782,9 @@ class AsyncAssistants(AsyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -789,6 +792,7 @@ class AsyncAssistants(AsyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
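The assistants hunks document the new `xhigh` reasoning effort. A hedged sketch; the model name is a placeholder, since the docstring only says `xhigh` is accepted by models after `gpt-5.1-codex-max`:

```python
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    model="gpt-5.1-codex-max",  # placeholder; pick a model documented to accept `xhigh`
    name="Math tutor",
    instructions="Answer questions step by step.",
    reasoning_effort="xhigh",
)
print(assistant.id)
```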
Binary files not shown.
@@ -169,9 +169,9 @@ class Runs(SyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -179,6 +179,7 @@ class Runs(SyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -330,9 +331,9 @@ class Runs(SyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -340,6 +341,7 @@ class Runs(SyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -487,9 +489,9 @@ class Runs(SyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -497,6 +499,7 @@ class Runs(SyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -1620,9 +1623,9 @@ class AsyncRuns(AsyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -1630,6 +1633,7 @@ class AsyncRuns(AsyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -1781,9 +1785,9 @@ class AsyncRuns(AsyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -1791,6 +1795,7 @@ class AsyncRuns(AsyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -1938,9 +1943,9 @@ class AsyncRuns(AsyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -1948,6 +1953,7 @@ class AsyncRuns(AsyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: Specifies the format that the model must output. Compatible with
 [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
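Runs accept the same `reasoning_effort` values, so effort can also be set per run rather than on the assistant. A sketch with a placeholder assistant ID:

```python
from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_abc123",  # placeholder
    reasoning_effort="low",      # trade reasoning depth for latency on this run only
)
print(run.status)
```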
Binary files not shown.
@@ -411,9 +411,9 @@ class Completions(SyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -421,6 +421,7 @@ class Completions(SyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: An object specifying the format that the model must output.
 
@@ -721,9 +722,9 @@ class Completions(SyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -731,6 +732,7 @@ class Completions(SyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: An object specifying the format that the model must output.
 
@@ -1022,9 +1024,9 @@ class Completions(SyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -1032,6 +1034,7 @@ class Completions(SyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: An object specifying the format that the model must output.
 
@@ -1894,9 +1897,9 @@ class AsyncCompletions(AsyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -1904,6 +1907,7 @@ class AsyncCompletions(AsyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: An object specifying the format that the model must output.
 
@@ -2204,9 +2208,9 @@ class AsyncCompletions(AsyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -2214,6 +2218,7 @@ class AsyncCompletions(AsyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: An object specifying the format that the model must output.
 
@@ -2505,9 +2510,9 @@ class AsyncCompletions(AsyncAPIResource):
 
 reasoning_effort: Constrains effort on reasoning for
 [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-reasoning effort can result in faster responses and fewer tokens used on
-reasoning in a response.
+supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+Reducing reasoning effort can result in faster responses and fewer tokens used
+on reasoning in a response.
 
 - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
 reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
@@ -2515,6 +2520,7 @@ class AsyncCompletions(AsyncAPIResource):
 - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
 support `none`.
 - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
 
 response_format: An object specifying the format that the model must output.
 
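For chat completions the same values apply. A sketch using `gpt-5.1`, which per the docstring defaults to `none` and accepts `low`, `medium`, and `high` (but not `xhigh`):

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-5.1",
    reasoning_effort="medium",  # override the `none` default for gpt-5.1
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(completion.choices[0].message.content)
```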
Binary files not shown.
@@ -60,6 +60,7 @@ class Containers(SyncAPIResource):
 name: str,
 expires_after: container_create_params.ExpiresAfter | Omit = omit,
 file_ids: SequenceNotStr[str] | Omit = omit,
+memory_limit: Literal["1g", "4g", "16g", "64g"] | Omit = omit,
 # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
 # The extra values given here take precedence over values defined on the client or passed to this method.
 extra_headers: Headers | None = None,
@@ -77,6 +78,8 @@ class Containers(SyncAPIResource):
 
 file_ids: IDs of files to copy to the container.
 
+memory_limit: Optional memory limit for the container. Defaults to "1g".
+
 extra_headers: Send extra headers
 
 extra_query: Add additional query parameters to the request
@@ -92,6 +95,7 @@ class Containers(SyncAPIResource):
 "name": name,
 "expires_after": expires_after,
 "file_ids": file_ids,
+"memory_limit": memory_limit,
 },
 container_create_params.ContainerCreateParams,
 ),
@@ -256,6 +260,7 @@ class AsyncContainers(AsyncAPIResource):
 name: str,
 expires_after: container_create_params.ExpiresAfter | Omit = omit,
 file_ids: SequenceNotStr[str] | Omit = omit,
+memory_limit: Literal["1g", "4g", "16g", "64g"] | Omit = omit,
 # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
 # The extra values given here take precedence over values defined on the client or passed to this method.
 extra_headers: Headers | None = None,
@@ -273,6 +278,8 @@ class AsyncContainers(AsyncAPIResource):
 
 file_ids: IDs of files to copy to the container.
 
+memory_limit: Optional memory limit for the container. Defaults to "1g".
+
 extra_headers: Send extra headers
 
 extra_query: Add additional query parameters to the request
@@ -288,6 +295,7 @@ class AsyncContainers(AsyncAPIResource):
 "name": name,
 "expires_after": expires_after,
 "file_ids": file_ids,
+"memory_limit": memory_limit,
 },
 container_create_params.ContainerCreateParams,
 ),
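The containers hunks add an optional `memory_limit` argument that is passed straight through to the request body. A sketch (container name and file ID are illustrative):

```python
from openai import OpenAI

client = OpenAI()

container = client.containers.create(
    name="analysis-sandbox",
    file_ids=["file-abc123"],  # placeholder file ID
    memory_limit="4g",         # optional; the API defaults to "1g"
)
print(container.id)
```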
Binary files not shown.
@@ -151,19 +151,20 @@ class Images(SyncAPIResource):
 Args:
 image: The image(s) to edit. Must be a supported image file or an array of images.
 
-For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
+For the GPT image models (`gpt-image-1`, `gpt-image-1-mini`, and
+`gpt-image-1.5`), each image should be a `png`, `webp`, or `jpg` file less than
 50MB. You can provide up to 16 images.
 
 For `dall-e-2`, you can only provide one image, and it should be a square `png`
 file less than 4MB.
 
 prompt: A text description of the desired image(s). The maximum length is 1000
-characters for `dall-e-2`, and 32000 characters for `gpt-image-1`.
+characters for `dall-e-2`, and 32000 characters for the GPT image models.
 
 background: Allows to set transparency for the background of the generated image(s). This
-parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
-`opaque` or `auto` (default value). When `auto` is used, the model will
-automatically determine the best background for the image.
+parameter is only supported for the GPT image models. Must be one of
+`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
+model will automatically determine the best background for the image.
 
 If `transparent`, the output format needs to support transparency, so it should
 be set to either `png` (default value) or `webp`.
@@ -178,18 +179,18 @@ class Images(SyncAPIResource):
 the mask will be applied on the first image. Must be a valid PNG file, less than
 4MB, and have the same dimensions as `image`.
 
-model: The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are
-supported. Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1`
-is used.
+model: The model to use for image generation. Only `dall-e-2` and the GPT image models
+are supported. Defaults to `dall-e-2` unless a parameter specific to the GPT
+image models is used.
 
 n: The number of images to generate. Must be between 1 and 10.
 
 output_compression: The compression level (0-100%) for the generated images. This parameter is only
-supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
+supported for the GPT image models with the `webp` or `jpeg` output formats, and
 defaults to 100.
 
 output_format: The format in which the generated images are returned. This parameter is only
-supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
+supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`. The
 default value is `png`.
 
 partial_images: The number of partial images to generate. This parameter is used for streaming
@@ -200,17 +201,17 @@ class Images(SyncAPIResource):
 are generated if the full image is generated more quickly.
 
 quality: The quality of the image that will be generated. `high`, `medium` and `low` are
-only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
-Defaults to `auto`.
+only supported for the GPT image models. `dall-e-2` only supports `standard`
+quality. Defaults to `auto`.
 
 response_format: The format in which the generated images are returned. Must be one of `url` or
 `b64_json`. URLs are only valid for 60 minutes after the image has been
-generated. This parameter is only supported for `dall-e-2`, as `gpt-image-1`
-will always return base64-encoded images.
+generated. This parameter is only supported for `dall-e-2`, as the GPT image
+models always return base64-encoded images.
 
 size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
-(landscape), `1024x1536` (portrait), or `auto` (default value) for
-`gpt-image-1`, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
+(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
+models, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
 
 stream: Edit the image in streaming mode. Defaults to `false`. See the
 [Image generation guide](https://platform.openai.com/docs/guides/image-generation)
@@ -264,23 +265,24 @@ class Images(SyncAPIResource):
 Args:
 image: The image(s) to edit. Must be a supported image file or an array of images.
 
-For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
+For the GPT image models (`gpt-image-1`, `gpt-image-1-mini`, and
+`gpt-image-1.5`), each image should be a `png`, `webp`, or `jpg` file less than
 50MB. You can provide up to 16 images.
 
 For `dall-e-2`, you can only provide one image, and it should be a square `png`
 file less than 4MB.
 
 prompt: A text description of the desired image(s). The maximum length is 1000
-characters for `dall-e-2`, and 32000 characters for `gpt-image-1`.
+characters for `dall-e-2`, and 32000 characters for the GPT image models.
 
 stream: Edit the image in streaming mode. Defaults to `false`. See the
 [Image generation guide](https://platform.openai.com/docs/guides/image-generation)
 for more information.
 
 background: Allows to set transparency for the background of the generated image(s). This
-parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
-`opaque` or `auto` (default value). When `auto` is used, the model will
-automatically determine the best background for the image.
+parameter is only supported for the GPT image models. Must be one of
+`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
+model will automatically determine the best background for the image.
 
 If `transparent`, the output format needs to support transparency, so it should
 be set to either `png` (default value) or `webp`.
@@ -295,18 +297,18 @@ class Images(SyncAPIResource):
 the mask will be applied on the first image. Must be a valid PNG file, less than
 4MB, and have the same dimensions as `image`.
 
-model: The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are
-supported. Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1`
-is used.
+model: The model to use for image generation. Only `dall-e-2` and the GPT image models
+are supported. Defaults to `dall-e-2` unless a parameter specific to the GPT
+image models is used.
 
 n: The number of images to generate. Must be between 1 and 10.
 
 output_compression: The compression level (0-100%) for the generated images. This parameter is only
-supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
+supported for the GPT image models with the `webp` or `jpeg` output formats, and
 defaults to 100.
 
 output_format: The format in which the generated images are returned. This parameter is only
-supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
+supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`. The
 default value is `png`.
 
 partial_images: The number of partial images to generate. This parameter is used for streaming
@@ -317,17 +319,17 @@ class Images(SyncAPIResource):
 are generated if the full image is generated more quickly.
 
 quality: The quality of the image that will be generated. `high`, `medium` and `low` are
-only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
-Defaults to `auto`.
+only supported for the GPT image models. `dall-e-2` only supports `standard`
+quality. Defaults to `auto`.
 
 response_format: The format in which the generated images are returned. Must be one of `url` or
 `b64_json`. URLs are only valid for 60 minutes after the image has been
-generated. This parameter is only supported for `dall-e-2`, as `gpt-image-1`
-will always return base64-encoded images.
+generated. This parameter is only supported for `dall-e-2`, as the GPT image
+models always return base64-encoded images.
 
 size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
-(landscape), `1024x1536` (portrait), or `auto` (default value) for
-`gpt-image-1`, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
+(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
+models, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
 
 user: A unique identifier representing your end-user, which can help OpenAI to monitor
 and detect abuse.
@@ -377,23 +379,24 @@ class Images(SyncAPIResource):
 Args:
 image: The image(s) to edit. Must be a supported image file or an array of images.
 
-For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
+For the GPT image models (`gpt-image-1`, `gpt-image-1-mini`, and
+`gpt-image-1.5`), each image should be a `png`, `webp`, or `jpg` file less than
 50MB. You can provide up to 16 images.
 
 For `dall-e-2`, you can only provide one image, and it should be a square `png`
 file less than 4MB.
 
 prompt: A text description of the desired image(s). The maximum length is 1000
-characters for `dall-e-2`, and 32000 characters for `gpt-image-1`.
+characters for `dall-e-2`, and 32000 characters for the GPT image models.
 
 stream: Edit the image in streaming mode. Defaults to `false`. See the
 [Image generation guide](https://platform.openai.com/docs/guides/image-generation)
 for more information.
 
 background: Allows to set transparency for the background of the generated image(s). This
-parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
-`opaque` or `auto` (default value). When `auto` is used, the model will
-automatically determine the best background for the image.
+parameter is only supported for the GPT image models. Must be one of
+`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
+model will automatically determine the best background for the image.
 
 If `transparent`, the output format needs to support transparency, so it should
 be set to either `png` (default value) or `webp`.
@@ -408,18 +411,18 @@ class Images(SyncAPIResource):
 the mask will be applied on the first image. Must be a valid PNG file, less than
 4MB, and have the same dimensions as `image`.
 
-model: The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are
-supported. Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1`
-is used.
+model: The model to use for image generation. Only `dall-e-2` and the GPT image models
+are supported. Defaults to `dall-e-2` unless a parameter specific to the GPT
+image models is used.
 
 n: The number of images to generate. Must be between 1 and 10.
 
 output_compression: The compression level (0-100%) for the generated images. This parameter is only
-supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
+supported for the GPT image models with the `webp` or `jpeg` output formats, and
 defaults to 100.
 
 output_format: The format in which the generated images are returned. This parameter is only
-supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
+supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`. The
 default value is `png`.
 
 partial_images: The number of partial images to generate. This parameter is used for streaming
@@ -430,17 +433,17 @@ class Images(SyncAPIResource):
 are generated if the full image is generated more quickly.
 
 quality: The quality of the image that will be generated. `high`, `medium` and `low` are
-only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
-Defaults to `auto`.
+only supported for the GPT image models. `dall-e-2` only supports `standard`
+quality. Defaults to `auto`.
 
 response_format: The format in which the generated images are returned. Must be one of `url` or
 `b64_json`. URLs are only valid for 60 minutes after the image has been
-generated. This parameter is only supported for `dall-e-2`, as `gpt-image-1`
-will always return base64-encoded images.
+generated. This parameter is only supported for `dall-e-2`, as the GPT image
+models always return base64-encoded images.
 
 size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
-(landscape), `1024x1536` (portrait), or `auto` (default value) for
-`gpt-image-1`, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
+(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
+models, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
 
 user: A unique identifier representing your end-user, which can help OpenAI to monitor
 and detect abuse.
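The edit hunks generalize the docs from `gpt-image-1` to the whole GPT image family. A sketch of a multi-image edit with one of the newly documented models (file names are illustrative):

```python
import base64

from openai import OpenAI

client = OpenAI()

with open("product.png", "rb") as product, open("backdrop.png", "rb") as backdrop:
    result = client.images.edit(
        model="gpt-image-1-mini",
        image=[product, backdrop],  # GPT image models accept up to 16 images
        prompt="Place the product on the backdrop with a soft shadow.",
        output_format="webp",
        output_compression=80,  # webp/jpeg only, per the docstring
    )

# GPT image models always return base64-encoded images.
with open("edited.webp", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```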
@@ -555,33 +558,34 @@ class Images(SyncAPIResource):
 
 Args:
 prompt: A text description of the desired image(s). The maximum length is 32000
-characters for `gpt-image-1`, 1000 characters for `dall-e-2` and 4000 characters
-for `dall-e-3`.
+characters for the GPT image models, 1000 characters for `dall-e-2` and 4000
+characters for `dall-e-3`.
 
 background: Allows to set transparency for the background of the generated image(s). This
-parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
-`opaque` or `auto` (default value). When `auto` is used, the model will
-automatically determine the best background for the image.
+parameter is only supported for the GPT image models. Must be one of
+`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
+model will automatically determine the best background for the image.
 
 If `transparent`, the output format needs to support transparency, so it should
 be set to either `png` (default value) or `webp`.
 
-model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or
-`gpt-image-1`. Defaults to `dall-e-2` unless a parameter specific to
-`gpt-image-1` is used.
+model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or a GPT
+image model (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`). Defaults to
+`dall-e-2` unless a parameter specific to the GPT image models is used.
 
-moderation: Control the content-moderation level for images generated by `gpt-image-1`. Must
-be either `low` for less restrictive filtering or `auto` (default value).
+moderation: Control the content-moderation level for images generated by the GPT image
+models. Must be either `low` for less restrictive filtering or `auto` (default
+value).
 
 n: The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only
 `n=1` is supported.
 
 output_compression: The compression level (0-100%) for the generated images. This parameter is only
-supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
+supported for the GPT image models with the `webp` or `jpeg` output formats, and
 defaults to 100.
 
 output_format: The format in which the generated images are returned. This parameter is only
-supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`.
+supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`.
 
 partial_images: The number of partial images to generate. This parameter is used for streaming
 responses that return partial images. Value must be between 0 and 3. When set to
@@ -594,23 +598,23 @@ class Images(SyncAPIResource):
 
 - `auto` (default value) will automatically select the best quality for the
 given model.
-- `high`, `medium` and `low` are supported for `gpt-image-1`.
+- `high`, `medium` and `low` are supported for the GPT image models.
 - `hd` and `standard` are supported for `dall-e-3`.
 - `standard` is the only option for `dall-e-2`.
 
 response_format: The format in which generated images with `dall-e-2` and `dall-e-3` are
 returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes
-after the image has been generated. This parameter isn't supported for
-`gpt-image-1` which will always return base64-encoded images.
+after the image has been generated. This parameter isn't supported for the GPT
+image models, which always return base64-encoded images.
 
 size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
-(landscape), `1024x1536` (portrait), or `auto` (default value) for
-`gpt-image-1`, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and
-one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.
+(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
+models, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and one of
+`1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.
 
 stream: Generate the image in streaming mode. Defaults to `false`. See the
 [Image generation guide](https://platform.openai.com/docs/guides/image-generation)
-for more information. This parameter is only supported for `gpt-image-1`.
+for more information. This parameter is only supported for the GPT image models.
 
 style: The style of the generated images. This parameter is only supported for
 `dall-e-3`. Must be one of `vivid` or `natural`. Vivid causes the model to lean
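Generation follows the same pattern; using any GPT-image-specific parameter (such as the size or quality values below) selects the GPT-image defaults described above. A sketch:

```python
import base64

from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1.5",
    prompt="A watercolor lighthouse at dawn",
    size="1536x1024",  # landscape size, GPT image models only
    quality="medium",  # `high`/`medium`/`low` are GPT-image-only values
    moderation="low",
)

# GPT image models always return base64-encoded images.
with open("lighthouse.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```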
@@ -665,37 +669,38 @@ class Images(SyncAPIResource):
|
||||
|
||||
Args:
prompt: A text description of the desired image(s). The maximum length is 32000
characters for `gpt-image-1`, 1000 characters for `dall-e-2` and 4000 characters
for `dall-e-3`.
characters for the GPT image models, 1000 characters for `dall-e-2` and 4000
characters for `dall-e-3`.

stream: Generate the image in streaming mode. Defaults to `false`. See the
[Image generation guide](https://platform.openai.com/docs/guides/image-generation)
for more information. This parameter is only supported for `gpt-image-1`.
for more information. This parameter is only supported for the GPT image models.

background: Allows to set transparency for the background of the generated image(s). This
parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
`opaque` or `auto` (default value). When `auto` is used, the model will
automatically determine the best background for the image.
parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.

If `transparent`, the output format needs to support transparency, so it should
be set to either `png` (default value) or `webp`.

model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or
`gpt-image-1`. Defaults to `dall-e-2` unless a parameter specific to
`gpt-image-1` is used.
model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or a GPT
image model (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`). Defaults to
`dall-e-2` unless a parameter specific to the GPT image models is used.

moderation: Control the content-moderation level for images generated by `gpt-image-1`. Must
be either `low` for less restrictive filtering or `auto` (default value).
moderation: Control the content-moderation level for images generated by the GPT image
models. Must be either `low` for less restrictive filtering or `auto` (default
value).

n: The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only
`n=1` is supported.

output_compression: The compression level (0-100%) for the generated images. This parameter is only
supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
supported for the GPT image models with the `webp` or `jpeg` output formats, and
defaults to 100.

output_format: The format in which the generated images are returned. This parameter is only
supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`.
supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`.

partial_images: The number of partial images to generate. This parameter is used for streaming
responses that return partial images. Value must be between 0 and 3. When set to
@@ -708,19 +713,19 @@ class Images(SyncAPIResource):

- `auto` (default value) will automatically select the best quality for the
given model.
- `high`, `medium` and `low` are supported for `gpt-image-1`.
- `high`, `medium` and `low` are supported for the GPT image models.
- `hd` and `standard` are supported for `dall-e-3`.
- `standard` is the only option for `dall-e-2`.

response_format: The format in which generated images with `dall-e-2` and `dall-e-3` are
returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes
after the image has been generated. This parameter isn't supported for
`gpt-image-1` which will always return base64-encoded images.
after the image has been generated. This parameter isn't supported for the GPT
image models, which always return base64-encoded images.

size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
(landscape), `1024x1536` (portrait), or `auto` (default value) for
`gpt-image-1`, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and
one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.
(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
models, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and one of
`1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.

style: The style of the generated images. This parameter is only supported for
`dall-e-3`. Must be one of `vivid` or `natural`. Vivid causes the model to lean
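
For orientation, a minimal sketch of the widened generate surface from the SDK; the model choice, prompt, and printed field are illustrative, not part of this diff:

from openai import OpenAI

client = OpenAI()

# Any GPT image model accepts the same parameters; `gpt-image-1.5` is
# used here purely as an example.
result = client.images.generate(
    model="gpt-image-1.5",
    prompt="A watercolor fox in a snowy forest",
    background="transparent",
    output_format="png",
    size="1024x1024",
)
image_b64 = result.data[0].b64_json  # GPT image models always return base64
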
@@ -775,37 +780,38 @@ class Images(SyncAPIResource):

Args:
prompt: A text description of the desired image(s). The maximum length is 32000
characters for `gpt-image-1`, 1000 characters for `dall-e-2` and 4000 characters
for `dall-e-3`.
characters for the GPT image models, 1000 characters for `dall-e-2` and 4000
characters for `dall-e-3`.

stream: Generate the image in streaming mode. Defaults to `false`. See the
[Image generation guide](https://platform.openai.com/docs/guides/image-generation)
for more information. This parameter is only supported for `gpt-image-1`.
for more information. This parameter is only supported for the GPT image models.

background: Allows to set transparency for the background of the generated image(s). This
parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
`opaque` or `auto` (default value). When `auto` is used, the model will
automatically determine the best background for the image.
parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.

If `transparent`, the output format needs to support transparency, so it should
be set to either `png` (default value) or `webp`.

model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or
`gpt-image-1`. Defaults to `dall-e-2` unless a parameter specific to
`gpt-image-1` is used.
model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or a GPT
image model (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`). Defaults to
`dall-e-2` unless a parameter specific to the GPT image models is used.

moderation: Control the content-moderation level for images generated by `gpt-image-1`. Must
be either `low` for less restrictive filtering or `auto` (default value).
moderation: Control the content-moderation level for images generated by the GPT image
models. Must be either `low` for less restrictive filtering or `auto` (default
value).

n: The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only
`n=1` is supported.

output_compression: The compression level (0-100%) for the generated images. This parameter is only
supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
supported for the GPT image models with the `webp` or `jpeg` output formats, and
defaults to 100.

output_format: The format in which the generated images are returned. This parameter is only
supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`.
supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`.

partial_images: The number of partial images to generate. This parameter is used for streaming
responses that return partial images. Value must be between 0 and 3. When set to
@@ -818,19 +824,19 @@ class Images(SyncAPIResource):

- `auto` (default value) will automatically select the best quality for the
given model.
- `high`, `medium` and `low` are supported for `gpt-image-1`.
- `high`, `medium` and `low` are supported for the GPT image models.
- `hd` and `standard` are supported for `dall-e-3`.
- `standard` is the only option for `dall-e-2`.

response_format: The format in which generated images with `dall-e-2` and `dall-e-3` are
returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes
after the image has been generated. This parameter isn't supported for
`gpt-image-1` which will always return base64-encoded images.
after the image has been generated. This parameter isn't supported for the GPT
image models, which always return base64-encoded images.

size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
(landscape), `1024x1536` (portrait), or `auto` (default value) for
`gpt-image-1`, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and
one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.
(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
models, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and one of
`1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.

style: The style of the generated images. This parameter is only supported for
`dall-e-3`. Must be one of `vivid` or `natural`. Vivid causes the model to lean
@@ -1038,19 +1044,20 @@ class AsyncImages(AsyncAPIResource):
Args:
image: The image(s) to edit. Must be a supported image file or an array of images.

For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
For the GPT image models (`gpt-image-1`, `gpt-image-1-mini`, and
`gpt-image-1.5`), each image should be a `png`, `webp`, or `jpg` file less than
50MB. You can provide up to 16 images.

For `dall-e-2`, you can only provide one image, and it should be a square `png`
file less than 4MB.

prompt: A text description of the desired image(s). The maximum length is 1000
characters for `dall-e-2`, and 32000 characters for `gpt-image-1`.
characters for `dall-e-2`, and 32000 characters for the GPT image models.

background: Allows to set transparency for the background of the generated image(s). This
parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
`opaque` or `auto` (default value). When `auto` is used, the model will
automatically determine the best background for the image.
parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.

If `transparent`, the output format needs to support transparency, so it should
be set to either `png` (default value) or `webp`.
@@ -1065,18 +1072,18 @@ class AsyncImages(AsyncAPIResource):
the mask will be applied on the first image. Must be a valid PNG file, less than
4MB, and have the same dimensions as `image`.

model: The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are
supported. Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1`
is used.
model: The model to use for image generation. Only `dall-e-2` and the GPT image models
are supported. Defaults to `dall-e-2` unless a parameter specific to the GPT
image models is used.

n: The number of images to generate. Must be between 1 and 10.

output_compression: The compression level (0-100%) for the generated images. This parameter is only
supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
supported for the GPT image models with the `webp` or `jpeg` output formats, and
defaults to 100.

output_format: The format in which the generated images are returned. This parameter is only
supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`. The
default value is `png`.

partial_images: The number of partial images to generate. This parameter is used for streaming
@@ -1087,17 +1094,17 @@ class AsyncImages(AsyncAPIResource):
are generated if the full image is generated more quickly.

quality: The quality of the image that will be generated. `high`, `medium` and `low` are
only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
Defaults to `auto`.
only supported for the GPT image models. `dall-e-2` only supports `standard`
quality. Defaults to `auto`.

response_format: The format in which the generated images are returned. Must be one of `url` or
`b64_json`. URLs are only valid for 60 minutes after the image has been
generated. This parameter is only supported for `dall-e-2`, as `gpt-image-1`
will always return base64-encoded images.
generated. This parameter is only supported for `dall-e-2`, as the GPT image
models always return base64-encoded images.

size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
(landscape), `1024x1536` (portrait), or `auto` (default value) for
`gpt-image-1`, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
models, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.

stream: Edit the image in streaming mode. Defaults to `false`. See the
[Image generation guide](https://platform.openai.com/docs/guides/image-generation)
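
A quick usage sketch for the edit path described above; the file names and prompt are invented, and `gpt-image-1-mini` stands in for any GPT image model:

from openai import OpenAI

client = OpenAI()

with open("room.png", "rb") as base, open("sofa.png", "rb") as ref:
    result = client.images.edit(
        model="gpt-image-1-mini",
        image=[base, ref],  # GPT image models accept up to 16 input images
        prompt="Place the sofa from the second image into the room",
        output_format="jpeg",
        output_compression=80,  # only applies to jpeg/webp outputs
    )
print(result.data[0].b64_json is not None)
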
@@ -1151,23 +1158,24 @@ class AsyncImages(AsyncAPIResource):
Args:
image: The image(s) to edit. Must be a supported image file or an array of images.

For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
For the GPT image models (`gpt-image-1`, `gpt-image-1-mini`, and
`gpt-image-1.5`), each image should be a `png`, `webp`, or `jpg` file less than
50MB. You can provide up to 16 images.

For `dall-e-2`, you can only provide one image, and it should be a square `png`
file less than 4MB.

prompt: A text description of the desired image(s). The maximum length is 1000
characters for `dall-e-2`, and 32000 characters for `gpt-image-1`.
characters for `dall-e-2`, and 32000 characters for the GPT image models.

stream: Edit the image in streaming mode. Defaults to `false`. See the
[Image generation guide](https://platform.openai.com/docs/guides/image-generation)
for more information.

background: Allows to set transparency for the background of the generated image(s). This
parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
`opaque` or `auto` (default value). When `auto` is used, the model will
automatically determine the best background for the image.
parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.

If `transparent`, the output format needs to support transparency, so it should
be set to either `png` (default value) or `webp`.
@@ -1182,18 +1190,18 @@ class AsyncImages(AsyncAPIResource):
the mask will be applied on the first image. Must be a valid PNG file, less than
4MB, and have the same dimensions as `image`.

model: The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are
supported. Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1`
is used.
model: The model to use for image generation. Only `dall-e-2` and the GPT image models
are supported. Defaults to `dall-e-2` unless a parameter specific to the GPT
image models is used.

n: The number of images to generate. Must be between 1 and 10.

output_compression: The compression level (0-100%) for the generated images. This parameter is only
supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
supported for the GPT image models with the `webp` or `jpeg` output formats, and
defaults to 100.

output_format: The format in which the generated images are returned. This parameter is only
supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`. The
default value is `png`.

partial_images: The number of partial images to generate. This parameter is used for streaming
@@ -1204,17 +1212,17 @@ class AsyncImages(AsyncAPIResource):
are generated if the full image is generated more quickly.

quality: The quality of the image that will be generated. `high`, `medium` and `low` are
only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
Defaults to `auto`.
only supported for the GPT image models. `dall-e-2` only supports `standard`
quality. Defaults to `auto`.

response_format: The format in which the generated images are returned. Must be one of `url` or
`b64_json`. URLs are only valid for 60 minutes after the image has been
generated. This parameter is only supported for `dall-e-2`, as `gpt-image-1`
will always return base64-encoded images.
generated. This parameter is only supported for `dall-e-2`, as the GPT image
models always return base64-encoded images.

size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
(landscape), `1024x1536` (portrait), or `auto` (default value) for
`gpt-image-1`, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
models, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.

user: A unique identifier representing your end-user, which can help OpenAI to monitor
and detect abuse.
@@ -1264,23 +1272,24 @@ class AsyncImages(AsyncAPIResource):
Args:
image: The image(s) to edit. Must be a supported image file or an array of images.

For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
For the GPT image models (`gpt-image-1`, `gpt-image-1-mini`, and
`gpt-image-1.5`), each image should be a `png`, `webp`, or `jpg` file less than
50MB. You can provide up to 16 images.

For `dall-e-2`, you can only provide one image, and it should be a square `png`
file less than 4MB.

prompt: A text description of the desired image(s). The maximum length is 1000
characters for `dall-e-2`, and 32000 characters for `gpt-image-1`.
characters for `dall-e-2`, and 32000 characters for the GPT image models.

stream: Edit the image in streaming mode. Defaults to `false`. See the
[Image generation guide](https://platform.openai.com/docs/guides/image-generation)
for more information.

background: Allows to set transparency for the background of the generated image(s). This
parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
`opaque` or `auto` (default value). When `auto` is used, the model will
automatically determine the best background for the image.
parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.

If `transparent`, the output format needs to support transparency, so it should
be set to either `png` (default value) or `webp`.
@@ -1295,18 +1304,18 @@ class AsyncImages(AsyncAPIResource):
the mask will be applied on the first image. Must be a valid PNG file, less than
4MB, and have the same dimensions as `image`.

model: The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are
supported. Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1`
is used.
model: The model to use for image generation. Only `dall-e-2` and the GPT image models
are supported. Defaults to `dall-e-2` unless a parameter specific to the GPT
image models is used.

n: The number of images to generate. Must be between 1 and 10.

output_compression: The compression level (0-100%) for the generated images. This parameter is only
supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
supported for the GPT image models with the `webp` or `jpeg` output formats, and
defaults to 100.

output_format: The format in which the generated images are returned. This parameter is only
supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`. The
default value is `png`.

partial_images: The number of partial images to generate. This parameter is used for streaming
@@ -1317,17 +1326,17 @@ class AsyncImages(AsyncAPIResource):
are generated if the full image is generated more quickly.

quality: The quality of the image that will be generated. `high`, `medium` and `low` are
only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
Defaults to `auto`.
only supported for the GPT image models. `dall-e-2` only supports `standard`
quality. Defaults to `auto`.

response_format: The format in which the generated images are returned. Must be one of `url` or
`b64_json`. URLs are only valid for 60 minutes after the image has been
generated. This parameter is only supported for `dall-e-2`, as `gpt-image-1`
will always return base64-encoded images.
generated. This parameter is only supported for `dall-e-2`, as the GPT image
models always return base64-encoded images.

size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
(landscape), `1024x1536` (portrait), or `auto` (default value) for
`gpt-image-1`, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.
(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
models, and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.

user: A unique identifier representing your end-user, which can help OpenAI to monitor
and detect abuse.
@@ -1442,33 +1451,34 @@ class AsyncImages(AsyncAPIResource):

Args:
prompt: A text description of the desired image(s). The maximum length is 32000
characters for `gpt-image-1`, 1000 characters for `dall-e-2` and 4000 characters
for `dall-e-3`.
characters for the GPT image models, 1000 characters for `dall-e-2` and 4000
characters for `dall-e-3`.

background: Allows to set transparency for the background of the generated image(s). This
parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
`opaque` or `auto` (default value). When `auto` is used, the model will
automatically determine the best background for the image.
parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.

If `transparent`, the output format needs to support transparency, so it should
be set to either `png` (default value) or `webp`.

model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or
`gpt-image-1`. Defaults to `dall-e-2` unless a parameter specific to
`gpt-image-1` is used.
model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or a GPT
image model (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`). Defaults to
`dall-e-2` unless a parameter specific to the GPT image models is used.

moderation: Control the content-moderation level for images generated by `gpt-image-1`. Must
be either `low` for less restrictive filtering or `auto` (default value).
moderation: Control the content-moderation level for images generated by the GPT image
models. Must be either `low` for less restrictive filtering or `auto` (default
value).

n: The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only
`n=1` is supported.

output_compression: The compression level (0-100%) for the generated images. This parameter is only
supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
supported for the GPT image models with the `webp` or `jpeg` output formats, and
defaults to 100.

output_format: The format in which the generated images are returned. This parameter is only
supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`.
supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`.

partial_images: The number of partial images to generate. This parameter is used for streaming
responses that return partial images. Value must be between 0 and 3. When set to
@@ -1481,23 +1491,23 @@ class AsyncImages(AsyncAPIResource):

- `auto` (default value) will automatically select the best quality for the
given model.
- `high`, `medium` and `low` are supported for `gpt-image-1`.
- `high`, `medium` and `low` are supported for the GPT image models.
- `hd` and `standard` are supported for `dall-e-3`.
- `standard` is the only option for `dall-e-2`.

response_format: The format in which generated images with `dall-e-2` and `dall-e-3` are
returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes
after the image has been generated. This parameter isn't supported for
`gpt-image-1` which will always return base64-encoded images.
after the image has been generated. This parameter isn't supported for the GPT
image models, which always return base64-encoded images.

size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
(landscape), `1024x1536` (portrait), or `auto` (default value) for
`gpt-image-1`, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and
one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.
(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
models, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and one of
`1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.

stream: Generate the image in streaming mode. Defaults to `false`. See the
[Image generation guide](https://platform.openai.com/docs/guides/image-generation)
for more information. This parameter is only supported for `gpt-image-1`.
for more information. This parameter is only supported for the GPT image models.

style: The style of the generated images. This parameter is only supported for
`dall-e-3`. Must be one of `vivid` or `natural`. Vivid causes the model to lean
@@ -1552,37 +1562,38 @@ class AsyncImages(AsyncAPIResource):

Args:
prompt: A text description of the desired image(s). The maximum length is 32000
characters for `gpt-image-1`, 1000 characters for `dall-e-2` and 4000 characters
for `dall-e-3`.
characters for the GPT image models, 1000 characters for `dall-e-2` and 4000
characters for `dall-e-3`.

stream: Generate the image in streaming mode. Defaults to `false`. See the
[Image generation guide](https://platform.openai.com/docs/guides/image-generation)
for more information. This parameter is only supported for `gpt-image-1`.
for more information. This parameter is only supported for the GPT image models.

background: Allows to set transparency for the background of the generated image(s). This
parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
`opaque` or `auto` (default value). When `auto` is used, the model will
automatically determine the best background for the image.
parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.

If `transparent`, the output format needs to support transparency, so it should
be set to either `png` (default value) or `webp`.

model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or
`gpt-image-1`. Defaults to `dall-e-2` unless a parameter specific to
`gpt-image-1` is used.
model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or a GPT
image model (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`). Defaults to
`dall-e-2` unless a parameter specific to the GPT image models is used.

moderation: Control the content-moderation level for images generated by `gpt-image-1`. Must
be either `low` for less restrictive filtering or `auto` (default value).
moderation: Control the content-moderation level for images generated by the GPT image
models. Must be either `low` for less restrictive filtering or `auto` (default
value).

n: The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only
`n=1` is supported.

output_compression: The compression level (0-100%) for the generated images. This parameter is only
supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
supported for the GPT image models with the `webp` or `jpeg` output formats, and
defaults to 100.

output_format: The format in which the generated images are returned. This parameter is only
supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`.
supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`.

partial_images: The number of partial images to generate. This parameter is used for streaming
responses that return partial images. Value must be between 0 and 3. When set to
@@ -1595,19 +1606,19 @@ class AsyncImages(AsyncAPIResource):

- `auto` (default value) will automatically select the best quality for the
given model.
- `high`, `medium` and `low` are supported for `gpt-image-1`.
- `high`, `medium` and `low` are supported for the GPT image models.
- `hd` and `standard` are supported for `dall-e-3`.
- `standard` is the only option for `dall-e-2`.

response_format: The format in which generated images with `dall-e-2` and `dall-e-3` are
returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes
after the image has been generated. This parameter isn't supported for
`gpt-image-1` which will always return base64-encoded images.
after the image has been generated. This parameter isn't supported for the GPT
image models, which always return base64-encoded images.

size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
(landscape), `1024x1536` (portrait), or `auto` (default value) for
`gpt-image-1`, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and
one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.
(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
models, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and one of
`1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.

style: The style of the generated images. This parameter is only supported for
`dall-e-3`. Must be one of `vivid` or `natural`. Vivid causes the model to lean
@@ -1662,37 +1673,38 @@ class AsyncImages(AsyncAPIResource):

Args:
prompt: A text description of the desired image(s). The maximum length is 32000
characters for `gpt-image-1`, 1000 characters for `dall-e-2` and 4000 characters
for `dall-e-3`.
characters for the GPT image models, 1000 characters for `dall-e-2` and 4000
characters for `dall-e-3`.

stream: Generate the image in streaming mode. Defaults to `false`. See the
[Image generation guide](https://platform.openai.com/docs/guides/image-generation)
for more information. This parameter is only supported for `gpt-image-1`.
for more information. This parameter is only supported for the GPT image models.

background: Allows to set transparency for the background of the generated image(s). This
parameter is only supported for `gpt-image-1`. Must be one of `transparent`,
`opaque` or `auto` (default value). When `auto` is used, the model will
automatically determine the best background for the image.
parameter is only supported for the GPT image models. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is used, the
model will automatically determine the best background for the image.

If `transparent`, the output format needs to support transparency, so it should
be set to either `png` (default value) or `webp`.

model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or
`gpt-image-1`. Defaults to `dall-e-2` unless a parameter specific to
`gpt-image-1` is used.
model: The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or a GPT
image model (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`). Defaults to
`dall-e-2` unless a parameter specific to the GPT image models is used.

moderation: Control the content-moderation level for images generated by `gpt-image-1`. Must
be either `low` for less restrictive filtering or `auto` (default value).
moderation: Control the content-moderation level for images generated by the GPT image
models. Must be either `low` for less restrictive filtering or `auto` (default
value).

n: The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only
`n=1` is supported.

output_compression: The compression level (0-100%) for the generated images. This parameter is only
supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
supported for the GPT image models with the `webp` or `jpeg` output formats, and
defaults to 100.

output_format: The format in which the generated images are returned. This parameter is only
supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`.
supported for the GPT image models. Must be one of `png`, `jpeg`, or `webp`.

partial_images: The number of partial images to generate. This parameter is used for streaming
responses that return partial images. Value must be between 0 and 3. When set to
@@ -1705,19 +1717,19 @@ class AsyncImages(AsyncAPIResource):

- `auto` (default value) will automatically select the best quality for the
given model.
- `high`, `medium` and `low` are supported for `gpt-image-1`.
- `high`, `medium` and `low` are supported for the GPT image models.
- `hd` and `standard` are supported for `dall-e-3`.
- `standard` is the only option for `dall-e-2`.

response_format: The format in which generated images with `dall-e-2` and `dall-e-3` are
returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes
after the image has been generated. This parameter isn't supported for
`gpt-image-1` which will always return base64-encoded images.
after the image has been generated. This parameter isn't supported for the GPT
image models, which always return base64-encoded images.

size: The size of the generated images. Must be one of `1024x1024`, `1536x1024`
(landscape), `1024x1536` (portrait), or `auto` (default value) for
`gpt-image-1`, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and
one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.
(landscape), `1024x1536` (portrait), or `auto` (default value) for the GPT image
models, one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`, and one of
`1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3`.

style: The style of the generated images. This parameter is only supported for
`dall-e-3`. Must be one of `vivid` or `natural`. Vivid causes the model to lean

@@ -125,8 +125,10 @@ class Calls(SyncAPIResource):
"gpt-4o-mini-realtime-preview-2024-12-17",
"gpt-realtime-mini",
"gpt-realtime-mini-2025-10-06",
"gpt-realtime-mini-2025-12-15",
"gpt-audio-mini",
"gpt-audio-mini-2025-10-06",
"gpt-audio-mini-2025-12-15",
],
]
| Omit = omit,
@@ -199,15 +201,20 @@ class Calls(SyncAPIResource):
limit, the conversation be truncated, meaning messages (starting from the
oldest) will not be included in the model's context. A 32k context model with
4,096 max output tokens can only include 28,224 tokens in the context before
truncation occurs. Clients can configure truncation behavior to truncate with a
lower max token limit, which is an effective way to control token usage and
cost. Truncation will reduce the number of cached tokens on the next turn
(busting the cache), since messages are dropped from the beginning of the
context. However, clients can also configure truncation to retain messages up to
a fraction of the maximum context size, which will reduce the need for future
truncations and thus improve the cache rate. Truncation can be disabled
entirely, which means the server will never truncate but would instead return an
error if the conversation exceeds the model's input token limit.
truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token
limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the
cache), since messages are dropped from the beginning of the context. However,
clients can also configure truncation to retain messages up to a fraction of the
maximum context size, which will reduce the need for future truncations and thus
improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate
but would instead return an error if the conversation exceeds the model's input
token limit.

extra_headers: Send extra headers

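A sketch of how the retention-style truncation described above might be configured when accepting a call; the method shape, call ID, and the `retention_ratio` payload are assumptions inferred from the docstring, not confirmed by this diff:

from openai import OpenAI

client = OpenAI()

# Hypothetical call ID, e.g. taken from a `realtime.call.incoming` webhook.
client.realtime.calls.accept(
    "rtc_example123",
    type="realtime",
    model="gpt-realtime-mini-2025-12-15",
    # Assumed shape: keep half of the context window after a truncation
    # instead of cutting right at the hard token limit.
    truncation={"type": "retention_ratio", "retention_ratio": 0.5},
)
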
@@ -445,8 +452,10 @@ class AsyncCalls(AsyncAPIResource):
"gpt-4o-mini-realtime-preview-2024-12-17",
"gpt-realtime-mini",
"gpt-realtime-mini-2025-10-06",
"gpt-realtime-mini-2025-12-15",
"gpt-audio-mini",
"gpt-audio-mini-2025-10-06",
"gpt-audio-mini-2025-12-15",
],
]
| Omit = omit,
@@ -519,15 +528,20 @@ class AsyncCalls(AsyncAPIResource):
limit, the conversation be truncated, meaning messages (starting from the
oldest) will not be included in the model's context. A 32k context model with
4,096 max output tokens can only include 28,224 tokens in the context before
truncation occurs. Clients can configure truncation behavior to truncate with a
lower max token limit, which is an effective way to control token usage and
cost. Truncation will reduce the number of cached tokens on the next turn
(busting the cache), since messages are dropped from the beginning of the
context. However, clients can also configure truncation to retain messages up to
a fraction of the maximum context size, which will reduce the need for future
truncations and thus improve the cache rate. Truncation can be disabled
entirely, which means the server will never truncate but would instead return an
error if the conversation exceeds the model's input token limit.
truncation occurs.

Clients can configure truncation behavior to truncate with a lower max token
limit, which is an effective way to control token usage and cost.

Truncation will reduce the number of cached tokens on the next turn (busting the
cache), since messages are dropped from the beginning of the context. However,
clients can also configure truncation to retain messages up to a fraction of the
maximum context size, which will reduce the need for future truncations and thus
improve the cache rate.

Truncation can be disabled entirely, which means the server will never truncate
but would instead return an error if the conversation exceeds the model's input
token limit.

extra_headers: Send extra headers


@@ -232,7 +232,7 @@ class AsyncRealtimeWithStreamingResponse:


class AsyncRealtimeConnection:
"""Represents a live websocket connection to the Realtime API"""
"""Represents a live WebSocket connection to the Realtime API"""

session: AsyncRealtimeSessionResource
response: AsyncRealtimeResponseResource
@@ -421,7 +421,7 @@ class AsyncRealtimeConnectionManager:


class RealtimeConnection:
"""Represents a live websocket connection to the Realtime API"""
"""Represents a live WebSocket connection to the Realtime API"""

session: RealtimeSessionResource
response: RealtimeResponseResource
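
For context, a minimal sketch of opening one of these connections; the model name and session payload follow the GA Realtime API as I understand it and are not part of this diff:

from openai import OpenAI

client = OpenAI()

with client.realtime.connect(model="gpt-realtime") as connection:
    connection.session.update(session={"type": "realtime", "output_modalities": ["text"]})
    connection.conversation.item.create(
        item={"type": "message", "role": "user", "content": [{"type": "input_text", "text": "Hello"}]}
    )
    connection.response.create()
    for event in connection:  # iterate incoming server events
        if event.type == "response.done":
            break
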
@@ -829,7 +829,7 @@ class RealtimeConversationItemResource(BaseRealtimeConnectionResource):

class RealtimeOutputAudioBufferResource(BaseRealtimeConnectionResource):
def clear(self, *, event_id: str | Omit = omit) -> None:
"""**WebRTC Only:** Emit to cut off the current audio response.
"""**WebRTC/SIP Only:** Emit to cut off the current audio response.

This will trigger the server to
stop generating audio and emit a `output_audio_buffer.cleared` event. This
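
In practice the call is one line on a live connection; `connection` below is assumed to be a RealtimeConnection backed by WebRTC or SIP:

# Cut off in-progress audio; the server then emits `output_audio_buffer.cleared`.
connection.output_audio_buffer.clear()
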
@@ -1066,7 +1066,7 @@ class AsyncRealtimeConversationItemResource(BaseAsyncRealtimeConnectionResource)

class AsyncRealtimeOutputAudioBufferResource(BaseAsyncRealtimeConnectionResource):
async def clear(self, *, event_id: str | Omit = omit) -> None:
"""**WebRTC Only:** Emit to cut off the current audio response.
"""**WebRTC/SIP Only:** Emit to cut off the current audio response.

This will trigger the server to
stop generating audio and emit a `output_audio_buffer.cleared` event. This

@@ -2,6 +2,7 @@

from __future__ import annotations

from copy import copy
from typing import Any, List, Type, Union, Iterable, Optional, cast
from functools import partial
from typing_extensions import Literal, overload
@@ -33,7 +34,11 @@ from .input_tokens import (
AsyncInputTokensWithStreamingResponse,
)
from ..._base_client import make_request_options
from ...types.responses import response_create_params, response_retrieve_params
from ...types.responses import (
response_create_params,
response_compact_params,
response_retrieve_params,
)
from ...lib._parsing._responses import (
TextFormatT,
parse_response,
@@ -45,11 +50,13 @@ from ...types.shared_params.metadata import Metadata
from ...types.shared_params.reasoning import Reasoning
from ...types.responses.parsed_response import ParsedResponse
from ...lib.streaming.responses._responses import ResponseStreamManager, AsyncResponseStreamManager
from ...types.responses.compacted_response import CompactedResponse
from ...types.responses.response_includable import ResponseIncludable
from ...types.shared_params.responses_model import ResponsesModel
from ...types.responses.response_input_param import ResponseInputParam
from ...types.responses.response_prompt_param import ResponsePromptParam
from ...types.responses.response_stream_event import ResponseStreamEvent
from ...types.responses.response_input_item_param import ResponseInputItemParam
from ...types.responses.response_text_config_param import ResponseTextConfigParam

__all__ = ["Responses", "AsyncResponses"]
@@ -1046,6 +1053,7 @@ class Responses(SyncAPIResource):
if "format" in text:
raise TypeError("Cannot mix and match text.format with text_format")

text = copy(text)
text["format"] = _type_to_text_format_param(text_format)

api_request: partial[Stream[ResponseStreamEvent]] = partial(
@@ -1151,7 +1159,7 @@ class Responses(SyncAPIResource):

if "format" in text:
raise TypeError("Cannot mix and match text.format with text_format")

text = copy(text)
text["format"] = _type_to_text_format_param(text_format)

tools = _make_tools(tools)
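
The `text = copy(text)` lines added above appear to guard against mutating the caller's `text` argument in place; a standalone illustration of the hazard (plain Python, not SDK code):

from copy import copy

def set_format_in_place(text: dict) -> None:
    text["format"] = {"type": "json_schema"}  # mutates the caller's dict

def set_format_on_copy(text: dict) -> None:
    text = copy(text)  # shallow copy: the caller's dict is left untouched
    text["format"] = {"type": "json_schema"}

params = {"verbosity": "low"}
set_format_in_place(params)
assert "format" in params  # caller's dict was changed

params = {"verbosity": "low"}
set_format_on_copy(params)
assert "format" not in params  # caller's dict preserved
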
@@ -1515,6 +1523,158 @@ class Responses(SyncAPIResource):
cast_to=Response,
)

def compact(
self,
*,
model: Union[
Literal[
"gpt-5.2",
"gpt-5.2-2025-12-11",
"gpt-5.2-chat-latest",
"gpt-5.2-pro",
"gpt-5.2-pro-2025-12-11",
"gpt-5.1",
"gpt-5.1-2025-11-13",
"gpt-5.1-codex",
"gpt-5.1-mini",
"gpt-5.1-chat-latest",
"gpt-5",
"gpt-5-mini",
"gpt-5-nano",
"gpt-5-2025-08-07",
"gpt-5-mini-2025-08-07",
"gpt-5-nano-2025-08-07",
"gpt-5-chat-latest",
"gpt-4.1",
"gpt-4.1-mini",
"gpt-4.1-nano",
"gpt-4.1-2025-04-14",
"gpt-4.1-mini-2025-04-14",
"gpt-4.1-nano-2025-04-14",
"o4-mini",
"o4-mini-2025-04-16",
"o3",
"o3-2025-04-16",
"o3-mini",
"o3-mini-2025-01-31",
"o1",
"o1-2024-12-17",
"o1-preview",
"o1-preview-2024-09-12",
"o1-mini",
"o1-mini-2024-09-12",
"gpt-4o",
"gpt-4o-2024-11-20",
"gpt-4o-2024-08-06",
"gpt-4o-2024-05-13",
"gpt-4o-audio-preview",
"gpt-4o-audio-preview-2024-10-01",
"gpt-4o-audio-preview-2024-12-17",
"gpt-4o-audio-preview-2025-06-03",
"gpt-4o-mini-audio-preview",
"gpt-4o-mini-audio-preview-2024-12-17",
"gpt-4o-search-preview",
"gpt-4o-mini-search-preview",
"gpt-4o-search-preview-2025-03-11",
"gpt-4o-mini-search-preview-2025-03-11",
"chatgpt-4o-latest",
"codex-mini-latest",
"gpt-4o-mini",
"gpt-4o-mini-2024-07-18",
"gpt-4-turbo",
"gpt-4-turbo-2024-04-09",
"gpt-4-0125-preview",
"gpt-4-turbo-preview",
"gpt-4-1106-preview",
"gpt-4-vision-preview",
"gpt-4",
"gpt-4-0314",
"gpt-4-0613",
"gpt-4-32k",
"gpt-4-32k-0314",
"gpt-4-32k-0613",
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-3.5-turbo-0301",
"gpt-3.5-turbo-0613",
"gpt-3.5-turbo-1106",
"gpt-3.5-turbo-0125",
"gpt-3.5-turbo-16k-0613",
"o1-pro",
"o1-pro-2025-03-19",
"o3-pro",
"o3-pro-2025-06-10",
"o3-deep-research",
"o3-deep-research-2025-06-26",
"o4-mini-deep-research",
"o4-mini-deep-research-2025-06-26",
"computer-use-preview",
"computer-use-preview-2025-03-11",
"gpt-5-codex",
"gpt-5-pro",
"gpt-5-pro-2025-10-06",
"gpt-5.1-codex-max",
],
str,
None,
],
input: Union[str, Iterable[ResponseInputItemParam], None] | Omit = omit,
instructions: Optional[str] | Omit = omit,
previous_response_id: Optional[str] | Omit = omit,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = not_given,
) -> CompactedResponse:
"""
Compact conversation

Args:
model: Model ID used to generate the response, like `gpt-5` or `o3`. OpenAI offers a
wide range of models with different capabilities, performance characteristics,
and price points. Refer to the
[model guide](https://platform.openai.com/docs/models) to browse and compare
available models.

input: Text, image, or file inputs to the model, used to generate a response

instructions: A system (or developer) message inserted into the model's context. When used
along with `previous_response_id`, the instructions from a previous response
will not be carried over to the next response. This makes it simple to swap out
system (or developer) messages in new responses.

previous_response_id: The unique ID of the previous response to the model. Use this to create
multi-turn conversations. Learn more about
[conversation state](https://platform.openai.com/docs/guides/conversation-state).
Cannot be used in conjunction with `conversation`.

extra_headers: Send extra headers

extra_query: Add additional query parameters to the request

extra_body: Add additional JSON properties to the request

timeout: Override the client-level default timeout for this request, in seconds
"""
return self._post(
"/responses/compact",
body=maybe_transform(
{
"model": model,
"input": input,
"instructions": instructions,
"previous_response_id": previous_response_id,
},
response_compact_params.ResponseCompactParams,
),
options=make_request_options(
extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
),
cast_to=CompactedResponse,
)

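A usage sketch for the new compact endpoint; the response ID is a placeholder and the instructions text is invented:

from openai import OpenAI

client = OpenAI()

compacted = client.responses.compact(
    model="gpt-5.1",
    previous_response_id="resp_abc123",
    instructions="Keep decisions and open questions; drop small talk.",
)
print(type(compacted).__name__)  # CompactedResponse
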
class AsyncResponses(AsyncAPIResource):
@cached_property
@@ -2507,7 +2667,7 @@ class AsyncResponses(AsyncAPIResource):

if "format" in text:
raise TypeError("Cannot mix and match text.format with text_format")

text = copy(text)
text["format"] = _type_to_text_format_param(text_format)

api_request = self.create(
@@ -2617,7 +2777,7 @@ class AsyncResponses(AsyncAPIResource):

if "format" in text:
raise TypeError("Cannot mix and match text.format with text_format")

text = copy(text)
text["format"] = _type_to_text_format_param(text_format)

tools = _make_tools(tools)
@@ -2981,6 +3141,158 @@ class AsyncResponses(AsyncAPIResource):
cast_to=Response,
)

async def compact(
self,
*,
model: Union[
Literal[
"gpt-5.2",
"gpt-5.2-2025-12-11",
"gpt-5.2-chat-latest",
"gpt-5.2-pro",
"gpt-5.2-pro-2025-12-11",
"gpt-5.1",
"gpt-5.1-2025-11-13",
"gpt-5.1-codex",
"gpt-5.1-mini",
"gpt-5.1-chat-latest",
"gpt-5",
"gpt-5-mini",
"gpt-5-nano",
"gpt-5-2025-08-07",
"gpt-5-mini-2025-08-07",
"gpt-5-nano-2025-08-07",
"gpt-5-chat-latest",
"gpt-4.1",
"gpt-4.1-mini",
"gpt-4.1-nano",
"gpt-4.1-2025-04-14",
"gpt-4.1-mini-2025-04-14",
"gpt-4.1-nano-2025-04-14",
"o4-mini",
"o4-mini-2025-04-16",
"o3",
"o3-2025-04-16",
"o3-mini",
"o3-mini-2025-01-31",
"o1",
"o1-2024-12-17",
"o1-preview",
"o1-preview-2024-09-12",
"o1-mini",
"o1-mini-2024-09-12",
"gpt-4o",
"gpt-4o-2024-11-20",
"gpt-4o-2024-08-06",
"gpt-4o-2024-05-13",
"gpt-4o-audio-preview",
"gpt-4o-audio-preview-2024-10-01",
"gpt-4o-audio-preview-2024-12-17",
"gpt-4o-audio-preview-2025-06-03",
"gpt-4o-mini-audio-preview",
"gpt-4o-mini-audio-preview-2024-12-17",
"gpt-4o-search-preview",
"gpt-4o-mini-search-preview",
"gpt-4o-search-preview-2025-03-11",
"gpt-4o-mini-search-preview-2025-03-11",
"chatgpt-4o-latest",
"codex-mini-latest",
"gpt-4o-mini",
"gpt-4o-mini-2024-07-18",
"gpt-4-turbo",
"gpt-4-turbo-2024-04-09",
"gpt-4-0125-preview",
"gpt-4-turbo-preview",
"gpt-4-1106-preview",
"gpt-4-vision-preview",
"gpt-4",
"gpt-4-0314",
"gpt-4-0613",
"gpt-4-32k",
"gpt-4-32k-0314",
"gpt-4-32k-0613",
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-3.5-turbo-0301",
"gpt-3.5-turbo-0613",
"gpt-3.5-turbo-1106",
"gpt-3.5-turbo-0125",
"gpt-3.5-turbo-16k-0613",
"o1-pro",
"o1-pro-2025-03-19",
"o3-pro",
"o3-pro-2025-06-10",
"o3-deep-research",
"o3-deep-research-2025-06-26",
"o4-mini-deep-research",
"o4-mini-deep-research-2025-06-26",
"computer-use-preview",
"computer-use-preview-2025-03-11",
"gpt-5-codex",
"gpt-5-pro",
"gpt-5-pro-2025-10-06",
"gpt-5.1-codex-max",
],
str,
None,
],
input: Union[str, Iterable[ResponseInputItemParam], None] | Omit = omit,
instructions: Optional[str] | Omit = omit,
previous_response_id: Optional[str] | Omit = omit,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = not_given,
) -> CompactedResponse:
"""
Compact conversation

Args:
model: Model ID used to generate the response, like `gpt-5` or `o3`. OpenAI offers a
wide range of models with different capabilities, performance characteristics,
and price points. Refer to the
[model guide](https://platform.openai.com/docs/models) to browse and compare
available models.

input: Text, image, or file inputs to the model, used to generate a response

instructions: A system (or developer) message inserted into the model's context. When used
along with `previous_response_id`, the instructions from a previous response
will not be carried over to the next response. This makes it simple to swap out
system (or developer) messages in new responses.

previous_response_id: The unique ID of the previous response to the model. Use this to create
multi-turn conversations. Learn more about
[conversation state](https://platform.openai.com/docs/guides/conversation-state).
Cannot be used in conjunction with `conversation`.

extra_headers: Send extra headers

extra_query: Add additional query parameters to the request

extra_body: Add additional JSON properties to the request

timeout: Override the client-level default timeout for this request, in seconds
"""
return await self._post(
"/responses/compact",
body=await async_maybe_transform(
{
"model": model,
"input": input,
"instructions": instructions,
"previous_response_id": previous_response_id,
},
response_compact_params.ResponseCompactParams,
),
options=make_request_options(
extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
),
cast_to=CompactedResponse,
)


class ResponsesWithRawResponse:
def __init__(self, responses: Responses) -> None:
@@ -2998,6 +3310,9 @@ class ResponsesWithRawResponse:
self.cancel = _legacy_response.to_raw_response_wrapper(
responses.cancel,
)
self.compact = _legacy_response.to_raw_response_wrapper(
responses.compact,
)
self.parse = _legacy_response.to_raw_response_wrapper(
responses.parse,
)
@@ -3027,6 +3342,9 @@ class AsyncResponsesWithRawResponse:
self.cancel = _legacy_response.async_to_raw_response_wrapper(
responses.cancel,
)
self.compact = _legacy_response.async_to_raw_response_wrapper(
responses.compact,
)
self.parse = _legacy_response.async_to_raw_response_wrapper(
responses.parse,
)
@@ -3056,6 +3374,9 @@ class ResponsesWithStreamingResponse:
self.cancel = to_streamed_response_wrapper(
responses.cancel,
)
self.compact = to_streamed_response_wrapper(
responses.compact,
)

@cached_property
def input_items(self) -> InputItemsWithStreamingResponse:
@@ -3082,6 +3403,9 @@ class AsyncResponsesWithStreamingResponse:
self.cancel = async_to_streamed_response_wrapper(
responses.cancel,
)
self.compact = async_to_streamed_response_wrapper(
responses.compact,
)

@cached_property
def input_items(self) -> AsyncInputItemsWithStreamingResponse:

Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -10,7 +10,6 @@ import httpx
from .. import _legacy_response
from ..types import (
VideoSize,
VideoModel,
VideoSeconds,
video_list_params,
video_remix_params,
@@ -34,8 +33,8 @@ from ..types.video import Video
from .._base_client import AsyncPaginator, make_request_options
from .._utils._utils import is_given
from ..types.video_size import VideoSize
from ..types.video_model import VideoModel
from ..types.video_seconds import VideoSeconds
from ..types.video_model_param import VideoModelParam
from ..types.video_delete_response import VideoDeleteResponse

__all__ = ["Videos", "AsyncVideos"]
@@ -66,7 +65,7 @@ class Videos(SyncAPIResource):
*,
prompt: str,
input_reference: FileTypes | Omit = omit,
model: VideoModel | Omit = omit,
model: VideoModelParam | Omit = omit,
seconds: VideoSeconds | Omit = omit,
size: VideoSize | Omit = omit,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -84,11 +83,13 @@ class Videos(SyncAPIResource):

input_reference: Optional image reference that guides generation.

model: The video generation model to use. Defaults to `sora-2`.
model: The video generation model to use (allowed values: sora-2, sora-2-pro). Defaults
to `sora-2`.

seconds: Clip duration in seconds. Defaults to 4 seconds.
seconds: Clip duration in seconds (allowed values: 4, 8, 12). Defaults to 4 seconds.

size: Output resolution formatted as width x height. Defaults to 720x1280.
size: Output resolution formatted as width x height (allowed values: 720x1280,
1280x720, 1024x1792, 1792x1024). Defaults to 720x1280.

extra_headers: Send extra headers

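A minimal create call using the value sets documented above; the prompt is invented, and `seconds` is passed as a string to match the `VideoSeconds` literal type:

from openai import OpenAI

client = OpenAI()

video = client.videos.create(
    prompt="A paper boat drifting down a rainy street",
    model="sora-2",
    seconds="8",      # one of 4, 8, 12
    size="1280x720",  # one of the documented resolutions
)
print(video.id, video.status)
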
@@ -128,7 +129,7 @@ class Videos(SyncAPIResource):
*,
prompt: str,
input_reference: FileTypes | Omit = omit,
model: VideoModel | Omit = omit,
model: VideoModelParam | Omit = omit,
seconds: VideoSeconds | Omit = omit,
size: VideoSize | Omit = omit,
poll_interval_ms: int | Omit = omit,
@@ -419,7 +420,7 @@ class AsyncVideos(AsyncAPIResource):
*,
prompt: str,
input_reference: FileTypes | Omit = omit,
model: VideoModel | Omit = omit,
model: VideoModelParam | Omit = omit,
seconds: VideoSeconds | Omit = omit,
size: VideoSize | Omit = omit,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -437,11 +438,13 @@ class AsyncVideos(AsyncAPIResource):

input_reference: Optional image reference that guides generation.

model: The video generation model to use. Defaults to `sora-2`.
model: The video generation model to use (allowed values: sora-2, sora-2-pro). Defaults
to `sora-2`.

seconds: Clip duration in seconds. Defaults to 4 seconds.
seconds: Clip duration in seconds (allowed values: 4, 8, 12). Defaults to 4 seconds.

size: Output resolution formatted as width x height. Defaults to 720x1280.
size: Output resolution formatted as width x height (allowed values: 720x1280,
1280x720, 1024x1792, 1792x1024). Defaults to 720x1280.

extra_headers: Send extra headers

@@ -481,7 +484,7 @@ class AsyncVideos(AsyncAPIResource):
*,
prompt: str,
input_reference: FileTypes | Omit = omit,
model: VideoModel | Omit = omit,
model: VideoModelParam | Omit = omit,
seconds: VideoSeconds | Omit = omit,
size: VideoSize | Omit = omit,
poll_interval_ms: int | Omit = omit,