@@ -132,9 +132,9 @@ def create(
   logit_bias: This is not yet supported by our models. Modify the likelihood of specified
       tokens appearing in the completion.

-  logprobs: This is not yet supported by our models. Whether to return log probabilities of
-      the output tokens or not. If true, returns the log probabilities of each output
-      token returned in the `content` of `message`.
+  logprobs: Whether to return log probabilities of the output tokens or not. If true,
+      returns the log probabilities of each output token returned in the `content` of
+      `message`.

   max_completion_tokens: The maximum number of tokens that can be generated in the chat completion. The
       total length of input tokens and generated tokens is limited by the model's
@@ -200,10 +200,9 @@ def create(
       means only the first 10 tokens with higher probability are considered. It is
       recommended to alter this, top_p, or temperature, but not more than one of these.

-  top_logprobs: This is not yet supported by our models. An integer between 0 and 20 specifying
-      the number of most likely tokens to return at each token position, each with an
-      associated log probability. `logprobs` must be set to `true` if this parameter
-      is used.
+  top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to
+      return at each token position, each with an associated log probability.
+      `logprobs` must be set to `true` if this parameter is used.

   top_p: Cumulative probability for token choices. An alternative to sampling with
       temperature, called nucleus sampling, where the model considers the results of
@@ -312,9 +311,9 @@ def create(
   logit_bias: This is not yet supported by our models. Modify the likelihood of specified
       tokens appearing in the completion.

-  logprobs: This is not yet supported by our models. Whether to return log probabilities of
-      the output tokens or not. If true, returns the log probabilities of each output
-      token returned in the `content` of `message`.
+  logprobs: Whether to return log probabilities of the output tokens or not. If true,
+      returns the log probabilities of each output token returned in the `content` of
+      `message`.

   max_completion_tokens: The maximum number of tokens that can be generated in the chat completion. The
       total length of input tokens and generated tokens is limited by the model's
@@ -375,10 +374,9 @@ def create(
       means only the first 10 tokens with higher probability are considered. It is
       recommended to alter this, top_p, or temperature, but not more than one of these.

-  top_logprobs: This is not yet supported by our models. An integer between 0 and 20 specifying
-      the number of most likely tokens to return at each token position, each with an
-      associated log probability. `logprobs` must be set to `true` if this parameter
-      is used.
+  top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to
+      return at each token position, each with an associated log probability.
+      `logprobs` must be set to `true` if this parameter is used.

   top_p: Cumulative probability for token choices. An alternative to sampling with
       temperature, called nucleus sampling, where the model considers the results of
@@ -487,9 +485,9 @@ def create(
   logit_bias: This is not yet supported by our models. Modify the likelihood of specified
       tokens appearing in the completion.

-  logprobs: This is not yet supported by our models. Whether to return log probabilities of
-      the output tokens or not. If true, returns the log probabilities of each output
-      token returned in the `content` of `message`.
+  logprobs: Whether to return log probabilities of the output tokens or not. If true,
+      returns the log probabilities of each output token returned in the `content` of
+      `message`.

   max_completion_tokens: The maximum number of tokens that can be generated in the chat completion. The
       total length of input tokens and generated tokens is limited by the model's
@@ -550,10 +548,9 @@ def create(
       means only the first 10 tokens with higher probability are considered. It is
       recommended to alter this, top_p, or temperature, but not more than one of these.

-  top_logprobs: This is not yet supported by our models. An integer between 0 and 20 specifying
-      the number of most likely tokens to return at each token position, each with an
-      associated log probability. `logprobs` must be set to `true` if this parameter
-      is used.
+  top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to
+      return at each token position, each with an associated log probability.
+      `logprobs` must be set to `true` if this parameter is used.

   top_p: Cumulative probability for token choices. An alternative to sampling with
       temperature, called nucleus sampling, where the model considers the results of
@@ -784,9 +781,9 @@ async def create(
   logit_bias: This is not yet supported by our models. Modify the likelihood of specified
       tokens appearing in the completion.

-  logprobs: This is not yet supported by our models. Whether to return log probabilities of
-      the output tokens or not. If true, returns the log probabilities of each output
-      token returned in the `content` of `message`.
+  logprobs: Whether to return log probabilities of the output tokens or not. If true,
+      returns the log probabilities of each output token returned in the `content` of
+      `message`.

   max_completion_tokens: The maximum number of tokens that can be generated in the chat completion. The
       total length of input tokens and generated tokens is limited by the model's
@@ -852,10 +849,9 @@ async def create(
       means only the first 10 tokens with higher probability are considered. It is
       recommended to alter this, top_p, or temperature, but not more than one of these.

-  top_logprobs: This is not yet supported by our models. An integer between 0 and 20 specifying
-      the number of most likely tokens to return at each token position, each with an
-      associated log probability. `logprobs` must be set to `true` if this parameter
-      is used.
+  top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to
+      return at each token position, each with an associated log probability.
+      `logprobs` must be set to `true` if this parameter is used.

   top_p: Cumulative probability for token choices. An alternative to sampling with
       temperature, called nucleus sampling, where the model considers the results of
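Per the updated `logprobs` docstring, the log probabilities of each output token are returned in the `content` of `message`. A sketch of reading them back, using a hand-built response fragment: the field names (`choices`, `logprobs`, `content`, `token`, `logprob`, `top_logprobs`) follow the common OpenAI-style response schema and are an assumption here, not something this diff confirms.

```python
import math

# Illustrative response fragment; the shape is assumed from the docstring
# wording and the OpenAI-style schema, with made-up token values.
response = {
    "choices": [{
        "message": {"content": "Hi"},
        "logprobs": {"content": [
            {"token": "Hi", "logprob": -0.12,
             "top_logprobs": [{"token": "Hi", "logprob": -0.12},
                              {"token": "Hello", "logprob": -2.3}]},
        ]},
    }]
}

def token_probabilities(resp):
    """Convert each output token's log probability to a plain probability."""
    entries = resp["choices"][0]["logprobs"]["content"]
    return [(e["token"], math.exp(e["logprob"])) for e in entries]
```

Exponentiating each `logprob` recovers the model's probability for that token, which is often easier to inspect than the raw log value.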
@@ -964,9 +960,9 @@ async def create(
   logit_bias: This is not yet supported by our models. Modify the likelihood of specified
       tokens appearing in the completion.

-  logprobs: This is not yet supported by our models. Whether to return log probabilities of
-      the output tokens or not. If true, returns the log probabilities of each output
-      token returned in the `content` of `message`.
+  logprobs: Whether to return log probabilities of the output tokens or not. If true,
+      returns the log probabilities of each output token returned in the `content` of
+      `message`.

   max_completion_tokens: The maximum number of tokens that can be generated in the chat completion. The
       total length of input tokens and generated tokens is limited by the model's
@@ -1027,10 +1023,9 @@ async def create(
       means only the first 10 tokens with higher probability are considered. It is
       recommended to alter this, top_p, or temperature, but not more than one of these.

-  top_logprobs: This is not yet supported by our models. An integer between 0 and 20 specifying
-      the number of most likely tokens to return at each token position, each with an
-      associated log probability. `logprobs` must be set to `true` if this parameter
-      is used.
+  top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to
+      return at each token position, each with an associated log probability.
+      `logprobs` must be set to `true` if this parameter is used.

   top_p: Cumulative probability for token choices. An alternative to sampling with
       temperature, called nucleus sampling, where the model considers the results of
@@ -1139,9 +1134,9 @@ async def create(
   logit_bias: This is not yet supported by our models. Modify the likelihood of specified
       tokens appearing in the completion.

-  logprobs: This is not yet supported by our models. Whether to return log probabilities of
-      the output tokens or not. If true, returns the log probabilities of each output
-      token returned in the `content` of `message`.
+  logprobs: Whether to return log probabilities of the output tokens or not. If true,
+      returns the log probabilities of each output token returned in the `content` of
+      `message`.

   max_completion_tokens: The maximum number of tokens that can be generated in the chat completion. The
       total length of input tokens and generated tokens is limited by the model's
@@ -1202,10 +1197,9 @@ async def create(
       means only the first 10 tokens with higher probability are considered. It is
       recommended to alter this, top_p, or temperature, but not more than one of these.

-  top_logprobs: This is not yet supported by our models. An integer between 0 and 20 specifying
-      the number of most likely tokens to return at each token position, each with an
-      associated log probability. `logprobs` must be set to `true` if this parameter
-      is used.
+  top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to
+      return at each token position, each with an associated log probability.
+      `logprobs` must be set to `true` if this parameter is used.

   top_p: Cumulative probability for token choices. An alternative to sampling with
       temperature, called nucleus sampling, where the model considers the results of