Add OpenAI Chat Completions API #75
Conversation
tools = openai_kwargs.get("tools", None)
prompt_text = form_prompt_string(messages, tools)
response_text = _get_string_response(response)
Can you save a ticket for a future PR: we should try to extract token-probs out of the response object if they are in there and include them in the score, like in TLMLite.
When the user does TLMChatCompletions.create(), the resulting scores should also include token-probs (can save for a future PR too).
if not isinstance(response, ChatCompletion):
    raise TypeError("The response is not an OpenAI ChatCompletion object.")
if response.choices[0].message.content is None:
Will tool-call output / structured output appear here too if OpenAI chose to return that? If not, can you save a ticket for a future PR that we should get those outputs and use the appropriate TLM scoring for them?
No, this code does not do that yet; I believe I can add rudimentary support for that pretty easily though.
@huiwengoh do you still plan to add the support in this PR, or in a follow-up ticket?
Created ticket!
@@ -0,0 +1,123 @@
import asyncio |
My recommended tutorial to make for this is here:
It would be good to investigate whether the better monkey-patch version of workflow 2 in that link can be made to work with this API.
The API LGTM. Leaving underlying code review to @jas2600
Note I suggested the ways this should be tested and how our eventual tutorial for it should look:
tools = openai_kwargs.get("tools", None)
prompt_text = form_prompt_string(messages, tools)
[nit] cleaner to just pass openai_kwargs.get("tools", None) directly instead of creating an additional variable
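The nit applied to the earlier snippet. `form_prompt_string` is stubbed here purely so the example runs; the real function lives in this PR's diff.

```python
# Reviewer's nit applied: inline the lookup instead of binding `tools` first.
def form_prompt_string(messages, tools):  # hypothetical stub for illustration
    return f"messages={messages} tools={tools}"

openai_kwargs = {"model": "gpt-4o", "tools": [{"type": "function"}]}
messages = [{"role": "user", "content": "hi"}]

# No intermediate `tools` variable:
prompt_text = form_prompt_string(messages, openai_kwargs.get("tools", None))
```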
don't forget to release a new version
Co-authored-by: Jonas Mueller <[email protected]>
Sample usage for create:
Sample usage for score:
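The actual samples were not captured in this excerpt. As a stand-in, here is a purely hypothetical, stdlib-only sketch of the create/score pattern discussed in this PR; `TLMChatCompletions`'s real interface may differ, and the scoring formula (mean token probability) is a placeholder, not TLM's actual algorithm.

```python
# Hypothetical sketch only: every class and method name here is assumed.
import math

class FakeOpenAIChatCompletions:
    """Stand-in for an OpenAI client, used only so the sketch runs."""
    def create(self, **openai_kwargs):
        return {"content": "Paris", "token_logprobs": [-0.05, -0.2]}

class TLMChatCompletionsSketch:
    def __init__(self, client):
        self._client = client

    def create(self, **openai_kwargs):
        # create: call the wrapped client, then attach a trustworthiness score.
        response = self._client.create(**openai_kwargs)
        response["tlm_score"] = self.score(response)
        return response

    def score(self, response):
        # score: placeholder token-prob-based score (mean token probability).
        probs = [math.exp(lp) for lp in response["token_logprobs"]]
        return sum(probs) / len(probs)
```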