Conversations rather than one-offs #39
Great question. We are experimenting with adding a conversation history section to the beginning of the prompt that would provide an array of user and assistant entries. A complication is that in TypeChat the LLM output is a formal representation of user intent. Because of this, it's not the right thing to put in the Assistant part of the history. Instead, an abstraction or summary of the application's output to the user should go in the Assistant part of the history. How to do this varies by application. For example, the music application may output a relatively long track list in response to a search request. Rather than consuming tokens by including the track list verbatim, we are looking at putting in a summary of the track list, or just the information that a track list with k entries was printed. We need to gain more experience with this aspect of conversation history, but we will be tackling it relatively soon.
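To make the idea concrete, here is a minimal sketch (illustrative names, not TypeChat's actual API) of prepending a summarized history to the prompt, where the Assistant entry records that a track list was printed rather than the track list itself:

```typescript
interface HistoryEntry {
    role: "user" | "assistant";
    content: string;
}

// Summarize the application's output instead of inlining it verbatim,
// e.g. "Printed a track list with 25 entries." rather than 25 track names.
function summarizeTrackList(trackCount: number): HistoryEntry {
    return {
        role: "assistant",
        content: `Printed a track list with ${trackCount} entries.`,
    };
}

// Prepend the summarized history to the schema-based prompt for the model.
function buildPrompt(
    history: HistoryEntry[],
    schemaPrompt: string,
    request: string
): string {
    const historyText = history
        .map((e) => `${e.role.toUpperCase()}: ${e.content}`)
        .join("\n");
    return `${historyText}\n${schemaPrompt}\nUSER: ${request}`;
}

const history: HistoryEntry[] = [
    { role: "user", content: "play some jazz" },
    summarizeTrackList(25),
];
console.log(buildPrompt(history, "<schema here>", "just the first three"));
```

The follow-up request "just the first three" only makes sense because the summarized assistant turn tells the model a list was shown, at a fraction of the token cost of the full list.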
Wonderful - that's what I've been working towards outside of TypeChat, but you guys save me a ton of time!
This example would be more realistic with some modifications. For example, the bot could be asked to summarize the user's needs and seek confirmation from the user.
I'm wondering if the problem could be solved by simply adding a confirmation step before generation. For example, start by discussing the requirements with the user, then confirm the requirements with the user, and only generate once the user has confirmed. That is, there is no need to generate code on every round of dialog.
I think we could create some new type for describing the process of conversation.
In addition to the above, sometimes the user may not have provided enough information, in which case the conversation should proceed with a question. This can be detected if some required property is missing. For example:
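One way the "confirm, ask, or finish" process suggested above could be modeled is with a discriminated union, so the model can return a clarifying question when a required property is missing instead of a (possibly wrong) final intent. The type and function names below are illustrative sketches, not part of TypeChat:

```typescript
// A completed, fully-specified user intent.
interface FlightRequest {
    kind: "flightRequest";
    origin: string;
    destination: string;
    date: string;
}

// Returned when a required property could not be filled in.
interface Clarification {
    kind: "clarification";
    missingProperty: string; // e.g. "destination"
    question: string;        // question to relay to the user
}

// Returned to summarize the user's needs and seek confirmation.
interface Confirmation {
    kind: "confirmation";
    summary: string;
}

type ChatResponse = FlightRequest | Clarification | Confirmation;

// The application dispatches on the discriminant; nothing is generated
// until a fully-specified flightRequest arrives.
function handle(response: ChatResponse): string {
    switch (response.kind) {
        case "clarification":
            return response.question;
        case "confirmation":
            return `Please confirm: ${response.summary}`;
        case "flightRequest":
            return `Booking ${response.origin} -> ${response.destination} on ${response.date}`;
    }
}
```

With this shape, "generate on every round of dialog" is avoided: most turns produce a `clarification` or `confirmation` value, and only the final turn produces the actionable intent.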
This situation can also pose a challenge. If the user hasn't given adequate context, it can potentially be addressed using a fallback mechanism within the framework, similar to the handling of unknown types. However, there's still the possibility that the user might reply with only the information that was missing, causing the question to lose its original context. Consequently, determining the user's original intent becomes quite intricate. For instance: User: I'm interested in reserving a flight. And so on.
See #114.
Looks promising! Will give this a spin today.
Is there a solution for this?
I'd like to restart this conversation. In #238 I am trying to add a new Python example that would benefit from a conversation with the user. I felt it was wrong to put the user's conversation history before the description of the schema. Then I noticed that there's another demo, healthData, that implements chat history by overriding the Translator class. This makes me think that there's an actual need. So let's break it all up: let's somehow add a mechanism that allows the user to do their own prompt engineering. In a comment on my PR, @DanielRosenwasser writes:
Maybe the solution is just to change … Anyone?
Any thoughts on how to use TypeChat in conversation-style interactions? In my use case, there is a need to go back and forth with the LLM, refining queries. In your coffee shop example, something like this:
User: Two tall lattes. The first one with no foam.
Assistant: Two tall lattes coming up.
User: The second one with whole milk. Actually make the first one a grande.
Assistant: One grande latte, one tall latte with whole milk. Coming up.
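One way to support this kind of back-and-forth refinement (a sketch with assumed types, not the actual coffee shop schema) is to feed the previously translated cart back into the prompt and have the model re-emit the complete, updated cart each turn:

```typescript
interface LatteOrder {
    size: "tall" | "grande" | "venti";
    options: string[]; // modifiers like "no foam", "whole milk"
}

type Cart = LatteOrder[];

// Render the cart as the assistant's confirmation line.
function describe(cart: Cart): string {
    return cart
        .map((o) =>
            o.options.length > 0
                ? `one ${o.size} latte with ${o.options.join(", ")}`
                : `one ${o.size} latte`
        )
        .join(", ");
}

// Turn 1: "Two tall lattes. The first one with no foam."
let cart: Cart = [
    { size: "tall", options: ["no foam"] },
    { size: "tall", options: [] },
];

// Turn 2: "The second one with whole milk. Actually make the first one a grande."
// Given the prior cart as context, the model re-emits the full updated cart.
cart = [
    { size: "grande", options: ["no foam"] },
    { size: "tall", options: ["whole milk"] },
];

console.log(describe(cart));
// -> one grande latte with no foam, one tall latte with whole milk
```

The full-replacement strategy keeps the application simple (the latest translation always wins) at the cost of re-sending the prior cart in each prompt; an alternative would be a typed "edit" schema that patches the existing cart.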