Conversations rather than one-offs #39

Open · claus-topholt-private opened this issue Jul 21, 2023 · 11 comments

Labels: In Discussion (Scope or design nuances need to be discussed), Suggestion (New feature or request)

Comments

@claus-topholt-private

Any thoughts on how to use TypeChat in conversation-style interactions? In my use case, there is a need to go back and forth with the LLM, refining queries. In your coffee shop example, something like this:

User: Two tall lattes. The first one with no foam.
Assistant: Two tall lattes coming up.
User: The second one with whole milk. Actually make the first one a grande.
Assistant: One grande latte, one tall latte with whole milk. Coming up.

@steveluc (Contributor)

Great question. We are experimenting with adding a conversation history section to the beginning of the prompt that would provide an array of user and assistant entries. A complication is that in TypeChat the LLM output is a formal representation of user intent, so it isn't the right thing to put in the Assistant part of the history. Instead, an abstraction or summary of the application's output to the user should go in the Assistant part of the history. How to do this varies by application: the music application, for example, may output a relatively long track list in response to a search request. Rather than consuming tokens by including the track list verbatim, we are looking at putting in a summary of the track list, or just the information that a track list with k entries was printed. We need to gain more experience with this aspect of conversation history, but we will be working on this relatively soon.
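For illustration, a minimal sketch of what that prompt assembly might look like. The HistoryEntry shape and prompt wording here are hypothetical, not TypeChat's actual API; the point is that assistant entries hold a summary of what the app showed the user, never the raw JSON intent object:

```ts
// Minimal sketch; HistoryEntry and the prompt wording are hypothetical.
interface HistoryEntry {
  role: "user" | "assistant";
  content: string; // e.g. assistant: "Printed a track list with 25 entries"
}

function buildPrompt(history: HistoryEntry[], schema: string, request: string): string {
  const historySection = history
    .map((e) => `${e.role.toUpperCase()}: ${e.content}`)
    .join("\n");
  return (
    `Conversation so far:\n${historySection}\n\n` +
    `You are a service that translates user requests into JSON objects ` +
    `according to the following TypeScript definitions:\n${schema}\n\n` +
    `The latest user request is:\n"""\n${request}\n"""\n`
  );
}
```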

@steveluc self-assigned this Jul 21, 2023
@claus-topholt-private (Author)

Wonderful - that's what I've been working towards outside of TypeChat; you guys will save me a ton of time!

@coderfengyun

> Any thoughts on how to use TypeChat in conversation-style interactions? In my use case, there is a need to go back and forth with the LLM, refining queries. In your coffee shop example, something like this:
>
> User: Two tall lattes. The first one with no foam.
> Assistant: Two tall lattes coming up.
> User: The second one with whole milk. Actually make the first one a grande.
> Assistant: One grande latte, one tall latte with whole milk. Coming up.

This example would be more realistic with some modifications. For example, the bot could be asked to summarize the user's needs and seek confirmation from the user.

@coderfengyun commented Jul 25, 2023

> Great question. We are experimenting with adding a conversation history section to the beginning of the prompt that would provide an array of user and assistant entries. […]

I'm wondering if the problem could be solved by simply adding a confirmation step before generation. For example, first discuss the requirements with the user, then confirm those requirements with the user, and only generate according to them after confirmation. That is, there is no need to generate output for every round of dialog.
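A rough sketch of what that confirm-before-generate loop might look like (all types and helper functions here are hypothetical, not part of TypeChat):

```ts
type TurnResult =
  | { kind: "confirm"; summary: string }   // echo requirements back, ask the user to confirm
  | { kind: "generate"; intent: unknown }; // user confirmed: produce the typed output

// Hypothetical LLM helpers.
declare function summarizeRequirements(prev: string, input: string): Promise<string>;
declare function translateToIntent(summary: string): Promise<unknown>; // e.g. a TypeChat translator call

async function handleTurn(userInput: string, state: { summary: string }): Promise<TurnResult> {
  // If the user confirms the summarized requirements, generate exactly once.
  if (/^(yes|correct|confirm)/i.test(userInput.trim())) {
    return { kind: "generate", intent: await translateToIntent(state.summary) };
  }
  // Otherwise fold this turn into the running requirements summary and re-confirm.
  state.summary = await summarizeRequirements(state.summary, userInput);
  return { kind: "confirm", summary: state.summary };
}
```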

@weykon (Contributor) commented Jul 28, 2023

I think we could create a new type to describe the state of the conversation. It would be a good experiment to see how explicitly modeling the conversation state as an input type works.
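For example, a hypothetical state type along these lines for the coffee shop scenario (illustrative only), filled in by the model on every turn:

```ts
// Illustrative only: an explicit conversation-state type.
interface ConversationState {
  phase: "gathering" | "confirming" | "done";
  items: LineItem[];        // the order as understood so far; later turns may revise it
  pendingQuestion?: string; // set when the model needs more information
}

interface LineItem {
  product: string;                    // e.g. "latte"
  size?: "tall" | "grande" | "venti";
  options?: string[];                 // e.g. ["no foam"], ["whole milk"]
}
```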

@kalinkrustev

In addition to the above, sometimes the user may not have provided enough information, in which case the conversation should proceed with a question. This can be detected if some required property is missing. For example:

User: I want two big pizzas.
TypeChat: Do you want them takeaway?
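One possible way to express this in a schema (hypothetical types, not TypeChat's API): mark the property as possibly unknown and have the application turn an unknown value into a follow-up question:

```ts
// Hypothetical schema: the model is told to set `takeaway` to "unknown"
// when the user didn't say, and the app turns that into a question.
interface PizzaOrder {
  count: number;
  size: "small" | "medium" | "big";
  takeaway: boolean | "unknown";
}

function nextResponse(order: PizzaOrder): string {
  if (order.takeaway === "unknown") {
    return "Do you want them takeaway?";
  }
  return `Order confirmed: ${order.count} ${order.size} pizza(s), ${
    order.takeaway ? "takeaway" : "for here"
  }.`;
}
```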

@DanielRosenwasser added the Suggestion (New feature or request) and In Discussion (Scope or design nuances need to be discussed) labels Jul 28, 2023
@alilibx commented Aug 27, 2023

This situation can also pose a challenge. If the user hasn't given adequate context, it can potentially be addressed using a fallback mechanism within the framework, similar to handling unknown types. However, there's still the possibility that the user might only provide the information that was missing, causing the question to lose its original context. Consequently, determining the user's original intent becomes quite intricate.

For instance:

User: I'm interested in reserving a flight.
Assistant: Kindly provide details such as the departure and arrival locations, as well as the dates.
User: Departing from JFK to London.

And so on.
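One conceivable mitigation (purely illustrative, with hypothetical names): carry the pending question and the partially filled object into the next prompt, so a fragment like "Departing from JFK to London" can be merged back into the original intent:

```ts
// Hypothetical: re-establish context when the user answers only the missing part.
interface FlightRequest {
  from?: string;
  to?: string;
  departDate?: string;
  returnDate?: string;
}

function buildFollowUpPrompt(partial: FlightRequest, question: string, reply: string): string {
  return (
    `The user is booking a flight. Fields known so far:\n` +
    `${JSON.stringify(partial, null, 2)}\n` +
    `The assistant asked: "${question}"\n` +
    `The user replied: "${reply}"\n` +
    `Merge the reply into the JSON object above and return the updated object.`
  );
}
```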

@ahejlsberg (Member)

See #114.

@rohanrajpal

> See #114.

Looks promising! Will give this a spin today

@olawalejuwonm

Is there a solution for this?

@gvanrossum (Contributor) commented Apr 22, 2024

I'd like to restart this conversation. In #238 I am trying to add a new Python example that would benefit from a conversation with the user. I felt it was wrong to put the user's conversation history before the description of the schema (where translate() currently puts it, at least in the Python version) and ended up doing some hacking on translate() to allow the chat history to be placed where I figured it should go. I ripped that out of my PR, but the sample session there shows that it works.

Then I noticed that there's another demo, healthData, that implements chat history by overriding the Translator class. This makes me think that there's an actual need, and that prompt_preamble (#114, mentioned above) isn't the whole answer. The healthData example also does some other prompt engineering: it adds additional instructions for the agent to the prompt, as well as some hardcoded instructions about relative dates and times.

So let's break it all up. Let's somehow add a mechanism that allows the user to do their own prompt engineering. In a comment on my PR, @DanielRosenwasser writes:

> I would think that the best format for incorporating chat history would be something like
>
>   • System: The assistant is a bot that responds in JSON according to a schema.
>   • ...: In-between messages alternating between user/assistant
>   • User: Some request
>   • System: Translate the prior request with JSON that satisfies Type in the following schema:
>
> That gives background for the current convo, plus reinforces the current task at hand. This PR is pretty close to that.

Maybe the solution is just to change translate() so that the prompt preamble goes between the schema-describing prompt and the final user request. That would support the needs of the healthData example too, IIUC.
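For concreteness, here is a sketch of that message ordering, assuming an OpenAI-style chat message array (the exact wording is illustrative, not what TypeChat emits):

```ts
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildMessages(
  history: ChatMessage[], // in-between messages alternating between user/assistant
  schema: string,
  typeName: string,
  request: string
): ChatMessage[] {
  return [
    { role: "system", content: "The assistant is a bot that responds in JSON according to a schema." },
    ...history,
    { role: "user", content: request },
    {
      role: "system",
      content: `Translate the prior request into JSON that satisfies type "${typeName}" in the following schema:\n${schema}`,
    },
  ];
}
```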

Anyone?
