I noticed it's targeted at static prompts (you can write the prompt directly into the code).

This means it's useful for unit tests, but not for e2e tests of prompts that load some of their text from a similarity search on a vector DB or another external source.

I think this could be easy to fix by adding another test case type that relies on an external function or API call before calling the LLM. After skimming the code, my guess is that PromptCase talks directly to OpenAI, so there's no room for external calls to modify the prompt.

I think this would be a super useful feature: a lot of applications are being built with this architecture and have the same problem you're trying to solve as you build it for Preset. I'm happy to work on it as well if you think it's something you'd add to the repo.
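For concreteness, the kind of runtime prompt assembly I mean looks roughly like this. The "similarity search" here is a toy in-memory word-overlap ranking standing in for a real vector DB call, and all the names are hypothetical:

```python
# Toy stand-in for a vector-DB similarity search: rank stored snippets
# by naive word overlap with the question. A real app would query
# Pinecone/FAISS/etc. here -- the point is only that the final prompt
# text is not known until runtime.
def retrieve_context(question: str, docs: list[str], k: int = 1) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    context = "\n".join(retrieve_context(question, docs))
    return f"Use the following context to answer my question\n{context}\n{question}"

docs = [
    "Twenty-One Pilots is a band from Columbus, Ohio.",
    "Asake is an Afrobeats artist from Lagos.",
]
print(build_prompt("name my favourite band", docs))
```

Since the retrieved context depends on what's in the store at run time, a purely static prompt string can't represent this case.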
```python
class DynamicPromptCase:
    pass

# Later
from promptimize.prompts import DynamicPromptCase
from promptimize import evals

def enhance(prompt: str) -> str:
    context = "I like the following music the most: Organize by Asake (Afrobeats), Ride by Twenty-One Pilots (Band)."
    return f"Use the following context to answer my question\n{context}\n{prompt}"

simple_prompts = [
    DynamicPromptCase(
        "name my personal favourite band",
        enhance,
        lambda x: evals.all_words(x, ["Twenty-One Pilots"]),
    ),
]
```
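For what it's worth, the core of such a DynamicPromptCase could be tiny: resolve the prompt through the enhancement function right before the completion call. A rough sketch that assumes nothing about promptimize's internals (the `render`/`run` names and signatures are made up):

```python
class DynamicPromptCase:
    """Hypothetical sketch -- not promptimize's actual API. The idea:
    the final prompt text is computed at run time by an `enhance`
    callable (which could hit a vector DB, an API, etc.)."""

    def __init__(self, prompt, enhance=None, evaluator=None):
        self.prompt = prompt
        self.enhance = enhance
        self.evaluator = evaluator

    def render(self) -> str:
        # Apply the dynamic enhancement just before the completion call,
        # so externally fetched context can be injected into the prompt.
        return self.enhance(self.prompt) if self.enhance else self.prompt

    def run(self, complete):
        # `complete` is the LLM call (e.g. a thin OpenAI wrapper),
        # passed in so e2e tests can stub it out.
        response = complete(self.render())
        passed = self.evaluator(response) if self.evaluator else None
        return response, passed
```

With `enhance=None` this degrades to the existing static behaviour, so it wouldn't break current usage.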
A more general version of this could also let you swap out OpenAI entirely, so that the original PromptCase becomes a special case of DynamicPromptCase with predefined values: OpenAI as the backend, no enhancement, and so on.
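One way that generalization could look (all names hypothetical, not the library's API): the completion function becomes a constructor argument with OpenAI as the default, and the static case is just a subclass.

```python
class DynamicPromptCase:
    """Hypothetical sketch: both the prompt transformation and the
    LLM backend are injectable, so nothing here is OpenAI-specific."""

    def __init__(self, prompt, enhance=None, complete=None):
        self.prompt = prompt
        self.enhance = enhance or (lambda p: p)
        self.complete = complete or self._default_complete

    @staticmethod
    def _default_complete(prompt):
        # In the real library this would wrap the OpenAI call;
        # stubbed out here since this is just an illustration.
        raise NotImplementedError("plug in an LLM backend")

    def run(self):
        return self.complete(self.enhance(self.prompt))

# Today's static PromptCase then falls out as the degenerate case:
# no enhancement, default (OpenAI) backend.
class PromptCase(DynamicPromptCase):
    pass

# An e2e-style test can swap in a fake backend:
case = DynamicPromptCase(
    "name my favourite band",
    enhance=lambda p: f"context from a vector DB\n{p}",
    complete=lambda p: f"echo: {p}",
)
```

Injecting the backend this way also makes the suite cheap to run in CI, since tests can use a stub instead of a live API.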
Hi @mistercrunch
Thanks for building this, it's sorely needed.