Description:
We could potentially reduce costs by up to 90% by auto-injecting prompt caching checkpoints. Long, static parts of a prompt can be cached so they are not reprocessed on every request.
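A minimal sketch of what auto-injection could look like, built on LiteLLM's existing support for Anthropic-style `cache_control` content blocks. The helper `inject_cache_checkpoints` and the `CACHE_MIN_CHARS` threshold are hypothetical illustrations, not part of LiteLLM:

```python
# Hypothetical sketch: attach an ephemeral cache_control checkpoint to long,
# static message parts before calling litellm.completion(). The helper name
# and length heuristic below are assumptions, not existing LiteLLM behavior.
import litellm

CACHE_MIN_CHARS = 4096  # assumed heuristic: only cache sufficiently long content


def inject_cache_checkpoints(messages):
    """Return a copy of `messages` with cache_control added to long text parts."""
    result = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, str) and len(content) >= CACHE_MIN_CHARS:
            # Convert plain string content into a content-block list so the
            # ephemeral cache_control marker can be attached to it.
            msg = {
                **msg,
                "content": [
                    {
                        "type": "text",
                        "text": content,
                        "cache_control": {"type": "ephemeral"},
                    }
                ],
            }
        result.append(msg)
    return result


messages = [
    {
        "role": "system",
        # Long, static context (e.g. a contract or codebase) benefits most.
        "content": "You are a legal assistant." + " <long static contract text>" * 500,
    },
    {"role": "user", "content": "Summarize clause 4."},
]

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=inject_cache_checkpoints(messages),
)
print(response.choices[0].message.content)
```

With auto-injection, the checkpoint placement above would happen inside LiteLLM itself instead of in user code.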
Relevant Links:
https://docs.litellm.ai/docs/tutorials/prompt_caching