No faster way to get started than by diving in and playing around with a demo.
Need quickstarts to begin your Redis AI journey? **Start here.**
### Non-Python Redis AI Recipes
#### ☕️ Java
A set of Java recipes can be found under [/java-recipes](/java-recipes/README.md).
### Getting started with Redis & Vector Search
| Recipe | Description |
| --- | --- |
|[/vector-search/02_hybrid_search.ipynb](/python-recipes/vector-search/02_hybrid_search.ipynb)| Hybrid search techniques with Redis (BM25 + Vector) |
|[/vector-search/03_dtype_support.ipynb](/python-recipes/vector-search/03_dtype_support.ipynb)| Shows how to convert a float32 index to float16 or integer datatypes |
### Retrieval Augmented Generation (RAG)
LLMs are stateless. To maintain context within a conversation, chat sessions must be stored and passed back to the model with each request.
|[/llm-message-history/00_message_history.ipynb](python-recipes/llm-message-history/00_llm_message_history.ipynb)| LLM message history with semantic similarity |
|[/llm-message-history/01_multiple_sessions.ipynb](python-recipes/llm-message-history/01_multiple_sessions.ipynb)| Handle multiple simultaneous chats with one instance |
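The pattern behind these recipes can be illustrated without Redis: keep an ordered list of turns per session and replay the most recent ones with each request. A minimal in-memory sketch (the recipes themselves use redisvl's Redis-backed message history; all names below are illustrative):

```python
from collections import defaultdict

# Toy message history: LLMs are stateless, so prior turns must be stored and
# replayed on every request. This in-memory sketch shows the basic pattern;
# the recipes persist and filter messages in Redis instead.
class MessageHistory:
    def __init__(self, max_turns: int = 20):
        self.max_turns = max_turns
        self.sessions = defaultdict(list)  # session_id -> [{"role", "content"}]

    def add(self, session_id: str, role: str, content: str) -> None:
        self.sessions[session_id].append({"role": role, "content": content})

    def context(self, session_id: str) -> list:
        # Replay only the most recent turns to stay within the context window.
        return self.sessions[session_id][-self.max_turns:]

history = MessageHistory(max_turns=2)
history.add("alice", "user", "Hi, I'm Alice.")
history.add("alice", "assistant", "Hello Alice!")
history.add("alice", "user", "What's my name?")
# The prompt sent to the LLM would be history.context("alice") plus the new turn.
```

Keying the store by `session_id` is what allows one instance to serve multiple simultaneous chats, as in the second notebook.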
### Semantic Caching
An estimated 31% of LLM queries are potentially redundant ([source](https://arxiv.org/pdf/2403.02694)). Redis enables semantic caching to help cut down on LLM costs quickly.
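The idea can be sketched without any infrastructure: embed each prompt, and on a new query return the cached response of any sufficiently similar past prompt instead of calling the LLM. A dependency-free toy (the recipes use redisvl's `SemanticCache` with real embeddings stored in Redis; the hashing "embedding" here is purely illustrative):

```python
import math

# Deterministic toy "embedding": hash each word into a small bag-of-words
# vector via character sums. A real semantic cache embeds with a model and
# runs vector search in Redis; this only illustrates the mechanism.
def embed(text: str, dims: int = 64) -> list:
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b) -> float:
    return sum(x * y for x, y in zip(a, b))

class ToySemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold  # minimum similarity for a cache hit
        self.entries = []           # list of (embedding, response) pairs

    def store(self, prompt: str, response: str) -> None:
        self.entries.append((embed(prompt), response))

    def check(self, prompt: str):
        qvec = embed(prompt)
        for vec, response in self.entries:
            if cosine(qvec, vec) >= self.threshold:
                return response  # hit: skip the LLM call entirely
        return None  # miss: call the LLM, then store() the answer

cache = ToySemanticCache()
cache.store("what is the capital of France", "Paris")
print(cache.check("What is the capital of France?"))  # near-duplicate -> Paris
print(cache.check("tell me a joke"))                  # unrelated -> None
```

The `threshold` trades hit rate against the risk of serving a cached answer to a question that only looks similar, which is exactly what the notebooks tune.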
| Recipe | Description |
| --- | --- |
Routing is a simple and effective way of preventing misuse of your AI application.
|[/semantic-router/00_semantic_routing.ipynb](python-recipes/semantic-router/00_semantic_routing.ipynb)| Simple examples of how to build an allow/block list router in addition to a multi-topic router |
|[/semantic-router/01_routing_optimization.ipynb](python-recipes/semantic-router/01_routing_optimization.ipynb)| Use RouterThresholdOptimizer from redisvl to find the best router configuration |
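The mechanics of an allow/block router can be sketched in a few lines: each route holds reference phrases, and a query goes to the route whose closest reference is most similar, provided it clears a threshold. A dependency-free toy (the recipes use redisvl's `SemanticRouter` with real embeddings and Redis vector search; route names and references below are made up for illustration):

```python
# Crude word-hash "embedding", standing in for a real embedding model.
def embed(text: str, dims: int = 64) -> list:
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dims] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def cosine(a, b) -> float:
    return sum(x * y for x, y in zip(a, b))

# Hypothetical routes: a block list plus one topic route.
ROUTES = {
    "blocked": ["tell me how to hack a system"],
    "tech_support": ["my redis instance is slow"],
}

def route(query: str, threshold: float = 0.7):
    qvec = embed(query)
    name, score = max(
        ((n, max(cosine(qvec, embed(r)) for r in refs)) for n, refs in ROUTES.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None  # None -> no route matched

print(route("tell me how to hack a system"))  # -> blocked
print(route("what's the weather today"))      # -> None (falls through)
```

Per-route thresholds are the knobs that `RouterThresholdOptimizer` tunes in the second notebook.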
### AI Gateways
AI gateways manage LLM traffic through a centralized, managed layer that can implement routing, rate limiting, caching, and more.
| Recipe | Description |
| --- | --- |
|[/gateway/00_litellm_proxy_redis.ipynb](python-recipes/gateway/00_litellm_proxy_redis.ipynb)| Getting started with LiteLLM proxy and Redis. |
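As a rough sketch of how the proxy and Redis fit together, a minimal LiteLLM proxy config might wire Redis in as the response cache like this (field names follow LiteLLM's proxy config format; treat this as illustrative and see the notebook for a complete, tested setup):

```yaml
# litellm_config.yaml -- illustrative sketch, not a drop-in config.
model_list:
  - model_name: gpt-4o            # alias exposed by the proxy
    litellm_params:
      model: openai/gpt-4o        # upstream provider/model
litellm_settings:
  cache: true                     # enable response caching
  cache_params:
    type: redis                   # back the cache with Redis
    host: localhost
    port: 6379
```

The proxy is then started with `litellm --config litellm_config.yaml`, and every client pointed at it shares the Redis-backed cache.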