Releases: run-llama/create-llama
v0.1.11
Patch Changes
- 48b96ff: Add DuckDuckGo search tool
- 9c9decb: Reuse function tool instances and improve e2b interpreter tool for Python
- 02ed277: Add Groq as a model provider
- 0748f2e: Remove hard-coded Gemini supported models
v0.1.10
Patch Changes
- 9112d08: Add OpenAPI tool for TypeScript
- 8f03f8d: Add OLLAMA_REQUEST_TIMEOUT variable to configure the Ollama request timeout (Python)
- 8f03f8d: Apply nest_asyncio for LlamaParse
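The OLLAMA_REQUEST_TIMEOUT variable above is read from the environment; a minimal sketch of setting it before starting the app (the value shown is an illustrative choice, not a documented default):

```shell
# Give slow local Ollama models more time before the request is aborted.
# 120.0 seconds is an assumed example value, not a project default.
export OLLAMA_REQUEST_TIMEOUT=120.0
echo "Ollama timeout set to ${OLLAMA_REQUEST_TIMEOUT}s"
```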
v0.1.9
Patch Changes
- a42fa53: Add CSV upload
- 563b51d: Fix Vercel streaming (Python) to stream data events instantly
- d60b3c5: Add E2B code interpreter tool for FastAPI
- 956538e: Add OpenAPI action tool for FastAPI
v0.1.8
Patch Changes
- cd50a33: Add interpreter tool for TS using e2b.dev
v0.1.7
Patch Changes
- 260d37a: Add system prompt env variable for TS
- bbd5b8d: Fix PostgreSQL connection leak
- bb53425: Support HTTP proxies by setting the GLOBAL_AGENT_HTTP_PROXY env variable
- 69c2e16: Fix streaming for Express
- 7873bfb: Update Ollama provider to run with the base URL from the environment variable
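The HTTP proxy support above is driven by the GLOBAL_AGENT_HTTP_PROXY variable; a minimal sketch of enabling it (the proxy address is a placeholder, not a recommended endpoint):

```shell
# Route the app's outbound HTTP(S) requests through a proxy.
# http://127.0.0.1:8080 is a placeholder for a real proxy URL.
export GLOBAL_AGENT_HTTP_PROXY=http://127.0.0.1:8080
echo "Proxy configured: ${GLOBAL_AGENT_HTTP_PROXY}"
```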
v0.1.6
Patch Changes
- 56537a1: Display PDF files in source nodes
v0.1.5
Patch Changes
- 84db798: Support displaying LaTeX in chat Markdown
v0.1.4
Patch Changes
- 0bc8e75: Use ingestion pipeline for dedicated vector stores (Python only)
- cb1001d: Add ChromaDB vector store
v0.1.3
Patch Changes
- 416073d: Directly import vector stores to work with Next.js
v0.1.2
Patch Changes
- 056e376: Add support for displaying tool outputs (including a weather widget as an example)