cpetersen/ruby_llm#2 (Open)
Labels: enhancement (New feature or request)
Description
Scope check
- This is core LLM communication (not application logic)
- This benefits most users (not just my use case)
- This can't be solved in application code with current RubyLLM
- I read the Contributing Guide
Due diligence
- I searched existing issues
- I checked the documentation
What problem does this solve?
We'd like access to in-process, locally running LLMs. These are especially useful for running smaller models inline, or for workloads that would be cost-prohibitive against a hosted API.
Proposed solution
red-candle allows users to run state-of-the-art language models directly from Ruby with blazing-fast Rust under the hood, hardware accelerated with Metal (Mac) and CUDA (NVIDIA). red-candle leverages the Rust ecosystem, notably Candle and Magnus, to provide a fast and efficient way to run LLMs in Ruby. https://github.com/scientist-labs/red-candle
Why this belongs in RubyLLM
red-candle offers a unique alternative to the existing providers: it is local, runs in-process, and has minimal requirements (no API key, no network access).
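To make the request concrete, here is a hypothetical sketch of how an in-process provider could expose a chat-style interface. The class and method names (`LocalProvider#chat`, `FakeLocalModel#generate`) are illustrative assumptions, not RubyLLM's or red-candle's real APIs; a stub model stands in for a red-candle model so the sketch is self-contained.

```ruby
# Hypothetical sketch of a local, in-process provider adapter.
# Names are assumptions; RubyLLM's provider API and red-candle's
# real interface may differ.

# Stand-in for an in-process model (no network calls, no API key).
# A real integration would load a red-candle model here instead.
class FakeLocalModel
  def generate(prompt, max_tokens: 64)
    "echo: #{prompt[0, max_tokens]}"
  end
end

# Adapter exposing the model through a chat-style interface.
class LocalProvider
  def initialize(model)
    @model = model
  end

  # Accepts an array of { role:, content: } messages and returns
  # the assistant's reply as plain text.
  def chat(messages, max_tokens: 64)
    prompt = messages.map { |m| "#{m[:role]}: #{m[:content]}" }.join("\n")
    @model.generate(prompt, max_tokens: max_tokens)
  end
end

provider = LocalProvider.new(FakeLocalModel.new)
puts provider.chat([{ role: "user", content: "Hello" }])
```

The point of the sketch is the shape of the integration: everything happens in the calling process, so no credentials or network configuration are needed.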
tpaulshippy