**One *beautiful* Ruby API for GPT, Claude, Gemini, and more.**
Battle tested at [<picture><source media="(prefers-color-scheme: dark)" srcset="https://chatwithwork.com/logotype-dark.svg"><img src="https://chatwithwork.com/logotype.svg" alt="Chat with Work" height="30" align="absmiddle"></picture>](https://chatwithwork.com) — *Claude Code for your documents*
---
Build chatbots, AI agents, RAG applications. Works with OpenAI, Anthropic, Google, AWS, local models, and any OpenAI-compatible API.
## Why RubyLLM?
Every AI provider ships their own bloated client. Different APIs. Different response formats. Different conventions. It's exhausting.
RubyLLM gives you one beautiful API for all of them. Same interface whether you're using GPT, Claude, or your local Ollama. Just three dependencies: Faraday, Zeitwerk, and Marcel. That's it.
## Show me the code
```ruby
# Just ask questions
chat = RubyLLM.chat
chat.ask "What's the best way to learn Ruby?"
```
```ruby
# Analyze any file type
chat.ask "What's in this image?", with: "ruby_conf.jpg"
chat.ask "Describe this meeting", with: "meeting.wav"
chat.ask "Summarize this document", with: "contract.pdf"
chat.ask "Explain this code", with: "app.rb"
```
```ruby
# Multiple files at once
chat.ask "Analyze these files", with: ["diagram.png", "report.pdf", "notes.txt"]
```
```ruby
# Stream responses
chat.ask "Tell me a story about Ruby" do |chunk|
  print chunk.content
end
```
```ruby
# Generate images
RubyLLM.paint "a sunset over mountains in watercolor style"
```

```ruby
# Get structured output
response = chat.with_schema(ProductSchema).ask "Analyze this product", with: "product.txt"
```
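The `ProductSchema` used above isn't defined in this snippet. As a rough sketch of what such a schema class might look like — the field names here are illustrative assumptions, not part of the example:

```ruby
# Hypothetical schema for illustration; adjust fields to your data.
class ProductSchema < RubyLLM::Schema
  string :name, description: "Product name"
  number :price, description: "Price in USD"
  array :features, description: "Key features" do
    string
  end
end
```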
## Features
* **Chat:** Conversational AI with `RubyLLM.chat`
* **Vision:** Analyze images and screenshots
* **Audio:** Transcribe and understand speech
* **Documents:** Extract from PDFs, CSVs, JSON, any file type
* **Image generation:** Create images with `RubyLLM.paint`
* **Embeddings:** Vector search with `RubyLLM.embed`
* **Tools:** Let AI call your Ruby methods
* **Structured output:** JSON schemas that just work
* **Streaming:** Real-time responses with blocks
* **Rails:** ActiveRecord integration with `acts_as_chat`
* **Async:** Fiber-based concurrency
* **Model registry:** 500+ models with capability detection and pricing
* **Providers:** OpenAI, Anthropic, Gemini, Bedrock, DeepSeek, Mistral, Ollama, OpenRouter, Perplexity, GPUStack, and any OpenAI-compatible API
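
Tools let a model call back into your Ruby code. A minimal sketch, assuming the `RubyLLM::Tool` base class with its `description`/`param` DSL — the class name and weather logic are illustrative:

```ruby
# Hypothetical tool for illustration — a real one would call an actual API.
class Weather < RubyLLM::Tool
  description "Looks up the current temperature for a city"
  param :city, desc: "City name"

  def execute(city:)
    # Replace with a real weather lookup.
    "It's 22°C in #{city}"
  end
end

chat = RubyLLM.chat.with_tool(Weather)
chat.ask "What's the weather in Lisbon?"
```

The model decides when to invoke the tool; RubyLLM handles the function-calling round trip and feeds the return value back into the conversation.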
## Installation
Add it to your Gemfile:

```ruby
gem 'ruby_llm'
```
Then `bundle install`.
Configure your API keys (using environment variables is recommended):
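
For example, in an initializer — only the providers you actually use need a key (a sketch assuming the standard `RubyLLM.configure` block):

```ruby
# config/initializers/ruby_llm.rb (Rails) or anywhere before first use
RubyLLM.configure do |config|
  config.openai_api_key = ENV["OPENAI_API_KEY"]
  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
end
```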