Commit 9f0215c

Merge branch 'crmne:main' into main

2 parents ca001bd + 55efae7
84 files changed (+4866 −4156 lines)


.overcommit.yml

Lines changed: 3 additions & 3 deletions

````diff
@@ -24,10 +24,10 @@ PreCommit:
     targets: ['models:update', 'models:docs', 'aliases:generate']
     on_warn: fail

-  AppraisalGenerate:
+  AppraisalUpdate:
     enabled: true
-    description: 'Generate appraisal gemfiles'
-    command: ['bundle', 'exec', 'appraisal', 'generate']
+    description: 'Update appraisal gemfiles'
+    command: ['bundle', 'exec', 'appraisal', 'update']

 PostCheckout:
   ALL: # Special hook name that customizes all hooks of this type
````

.rubocop.yml

Lines changed: 13 additions & 1 deletion

````diff
@@ -1,4 +1,5 @@
 plugins:
+  - rubocop-performance
   - rubocop-rake
   - rubocop-rspec

@@ -21,7 +22,18 @@ Metrics/MethodLength:
   Enabled: false
 Metrics/ModuleLength:
   Enabled: false
+Performance/CollectionLiteralInLoop:
+  Exclude:
+    - spec/**/*
+Performance/RedundantBlockCall:
+  Enabled: false # TODO: temporarily disabled to avoid potential breaking change
+Performance/StringInclude:
+  Exclude:
+    - lib/ruby_llm/providers/**/capabilities.rb
+Performance/UnfreezeString:
+  Exclude:
+    - spec/**/*
 RSpec/ExampleLength:
   Enabled: false
 RSpec/MultipleExpectations:
-  Enabled: false
+  Enabled: false
````

Gemfile

Lines changed: 1 addition & 0 deletions

````diff
@@ -22,6 +22,7 @@ group :development do # rubocop:disable Metrics/BlockLength
   gem 'reline'
   gem 'rspec', '~> 3.12'
   gem 'rubocop', '>= 1.0'
+  gem 'rubocop-performance'
   gem 'rubocop-rake', '>= 0.6'
   gem 'rubocop-rspec'
   gem 'ruby_llm-schema', '~> 0.1.0'
````

README.md

Lines changed: 8 additions & 34 deletions

````diff
@@ -1,44 +1,18 @@
-<img src="/docs/assets/images/logotype.svg" alt="RubyLLM" height="120" width="250">
-
-**A delightful Ruby way to work with AI.** RubyLLM provides **one** beautiful, Ruby-like interface to interact with modern AI models. Chat, generate images, create embeddings, and use tools – all with clean, expressive code that feels like Ruby, not like patching together multiple services.
-
-<div class="provider-icons">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" alt="Anthropic" class="logo-small">
-  &nbsp;
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/bedrock-color.svg" alt="Bedrock" class="logo-medium">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/bedrock-text.svg" alt="Bedrock" class="logo-small">
-  &nbsp;
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" alt="DeepSeek" class="logo-medium">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-text.svg" alt="DeepSeek" class="logo-small">
-  &nbsp;
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-brand-color.svg" alt="Gemini" class="logo-large">
-  <br>
-  <img src="https://raw.githubusercontent.com/gpustack/gpustack/main/docs/assets/gpustack-logo.png" alt="GPUStack" class="logo-medium" height="16">
-  &nbsp;
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/mistral-color.svg" alt="Mistral" class="logo-medium">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/mistral-text.svg" alt="Mistral" class="logo-small">
-  &nbsp;
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" alt="Ollama" class="logo-medium">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama-text.svg" alt="Ollama" class="logo-medium">
-  &nbsp;
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" alt="OpenAI" class="logo-medium">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai-text.svg" alt="OpenAI" class="logo-medium">
-  &nbsp;
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" alt="OpenRouter" class="logo-medium">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter-text.svg" alt="OpenRouter" class="logo-small">
-  &nbsp;
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/perplexity-color.svg" alt="Perplexity" class="logo-medium">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/perplexity-text.svg" alt="Perplexity" class="logo-small">
-</div>
+<picture>
+  <source media="(prefers-color-scheme: dark)" srcset="/docs/assets/images/logotype_dark.svg">
+  <img src="/docs/assets/images/logotype.svg" alt="RubyLLM" height="120" width="250">
+</picture>
+
+**One *beautiful* Ruby API for GPT, Claude, Gemini, and more.** Easily build chatbots, AI agents, RAG applications, and content generators. Features chat (text, images, audio, PDFs), image generation, embeddings, tools (function calling), structured output, Rails integration, and streaming. Works with OpenAI, Anthropic, Google Gemini, AWS Bedrock, DeepSeek, Mistral, Ollama (local models), OpenRouter, Perplexity, GPUStack, and any OpenAI-compatible API.

 <div class="badge-container">
-  <a href="https://badge.fury.io/rb/ruby_llm"><img src="https://badge.fury.io/rb/ruby_llm.svg?a=3" alt="Gem Version" /></a>
+  <a href="https://badge.fury.io/rb/ruby_llm"><img src="https://badge.fury.io/rb/ruby_llm.svg?a=5" alt="Gem Version" /></a>
   <a href="https://github.com/testdouble/standard"><img src="https://img.shields.io/badge/code_style-standard-brightgreen.svg" alt="Ruby Style Guide" /></a>
   <a href="https://rubygems.org/gems/ruby_llm"><img alt="Gem Downloads" src="https://img.shields.io/gem/dt/ruby_llm"></a>
   <a href="https://codecov.io/gh/crmne/ruby_llm"><img src="https://codecov.io/gh/crmne/ruby_llm/branch/main/graph/badge.svg" alt="codecov" /></a>
 </div>

-🤺 Battle tested at [💬 Chat with Work](https://chatwithwork.com)
+Battle tested at [<picture><source media="(prefers-color-scheme: dark)" srcset="https://chatwithwork.com/logotype-dark.svg"><img src="https://chatwithwork.com/logotype.svg" alt="Chat with Work" height="30" align="absmiddle"></picture>](https://chatwithwork.com)*Claude Code for your documents*

 ## The problem with AI libraries
````

docs/_advanced/error-handling.md

Lines changed: 0 additions & 1 deletion

````diff
@@ -50,7 +50,6 @@ RubyLLM::Error # Base error class for API/network issues
 RubyLLM::ConfigurationError # Missing required configuration (e.g., API key)
 RubyLLM::ModelNotFoundError # Requested model ID not found in registry
 RubyLLM::InvalidRoleError # Invalid role symbol used for a message
-RubyLLM::UnsupportedFunctionsError # Tried to use tools with an unsupported model
 ```

 ## Basic Error Handling
````

docs/_advanced/models.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -260,7 +260,7 @@ puts response.content

 # You can also use it in .with_model
 chat.with_model(
-  model: 'gpt-5-alpha',
+  'gpt-5-alpha',
   provider: :openai, # MUST specify provider
   assume_exists: true
 )
````
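The docs fix above reflects `with_model` taking the model ID as its first positional argument rather than a `model:` keyword. As a minimal, self-contained sketch of that call shape (the `Chat` class below is a simplified stand-in, not RubyLLM's real implementation, and `'gpt-5-alpha'` is just the placeholder ID used in the docs):

```ruby
# Stand-in for the corrected .with_model signature: model ID is positional,
# provider and assume_exists remain keyword arguments.
class Chat
  attr_reader :model, :provider, :assume_exists

  def with_model(model_id, provider: nil, assume_exists: false)
    @model = model_id
    @provider = provider
    @assume_exists = assume_exists
    self # chainable, matching the fluent style in the docs
  end
end

chat = Chat.new.with_model('gpt-5-alpha', provider: :openai, assume_exists: true)
```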

docs/_config.yml

Lines changed: 0 additions & 2 deletions

````diff
@@ -83,8 +83,6 @@ nav_external_links:
     url: https://paolino.me
     hide_icon: false

-footer_content: "Open source under <a href=\"https://github.com/crmne/ruby_llm/tree/main/LICENSE\">MIT license</a>.<br>Built by <a href=\"https://paolino.me\">Carmine Paolino</a>, maker of <a href=\"https://chatwithwork.com\"><strong>Chat with Work</strong></a> — Claude Code for your documents."
-
 last_edit_timestamp: true

 enable_copy_code_button: true
````

docs/_core_features/chat.md

Lines changed: 6 additions & 7 deletions

````diff
@@ -424,17 +424,16 @@ Not all models support structured output. Currently supported:
 - **Anthropic**: No native structured output support. You can simulate it with tool definitions or careful prompting
 - **Gemini**: Gemini 1.5 Pro/Flash and newer

-Models that don't support structured output will raise an error:
+Models that don't support structured output:

 ```ruby
+# RubyLLM 1.6.2+ will attempt to use schemas with any model
 chat = RubyLLM.chat(model: 'gpt-3.5-turbo')
-chat.with_schema(schema) # Raises UnsupportedStructuredOutputError
-```
-
-You can force schema usage even if the model registry says it's unsupported:
+chat.with_schema(schema)
+response = chat.ask('Generate a person')
+# Provider will return an error if unsupported

-```ruby
-chat.with_schema(schema, force: true)
+# Prior to 1.6.2, with_schema would raise UnsupportedStructuredOutputError
 ```

 ### Multi-turn Conversations with Schemas
````
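The structured-output change in this docs diff is a behavior shift: before 1.6.2, `with_schema` raised locally when the model registry said schemas were unsupported; from 1.6.2, the schema is always accepted and any failure surfaces from the provider at `ask` time. A toy illustration of that difference (this `Chat` is a simplified stand-in, not RubyLLM's real class):

```ruby
# Toy model of the with_schema behavior change across versions.
class Chat
  class UnsupportedStructuredOutputError < StandardError; end

  def initialize(version:, supports_structured_output:)
    @version = Gem::Version.new(version)
    @supports = supports_structured_output
  end

  def with_schema(schema)
    if @version < Gem::Version.new('1.6.2') && !@supports
      raise UnsupportedStructuredOutputError # pre-1.6.2: fail fast locally
    end
    @schema = schema # 1.6.2+: accept; the provider decides at ask time
    self
  end
end

old_chat = Chat.new(version: '1.6.1', supports_structured_output: false)
new_chat = Chat.new(version: '1.6.2', supports_structured_output: false)
```

With this stand-in, `old_chat.with_schema({...})` raises immediately, while `new_chat.with_schema({...})` succeeds and defers any error to the provider call.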

docs/_core_features/tools.md

Lines changed: 15 additions & 2 deletions

````diff
@@ -204,14 +204,27 @@ weather_tool = Weather.new
 chat.with_tool(weather_tool)
 # Or add multiple: chat.with_tools(WeatherLookup, AnotherTool.new)

+# Replace all tools with new ones
+chat.with_tools(NewTool, AnotherTool, replace: true)
+
+# Clear all tools
+chat.with_tools(replace: true)
+
 # Ask a question that should trigger the tool
 response = chat.ask "What's the current weather like in Berlin? (Lat: 52.52, Long: 13.40)"
 puts response.content
 # => "Current weather at 52.52, 13.4: Temperature: 12.5°C, Wind Speed: 8.3 km/h, Conditions: Mainly clear, partly cloudy, and overcast."
 ```

-> Ensure the model you select supports function calling/tools. Check model capabilities using `RubyLLM.models.find('your-model-id').supports_functions?`. Attempting to use `with_tool` on an unsupported model will raise `RubyLLM::UnsupportedFunctionsError`.
-{: .warning }
+### Model Compatibility
+{: .d-inline-block }
+
+Changed in v1.6.2+
+{: .label .label-green }
+
+RubyLLM v1.6.2+ will attempt to use tools with any model. If the model doesn't support function calling, the provider will return an appropriate error when you call `ask`.
+
+Prior to v1.6.2, calling `with_tool` on an unsupported model would immediately raise `RubyLLM::UnsupportedFunctionsError`.

 ## The Tool Execution Flow
````
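The `replace: true` option documented above (append by default, swap the tool set when `replace: true`, clear it when called with no tools) can be sketched with a small stand-in (again simplified, not RubyLLM's real implementation):

```ruby
# Stand-in for with_tools(*tools, replace:) semantics from the docs diff.
class Chat
  attr_reader :tools

  def initialize
    @tools = []
  end

  def with_tools(*tools, replace: false)
    @tools = [] if replace # replace: true discards the current set first
    @tools.concat(tools)
    self
  end
end

chat = Chat.new
chat.with_tools(:weather, :search)        # append two tools
chat.with_tools(:new_tool, replace: true) # replace the whole set
```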

docs/_getting_started/configuration.md

Lines changed: 2 additions & 6 deletions

````diff
@@ -98,7 +98,7 @@ Connect to any OpenAI-compatible API endpoint, including local models, proxies,
 RubyLLM.configure do |config|
   # API key - use what your server expects
   config.openai_api_key = ENV['CUSTOM_API_KEY'] # Or 'dummy-key' if not required
-
+
   # Your custom endpoint
   config.openai_api_base = "http://localhost:8080/v1" # vLLM, LiteLLM, etc.
 end
@@ -119,7 +119,7 @@ OpenAI's API now uses 'developer' role for system messages, but some OpenAI-comp
 RubyLLM.configure do |config|
   # For servers that require 'system' role (e.g., older vLLM, some local models)
   config.openai_use_system_role = true # Use 'system' role instead of 'developer'
-
+
   # Your OpenAI-compatible endpoint
   config.openai_api_base = "http://localhost:11434/v1" # Ollama, vLLM, etc.
   config.openai_api_key = "dummy-key" # If required by your server
@@ -222,9 +222,6 @@ RubyLLM.configure do |config|
   # Enable debug logging via environment variable
   config.log_level = :debug if ENV['RUBYLLM_DEBUG'] == 'true'

-  # Silence "Assuming model exists" warnings
-  config.log_assume_model_exists = false
-
   # Show detailed streaming chunks (v1.6.0+)
   config.log_stream_debug = true # Or set RUBYLLM_STREAM_DEBUG=true
 end
@@ -356,7 +353,6 @@ RubyLLM.configure do |config|
   config.logger = Logger
   config.log_file = String
   config.log_level = Symbol
-  config.log_assume_model_exists = Boolean
   config.log_stream_debug = Boolean # v1.6.0+
 end
 ```
````
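The logging settings that survive this commit follow RubyLLM's `configure`-block pattern. A self-contained sketch of that pattern (`RubyLLMStub` and its `Config` struct are simplified stand-ins; the real gem exposes many more settings, and `log_assume_model_exists` is gone as shown in the diff):

```ruby
# Minimal stand-in for the configure-block pattern used throughout the docs.
# Only the two logging settings kept by this commit are modeled.
module RubyLLMStub
  Config = Struct.new(:log_level, :log_stream_debug)

  def self.configure
    config = Config.new
    yield config
    config
  end
end

cfg = RubyLLMStub.configure do |config|
  config.log_level = ENV['RUBYLLM_DEBUG'] == 'true' ? :debug : :info
  config.log_stream_debug = true # v1.6.0+; or set RUBYLLM_STREAM_DEBUG=true
end
```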
