15 changes: 14 additions & 1 deletion docs/guides/chat.md
@@ -246,6 +246,19 @@ puts response2.content

You can set the temperature using `with_temperature`, which returns the `Chat` instance for chaining.

## Custom Request Parameters

You can pass additional provider-specific fields in each API request using the `with_params` method, which returns the `Chat` instance for chaining.

```ruby
# response_format parameter is supported by :openai, :ollama, :deepseek
chat = RubyLLM.chat.with_params(response_format: { type: 'json_object' })
response = chat.ask "What is the square root of 64? Answer with a JSON object with the key `result`."
puts JSON.parse(response.content)
```

Allowed parameters vary widely by provider and model. Custom params are deep-merged into the payload RubyLLM renders, and RubyLLM's own fields take precedence on any conflicting key.

## Tracking Token Usage

Understanding token usage is important for managing costs and staying within context limits. Each `RubyLLM::Message` returned by `ask` includes token counts.
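For instance, a minimal sketch of reading those counts; the accessor names `input_tokens` and `output_tokens` on the returned message are assumed here:

```ruby
chat = RubyLLM.chat
response = chat.ask "Summarize deep merging in one sentence."

# Assumed accessor names for the token counts described above.
puts "Input tokens:  #{response.input_tokens}"
puts "Output tokens: #{response.output_tokens}"
```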
@@ -311,4 +324,4 @@ This guide covered the core `Chat` interface. Now you might want to explore:
* [Using Tools]({% link guides/tools.md %}): Enable the AI to call your Ruby code.
* [Streaming Responses]({% link guides/streaming.md %}): Get real-time feedback from the AI.
* [Rails Integration]({% link guides/rails.md %}): Persist your chat conversations easily.
* [Error Handling]({% link guides/error-handling.md %}): Build robust applications that handle API issues.
5 changes: 5 additions & 0 deletions lib/ruby_llm/active_record/acts_as.rb
@@ -130,6 +130,11 @@ def with_context(...)
self
end

def with_params(...)
to_llm.with_params(...)
self
end

def on_new_message(...)
to_llm.on_new_message(...)
self
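Since the new `with_params` delegates through `to_llm` and returns `self`, a Rails-side chat record should chain like a plain `Chat`. A hedged sketch, assuming a `Chat` model declared with `acts_as_chat` as in the Rails guide (the model id below is illustrative):

```ruby
# Assumes a model set up with `acts_as_chat` (see the Rails guide).
chat_record = Chat.create!(model_id: 'gpt-4o-mini') # illustrative model id

chat_record
  .with_params(response_format: { type: 'json_object' })
  .ask("Reply with a JSON object containing the key `status`.")
```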
9 changes: 8 additions & 1 deletion lib/ruby_llm/chat.rb
@@ -11,7 +11,7 @@ module RubyLLM
class Chat
include Enumerable

attr_reader :model, :messages, :tools
attr_reader :model, :messages, :tools, :params

def initialize(model: nil, provider: nil, assume_model_exists: false, context: nil)
if assume_model_exists && !provider
@@ -25,6 +25,7 @@ def initialize(model: nil, provider: nil, assume_model_exists: false, context: nil)
@temperature = 0.7
@messages = []
@tools = {}
@params = {}
@on = {
new_message: nil,
end_message: nil
@@ -78,6 +79,11 @@ def with_context(context)
self
end

def with_params(**params)
@params = params
self
end

def on_new_message(&block)
@on[:new_message] = block
self
@@ -99,6 +105,7 @@ def complete(&)
temperature: @temperature,
model: @model.id,
connection: @connection,
params: @params,
&wrap_streaming_block(&)
)

27 changes: 21 additions & 6 deletions lib/ruby_llm/provider.rb
@@ -10,14 +10,19 @@ module Provider
module Methods
extend Streaming

def complete(messages, tools:, temperature:, model:, connection:, &)
def complete(messages, tools:, temperature:, model:, connection:, params: {}, &) # rubocop:disable Metrics/ParameterLists
normalized_temperature = maybe_normalize_temperature(temperature, model)

payload = render_payload(messages,
tools: tools,
temperature: normalized_temperature,
model: model,
stream: block_given?)
payload = deep_merge(
params,
render_payload(
messages,
tools: tools,
temperature: normalized_temperature,
model: model,
stream: block_given?
)
)

if block_given?
stream_response connection, payload, &
@@ -26,6 +31,16 @@ def complete(messages, tools:, temperature:, model:, connection:, &)
end
end

def deep_merge(params, payload)
params.merge(payload) do |_key, params_value, payload_value|
if params_value.is_a?(Hash) && payload_value.is_a?(Hash)
deep_merge(params_value, payload_value)
else
payload_value
end
end
end

def list_models(connection:)
response = connection.get models_url
parse_list_models_response response, slug, capabilities
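To make the precedence concrete, a small illustration of `deep_merge` as defined above: nested hashes merge recursively, and on a conflicting key the rendered payload beats the user-supplied params (the values here are illustrative only):

```ruby
params  = { response_format: { type: 'json_object' }, stream: true }
payload = { stream: false, model: 'gpt-4o' }

# `stream` conflicts, so the rendered payload's value wins.
deep_merge(params, payload)
# => { response_format: { type: 'json_object' }, stream: false, model: 'gpt-4o' }
```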