
Commit

Add parameter to test_connection
hauselin committed Dec 30, 2024
1 parent 36e64ca commit 1592525
Showing 39 changed files with 178 additions and 110 deletions.
10 changes: 5 additions & 5 deletions NEWS.md
@@ -1,7 +1,7 @@
# ollamar (development version)

- `generate()` and `chat()` support [structured output](https://ollama.com/blog/structured-outputs) via `format` parameter.
-- `test_connection()` returns boolean instead of `httr2` object. #29
+- `test_connection()` returns `httr2::response` object by default, but also supports returning a logical value. #29
- `chat()` supports [tool calling](https://ollama.com/blog/tool-support) via `tools` parameter. Added `get_tool_calls()` helper function to process tools. #30
- Simplify README and add Get started vignette with more examples.
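
A minimal sketch of the new structured-output option described above (the schema shape follows Ollama's structured-outputs post; the model name and field are placeholder assumptions, and a running server with the model pulled is required):

```r
library(ollamar)

# constrain the reply to a JSON object with one integer field
schema <- list(
  type = "object",
  properties = list(age = list(type = "integer")),
  required = list("age")
)
generate("llama3.1", "Tom is 30 years old.", format = schema, stream = FALSE, output = "text")
```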

@@ -29,17 +29,17 @@

## Bug fixes

- Fixed invalid URLs.
- Updated title and description.

# ollamar 1.0.0

* Initial CRAN submission.

## New features

- Integrate R with Ollama to run language models locally on your own machine.
-- Include `test_connection()` function to test connection to Ollama server.
+- Include `test_connection(logical = TRUE)` function to test connection to Ollama server.
- Include `list_models()` function to list available models.
- Include `pull()` function to pull a model from Ollama server.
- Include `delete()` function to delete a model from Ollama server.
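
A quick sketch of how these helpers compose (the model name is only an example; assumes the local server is running):

```r
library(ollamar)

list_models()               # data frame of models you've pulled
pull("all-minilm")          # download a small embedding model
model_avail("all-minilm")   # TRUE once the model is available
delete("all-minilm")        # remove it again
```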
26 changes: 13 additions & 13 deletions R/ollama.R
@@ -76,7 +76,7 @@ create_request <- function(endpoint, host = NULL) {
#' @references
#' [API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion)
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' # text prompt
#' generate("llama3", "The sky is...", stream = FALSE, output = "df")
#' # stream and increase temperature
@@ -187,7 +187,7 @@ generate <- function(model, prompt, suffix = "", images = "", format = list(), s
#' @return A response in the format specified in the output parameter.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' # one message
#' messages <- list(
#' list(role = "user", content = "How are you doing?")
@@ -306,7 +306,7 @@ chat <- function(model, messages, tools = list(), stream = FALSE, format = list(
#' @return A response in the format specified in the output parameter.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' create("mario", "FROM llama3\nSYSTEM You are mario from Super Mario Bros.")
#' generate("mario", "who are you?", output = "text") # model should say it's Mario
#' delete("mario") # delete the model created above
@@ -388,7 +388,7 @@ create <- function(name, modelfile = NULL, stream = FALSE, path = NULL, endpoint
#' @return A response in the format specified in the output parameter.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' list_models() # returns dataframe
#' list_models("df") # returns dataframe
#' list_models("resp") # httr2 response object
@@ -435,7 +435,7 @@ list_models <- function(output = c("df", "resp", "jsonlist", "raw", "text"), end
#' @return A response in the format specified in the output parameter.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' # show("llama3") # returns jsonlist
#' show("llama3", output = "resp") # returns response object
show <- function(name, verbose = FALSE, output = c("jsonlist", "resp", "raw"), endpoint = "/api/show", host = NULL) {
@@ -482,7 +482,7 @@ show <- function(name, verbose = FALSE, output = c("jsonlist", "resp", "raw"), e
#' @return A httr2 response object.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' copy("llama3", "llama3_copy")
#' delete("llama3_copy") # delete the model that was just copied
copy <- function(source, destination, endpoint = "/api/copy", host = NULL) {
@@ -576,7 +576,7 @@ delete <- function(name, endpoint = "/api/delete", host = NULL) {
#' @return A httr2 response object.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' pull("llama3")
#' pull("all-minilm", stream = FALSE)
pull <- function(name, stream = FALSE, insecure = FALSE, endpoint = "/api/pull", host = NULL) {
@@ -644,7 +644,7 @@ pull <- function(name, stream = FALSE, insecure = FALSE, endpoint = "/api/pull",
#' @return A httr2 response object.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' push("mattw/pygmalion:latest")
push <- function(name, insecure = FALSE, stream = FALSE, output = c("resp", "jsonlist", "raw", "text", "df"), endpoint = "/api/push", host = NULL) {

@@ -744,7 +744,7 @@ normalize <- function(x) {
#' @return A numeric matrix of the embedding. Each column is the embedding for one input.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' embed("nomic-embed-text:latest", "The quick brown fox jumps over the lazy dog.")
#' # pass multiple inputs
#' embed("nomic-embed-text:latest", c("Good bye", "Bye", "See you."))
@@ -816,7 +816,7 @@ embed <- function(model, input, truncate = TRUE, normalize = TRUE, keep_alive =
#' @return A numeric vector of the embedding.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' embeddings("nomic-embed-text:latest", "The quick brown fox jumps over the lazy dog.")
#' # pass model options to the model
#' embeddings("nomic-embed-text:latest", "Hello!", temperature = 0.1, num_predict = 3)
@@ -869,7 +869,7 @@ embeddings <- function(model, prompt, normalize = TRUE, keep_alive = "5m", endpo
#' @return A response in the format specified in the output parameter.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' ps("text")
ps <- function(output = c("df", "resp", "jsonlist", "raw", "text"), endpoint = "/api/ps", host = NULL) {
output <- output[1]
@@ -915,7 +915,7 @@ ps <- function(output = c("df", "resp", "jsonlist", "raw", "text"), endpoint = "
#' @return Does not return anything. It prints the conversation in the console.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' ohelp(first_prompt = "quit")
#' # regular usage: ohelp()
ohelp <- function(model = "codegemma:7b", ...) {
@@ -964,7 +964,7 @@ ohelp <- function(model = "codegemma:7b", ...) {
#' @return A logical value indicating if the model exists.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' model_avail("codegemma:7b")
#' model_avail("abc")
#' model_avail("llama3")
25 changes: 17 additions & 8 deletions R/utils.R
@@ -1,30 +1,39 @@
#' Test connection to Ollama server
#'
#' @description
-#' `test_connection()` tests whether the Ollama server is running or not.
+#' Tests whether the Ollama server is running or not.
#'
#' @param url The URL of the Ollama server. Default is http://localhost:11434
+#' @param logical Logical. If TRUE, returns a boolean value. Default is FALSE.
#'
-#' @return Boolean TRUE if the server is running, otherwise FALSE.
+#' @return Boolean value or httr2 response object, where status_code is either 200 (success) or 503 (error).
#' @export
#'
#' @examples
#' test_connection()
+#' test_connection(logical = TRUE)
#' test_connection("http://localhost:11434") # default url
#' test_connection("http://127.0.0.1:11434")
-test_connection <- function(url = "http://localhost:11434") {
+test_connection <- function(url = "http://localhost:11434", logical = FALSE) {
  req <- httr2::request(url)
  req <- httr2::req_method(req, "GET")

  tryCatch(
    {
      resp <- httr2::req_perform(req)
      message("Ollama local server running")
-      return(TRUE)
+      if (logical) {
+        return(TRUE)
+      } else {
+        return(resp)
+      }
    },
    error = function(e) {
      message("Ollama local server not running or wrong server.\nDownload and launch Ollama app to run the server. Visit https://ollama.com or https://github.com/ollama/ollama")
-      req$status_code <- 503
-      return(FALSE)
+      if (logical) {
+        return(FALSE)
+      } else {
+        return(httr2::response(status_code = 503, url = url))
+      }
    }
  )
}
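
With the new `logical` parameter, callers choose the return type. A minimal usage sketch (assumes a local Ollama server; `httr2::resp_status()` simply reads the status code):

```r
library(ollamar)

resp <- test_connection()         # httr2 response object by default
httr2::resp_status(resp)          # 200 if the server is up, 503 otherwise

test_connection(logical = TRUE)   # TRUE/FALSE instead of a response object
```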
@@ -117,7 +126,7 @@ get_tool_calls <- function(resp) {
#' @return A data frame, json list, raw or httr2 response object.
#' @export
#'
-#' @examplesIf test_connection()
+#' @examplesIf test_connection(logical = TRUE)
#' resp <- list_models("resp")
#' resp_process(resp, "df") # parse response to dataframe/tibble
#' resp_process(resp, "jsonlist") # parse response to list
16 changes: 8 additions & 8 deletions README.Rmd
@@ -21,7 +21,7 @@ knitr::opts_chunk$set(
[![CRAN_Download_Badge](https://cranlogs.r-pkg.org/badges/grand-total/ollamar)](https://cran.r-project.org/package=ollamar)
<!-- badges: end -->

The [Ollama R library](https://hauselin.github.io/ollama-r/) is the easiest way to integrate R with [Ollama](https://ollama.com/), which lets you run language models locally on your own machine.

The library also makes it easy to work with data structures (e.g., conversational/chat histories) that are standard for different LLMs (such as those provided by OpenAI and Anthropic). It also lets you specify different output formats (e.g., dataframes, text/vector, lists) that best suit your needs, allowing easy integration with other libraries/tools and parallelization via the `httr2` library.

@@ -44,11 +44,11 @@ This library has been inspired by the official [Ollama Python](https://github.co
- Linux: `curl -fsSL https://ollama.com/install.sh | sh`
- [Docker image](https://hub.docker.com/r/ollama/ollama)

2. Open/launch the Ollama app to start the local server.

3. Install either the stable or latest/development version of `ollamar`.

Stable version:

```{r eval=FALSE}
install.packages("ollamar")
Expand All @@ -65,7 +65,7 @@ remotes::install_github("hauselin/ollamar")

Below is a basic demonstration of how to use the library. For details, see the [getting started vignette](https://hauselin.github.io/ollama-r/articles/ollamar.html) on our [main page](https://hauselin.github.io/ollama-r/).

`ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html) to make HTTP requests to the Ollama server, so many functions in this library return an `httr2_response` object by default. If the response object says `Status: 200 OK`, then the request was successful.

```{r eval=FALSE}
library(ollamar)
@@ -77,7 +77,7 @@ test_connection() # test connection to Ollama server
pull("llama3.1") # download a model (equivalent bash code: ollama run llama3.1)
# generate a response/text based on a prompt; returns an httr2 response by default
resp <- generate("llama3.1", "tell me a 5-word story")
resp
#' interpret httr2 response object
@@ -88,15 +88,15 @@ resp
#' Body: In memory (414 bytes)
# get just the text from the response object
resp_process(resp, "text")
# get the text as a tibble dataframe
resp_process(resp, "df")
# alternatively, specify the output type when calling the function initially
txt <- generate("llama3.1", "tell me a 5-word story", output = "text")
# list available models (models you've pulled/downloaded)
list_models()
name size parameter_size quantization_level modified
1 codegemma:7b 5 GB 9B Q4_0 2024-07-27T23:44:10
2 llama3.1:latest 4.7 GB 8.0B Q4_0 2024-07-31T07:44:33
8 changes: 4 additions & 4 deletions README.md
@@ -101,7 +101,7 @@ test_connection() # test connection to Ollama server
pull("llama3.1") # download a model (equivalent bash code: ollama run llama3.1)

# generate a response/text based on a prompt; returns an httr2 response by default
resp <- generate("llama3.1", "tell me a 5-word story")
resp

#' interpret httr2 response object
@@ -112,15 +112,15 @@ resp
#' Body: In memory (414 bytes)

# get just the text from the response object
resp_process(resp, "text")
# get the text as a tibble dataframe
resp_process(resp, "df")

# alternatively, specify the output type when calling the function initially
txt <- generate("llama3.1", "tell me a 5-word story", output = "text")

# list available models (models you've pulled/downloaded)
list_models()
name size parameter_size quantization_level modified
1 codegemma:7b 5 GB 9B Q4_0 2024-07-27T23:44:10
2 llama3.1:latest 4.7 GB 8.0B Q4_0 2024-07-31T07:44:33
2 changes: 1 addition & 1 deletion man/chat.Rd
2 changes: 1 addition & 1 deletion man/copy.Rd
2 changes: 1 addition & 1 deletion man/create.Rd
2 changes: 1 addition & 1 deletion man/embed.Rd
2 changes: 1 addition & 1 deletion man/embeddings.Rd
2 changes: 1 addition & 1 deletion man/generate.Rd
2 changes: 1 addition & 1 deletion man/list_models.Rd
2 changes: 1 addition & 1 deletion man/model_avail.Rd
2 changes: 1 addition & 1 deletion man/ohelp.Rd
2 changes: 1 addition & 1 deletion man/ps.Rd
2 changes: 1 addition & 1 deletion man/pull.Rd
2 changes: 1 addition & 1 deletion man/push.Rd