Update utils and changelog
hauselin committed Aug 5, 2024
1 parent 9f5c13b commit afff36f
Showing 4 changed files with 62 additions and 16 deletions.
29 changes: 20 additions & 9 deletions NEWS.md
@@ -1,3 +1,14 @@
+# ollamar 1.2.0
+
+- All functions calling API endpoints have an `endpoint` parameter.
+- All functions calling API endpoints have a `...` parameter to pass additional model options to the API.
+- All functions calling API endpoints have a `host` parameter to specify the host URL. The default is `NULL`, which uses the default Ollama URL.
+- Add `req` as an output format for `generate()` and `chat()`.
+- Add new functions for calling APIs: `create()`, `show()`, `copy()`, `delete()`, `push()`, `embed()` (supersedes `embeddings()`), `ps()`.
+- Add helper functions to manipulate chat/conversation history for the `chat()` function (or other APIs like OpenAI): `create_message()`, `append_message()`, `prepend_message()`, `delete_message()`, `insert_message()`.
+- Add `ohelp()` function to chat with models in real time.
+- Add helper functions: `model_avail()`, `image_encode_base64()`, `check_option_valid()`, `check_options()`, `search_options()`, `validate_options()`.
+
 # ollamar 1.1.1
 
 ## Bug fixes
@@ -11,13 +22,13 @@
 
 ## New features
 
-- Integrated R with Ollama to run language models locally on your own machine.
-- Included `test_connection()` function to test connection to Ollama server.
-- Included `list_models()` function to list available models.
-- Included `pull()` function to pull a model from Ollama server.
-- Included `delete()` function to delete a model from Ollama server.
-- Included `chat()` function to chat with a model.
-- Included `generate()` function to generate text from a model.
-- Included `embeddings()` function to get embeddings from a model.
-- Included `resp_process()` function to process `httr2_response` objects.
+- Integrate R with Ollama to run language models locally on your own machine.
+- Include `test_connection()` function to test connection to Ollama server.
+- Include `list_models()` function to list available models.
+- Include `pull()` function to pull a model from Ollama server.
+- Include `delete()` function to delete a model from Ollama server.
+- Include `chat()` function to chat with a model.
+- Include `generate()` function to generate text from a model.
+- Include `embeddings()` function to get embeddings from a model.
+- Include `resp_process()` function to process `httr2_response` objects.
 
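The history helpers listed in the 1.2.0 notes build up a plain list of chat turns. A minimal sketch of how they might chain together; the argument order (content, role, then the existing list) is an assumption based on the function names, not something this diff confirms:

``` r
library(ollamar)

# start a conversation history (assumed signature: create_message(content, role))
messages <- create_message("You are a helpful assistant.", role = "system")
# append a user turn to the existing history (assumed signature: append_message(content, role, x))
messages <- append_message("Why is the sky blue?", role = "user", x = messages)
# the resulting list is the messages argument for chat(); model name is illustrative
# resp <- chat("llama3.1", messages)
```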
8 changes: 6 additions & 2 deletions R/utils.R
@@ -86,9 +86,13 @@ stream_handler <- function(x, env, endpoint) {
 #' resp_process(resp, "resp") # return input response object
 #' resp_process(resp, "text") # return text/character vector
 resp_process <- function(resp, output = c("df", "jsonlist", "raw", "resp", "text")) {
+
+    if (!inherits(resp, "httr2_response")) {
+        stop("Input must be a httr2 response object")
+    }
+
     if (is.null(resp) || resp$status_code != 200) {
-        warning("Cannot process response")
-        return(NULL)
+        stop("Cannot process response")
     }
 
     endpoints_to_skip <- c("api/delete", "api/embed", "api/embeddings", "api/create")
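Since `resp_process()` now raises an error instead of warning and returning `NULL`, callers that tested for `NULL` would need to catch the condition instead. A hedged sketch using base R's `tryCatch()`; the `resp` object is assumed to come from an earlier API call with `output = "resp"`:

``` r
# convert the new hard error back into a NULL sentinel, if that is what a caller wants
result <- tryCatch(
    resp_process(resp, "text"),
    error = function(e) {
        message("resp_process failed: ", conditionMessage(e))
        NULL
    }
)
```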
17 changes: 16 additions & 1 deletion README.Rmd
@@ -289,7 +289,22 @@ messages <- delete_message(messages, 2)
 
 ### Parallel requests
 
-For the `generate()` and `chat()` endpoints/functions, you can make parallel requests with the `req_perform_parallel` function from the `httr2` library. You need to specify `output = 'req'` in the function so the functions return `httr2_request` objects instead of `httr2_response` objects.
+For the `generate()` and `chat()` endpoints/functions, you can specify `output = 'req'` in the function so the functions return `httr2_request` objects instead of `httr2_response` objects.
+
+```{r eval=FALSE}
+prompt <- "Tell me a 10-word story"
+req <- generate("llama3.1", prompt, output = "req") # returns a httr2_request object
+# <httr2_request>
+# POST http://127.0.0.1:11434/api/generate
+# Headers:
+# • content_type: 'application/json'
+# • accept: 'application/json'
+# • user_agent: 'ollama-r/1.1.1 (aarch64-apple-darwin20) R/4.4.0'
+# Body: json encoded data
+```
+
+When you have multiple `httr2_request` objects in a list, you can make parallel requests with the `req_perform_parallel` function from the `httr2` library. See [`httr2` documentation](https://httr2.r-lib.org/reference/req_perform_parallel.html) for details.
+
 ```{r eval=FALSE}
 library(httr2)
24 changes: 20 additions & 4 deletions README.md
@@ -328,11 +328,27 @@ messages <- delete_message(messages, 2)
 
 ### Parallel requests
 
-For the `generate()` and `chat()` endpoints/functions, you can make
+For the `generate()` and `chat()` endpoints/functions, you can specify
+`output = 'req'` in the function so the functions return `httr2_request`
+objects instead of `httr2_response` objects.
+
+``` r
+prompt <- "Tell me a 10-word story"
+req <- generate("llama3.1", prompt, output = "req") # returns a httr2_request object
+# <httr2_request>
+# POST http://127.0.0.1:11434/api/generate
+# Headers:
+# • content_type: 'application/json'
+# • accept: 'application/json'
+# • user_agent: 'ollama-r/1.1.1 (aarch64-apple-darwin20) R/4.4.0'
+# Body: json encoded data
+```
+
+When you have multiple `httr2_request` objects in a list, you can make
 parallel requests with the `req_perform_parallel` function from the
-`httr2` library. You need to specify `output = 'req'` in the function so
-the functions return `httr2_request` objects instead of `httr2_response`
-objects.
+`httr2` library. See [`httr2`
+documentation](https://httr2.r-lib.org/reference/req_perform_parallel.html)
+for details.
 
 ``` r
 library(httr2)
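The diff truncates both README code blocks right after `library(httr2)`, so the committed example is not shown here. A sketch of the parallel-request pattern the new text describes, with illustrative prompts and model name; `req_perform_parallel()` is a real `httr2` function, but the rest is an assumption about the elided code:

``` r
library(httr2)
library(ollamar)

prompts <- c("Tell me a 10-word story", "Tell me a 5-word story")
# build unsent httr2_request objects instead of performing the calls
reqs <- lapply(prompts, function(p) generate("llama3.1", p, output = "req"))
# perform all requests in parallel, then process each response to text
resps <- req_perform_parallel(reqs)
texts <- sapply(resps, resp_process, output = "text")
```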
