Update README
hauselin committed Aug 17, 2024
1 parent c788868 commit b8bcd0f
Showing 3 changed files with 28 additions and 34 deletions.
22 changes: 10 additions & 12 deletions README.Rmd
@@ -21,36 +21,34 @@ knitr::opts_chunk$set(
 
The [Ollama R library](https://hauselin.github.io/ollama-r/) is the easiest way to integrate R with [Ollama](https://ollama.com/), which lets you run language models locally on your own machine. Main site: https://hauselin.github.io/ollama-r/
 
-> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
-
-To use this R library, you'll need to ensure the [Ollama](https://ollama.com/) app is installed. Ollama can use GPUs for accelerating LLM inference. See the [Ollama GPU documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md) for more information.
+To use this R library, ensure the [Ollama](https://ollama.com) app is installed. Ollama can use GPUs for accelerating LLM inference. See the [Ollama GPU documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md) for more information.
 
-See [Ollama's Github page](https://github.com/ollama/ollama) for more information. This library uses the [Ollama REST API (see documentation/details here)](https://github.com/ollama/ollama/blob/main/docs/api.md).
+See [Ollama's Github page](https://github.com/ollama/ollama) for more information. This library uses the [Ollama REST API (see documentation for details)](https://github.com/ollama/ollama/blob/main/docs/api.md).
+
+> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
 
-## Ollama R versus Ollama Python
+## Ollama R versus Ollama Python/JavaScript
 
This library has been inspired by the official [Ollama Python](https://github.com/ollama/ollama-python) and [Ollama JavaScript](https://github.com/ollama/ollama-js) libraries. If you're coming from Python or JavaScript, you should feel right at home. Alternatively, if you plan to use Ollama with Python or JavaScript, using this R library will help you understand the Python/JavaScript libraries as well.
 
## Installation
 
-1. You should have the Ollama app installed on your computer. Download it from [Ollama](https://ollama.com/).
+1. Download and install [Ollama](https://ollama.com).
 
-2. Open/launch the Ollama app to start the local server. You can then run your language models locally, on your own machine/computer.
+2. Open/launch the Ollama app to start the local server.
 
-3. Install the **stable** version like so:
+3. Install the **stable** version of `ollamar` like so:
 
-```r
+```{r eval=FALSE}
install.packages("ollamar")
```
 
-4. Alternatively, for the **latest/development** version with more/latest features, you can install the latest version from GitHub using the `install_github` function from the `remotes` library:
+Alternatively, for the **latest/development** version with more/latest features, you can install it from GitHub using the `install_github` function from the `remotes` library. If it doesn't work or you don't have the `remotes` library, please run `install.packages("remotes")` in R or RStudio before running the code below.
 
-``` r
+```{r eval=FALSE}
remotes::install_github("hauselin/ollamar")
```
 
-If it doesn't work or you don't have `remotes` library installed, please run `install.packages("remotes")` in R or RStudio first.
-
## Usage
 
`ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html) to make HTTP requests to the Ollama server, so many functions in this library return an `httr2_response` object by default. If the response object says `Status: 200 OK`, then the request was successful. See [Notes section](#notes) below for more information.
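The Usage paragraph above describes `httr2_response` objects returned by the library. As an illustrative sketch (not official `ollamar` documentation), a first session might look like the one below. It assumes the Ollama app is running locally and that a model such as `llama3` has already been downloaded (e.g., `ollama pull llama3` in a terminal); `test_connection()`, `generate()`, and `resp_process()` are `ollamar` functions.

```r
library(ollamar)

# Verify the local Ollama server is reachable
# (by default it listens on http://localhost:11434)
test_connection()

# Send a prompt; by default this returns an httr2_response object
resp <- generate("llama3", "Why is the sky blue?")

# A status code of 200 means the request succeeded
resp$status_code

# Extract just the generated text from the response
resp_process(resp, "text")
```

Exact function arguments and output types may differ across `ollamar` versions; consult the package reference for details.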
38 changes: 17 additions & 21 deletions README.md
@@ -13,21 +13,20 @@ easiest way to integrate R with [Ollama](https://ollama.com/), which
lets you run language models locally on your own machine. Main site:
<https://hauselin.github.io/ollama-r/>
 
-> Note: You should have at least 8 GB of RAM available to run the 7B
-> models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
-
-To use this R library, you’ll need to ensure the
-[Ollama](https://ollama.com/) app is installed. Ollama can use GPUs for
-accelerating LLM inference. See the [Ollama GPU
-documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md)
-for more information.
+To use this R library, ensure the [Ollama](https://ollama.com) app is
+installed. Ollama can use GPUs for accelerating LLM inference. See the
+[Ollama GPU
+documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md)
+for more information.
 
See [Ollama’s Github page](https://github.com/ollama/ollama) for more
-information. This library uses the [Ollama REST API (see
-documentation/details
-here)](https://github.com/ollama/ollama/blob/main/docs/api.md).
+information. This library uses the [Ollama REST API (see documentation
+for details)](https://github.com/ollama/ollama/blob/main/docs/api.md).
 
-## Ollama R versus Ollama Python
+> Note: You should have at least 8 GB of RAM available to run the 7B
+> models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
+
+## Ollama R versus Ollama Python/JavaScript
 
This library has been inspired by the official [Ollama
Python](https://github.com/ollama/ollama-python) and [Ollama
@@ -39,29 +38,26 @@ libraries as well.
 
## Installation
 
-1. You should have the Ollama app installed on your computer. Download
-   it from [Ollama](https://ollama.com/).
+1. Download and install [Ollama](https://ollama.com).
 
-2. Open/launch the Ollama app to start the local server. You can then
-   run your language models locally, on your own machine/computer.
+2. Open/launch the Ollama app to start the local server.
 
-3. Install the **stable** version like so:
+3. Install the **stable** version of `ollamar` like so:
 
``` r
install.packages("ollamar")
```
 
-4. Alternatively, for the **latest/development** version with
-   more/latest features, you can install the latest version from GitHub
-   using the `install_github` function from the `remotes` library:
+Alternatively, for the **latest/development** version with more/latest
+features, you can install it from GitHub using the `install_github`
+function from the `remotes` library. If it doesn’t work or you don’t
+have the `remotes` library, please run `install.packages("remotes")` in
+R or RStudio before running the code below.
 
``` r
remotes::install_github("hauselin/ollamar")
```
 
-If it doesn’t work or you don’t have `remotes` library installed, please
-run `install.packages("remotes")` in R or RStudio first.
-
## Usage
 
`ollamar` uses the [`httr2` library](https://httr2.r-lib.org/index.html)
2 changes: 1 addition & 1 deletion _pkgdown.yml
@@ -1,4 +1,4 @@
-url: https://hauselin.github.io/ollama-r/
+url: https://hauselin.github.io/ollama-r

template:
bootstrap: 5
