diff --git a/README.Rmd b/README.Rmd
index cda841f..789ae7d 100644
--- a/README.Rmd
+++ b/README.Rmd
@@ -21,7 +21,7 @@ knitr::opts_chunk$set(
 
 The [Ollama R library](https://hauselin.github.io/ollama-r/) is the easiest way to integrate R with [Ollama](https://ollama.com/), which lets you run language models locally on your own machine. Main site: https://hauselin.github.io/ollama-r/
 
-To use this R library, ensure the [Ollama](https://ollama.com) app is installed. Ollama can use GPUs for accelerating LLM inference. See[Ollama GPU documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md) for more information.
+To use this R library, ensure the [Ollama](https://ollama.com) app is installed. Ollama can use GPUs for accelerating LLM inference. See [Ollama GPU documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md) for more information.
 
 See [Ollama's Github page](https://github.com/ollama/ollama) for more information. This library uses the [Ollama REST API (see documentation for details)](https://github.com/ollama/ollama/blob/main/docs/api.md).
 
diff --git a/README.md b/README.md
index 1415ec0..480c2e7 100644
--- a/README.md
+++ b/README.md
@@ -14,8 +14,8 @@ lets you run language models locally on your own machine. Main site:
 <https://hauselin.github.io/ollama-r/>
 
 To use this R library, ensure the [Ollama](https://ollama.com) app is
-installed. Ollama can use GPUs for accelerating LLM inference.
-See[Ollama GPU
+installed. Ollama can use GPUs for accelerating LLM inference. See
+[Ollama GPU
 documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md)
 for more information.
 
diff --git a/_pkgdown.yml b/_pkgdown.yml
index 778e645..bf09a60 100644
--- a/_pkgdown.yml
+++ b/_pkgdown.yml
@@ -1,7 +1,18 @@
 url: https://hauselin.github.io/ollama-r
 
+home:
+  title: Ollama R Library
+  description: Run Ollama language models in R.
+
 template:
   bootstrap: 5
+  light-switch: true
+  theme: a11y-light
+  theme-dark: a11y-dark
+  opengraph:
+    twitter:
+      site: "@hauselin"
+      creator: "@hauselin"
 
 reference:
 - title: Ollamar functions