k8sgpt analyze --explain --backend localai
```

## Ollama

Ollama gets you up and running locally with large language models. It runs Llama 2, Code Llama, and other models.

- To start the Ollama server, follow the instructions in [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#start-ollama).

```bash
ollama serve
```

It can also run as a Docker image; follow the instructions in the [Ollama Blog](https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image).

```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
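With the server running, k8sgpt can be pointed at Ollama through its OpenAI-compatible API. The commands below are a sketch, not the definitive setup: the model name `llama2` and the exact flags are assumptions, so check `k8sgpt auth add --help` for the flags your version supports.

```bash
# Pull a model for Ollama to serve (llama2 is just an example)
ollama pull llama2

# Register Ollama as a localai-style backend; Ollama's OpenAI-compatible
# API is assumed to be served under /v1 on its default port 11434
k8sgpt auth add --backend localai --model llama2 --baseurl http://localhost:11434/v1

# Analyze the cluster using that backend
k8sgpt analyze --explain --backend localai
```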
## FakeAI

FakeAI or the NoOpAiProvider can be useful when you need to test a new feature or simulate the behaviour of an AI-based system without actually invoking one. It can help you with local development, testing, and troubleshooting.
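A minimal sketch of invoking it, assuming the provider is registered under the backend name `noopai` (an assumption; check `k8sgpt auth list` for the exact name in your version):

```bash
# Run an analysis against the no-op provider; no real AI backend is called
k8sgpt analyze --explain --backend noopai
```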