docs/reference/providers/backend.md
A Backend (also called a Provider) is a service that provides access to an AI language model. There are many different backends available for K8sGPT. Each backend has its own strengths and weaknesses, so it is important to choose the one that is right for your needs.
Currently, we have a total of 12 backends available:
- [OpenAI](https://openai.com/)
- [Cohere](https://cohere.com/)
## LocalAI

LocalAI is a local model, which is an OpenAI compatible API. It uses llama.cpp and ggml to run inference on consumer-grade hardware.

```bash
k8sgpt analyze --explain --backend localai
```
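Before the analyzer can use LocalAI, the backend typically needs to be registered with `k8sgpt auth add`; a minimal sketch, assuming a LocalAI server on `localhost:8080` and a placeholder model name (both are assumptions, not values from this document):

```shell
# Point K8sGPT at a locally running LocalAI server.
# The model name and base URL below are placeholders for your own deployment.
k8sgpt auth add --backend localai --model ggml-gpt4all-j --baseurl http://localhost:8080/v1
```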
## Oracle Cloud Infrastructure (OCI) Generative AI
[Oracle Cloud Infrastructure (OCI)](https://www.oracle.com/cloud/) Generative AI is a fully managed OCI service that provides a set of state-of-the-art, customizable large language models.
K8sGPT can be configured to use ready-to-use pretrained models, or custom models that you fine-tune on your own data and host on dedicated AI clusters.
To authenticate with OCI, create an [OCI SDK/CLI](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm) `config` file in your home directory's `.oci/` directory.
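For reference, a minimal `~/.oci/config` in the standard OCI SDK format looks like the following; every value is a placeholder to be replaced with your own tenancy's details:

```ini
[DEFAULT]
user=ocid1.user.oc1..<unique_user_id>
fingerprint=<api_key_fingerprint>
tenancy=ocid1.tenancy.oc1..<unique_tenancy_id>
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
```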
Next, configure the OCI backend for a given model within an OCI compartment:
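The configuration snippet itself is not visible in this diff; a plausible sketch using `k8sgpt auth add`, where the `--providerId` flag (carrying the compartment OCID) and the model OCID are assumptions inferred from the surrounding text rather than confirmed by it:

```shell
# Register the OCI backend; both OCIDs are placeholders.
k8sgpt auth add --backend oci --model <model-OCID> --providerId <compartment-OCID>

# Then run the analyzer against it:
k8sgpt analyze --explain --backend oci
```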
## Ollama

Ollama is a local model, which has an OpenAI compatible API. It supports the models listed in the [Ollama library](https://ollama.com/library). Ollama makes it easy to get up and running with large language models locally.
```bash
k8sgpt analyze --explain --backend ollama
```
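If Ollama is not already the active backend, it can be registered first; a sketch assuming Ollama's default API endpoint on port 11434 and an example model name (both assumptions, not values from this document):

```shell
# Register the Ollama backend; the model name and base URL are placeholders.
k8sgpt auth add --backend ollama --model llama3 --baseurl http://localhost:11434/v1
```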
## FakeAI
FakeAI, or the NoOpAiProvider, might be useful in situations where you need to test a new feature or simulate the behaviour of an AI-based system without actually invoking it. It can help you with local development, testing and troubleshooting.
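The no-op provider is selected like any other backend; a sketch assuming the backend name `noopai` (an assumption based on the provider name above, not confirmed by this diff):

```shell
# No credentials are needed for the no-op provider; it returns canned responses.
k8sgpt analyze --explain --backend noopai
```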