Commit e17bf00

Updates.
1 parent f060f22 commit e17bf00

13 files changed, +107 additions, -222 deletions

docs/docs/distributions/remote_hosted_distro/oci.md

Lines changed: 1 addition & 3 deletions

@@ -15,8 +15,7 @@ The `llamastack/distribution-oci` distribution consists of the following provide
 | inference | `remote::oci` |
 | safety | `inline::llama-guard` |
 | scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
-| telemetry | `inline::meta-reference` |
-| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
+| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `remote::model-context-protocol` |
 | vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
 
 
@@ -131,7 +130,6 @@ docker run \
 If you've set up your local development environment, you can also build the image using your local virtual environment.
 
 ```bash
-OCI_GENAI_MODEL_OCID=oci.ocid1.generativeaimodel.oc1.us-chicago-1.<ocid>
 llama stack build --distro oci --image-type venv
 llama stack run ./run.yaml \
 --port 8321 \
docs/docs/providers/eval/index.mdx

Lines changed: 2 additions & 2 deletions

@@ -1,7 +1,7 @@
 ---
 description: "Evaluations
 
-Llama Stack Evaluation API for running evaluations on model and agent candidates."
+Llama Stack Evaluation API for running evaluations on model and agent candidates."
 sidebar_label: Eval
 title: Eval
 ---

@@ -12,6 +12,6 @@ title: Eval
 
 Evaluations
 
-Llama Stack Evaluation API for running evaluations on model and agent candidates.
+Llama Stack Evaluation API for running evaluations on model and agent candidates.
 
 This section contains documentation for all available providers for the **eval** API.

docs/docs/providers/files/index.mdx

Lines changed: 2 additions & 2 deletions

@@ -1,7 +1,7 @@
 ---
 description: "Files
 
-This API is used to upload documents that can be used with other Llama Stack APIs."
+This API is used to upload documents that can be used with other Llama Stack APIs."
 sidebar_label: Files
 title: Files
 ---

@@ -12,6 +12,6 @@ title: Files
 
 Files
 
-This API is used to upload documents that can be used with other Llama Stack APIs.
+This API is used to upload documents that can be used with other Llama Stack APIs.
 
 This section contains documentation for all available providers for the **files** API.

docs/docs/providers/inference/index.mdx

Lines changed: 2 additions & 2 deletions

@@ -1,7 +1,7 @@
 ---
 description: "Inference
 
-Llama Stack Inference API for generating completions, chat completions, and embeddings.
+Llama Stack Inference API for generating completions, chat completions, and embeddings.
 
 This API provides the raw interface to the underlying models. Two kinds of models are supported:
 - LLM models: these models generate \"raw\" and \"chat\" (conversational) completions.

@@ -16,7 +16,7 @@ title: Inference
 
 Inference
 
-Llama Stack Inference API for generating completions, chat completions, and embeddings.
+Llama Stack Inference API for generating completions, chat completions, and embeddings.
 
 This API provides the raw interface to the underlying models. Two kinds of models are supported:
 - LLM models: these models generate "raw" and "chat" (conversational) completions.

docs/docs/providers/safety/index.mdx

Lines changed: 2 additions & 2 deletions

@@ -1,7 +1,7 @@
 ---
 description: "Safety
 
-OpenAI-compatible Moderations API."
+OpenAI-compatible Moderations API."
 sidebar_label: Safety
 title: Safety
 ---

@@ -12,6 +12,6 @@ title: Safety
 
 Safety
 
-OpenAI-compatible Moderations API.
+OpenAI-compatible Moderations API.
 
 This section contains documentation for all available providers for the **safety** API.

docs/static/llama-stack-spec.html

Lines changed: 0 additions & 54 deletions

@@ -5061,62 +5061,8 @@
           "description": "The model that was used to generate the chat completion"
         },
         "usage": {
-<<<<<<< HEAD
           "$ref": "#/components/schemas/OpenAIChatCompletionUsage",
           "description": "Token usage information (typically included in final chunk with stream_options)"
-=======
-          "type": "object",
-          "properties": {
-            "completion_tokens": {
-              "type": "integer"
-            },
-            "prompt_tokens": {
-              "type": "integer"
-            },
-            "total_tokens": {
-              "type": "integer"
-            },
-            "completion_tokens_details": {
-              "type": "object",
-              "properties": {
-                "accepted_prediction_tokens": {
-                  "type": "integer"
-                },
-                "audio_tokens": {
-                  "type": "integer"
-                },
-                "reasoning_tokens": {
-                  "type": "integer"
-                },
-                "rejected_prediction_tokens": {
-                  "type": "integer"
-                }
-              },
-              "additionalProperties": false,
-              "title": "CompletionTokensDetails"
-            },
-            "prompt_tokens_details": {
-              "type": "object",
-              "properties": {
-                "audio_tokens": {
-                  "type": "integer"
-                },
-                "cached_tokens": {
-                  "type": "integer"
-                }
-              },
-              "additionalProperties": false,
-              "title": "PromptTokensDetails"
-            }
-          },
-          "additionalProperties": false,
-          "required": [
-            "completion_tokens",
-            "prompt_tokens",
-            "total_tokens"
-          ],
-          "description": "(Optional) Usage information for the completion"
->>>>>>> 18b9c4c1 (feat: add oci genai service as chat inference provider)
         }
       },
      "additionalProperties": false,

llama_stack/apis/inference/inference.py

Lines changed: 0 additions & 1 deletion

@@ -15,7 +15,6 @@
 )
 
 from fastapi import Body
-from openai.types.completion_usage import CompletionUsage
 from pydantic import BaseModel, Field, field_validator
 from typing_extensions import TypedDict
 
llama_stack/distributions/oci/build.yaml

Lines changed: 0 additions & 3 deletions

@@ -13,8 +13,6 @@ distribution_spec:
     - provider_type: inline::llama-guard
     agents:
     - provider_type: inline::meta-reference
-    telemetry:
-    - provider_type: inline::meta-reference
     eval:
     - provider_type: inline::meta-reference
     datasetio:

@@ -27,7 +25,6 @@ distribution_spec:
     tool_runtime:
     - provider_type: remote::brave-search
    - provider_type: remote::tavily-search
-    - provider_type: inline::rag-runtime
     - provider_type: remote::model-context-protocol
     files:
     - provider_type: inline::localfs
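For reference, after this change the affected provider sections of the OCI distribution's `build.yaml` read as follows. This is a sketch reconstructed from the context lines above; the surrounding `providers:` key and the exact indentation are assumptions based on the usual llama-stack build-file layout, and the `telemetry` section is gone entirely:

```yaml
distribution_spec:
  providers:
    # telemetry: removed by this commit
    agents:
    - provider_type: inline::meta-reference
    tool_runtime:
    - provider_type: remote::brave-search
    - provider_type: remote::tavily-search
    # inline::rag-runtime removed by this commit
    - provider_type: remote::model-context-protocol
```

The same provider list is what the regenerated docs table in `oci.md` above now reflects.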

llama_stack/distributions/oci/doc_template.md

Lines changed: 0 additions & 1 deletion

@@ -123,7 +123,6 @@ docker run \
 If you've set up your local development environment, you can also build the image using your local virtual environment.
 
 ```bash
-OCI_GENAI_MODEL_OCID=oci.ocid1.generativeaimodel.oc1.us-chicago-1.<ocid>
 llama stack build --distro oci --image-type venv
 llama stack run ./run.yaml \
 --port 8321 \

llama_stack/distributions/oci/oci.md

Lines changed: 0 additions & 1 deletion

@@ -131,7 +131,6 @@ docker run \
 If you've set up your local development environment, you can also build the image using your local virtual environment.
 
 ```bash
-OCI_GENAI_MODEL_OCID=oci.ocid1.generativeaimodel.oc1.us-chicago-1.<ocid>
 llama stack build --distro oci --image-type venv
 llama stack run ./run.yaml \
 --port 8321 \
