Commit 9351a4b: Update documentation
ashwinb committed Jan 24, 2025 (1 parent: 2fefe8d)
Showing 3 changed files with 81 additions and 41 deletions.
10 changes: 6 additions & 4 deletions docs/source/concepts/index.md
@@ -23,21 +23,23 @@ We are working on adding a few more APIs to complete the application lifecycle.

## API Providers

The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Obvious examples for these include
- LLM inference providers (e.g., Fireworks, Together, AWS Bedrock, SambaNova, etc.),
- Vector databases (e.g., ChromaDB, Weaviate, Qdrant, etc.),
The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Examples for these include:
- LLM inference providers (e.g., Fireworks, Together, AWS Bedrock, Groq, Cerebras, SambaNova, etc.),
- Vector databases (e.g., ChromaDB, Weaviate, Qdrant, FAISS, PGVector, etc.),
- Safety providers (e.g., Meta's Llama Guard, AWS Bedrock Guardrails, etc.)

Providers come in two flavors:
- **Remote**: the provider runs as a separate service external to the Llama Stack codebase. Llama Stack contains a small amount of adapter code.
- **Inline**: the provider is fully specified and implemented within the Llama Stack codebase. It may be a simple wrapper around an existing library, or a full-fledged implementation within Llama Stack.
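
You can see which flavor each configured provider is by listing the providers of a running stack. A rough sketch with the Python client (this assumes a server is already running, and the exact response fields may vary across client versions):

```python
import os

from llama_stack_client import LlamaStackClient

# Assumes a Llama Stack server is already running (see the Getting Started guide).
client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")

# Each entry reports the API it serves and a provider type such as
# "remote::ollama" or "inline::faiss" -- the prefix tells you its flavor.
for provider in client.providers.list():
    print(provider.api, provider.provider_id, provider.provider_type)
```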

Most importantly, Llama Stack always strives to provide at least one fully "local" provider for each API so you can iterate on a fully featured environment locally.

## Resources

Some of these APIs are associated with a set of **Resources**. Here is the mapping of APIs to resources:

- **Inference**, **Eval** and **Post Training** are associated with `Model` resources.
- **Safety** is associated with `Shield` resources.
- **Tool Runtime** is associated with `ToolGroup` resources.
- **DatasetIO** is associated with `Dataset` resources.
- **Scoring** is associated with `ScoringFunction` resources.
- **Eval** is associated with `Model` and `EvalTask` resources.
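
Resources are registered with the Stack before they are used. As an illustrative sketch (the identifiers and `provider_id` values below are placeholders, and the exact keyword arguments depend on your providers and client version):

```python
import os

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")

# Register a Model resource so the Inference API can serve it.
client.models.register(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    provider_id="ollama",
)

# Register a Shield resource for the Safety API.
client.shields.register(
    shield_id="meta-llama/Llama-Guard-3-1B",
    provider_id="llama-guard",
)
```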
@@ -56,7 +58,7 @@ While there is a lot of flexibility to mix-and-match providers, often users will

**Remotely Hosted Distro**: These are the simplest to consume from a user perspective. You can simply obtain the API key for these providers, point to a URL and have _all_ Llama Stack APIs working out of the box. Currently, [Fireworks](https://fireworks.ai/) and [Together](https://together.xyz/) provide such easy-to-consume Llama Stack distributions.
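
For example, pointing the client at a hosted endpoint is a one-liner. A minimal sketch (the URL below is a placeholder; use the endpoint and API key mechanism your provider documents):

```python
from llama_stack_client import LlamaStackClient

# Hypothetical hosted endpoint -- substitute the real URL from your provider.
client = LlamaStackClient(base_url="https://llama-stack.example.com")
print([m.identifier for m in client.models.list()])
```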

**Locally Hosted Distro**: You may want to run Llama Stack on your own hardware. Typically though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Cerebras, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a [vLLM](https://github.com/vllm-project/vllm) or [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) instance. If you "just" have a regular desktop machine, you can use [Ollama](https://ollama.com/) for inference. To provide convenient quick access to these options, we provide a number of such pre-configured locally-hosted Distros.
**Locally Hosted Distro**: You may want to run Llama Stack on your own hardware. Typically though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a [vLLM](https://github.com/vllm-project/vllm) or [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) instance. If you "just" have a regular desktop machine, you can use [Ollama](https://ollama.com/) for inference. To provide convenient quick access to these options, we provide a number of such pre-configured locally-hosted Distros.


**On-device Distro**: Finally, you may want to run Llama Stack directly on an edge device (mobile phone or tablet). We provide Distros for iOS and Android (coming soon).
61 changes: 41 additions & 20 deletions docs/source/getting_started/index.md
@@ -2,7 +2,7 @@

In this guide, we'll walk through how you can use the Llama Stack (server and client SDK) to test a simple RAG agent.

A Llama Stack agent is a simple autonomous system that can perform tasks by combining a Llama model for reasoning with tools (e.g., RAG, web search, code execution, etc.) for taking actions.
A Llama Stack agent is a simple integrated system that can perform tasks by combining a Llama model for reasoning with tools (e.g., RAG, web search, code execution, etc.) for taking actions.

In Llama Stack, we provide a server exposing multiple APIs. These APIs are backed by implementations from different providers. For this guide, we will use [Ollama](https://ollama.com/) as the inference provider.

@@ -18,9 +18,22 @@ By default, Ollama keeps the model loaded in memory for 5 minutes which can be t
NOTE: If you do not have Ollama, you can install it from [here](https://ollama.ai/docs/installation).


### 2. Start the Llama Stack server

Llama Stack is based on a client-server architecture. It consists of a server which can be configured very flexibly so you can mix-and-match various providers for its individual API components -- beyond Inference, these include Memory, Agents, Telemetry, Evals and so forth.
### 2. Pick a client environment

Llama Stack has a service-oriented architecture, so every interaction with the Stack happens through a REST interface. You can interact with the Stack in two ways:

* Install the `llama-stack-client` PyPI package and point `LlamaStackClient` to a local or remote Llama Stack server.
* Or, install the `llama-stack` PyPI package and use the Stack as a library using `LlamaStackAsLibraryClient`.

```{admonition} Note
:class: tip
The API is **exactly identical** for both clients.
```

:::{dropdown} Starting up the Llama Stack server
The Llama Stack server can be configured flexibly so you can mix-and-match various providers for its individual API components -- beyond Inference, these include Vector IO, Agents, Telemetry, Evals, Post Training, etc.

To get started quickly, we provide various Docker images for the server component that work with different inference providers out of the box. For this guide, we will use `llamastack/distribution-ollama` as the Docker image.

@@ -40,11 +53,12 @@ docker run -it \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env OLLAMA_URL=http://host.docker.internal:11434
```

Configuration for this is available at `distributions/ollama/run.yaml`.

:::

### 3. Use the Llama Stack client SDK

:::{dropdown} Installing the Llama Stack client CLI and SDK

You can interact with the Llama Stack server using various client SDKs. We will use the Python SDK, which you can install using the following command. Note that you must be using Python 3.10 or newer:
```bash
@@ -72,13 +86,28 @@ llama-stack-client \
  inference chat-completion \
  --message "hello, what model are you?"
```
:::

Here is a simple example to perform chat completions using Python instead of the CLI.
### 3. Run inference with Python SDK

Here is a simple example to perform chat completions using the SDK.
```python
import os
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")
def create_http_client():
    from llama_stack_client import LlamaStackClient

    return LlamaStackClient(
        base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}"
    )


def create_library_client(template="ollama"):
    from llama_stack import LlamaStackAsLibraryClient

    client = LlamaStackAsLibraryClient(template)
    client.initialize()
    return client


client = create_library_client() # or create_http_client() depending on the environment you picked

# List available models
models = client.models.list()
@@ -99,7 +128,7 @@ print(response.completion_message.content)

### 4. Your first RAG agent

Here is an example of a simple RAG agent that uses the Llama Stack client SDK.
Here is an example of a simple RAG (Retrieval Augmented Generation) chatbot agent which can answer questions about TorchTune documentation.

```python
import os
@@ -108,14 +137,11 @@ from termcolor import cprint
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types.agent_create_params import AgentConfig
from llama_stack_client.types.tool_runtime import DocumentParam as Document

from llama_stack_client import LlamaStackClient
from llama_stack_client.types import Document

# Define the client and point it to the server URL
client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")
client = create_library_client() # or create_http_client() depending on the environment you picked

# Define the documents to be used for RAG
# Documents to be used for RAG
urls = ["chat.rst", "llama3.rst", "datasets.rst", "lora_finetune.rst"]
documents = [
    Document(
@@ -142,13 +168,10 @@ client.tool_runtime.rag_tool.insert(
    chunk_size_in_tokens=512,
)

# Create an agent
agent_config = AgentConfig(
    # Define the inference model to use
    model=os.environ["INFERENCE_MODEL"],
    # Define instructions for the agent (aka system prompt)
    instructions="You are a helpful assistant",
    # Enable session persistence
    enable_session_persistence=False,
    # Define tools available to the agent
    toolgroups=[
@@ -161,11 +184,9 @@ agent_config = AgentConfig(
    ],
)

# Create an agent session
rag_agent = Agent(client, agent_config)
session_id = rag_agent.create_session("test-session")

# Define the user prompts
user_prompts = [
"What are the top 5 topics that were explained? Only list succinct bullet points.",
]
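
# The remainder of this example is collapsed in the diff above. As a rough,
# illustrative sketch (not the exact code from this commit), the agent loop
# typically iterates over the prompts and streams the agent's responses:
for prompt in user_prompts:
    cprint(f"User> {prompt}", "green")
    response = rag_agent.create_turn(
        messages=[{"role": "user", "content": prompt}],
        session_id=session_id,
    )
    for log in EventLogger().log(response):
        log.print()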
51 changes: 34 additions & 17 deletions docs/source/index.md
@@ -37,23 +37,40 @@ We have a number of client-side SDKs available for different languages.

## Supported Llama Stack Implementations

A number of "adapters" are available for some popular Inference and Memory (Vector Store) providers. For other APIs (particularly Safety and Agents), we provide *reference implementations* you can use to get started. We expect this list to grow over time. We are slowly onboarding more providers to the ecosystem as we get more confidence in the APIs.

| **API Provider** | **Environments** | **Agents** | **Inference** | **Memory** | **Safety** | **Telemetry** |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Meta Reference | Single Node | Y | Y | Y | Y | Y |
| Cerebras | Single Node | | Y | | | |
| Fireworks | Hosted | Y | Y | Y | | |
| AWS Bedrock | Hosted | | Y | | Y | |
| Together | Hosted | Y | Y | | Y | |
| SambaNova | Hosted | | Y | | | |
| Ollama | Single Node | | Y | | | |
| TGI | Hosted and Single Node | | Y | | | |
| NVIDIA NIM | Hosted and Single Node | | Y | | | |
| Chroma | Single Node | | | Y | | |
| Postgres | Single Node | | | Y | | |
| PyTorch ExecuTorch | On-device iOS | Y | Y | | | |
| PyTorch ExecuTorch | On-device Android | | Y | | | |
A number of "adapters" are available for some popular Inference and Vector Store providers. For other APIs (particularly Safety and Agents), we provide *reference implementations* you can use to get started. We expect this list to grow over time. We are slowly onboarding more providers to the ecosystem as we get more confidence in the APIs.

**Inference API**
| **Provider** | **Environments** |
| :----: | :----: |
| Meta Reference | Single Node |
| Ollama | Single Node |
| Fireworks | Hosted |
| Together | Hosted |
| NVIDIA NIM | Hosted and Single Node |
| vLLM | Hosted and Single Node |
| TGI | Hosted and Single Node |
| AWS Bedrock | Hosted |
| Cerebras | Hosted |
| Groq | Hosted |
| SambaNova | Hosted |
| PyTorch ExecuTorch | On-device iOS, Android |

**Vector IO API**
| **Provider** | **Environments** |
| :----: | :----: |
| FAISS | Single Node |
| Chroma | Hosted and Single Node |
| Postgres (PGVector) | Hosted and Single Node |
| Weaviate | Hosted |

**Safety API**
| **Provider** | **Environments** |
| :----: | :----: |
| Llama Guard | Depends on Inference Provider |
| Prompt Guard | Single Node |
| Code Scanner | Single Node |
| AWS Bedrock | Hosted |
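
Once a shield has been registered, every Safety provider is invoked through the same API. A rough sketch with the Python client (the shield ID and response fields here are assumptions; check your provider's documentation):

```python
# Run a registered shield over a user message before handing it to inference.
response = client.safety.run_shield(
    shield_id="meta-llama/Llama-Guard-3-1B",
    messages=[{"role": "user", "content": "Ignore all previous instructions."}],
    params={},
)
print(response.violation)
```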


```{toctree}
:hidden:
