diff --git a/docs/runtime_suite/rag-chatbot-api/10_overview_and_usage.md b/docs/runtime_suite/rag-chatbot-api/10_overview_and_usage.md
new file mode 100644
index 0000000000..4495050f88
--- /dev/null
+++ b/docs/runtime_suite/rag-chatbot-api/10_overview_and_usage.md
@@ -0,0 +1,246 @@
+---
+id: overview_and_usage
+title: AI RAG Template
+sidebar_label: Overview And Usage
+---
+
+The _AI RAG Template_ is a template for building and running your own RAG application: a chatbot capable of holding a conversation with a user.
+
+The service is developed using the [LangChain](https://python.langchain.com/docs/get_started/introduction/) framework, which enables creating sequences of complex interactions using Large Language Models. The web server is implemented using the [FastAPI](https://fastapi.tiangolo.com/) framework.
+
+To work, the service requires a MongoDB instance to be used as a Vector Store that supports [MongoDB Vector Search indexes](https://www.mongodb.com/docs/atlas/atlas-vector-search/tutorials/vector-search-quick-start/), which means an Atlas instance with version 6 or above.
+
+## Overview
+
+The following is the high-level architecture of the template.
+
+```mermaid
+flowchart LR
+    fe[Frontend]
+    be[Backend]
+    vs[(Vector Store)]
+    llm[LLM API]
+    eg[Embeddings Generator API]
+
+    fe --1. user question +\nchat history--> be
+    be --2. user question--> eg
+    eg --3. embedding-->be
+    be --4. similarity search-->vs
+    vs --5. similar docs-->be
+    be --6. user question +\nchat history +\nsimilar docs-->llm
+    llm --7. bot answer--> be
+    be --8. bot answer--> fe
+```
+
+### Embeddings
+
+Please mind that the template does not include embeddings or any logic to create them. It is intended that the Vector Store already contains the embeddings (or that they are generated separately). In any case, please ensure that the embedding model used to populate the Vector Store is the same embedding model used when running the service; otherwise, the service will generate answers based only on its own knowledge, without being able to use the Vector Store, with the risk of hallucinations when chatting with the user.
+
+### API
+
+Read more at [the related page](./20_apis.md).
+
+## Environment Variables
+
+The following environment variables are required for the service to work:
+
+- **PORT**: the port used to expose the API (default: _3000_)
+- **LOG_LEVEL**: the level of the logger (default: _INFO_)
+- **CONFIGURATION_PATH**: the path to the [JSON configuration file](#configuration)
+- **MONGODB_CLUSTER_URI**: the MongoDB connection string
+- **LLM_API_KEY**: the API Key of the LLM (_NOTE_: currently, OpenAI and Azure OpenAI models are supported, so this is the corresponding OpenAI or Azure OpenAI API Key)
+- **EMBEDDINGS_API_KEY**: the API Key of the embeddings model (_NOTE_: currently, OpenAI and Azure OpenAI models are supported, so this is the corresponding OpenAI or Azure OpenAI API Key)
+
+It is suggested to save the environment variables in a `.env` file.
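+
+For reference, a minimal `.env` file could look like the following sketch; every value here is a placeholder to replace with your own:
+
+```env
+PORT=3000
+LOG_LEVEL=INFO
+CONFIGURATION_PATH=/configs/config.json
+MONGODB_CLUSTER_URI=mongodb+srv://<user>:<password>@<cluster-host>/
+LLM_API_KEY=<your-openai-or-azure-api-key>
+EMBEDDINGS_API_KEY=<your-openai-or-azure-api-key>
+```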
+
+## Configuration
+
+The service requires several configuration parameters for execution. Below is an example of configuration:
+
+```json
+{
+  "llm": {
+    "type": "openai",
+    "name": "gpt-3.5-turbo",
+    "temperature": 0.7
+  },
+  "embeddings": {
+    "type": "openai",
+    "name": "text-embedding-3-small"
+  },
+  "vectorStore": {
+    "dbName": "database-test",
+    "collectionName": "assistant-documents",
+    "indexName": "vector_index",
+    "relevanceScoreFn": "euclidean",
+    "embeddingKey": "embedding",
+    "textKey": "text",
+    "maxDocumentsToRetrieve": 4,
+    "minScoreDistance": 0.5
+  },
+  "chain": {
+    "aggregateMaxTokenNumber": 2000,
+    "rag": {
+      "promptsFilePath": {
+        "system": "/path/to/system-prompt.txt",
+        "user": "/path/to/user-prompt.txt"
+      }
+    }
+  }
+}
+```
+
+Description of configuration parameters:
+
+| Param Name | Description |
+|------------|-------------|
+| LLM Type | Identifier of the provider to use for the LLM. Default: `openai`. See more in [Supported LLM providers](#supported-llm-providers). |
+| LLM Name | Name of the chat model to use. [Must be supported by LangChain.](https://python.langchain.com/docs/integrations/chat/) |
+| LLM Temperature | Temperature parameter for the LLM, intended as the degree of variability and randomness of the generated response. Default: `0.7` (suggested value). |
+| Embeddings Type | Identifier of the provider to use for the Embeddings. Default: `openai`. See more in [Supported Embeddings providers](#supported-embeddings-providers). |
+| Embeddings Name | Name of the encoder to use. [Must be supported by LangChain.](https://python.langchain.com/docs/integrations/text_embedding/) |
+| Vector Store DB Name | Name of the MongoDB database to use as a knowledge base. |
+| Vector Store Collection Name | Name of the MongoDB collection to use for storing documents and document embeddings. |
+| Vector Store Index Name | Name of the vector index to use for retrieving documents related to the user's query. At startup, the application checks whether a vector index with this name exists, needs to be updated, or needs to be created. |
+| Vector Store Relevance Score Function | Name of the similarity function used for extracting similar documents via the created vector index. If the existing vector index uses a different similarity function, the index is updated to use this one. |
+| Vector Store Embeddings Key | Name of the field used to save the semantic encoding of documents. If the existing vector index stores the embedding in a different field, the index is updated to use this key. Please mind that any change of this value might require recreating the embeddings. |
+| Vector Store Text Key | Name of the field used to save the raw document (or chunk of document). |
+| Vector Store Max. Documents To Retrieve | Maximum number of documents to retrieve from the Vector Store. |
+| Vector Store Min. Score Distance | Minimum distance beyond which retrieved documents from the Vector Store are discarded. |
+| Chain Aggregate Max Token Number | Maximum number of tokens extracted from the documents retrieved from the Vector Store to be included in the prompt (1 token is approximately 4 characters). Default: `2000`. |
+| Chain RAG System Prompts File Path | Path to the file containing the system prompt for the RAG model. If omitted, the application will use a standard system prompt. More details in the [dedicated paragraph](#configure-your-own-system-and-user-prompts). |
+| Chain RAG User Prompts File Path | Path to the file containing the user prompt for the RAG model. If omitted, the application will use a standard user prompt. More details in the [dedicated paragraph](#configure-your-own-system-and-user-prompts). |
+
+### Supported LLM providers
+
+The property `type` inside the `llm` object of the configuration should be one of the supported providers for the LLM.
+Currently, the supported LLM providers are:
+
+- OpenAI (`openai`), in which case the `llm` configuration could be the following:
+
+  ```json
+  {
+    "type": "openai",
+    "name": "gpt-3.5-turbo",
+    "temperature": 0.7
+  }
+  ```
+
+  with the properties explained above.
+
+- Azure OpenAI (`azure`), in which case the `llm` configuration could be the following:
+
+  ```json
+  {
+    "type": "azure",
+    "name": "gpt-3.5-turbo",
+    "deploymentName": "dep-gpt-3.5-turbo",
+    "url": "https://my-company.openai.azure.com/",
+    "apiVersion": "my-azure-api-version",
+    "temperature": 0.7
+  }
+  ```
+
+  While `type` is always `azure`, and `name` and `temperature` have already been explained, the other properties are:
+
+  | Name | Description |
+  |------|-------------|
+  | `deploymentName` | Name of the deployment to use. |
+  | `url` | URL of the Azure OpenAI service to call. |
+  | `apiVersion` | API version of the Azure OpenAI service. |
+
+### Supported Embeddings providers
+
+The property `type` inside the `embeddings` object of the configuration should be one of the supported providers for the Embeddings.
+Currently, the supported Embeddings providers are:
+
+- OpenAI (`openai`), in which case the `embeddings` configuration could be the following:
+
+  ```json
+  {
+    "type": "openai",
+    "name": "text-embedding-3-small"
+  }
+  ```
+
+  with the properties explained above.
+
+- Azure OpenAI (`azure`), in which case the `embeddings` configuration could be the following:
+
+  ```json
+  {
+    "type": "azure",
+    "name": "text-embedding-3-small",
+    "deploymentName": "dep-text-embedding-3-small",
+    "url": "https://my-company.openai.azure.com/",
+    "apiVersion": "my-azure-api-version"
+  }
+  ```
+
+  While `type` is always `azure`, and `name` has already been explained, the other properties are:
+
+  | Name | Description |
+  |------|-------------|
+  | `deploymentName` | Name of the deployment to use. |
+  | `url` | URL of the Azure OpenAI service to call. |
+  | `apiVersion` | API version of the Azure OpenAI service. |
+
+### Configure your own system and user prompts
+
+The application sends to the LLM a prompt that is composed of a _system prompt_ and a _user prompt_:
+
+- the _system prompt_ is a message that provides instructions to the LLM on how to respond to the user's input;
+- the _user prompt_ is a message that contains the user's input.
+
+A default version of these prompts is included in the application, but you can also use your own prompts to instruct the LLM to behave in a more specific way, such as acting as a generic assistant in any field or as an expert in a specific field related to the embedding documents you are using.
+
+Both the system and user prompts are optional, but if you want to use your own, you need to create a text file with the content of each prompt and specify the path to the file in the configuration at `chain.rag.promptsFilePath.system` and `chain.rag.promptsFilePath.user` respectively.
+
+Moreover, the _system prompt_ must include the following placeholders:
+
+- `{chat_history}`: placeholder that will be replaced by the chat history, i.e. the list of messages exchanged between the user and the chatbot until then (received via the `chat_history` property in the body of the [`/chat/completions` endpoint](./20_apis.md#chat-endpoint-chatcompletions))
+- `{output_text}`: placeholder that will be replaced by the text extracted from the embedding documents
+
+> **Note**
+>
+> The application already includes some context text to explain what the chat history is and what the output text is, so you don't need to add your own explanation to the system prompt.
+
+Also, the _user prompt_ must include the following placeholder:
+
+- `{query}`: placeholder that will be replaced by the user's input (received via the `chat_query` property in the body of the [`/chat/completions` endpoint](./20_apis.md#chat-endpoint-chatcompletions))
+
+Generally speaking, it is suggested to have a _system prompt_ tailored to the needs of your application, specifying what type of information the chatbot should provide and the tone and style of its responses. The _user prompt_ can be omitted unless you need to specify particular instructions or constraints specific to each question. An illustrative pair of prompt files is shown below.
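+
+For illustration only, a hypothetical pair of prompt files (referenced by `chain.rag.promptsFilePath.system` and `chain.rag.promptsFilePath.user`) could look like this; the wording is an example, not the default prompt shipped with the application. `system-prompt.txt`:
+
+```txt
+You are an expert assistant for the ACME product documentation.
+Answer concisely and politely, using only the information provided.
+
+{chat_history}
+
+{output_text}
+```
+
+`user-prompt.txt`:
+
+```txt
+{query}
+```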
+
+### Create a Vector Index
+
+:::info
+The MongoDB Vector Search Index is updated automatically by the application at its startup, always updating the `path`, `numDimensions` and `similarity` fields according to the configuration.
+
+It also creates the index with the name `vectorStore.indexName` if it does not exist.
+
+This part is included only for information purposes.
+:::
+
+This template requires a [MongoDB Vector Search Index](https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-type/) to function correctly, and requires using MongoDB Atlas or a MongoDB on-premise cluster with version 6 or above.
+
+The Vector Search Index should have the following structure:
+
+```json
+{
+  "fields": [
+    {
+      "type": "vector",
+      "path": "<embeddingKey>",
+      "numDimensions": 768,
+      "similarity": "<relevanceScoreFn>"
+    }
+  ]
+}
+```
+
+where:
+
+- `<embeddingKey>` is the name of the field used to store the semantic encoding of documents (the `embeddingKey` of the configuration);
+- `<relevanceScoreFn>` is the name of the similarity function used for extracting similar documents via the vector index (the `relevanceScoreFn` of the configuration); if the existing vector index uses a different similarity function, the index is updated to use this one;
+- the `numDimensions` value depends on the Embedding Model used (supported: `text-embedding-3-small`, `text-embedding-3-large` and their deployment versions, if using Azure OpenAI).
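+
+For instance, with the example configuration shown earlier (`embeddingKey` set to `embedding`, `relevanceScoreFn` set to `euclidean`), the resulting index definition would look like the sketch below. The `numDimensions` value of 1536 is an assumption based on the default output size of OpenAI's `text-embedding-3-small`; adjust it to the embedding model you actually use.
+
+```json
+{
+  "fields": [
+    {
+      "type": "vector",
+      "path": "embedding",
+      "numDimensions": 1536,
+      "similarity": "euclidean"
+    }
+  ]
+}
+```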
+
+:::warning
+If an error occurs during the creation or update of the Vector Index, the exception will be logged, but the application will still start. However, the correct functioning of the service is not guaranteed.
+:::
diff --git a/docs/runtime_suite/rag-chatbot-api/20_apis.md b/docs/runtime_suite/rag-chatbot-api/20_apis.md
new file mode 100644
index 0000000000..e53c23803f
--- /dev/null
+++ b/docs/runtime_suite/rag-chatbot-api/20_apis.md
@@ -0,0 +1,204 @@
+---
+id: apis
+title: APIs
+sidebar_label: APIs
+---
+
+The following document includes a comprehensive list of the available APIs exposed by the service.
+
+When running the service, the application exposes a Swagger UI at the `/docs` endpoint.
+
+### Chat Endpoint (`/chat/completions`)
+
+The `/chat/completions` endpoint generates responses to user queries based on the provided context and chat history. It leverages information from the configured Vector Store to formulate relevant responses, enhancing the conversational experience.
+
+***Example***:
+
+<details>
+<summary>Request</summary>
+
+```curl
+curl 'http://localhost:3000/chat/completions' \
+  -H 'content-type: application/json' \
+  --data-raw '{"chat_query":"Design a CRUD schema for an online store selling merchandise items","chat_history":[]}'
+```
+
+</details>
+
+<details>
+<summary>Response</summary>
+
+```json
+{
+  "message": "For an online store selling merchandise items, we can design a CRUD schema for a `Product` entity with the following properties: ...",
+  "references": [
+    {
+      "content": "### Create CRUD to Read and Write Table Data \n...",
+      "url": "../../microfrontend-composer/tutorials/basics"
+    },
+    {
+      "content": "### Create CRUD to Read and Write Table Data \n...",
+      "url": "../../microfrontend-composer/tutorials/basics"
+    },
+    {
+      "content": "### Create a CRUD for persistency \n...",
+      "url": "../../console/tutorials/configure-marketplace-components/flow-manager"
+    },
+    {
+      "content": "### Create a CRUD for persistency \n...",
+      "url": "../../console/tutorials/configure-marketplace-components/flow-manager"
+    }
+  ]
+}
+```
+
+</details>
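+
+As a sketch of programmatic usage, the same request can be sent from Python. Only the documented `chat_query` and `chat_history` properties are used here; the base URL is an assumption, and the exact schema of the `chat_history` entries should be checked in the Swagger UI at `/docs`:
+
+```python
+# Illustrative client sketch for the /chat/completions endpoint.
+# Assumes the service is reachable at localhost:3000.
+import requests
+
+BASE_URL = "http://localhost:3000"
+
+def ask(chat_query: str, chat_history: list) -> dict:
+    # Send the user question together with the conversation so far.
+    response = requests.post(
+        f"{BASE_URL}/chat/completions",
+        json={"chat_query": chat_query, "chat_history": chat_history},
+        timeout=60,
+    )
+    response.raise_for_status()
+    return response.json()
+
+answer = ask("Design a CRUD schema for an online store selling merchandise items", [])
+print(answer["message"])
+for reference in answer["references"]:
+    print("-", reference["url"])
+```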
+
+### Embedding Endpoints
+
+#### Generate from website (`/embeddings/generate`)
+
+The `/embeddings/generate` endpoint is an HTTP POST method that takes as input:
+
+- `url` (string, *required*), a web URL used as a starting point
+- `filterPath` (string, optional), a web URL more specific than the one specified above
+
+Starting from the given `url`, the service will:
+
+- crawl the webpage
+- check for links on the same domain as the webpage (and, if `filterPath` is provided, that begin with it) and store them in a list
+- scrape the page for text
+- generate the embeddings using the [configured embedding model](./10_overview_and_usage.md#configuration)
+- start again from every link still in the list
+
+A sketch of this crawling loop is shown below, right after the note.
+
+> **NOTE**:
+> This method can run only once at a time, as it uses a lock to prevent multiple requests from starting the process at the same time.
+>
+> No information is returned when the process ends, either as completed or as stopped because of an error.
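+
+The following is an illustrative Python sketch of the crawl-and-embed loop described above; it is not the service's actual implementation, and `scrape_text`, `chunk_text` and `embed_and_store` are hypothetical helpers standing in for the real scraping, chunking and embedding logic:
+
+```python
+# Conceptual sketch of the crawl-and-embed loop; helper functions are hypothetical.
+from urllib.parse import urlparse
+
+def scrape_text(page_url: str) -> tuple[str, list[str]]:
+    """Hypothetical helper: download the page, return its text and outgoing links."""
+    raise NotImplementedError
+
+def chunk_text(text: str) -> list[str]:
+    """Hypothetical helper: split the page text into chunks."""
+    raise NotImplementedError
+
+def embed_and_store(chunk: str) -> None:
+    """Hypothetical helper: embed a chunk with the configured model and save it."""
+    raise NotImplementedError
+
+def generate_embeddings(url: str, filter_path: str | None = None) -> None:
+    domain = urlparse(url).netloc
+    to_visit, visited = [url], set()
+    while to_visit:
+        page = to_visit.pop()
+        if page in visited:
+            continue
+        visited.add(page)
+        text, links = scrape_text(page)
+        # Keep only links on the same domain that match the optional filterPath.
+        for link in links:
+            if urlparse(link).netloc == domain and (filter_path is None or link.startswith(filter_path)):
+                to_visit.append(link)
+        for chunk in chunk_text(text):
+            embed_and_store(chunk)
+```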
+
+***Example***:
+
+<details>
+<summary>Request</summary>
+
+```curl
+curl 'http://localhost:3000/embeddings/generate' \
+  -H 'content-type: application/json' \
+  --data-raw '{"url":"https://docs.mia-platform.eu/", "filterPath": "https://docs.mia-platform.eu/docs/runtime_suite_templates" }'
+```
+
+</details>
+
+<details>
+<summary>Response in case the runner is idle</summary>
+
+```json
+200 OK
+{
+  "statusOk": "true"
+}
+```
+
+</details>
+
+<details>
+<summary>Response in case the runner is running</summary>
+
+```json
+409 Conflict
+{
+  "detail": "A process to generate embeddings is already in progress."
+}
+```
+
+</details>
+
+#### Generate from file (`/embeddings/generateFromFile`)
+
+The `/embeddings/generateFromFile` endpoint is an HTTP POST method that takes as input:
+
+- `file` (binary, *required*), a file to be uploaded containing the text that will be transformed into embeddings.
+
+The file must be in one of the following formats:
+
+- a text file (`.txt`)
+- a markdown file (`.md`, `.mdx`)
+- a PDF file (`.pdf`)
+- an archive (`.zip`, `.tar`, `.gz`) containing files in the formats listed above (folders and files in other formats will be skipped)
+
+From this file, or from each file inside the archive, the text will be retrieved, chunked, and the embeddings generated.
+
+> **NOTE**:
+> This method can run only once at a time, as it uses a lock to prevent multiple requests from starting the process at the same time.
+>
+> No information is returned when the process ends, either as completed or as stopped because of an error.
+
+***Example***:
+
+<details>
+<summary>Request</summary>
+
+```curl
+curl -X 'POST' \
+  'https://rag-app-test.console.gcp.mia-platform.eu/api/embeddings/generateFromFile' \
+  -H 'accept: application/json' \
+  -H 'Content-Type: multipart/form-data' \
+  -F 'file=@my-archive.zip;type=application/zip'
+```
+
+</details>
+
+<details>
+<summary>Response in case the runner is idle</summary>
+
+```json
+200 OK
+{
+  "statusOk": "true"
+}
+```
+
+</details>
+
+<details>
+<summary>Response in case the runner is running</summary>
+
+```json
+409 Conflict
+{
+  "detail": "A process to generate embeddings is already in progress."
+}
+```
+
+</details>
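+
+The same upload can also be done programmatically. Below is a minimal Python sketch using `requests`; the base URL and the file name are assumptions mirroring the curl example above:
+
+```python
+# Illustrative sketch: upload an archive for embedding generation.
+import requests
+
+with open("my-archive.zip", "rb") as archive:
+    response = requests.post(
+        "http://localhost:3000/embeddings/generateFromFile",
+        files={"file": ("my-archive.zip", archive, "application/zip")},
+        timeout=60,
+    )
+if response.status_code == 409:
+    print("A process to generate embeddings is already in progress.")
+else:
+    response.raise_for_status()
+```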
+
+#### Generation status (`/embeddings/status`)
+
+This request returns information about the [embeddings generation runner](#generate-from-website-embeddingsgenerate). The status can be either `idle` (no process currently running) or `running` (a process of generating embeddings is currently happening).
+
+***Example***:
+
+<details>
+<summary>Request</summary>
+
+```curl
+curl 'http://localhost:3000/embeddings/status' \
+  -H 'content-type: application/json'
+```
+
+</details>
+
+<details>
+<summary>Response</summary>
+
+```json
+200 OK
+{
+  "status": "idle"
+}
+```
+
+</details>
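+
+Putting the endpoints together, here is a hypothetical Python sketch that triggers a generation and then polls this status endpoint until the runner is idle again; the base URL and the 5-second polling interval are assumptions:
+
+```python
+# Illustrative sketch: start embeddings generation and wait for the runner to finish.
+import time
+import requests
+
+BASE_URL = "http://localhost:3000"
+
+response = requests.post(
+    f"{BASE_URL}/embeddings/generate",
+    json={"url": "https://docs.mia-platform.eu/"},
+    timeout=30,
+)
+if response.status_code == 409:
+    print("A generation process is already in progress.")
+else:
+    response.raise_for_status()
+    # The runner returns no result payload, so the status endpoint is the only signal.
+    while requests.get(f"{BASE_URL}/embeddings/status", timeout=30).json()["status"] == "running":
+        time.sleep(5)
+    print("Embeddings generation finished; runner is idle again.")
+```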
+
+### Metrics Endpoint (`/-/metrics`)
+
+The `/-/metrics` endpoint exposes the service metrics in Prometheus format, so that they can be collected by Prometheus.
diff --git a/docs/runtime_suite/rag-chatbot-api/_category_.json b/docs/runtime_suite/rag-chatbot-api/_category_.json
new file mode 100644
index 0000000000..5f68cc8efc
--- /dev/null
+++ b/docs/runtime_suite/rag-chatbot-api/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "RAG Chatbot API",
+  "position": 10
+}
\ No newline at end of file
diff --git a/docs/runtime_suite/rag-chatbot-api/changelog.md b/docs/runtime_suite/rag-chatbot-api/changelog.md
new file mode 100644
index 0000000000..e6e8731cb9
--- /dev/null
+++ b/docs/runtime_suite/rag-chatbot-api/changelog.md
@@ -0,0 +1,76 @@
+---
+id: changelog
+title: Changelog
+sidebar_label: CHANGELOG
+---
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## 0.5.3 - 2025-02-07
+
+### Fixed
+
+- Version `0.5.2` included an error with `.mdx` files in embedding generation from the `generateFromFile` API. This has been fixed.
+- Fixed several typos related to the `aggregateMaxTokenNumber` configurable parameter.
+
+### Changed
+
+- Updated documentation related to the Aggregate Max Token Number and custom prompts (both system and user prompts)
+
+## 0.5.2 - 2025-01-29
+
+### Fixed
+
+- At service startup, if the Vector Search collection does not exist, it is automatically created
+- Support file extension `mdx` for embedding generation
+- Files uploaded for embedding generation are validated using either the content type or the file extension
+
+## 0.5.1 - 2024-12-20
+
+## 0.5.0 - 2024-12-19
+
+### Added
+
+- Created new pipeline flow for testing, linting, security (with `bandit` and `pip-audit`) and Docker image publishing on tags.
+
+## 0.4.0 - 2024-12-18
+
+### Changed
+
+- updated dependencies (FastAPI, LangChain, OpenAI)
+
+### Added
+
+- add endpoint `POST /embeddings/generateFromFile` for embedding generation
+- add support for the _Azure OpenAI_ provider for embedding generation and LLM usage
+
+## 0.3.1 - 2024-09-05
+
+## 0.3.0 - 2024-09-05
+
+### Added
+
+- Automatic creation/update of the Vector Index
+- add endpoints `POST /embeddings/generate` and `GET /embeddings/status` for embedding generation
+
+## 0.2.0 - 2024-08-21
+
+### Changed
+
+- updated dependencies (FastAPI, LangChain, OpenAI)
+- the application is now using Python version 3.12.3
+- improved documentation
+
+## 0.1.1 - 2024-05-09
+
+### Added
+
+- first template implementation