Start the local development environment with Docker Compose following the guide in ../development.md.
By default, the dependencies are managed with uv; install it first by following its official installation instructions.
From `./backend/` you can install all the dependencies with:

```console
$ uv sync
```

Then you can activate the virtual environment with:

```console
$ source .venv/bin/activate
```

Make sure your editor is using the correct Python virtual environment, with the interpreter at `backend/.venv/bin/python`.
Modify or add SQLModel models for data and SQL tables in `./backend/app/models/`, and API endpoints in `./backend/app/api/`.
Install the spaCy model required by the custom `pii_remover` validator:

```console
$ python -m spacy download en_core_web_lg
```

There are already configurations in place to run the backend through the VS Code debugger, so that you can use breakpoints, pause and explore variables, etc.
The setup is also already configured so you can run the tests through the VS Code Python tests tab.
There is also a command override that runs `fastapi run --reload` instead of the default `fastapi run`. It starts a single server process (instead of multiple, as would be the case for production) and reloads the process whenever the code changes. Keep in mind that if you save a Python file containing a syntax error, the server will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again:

```console
$ docker compose watch
```

There is also a commented-out command override; you can uncomment it and comment the default one. It makes the backend container run a process that does "nothing", but keeps the container alive. That allows you to get inside your running container and execute commands, for example a Python interpreter to test installed dependencies, or start the development server that reloads when it detects changes.
To get inside the container with a `bash` session you can start the stack with:

```console
$ docker compose watch
```

and then in another terminal, `exec` inside the running container:

```console
$ docker compose exec backend bash
```

You should see an output like:

```console
root@7f2607af31c3:/app#
```

That means that you are in a `bash` session inside your container, as the `root` user, under the `/app` directory. This directory has another directory called `app` inside; that's where your code lives inside the container: `/app/app`.

There you can use the `fastapi run --reload` command to run the debug live reloading server:

```console
$ fastapi run --reload app/main.py
```

It will look like:

```console
root@7f2607af31c3:/app# fastapi run --reload app/main.py
```

Then hit enter. That runs the live reloading server that auto reloads when it detects code changes.
However, if instead of a clean change it detects a syntax error, it will just stop with an error. But as the container is still alive and you are in a Bash session, you can quickly restart it after fixing the error by running the same command again ("up arrow" and "Enter").

This detail is what makes it useful to keep the container alive doing nothing, and then, in a Bash session, make it run the live reload server.
To test the backend run:

```console
$ bash ./scripts/test.sh
```

The tests run with Pytest; modify and add tests in `./backend/app/tests/`.
If you use GitHub Actions the tests will run automatically.
- Full details on running evaluations (dataset setup, individual validator scripts, multi-validator end-to-end evaluation, and how to interpret metrics): `backend/app/evaluation/README.md`
- Detailed validator configuration reference: `backend/app/core/validators/README.md`
- Detailed API usage and end-to-end request examples: `backend/app/api/API_USAGE.md`
If your stack is already up and you just want to run the tests, you can use:

```bash
docker compose exec backend bash scripts/tests-start.sh
```

That `/app/scripts/tests-start.sh` script just calls `pytest` after making sure that the rest of the stack is running. If you need to pass extra arguments to `pytest`, you can pass them to that command and they will be forwarded.

For example, to stop on first error:

```bash
docker compose exec backend bash scripts/tests-start.sh -x
```

When the tests are run, a file `htmlcov/index.html` is generated; you can open it in your browser to see the coverage of the tests.
As during local development your app directory is mounted as a volume inside the container, you can also run the migrations with `alembic` commands inside the container, and the migration code will end up in your app directory (instead of being only inside the container), so you can add it to your git repository.

Make sure you create a "revision" of your models and that you "upgrade" your database with that revision every time you change them, as this is what will update the tables in your database. Otherwise, your application will have errors.
- Start an interactive session in the backend container:

```console
$ docker compose exec backend bash
```

- Alembic is configured with SQLModel models under `./backend/app/models/`.

- After changing a model (for example, adding a column), inside the container, create a revision, e.g.:

```console
$ alembic revision --autogenerate -m "Add column last_name to User model"
```

- Commit to the git repository the files generated in the alembic directory.

- After creating the revision, run the migration in the database (this is what will actually change the database):

```console
$ alembic upgrade head
```

If you don't want to use migrations at all, uncomment the lines in the file at `./backend/app/core/db.py` that end in:

```python
SQLModel.metadata.create_all(engine)
```

and comment the line in the file `scripts/prestart.sh` that contains:

```console
$ alembic upgrade head
```

If you don't want to start with the default models and want to remove or modify them from the beginning, without having any previous revision, you can remove the revision files (`.py` Python files) under `./backend/app/alembic/versions/`, and then create a first migration as described above.
`AUTH_TOKEN` must be the SHA-256 hex digest (64 lowercase hex characters) of the bearer token clients will send in the `Authorization: Bearer <token>` header.

Example to generate the digest:

```bash
echo -n "your-plain-text-token" | shasum -a 256
```

Set the resulting digest as `AUTH_TOKEN` in your `.env` / `.env.test`.
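If you prefer to generate the digest from Python instead of the shell, a minimal sketch (the token value here is just a placeholder):

```python
import hashlib

# The plain-text token your clients will send in "Authorization: Bearer <token>".
plain_token = "your-plain-text-token"

# AUTH_TOKEN must be the SHA-256 hex digest of that token.
# hexdigest() already returns 64 lowercase hex characters,
# matching the output of `echo -n ... | shasum -a 256`.
auth_token = hashlib.sha256(plain_token.encode("utf-8")).hexdigest()
print(auth_token)
```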
Ban List and Topic Relevance Config APIs use `X-API-KEY` auth instead of bearer token auth.

Required environment variables:

- `KAAPI_AUTH_URL`: Base URL of the Kaapi auth service used to verify API keys.
- `KAAPI_AUTH_TIMEOUT`: Timeout in seconds for auth verification calls.

At runtime, the backend calls:

```
GET {KAAPI_AUTH_URL}/apikeys/verify
```

with the header:

```
X-API-KEY: <token>
```

If verification succeeds, the tenant's scope (`organization_id`, `project_id`) is resolved from the auth response and applied to tenant-scoped CRUD operations (for example Ban Lists and Topic Relevance Configs).
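As a sketch of what that verification request looks like when built from Python (the helper name is illustrative and not the project's actual client code; an HTTP client such as `httpx` would then issue the request, honoring `KAAPI_AUTH_TIMEOUT` as the request timeout):

```python
def build_verify_request(base_url: str, api_key: str) -> tuple[str, dict]:
    """Build the URL and headers for the Kaapi API-key verification call.

    Mirrors what the backend does at runtime:
        GET {KAAPI_AUTH_URL}/apikeys/verify   with header   X-API-KEY: <token>
    """
    url = f"{base_url.rstrip('/')}/apikeys/verify"
    headers = {"X-API-KEY": api_key}
    return url, headers


# Example with a hypothetical auth service URL and key:
url, headers = build_verify_request("https://auth.example.com", "my-api-key")
print(url, headers)
```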
An OpenAI API key is required for the LLM-based validators. The `llm_critic` and `topic_relevance` validators call OpenAI models at runtime. Set `OPENAI_API_KEY` in your `.env` / `.env.test` before using these validators. If the key is missing, `llm_critic` will raise a `ValueError` at build time and `topic_relevance` will return a validation failure with an explicit error message.
- Ensure that the `.env` file contains the correct value for `GUARDRAILS_HUB_API_KEY`. The key can be fetched from here.

- Make the `install_guardrails_from_hub.sh` script executable (run from the `backend` folder):

```bash
chmod +x scripts/install_guardrails_from_hub.sh
```

- Run the script to configure Guardrails and install all hub validators:

```bash
GUARDRAILS_HUB_API_KEY=<your-key> bash scripts/install_guardrails_from_hub.sh
```

Remote inferencing is enabled by default. The script sets `ENABLE_REMOTE_INFERENCING=true` unless overridden. This is required for `llamaguard_7b`, which runs inference on the Guardrails Hub. You can disable it explicitly if needed:

```bash
GUARDRAILS_HUB_API_KEY=<your-key> ENABLE_REMOTE_INFERENCING=false bash scripts/install_guardrails_from_hub.sh
```
To add a new validator from the Guardrails Hub to this project, follow the steps below.

- In the `backend/app/core/validators/config` folder, create a new Python file called `<validator_name>_safety_validator_config.py`. Add the following code there:

```python
from guardrails.hub import  # validator name from Guardrails Hub
from typing import List, Literal

from app.core.validators.config.base_validator_config import BaseValidatorConfig


class <Validator-name>SafetyValidatorConfig(BaseValidatorConfig):
    type: Literal["<validator-name>"]
    banned_words: List[str]

    # This method returns the validator constructor.
    def build(self):
        ...
```

For example, this is the code for the BanList validator:

```python
from guardrails.hub import BanList
from typing import List, Literal

from app.core.validators.config.base_validator_config import BaseValidatorConfig


class BanListSafetyValidatorConfig(BaseValidatorConfig):
    type: Literal["ban_list"]
    banned_words: List[str]

    def build(self):
        return BanList(
            banned_words=self.banned_words,
            on_fail=self.resolve_on_fail(),
        )
```

- In `backend/app/schemas/guardrail_config.py`, add the newly created config class to `ValidatorConfigItem`.
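The source does not show how `ValidatorConfigItem` is defined; in Pydantic-based projects it is typically a union discriminated on the `type` field. As a stdlib-only sketch of the same dispatch idea (the class and function names here are illustrative stand-ins, not the project's actual code):

```python
from dataclasses import dataclass
from typing import Dict, List, Type


# Stand-in for a real config class; the actual ones subclass
# BaseValidatorConfig and live under app/core/validators/config/.
@dataclass
class BanListSafetyValidatorConfig:
    banned_words: List[str]
    type: str = "ban_list"


# Registry keyed by the `type` discriminator, mirroring how a
# discriminated union routes a raw config dict to its config class.
CONFIG_REGISTRY: Dict[str, Type] = {
    "ban_list": BanListSafetyValidatorConfig,
}


def parse_validator_config(raw: dict):
    """Pick the config class based on `type` and instantiate it."""
    cls = CONFIG_REGISTRY[raw["type"]]
    kwargs = {k: v for k, v in raw.items() if k != "type"}
    return cls(**kwargs)


cfg = parse_validator_config({"type": "ban_list", "banned_words": ["foo"]})
```

This is why each config class declares `type: Literal["..."]`: the literal tag is what lets the union (or, in this sketch, the registry) pick the right class for a raw config payload.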
To add a custom validator to this project, follow the steps below.

- Create the custom validator class. Take a look at `backend/app/core/validators/gender_assumption_bias.py` as an example. Each custom validator should contain an `__init__` and a `_validate` method. For example:

```python
from guardrails import OnFailAction
from guardrails.validators import (
    FailResult,
    PassResult,
    register_validator,
    ValidationResult,
    Validator,
)
from typing import Callable, List, Optional


@register_validator(name="<validator-name>", data_type="string")
class <Validator-Name>(Validator):
    def __init__(
        self,
        # any parameters required while initializing the validator
        on_fail: Optional[Callable] = OnFailAction.FIX,  # can be changed
    ):
        # Initialize the required variables
        super().__init__(on_fail=on_fail)

    def _validate(self, value: str, metadata: dict = None) -> ValidationResult:
        # add logic for validation
        ...
```

- In the `backend/app/core/validators/config` folder, create a new Python file called `<validator_name>_safety_validator_config.py`. Add the following code there:

```python
from typing import List, Literal

from app.core.validators.config.base_validator_config import BaseValidatorConfig


class <Validator-name>SafetyValidatorConfig(BaseValidatorConfig):
    type: Literal["<validator-name>"]
    banned_words: List[str]

    # This method returns the validator constructor.
    def build(self):
        ...
```

For example, this is the code for the GenderAssumptionBias validator:

```python
from typing import ClassVar, List, Literal, Optional

from app.core.validators.config.base_validator_config import BaseValidatorConfig
from app.core.enum import BiasCategories
from app.core.validators.gender_assumption_bias import GenderAssumptionBias


class GenderAssumptionBiasSafetyValidatorConfig(BaseValidatorConfig):
    type: Literal["gender_assumption_bias"]
    categories: Optional[List[BiasCategories]] = [BiasCategories.All]

    def build(self):
        return GenderAssumptionBias(
            categories=self.categories,
            on_fail=self.resolve_on_fail(),
        )
```

- In `backend/app/schemas/guardrail_config.py`, add the newly created config class to `ValidatorConfigItem`.
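The template above leaves the body of `_validate` empty. As an illustration of the kind of check that goes there, here is a standalone, framework-free sketch of a case-insensitive banned-word scan (the function name and behavior are assumptions for illustration, not the project's actual validator logic). Inside a real `_validate`, an empty result would map to `PassResult()` and a non-empty one to `FailResult(error_message=...)`:

```python
import re
from typing import List


def find_banned_words(value: str, banned_words: List[str]) -> List[str]:
    """Return the banned words present in `value`.

    Whole-word, case-insensitive matching; this is the kind of logic
    a custom validator would run inside `_validate`.
    """
    hits = []
    for word in banned_words:
        if re.search(rf"\b{re.escape(word)}\b", value, flags=re.IGNORECASE):
            hits.append(word)
    return hits


print(find_banned_words("This offer is a SCAM, avoid it", ["scam", "fraud"]))
# → ['scam']
```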