Aura-Guard is a Raspberry Pi wildlife-detection prototype. The first scope is daytime deer detection, with thermal support planned as a later path.
- Lead Engineer: Desmond Kwame Nunoo
- Supporting Engineer 1: Uthman Yushaw Mohammed
- Supporting Engineer 2: Christopher Power-woods
The current working split is:
- MacBook Pro M1 Pro: dataset prep, training, validation, and model export
- Raspberry Pi 5: live camera inference, LED trigger, GPS capture, and runtime testing
Important limitations right now:
- the runtime supports one camera source per run
- there is no simultaneous Pi Camera 3 + FLIR fusion yet
- the current prepared dataset is visible-light only, not thermal
Dependency inventory:
- `DEPENDENCIES.md`

Supported browser test path:
- local or hosted Streamlit app in webapp/README.md
As of April 3, 2026:
- the deer dataset has been prepared inside `datasets/deer`
- all deer-like subclasses were collapsed into a single `deer` class
- the local Mac training environment is `./.conda-train`
- the completed training run is `runs/detect/deer-day-v1`
- final weights exist at `runs/detect/deer-day-v1/weights/best.pt`
- Apple `mps` failed on the first backward pass, so the stable training path is now `cpu`
Final training outcome:
- the run stopped at epoch `94`
- this was expected early-stopping behavior from `patience=20`
- best epoch: `74`
- best `mAP50`: `0.99098`
- best `mAP50-95`: `0.84945`
- last epoch: `94`
- last `mAP50`: `0.98414`
- last `mAP50-95`: `0.83016`
That means the current model is ready for testing and deployment. There is no need to force another run to 100/100.
```
.
├── aura_guard/                 # Runtime package
├── config/                     # Example runtime configs
├── datasets/                   # Prepared dataset and notes
├── models/                     # Deployment model destination
├── requirements/               # Mac and Pi dependencies
├── training/                   # Dataset prep, training, export
├── webapp/                     # Separate browser-based test harness
├── aura_guard_main.py          # Runtime CLI entrypoint
├── DEPENDENCIES.md             # Dependency inventory, reasons, versions
├── LICENSE.md                  # Code license status and dataset attribution
├── hardware_prototype_spec.md  # Product and hardware concept
└── raspberry_pi_setup_guide.md # Pi guide (README now includes the same flow)
```
This is the full process from start to finish for the current prototype.
The first scope is intentionally narrow:
- use `Pi Camera 3` for daytime detection
- detect `deer`
- trigger an LED strobe
- capture GPS location
- defer cloud/network and FLIR fusion until the daytime path is stable
You can still wire the FLIR Lepton 3.5 + PureThermal 2, but for the first demo, Pi Camera 3 + GPS + LED is the simpler path.
For the first full bring-up, have these parts ready:
- Raspberry Pi 5
- microSD card
- stable Pi 5 USB-C power supply
- Raspberry Pi Camera Module 3
- FLIR Lepton 3.5 + PureThermal 2
- GPS module
- 10W LED
- MOSFET for LED switching
- 220 ohm resistor for the MOSFET gate
- external power source for the LED
- jumper wires and camera ribbon
The current repo expects the deer dataset under datasets/deer.
Current source dataset:
- name: `deer open dataset Computer Vision Dataset`
- source: Roboflow Universe
- task: Object Detection
- license: CC BY 4.0
- original source classes: `Deer`, `deer`, `alageyik`, `Fallow`, `kgeyik`, `MuskDeer`, `RoeDeer`, `SikaDeer`, `spotted deer`
- user-reported snapshot on April 3, 2026: 627 views, 42 downloads
The local training copy in this repo does not keep those 9 classes separate. For this prototype, they were collapsed into a single target class: `deer`.
If you import a fresh Roboflow-style YOLOv8 ZIP, prepare it with:

```bash
python3 -m training.prepare_dataset \
  --zip "deer open dataset.v1i.yolov8.zip" \
  --output datasets/deer \
  --collapse-to-one-class
```

That rewrite step is important for this prototype because the current target class is one class only: `deer`.
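For intuition, here is a minimal sketch of what a collapse step like this does to YOLO label files. This is not the actual code in `training.prepare_dataset`, and the function name is hypothetical:

```python
from pathlib import Path


def collapse_labels_to_one_class(labels_dir: str) -> int:
    """Rewrite every YOLO label line so its class id becomes 0.

    YOLO txt labels hold one box per line: `<class> <cx> <cy> <w> <h>`.
    Returns the number of label files rewritten.
    """
    rewritten = 0
    for label_file in Path(labels_dir).rglob("*.txt"):
        collapsed = []
        for line in label_file.read_text().splitlines():
            parts = line.split()
            if not parts:
                continue
            parts[0] = "0"  # every subclass becomes class 0 ("deer")
            collapsed.append(" ".join(parts))
        label_file.write_text("\n".join(collapsed) + ("\n" if collapsed else ""))
        rewritten += 1
    return rewritten
```

The box coordinates are untouched; only the leading class id changes, which is why the collapse is safe for a single-class detector.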
If you want to add hard negatives from the Kaggle COCO 2017 bundle, use the helper below. It downloads the dataset through kagglehub, selects images that contain the requested non-deer categories, and writes empty YOLO label files into the existing deer dataset:
```bash
./.conda-train/bin/python -m training.add_coco_negatives \
  --dataset-root datasets/deer \
  --source-split val2017 \
  --target-split val \
  --limit 250
```

Recommended negative categories for this project are: `person`, `dog`, `cat`, `horse`, `sheep`, `cow`, `elephant`, `bear`, `zebra`, `giraffe`.
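Mechanically, registering a hard negative comes down to pairing an image with an empty YOLO label file. A minimal sketch of that idea (the helper name is hypothetical; the real logic lives in `training.add_coco_negatives`):

```python
from pathlib import Path


def add_negative_image(labels_dir: str, image_name: str) -> Path:
    """Register a hard-negative image by giving it an empty YOLO label file.

    An empty .txt label tells YOLO training that the image contains no
    target objects, which teaches the model to suppress false positives
    on deer-free scenes (people, dogs, horses, etc.).
    """
    label_path = Path(labels_dir) / (Path(image_name).stem + ".txt")
    label_path.parent.mkdir(parents=True, exist_ok=True)
    label_path.write_text("")  # zero boxes: a pure background sample
    return label_path
```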
Dataset licensing and citation notes are in LICENSE.md.
Use the project-local training environment:
```bash
./.conda-train/bin/python -m pip install -r requirements/mac-training.txt
```

If you want to use Anaconda Navigator instead, see the manual training section below.
Recommended stable training command on the M1 Pro:
```bash
./.conda-train/bin/python -m training.train \
  --data datasets/deer/data.yaml \
  --model yolo11n.pt \
  --epochs 100 \
  --imgsz 640 \
  --batch 8 \
  --workers 0 \
  --device cpu \
  --name deer-day-v1
```

Why this is the stable path:
- `mps` on this machine already failed once during backward propagation
- `cpu` is slower, but it is the most reliable path so far
- `--workers 0` avoids extra multiprocessing issues on this setup
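The `--device cpu` choice can also be made programmatically. A minimal sketch of a device-selection helper with a safe CPU fallback (hypothetical; not the project's actual `training/train.py` logic):

```python
def pick_training_device(try_mps_first: bool = True) -> str:
    """Choose a torch device string for training, falling back to CPU.

    Attempts Apple `mps` only when it is both requested and reported
    available by torch; any probe failure drops back to the safe path.
    """
    try:
        import torch

        if try_mps_first and torch.backends.mps.is_available():
            return "mps"
    except Exception:
        pass  # torch missing or mps probe failed: stay on the safe path
    return "cpu"
```

Note that availability is not the same as stability: on this machine `mps` reported available and still crashed mid-backward, which is why the fallback-on-crash behavior described below also matters.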
If you want the script to try mps first and fall back if it crashes:
```bash
./.conda-train/bin/python -m training.train \
  --data datasets/deer/data.yaml \
  --model yolo11n.pt \
  --epochs 100 \
  --imgsz 640 \
  --batch 8 \
  --workers 0 \
  --name deer-day-v1-auto
```

If the project-local env is inconvenient, train manually in Anaconda Navigator:
- open Anaconda Navigator
- open Environments
- create a new environment named `aura-guard-train`
- choose Python 3.11
- select the environment and click Open Terminal
Then run:
```bash
cd "/Users/desmondknunoo/Desktop/Achendo/Achendo Software/aura_guard"
python -m pip install --upgrade pip
python -m pip install -r requirements/mac-training.txt
python -m training.train \
  --data datasets/deer/data.yaml \
  --model yolo11n.pt \
  --epochs 100 \
  --imgsz 640 \
  --batch 8 \
  --workers 0 \
  --device cpu \
  --name deer-day-v1-manual
```

If you remove `--device cpu`, the script will try Apple `mps`, but that backend has already failed once on this project.
Watch the metrics:
```bash
tail -f runs/detect/deer-day-v1/results.csv
```

Check the weights:

```bash
ls -lah runs/detect/deer-day-v1/weights
```

Check that the training process is still running:

```bash
ps -ax -o pid= -o state= -o %cpu= -o %mem= -o command= | rg "training\.train|deer-day-v1"
```

Training time expectation on the MacBook Pro M1 Pro:
- if `mps` runs cleanly, often 25 to 60 minutes
- on the current stable `cpu` path, expect several hours
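If you prefer a summary over tailing the CSV, a small parser can pull the best epoch out of `results.csv`. A sketch assuming the standard Ultralytics column names (`epoch`, `metrics/mAP50-95(B)`); adjust `metric` if your version names columns differently:

```python
import csv


def best_epoch_from_results(csv_path: str, metric: str = "metrics/mAP50-95(B)"):
    """Scan an Ultralytics-style results.csv and return (best_epoch, best_value)."""
    best = (None, float("-inf"))
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            # Headers sometimes carry stray spaces; normalize keys first.
            row = {k.strip(): v.strip() for k, v in row.items()}
            value = float(row[metric])
            if value > best[1]:
                best = (int(float(row["epoch"])), value)
    return best
```

Running this on the completed `deer-day-v1` run should agree with the figures above (best epoch 74, `mAP50-95` 0.84945).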
The stable runtime model for the current Pi app is still:
models/deer-best.pt
An exported side artifact was also generated as:
models/deer-best.onnx
Export command used successfully in the current environment:
```bash
./.conda-train/bin/python -m training.export \
  --model runs/detect/deer-day-v1/weights/best.pt \
  --format onnx \
  --imgsz 640
```

For the first Pi test, the most important file is still: `models/deer-best.pt`
Note:
- `ncnn` export is blocked in the current Python `3.11` training environment because the available `ncnn` wheel does not match it
- the current runtime already supports the `.pt` model directly, so deployment can proceed without `ncnn`
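Before moving on, it can help to confirm which deployment artifacts actually exist. A small hypothetical helper sketch (the artifact paths come from this README; the function itself is illustrative):

```python
from pathlib import Path


def check_deploy_artifacts(repo_root: str) -> dict:
    """Report each expected deployment artifact's size in MB, or None if missing."""
    candidates = ["models/deer-best.pt", "models/deer-best.onnx"]
    report = {}
    for rel in candidates:
        p = Path(repo_root) / rel
        report[rel] = round(p.stat().st_size / 1e6, 2) if p.exists() else None
    return report
```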
Use this sequence now that the training run is complete.
Copy the finished checkpoint into the stable runtime model location:
```bash
cp runs/detect/deer-day-v1/weights/best.pt models/deer-best.pt
```

This gives the project one predictable deployment path: `models/deer-best.pt`

Generate the deployment export:

```bash
./.conda-train/bin/python -m training.export \
  --model models/deer-best.pt \
  --format onnx \
  --imgsz 640
```

Install the web app dependencies if needed:

```bash
./.conda-train/bin/python -m pip install -r webapp/requirements.txt
```

Run the web app:

```bash
./.conda-train/bin/streamlit run webapp/app.py
```

In the sidebar, point the model path at: `models/deer-best.pt`
Use image upload, video upload, or browser camera snapshot mode to confirm the finished checkpoint behaves as expected.
From the Mac:
```bash
rsync -av \
  --exclude '.git' \
  --exclude '.conda-train' \
  --exclude '.venv' \
  --exclude 'datasets' \
  --exclude '__pycache__' \
  --exclude '.DS_Store' \
  /Users/desmondknunoo/Desktop/Achendo/Achendo\ Software/aura_guard/ \
  <pi-user>@<pi-hostname>.local:~/aura_guard/

scp models/deer-best.pt \
  <pi-user>@<pi-hostname>.local:~/aura_guard/models/deer-best.pt
```

On the Pi:

```bash
cp config/runtime.pi.example.toml config/runtime.toml
python aura_guard_main.py --config config/runtime.toml
```

Bring the system up in this order:
- camera only
- detector only
- GPS
- LED strobe
- full daytime validation
When daytime Pi Camera testing is stable, switch to:

```bash
cp config/runtime.pi.thermal.example.toml config/runtime.toml
```

Then validate the FLIR/PureThermal path as a separate run.
If you want a quick browser-based test harness before touching the Pi runtime, use the separate web app in webapp/README.md.
Install the web app dependencies:
```bash
./.conda-train/bin/python -m pip install -r webapp/requirements.txt
```

Run the web app:

```bash
./.conda-train/bin/streamlit run webapp/app.py
```

What it supports:
- image upload
- video upload
- browser camera snapshots
- adjustable confidence threshold
- adjustable class IDs
- direct model-path testing against the same detector class used by the runtime
- JSON download of detections for image and video runs
The default model path is the current best available candidate:
- `models/deer-best.pt`
- then `runs/detect/deer-day-v1/weights/best.pt`
- then `yolo11n.pt`
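That fallback order amounts to a first-existing-path lookup. A sketch of the idea (not necessarily the exact logic in `webapp/app.py`):

```python
from pathlib import Path


def resolve_default_model(paths=None) -> str:
    """Return the first existing model path from the documented fallback order.

    The last entry is returned even if missing, since Ultralytics can
    auto-download a stock checkpoint like `yolo11n.pt` by name.
    """
    if paths is None:
        paths = [
            "models/deer-best.pt",
            "runs/detect/deer-day-v1/weights/best.pt",
            "yolo11n.pt",
        ]
    for candidate in paths[:-1]:
        if Path(candidate).exists():
            return candidate
    return paths[-1]
```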
The supported hosted option for the current web tester is Streamlit Community Cloud.
Use this sequence:
- push the repo to GitHub
- make sure `models/deer-best.pt` is present in the repo
- create a Streamlit app from that repo
- set the app entrypoint to `webapp/app.py`
- deploy it
This repo is already prepared for that path:
- `webapp/app.py` is the Streamlit entrypoint
- `webapp/requirements.txt` contains the web app dependencies
- `.streamlit/config.toml` contains the app theme and upload-size settings
- `models/deer-best.pt` is the stable default model path
Practical note:
- use Streamlit for hosted testing
- do not use Vercel as the main deployment target for this Streamlit app
On your Mac:
- open Raspberry Pi Imager
- choose Raspberry Pi 5
- choose Raspberry Pi OS (64-bit) Bookworm
- choose the microSD card
- open the advanced settings and set:
- hostname
- username and password
- Wi-Fi SSID and password
- enable SSH
- flash the card
- insert it into the Pi and boot
Power the Pi down before wiring.
- connect the camera ribbon to a Pi 5 camera connector
- make sure the ribbon orientation is correct
- fully seat the ribbon before locking the connector
- connect the Lepton to the PureThermal 2 board
- connect PureThermal 2 to a Pi USB port
- this path behaves like a USB/UVC camera in the current runtime
Wire:
- GPS `VIN` -> Pi `3.3V` or `5V` depending on the module
- GPS `GND` -> Pi `GND`
- GPS `RX` -> Pi `TX` (GPIO 14)
- GPS `TX` -> Pi `RX` (GPIO 15)
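Once wired, the GPS module streams NMEA sentences over the serial port. As an illustration of what the runtime has to decode, here is a minimal parser for the position fields of a GGA sentence (a sketch only; the actual runtime may use a parsing library instead):

```python
def parse_gga_position(sentence: str):
    """Parse latitude/longitude from an NMEA GGA sentence into decimal degrees.

    Returns (lat, lon), or None when the fix-quality field reports no lock.
    """
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or fields[6] == "0" or not fields[2]:
        return None  # fix-quality 0 means no GPS lock yet

    def to_degrees(value: str, hemisphere: str) -> float:
        # NMEA packs coordinates as ddmm.mmmm (lat) / dddmm.mmmm (lon).
        head, minutes = value.split(".")
        degrees = float(head[:-2])
        mins = float(head[-2:] + "." + minutes)
        result = degrees + mins / 60.0
        return -result if hemisphere in ("S", "W") else result

    return to_degrees(fields[2], fields[3]), to_degrees(fields[4], fields[5])
```

Indoor testing may never produce a sentence with a nonzero fix-quality field, which is why the troubleshooting section below treats "no GPS lock indoors" as expected.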
Wire:
- Pi `GPIO 18` -> `220 ohm` resistor -> MOSFET gate
- Pi `GND` -> MOSFET source
GND-> MOSFET source - LED negative -> MOSFET drain
- LED positive -> external LED power rail
Important:
- do not power a 10W LED directly from the Pi
- the external LED supply ground must be tied to Pi ground
- if there is no common ground, the LED switching path will not behave correctly
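Electrically the strobe is just a timed on/off pattern driven from GPIO 18. A hardware-agnostic sketch that accepts any pin-setter callback (on the Pi this could wrap a gpiozero `LED`; the function name is illustrative, not the runtime's strobe module):

```python
import time


def run_strobe(set_led, flashes: int = 5, on_s: float = 0.05, off_s: float = 0.05):
    """Drive a strobe pattern through a pluggable pin-setter callback.

    `set_led` is any callable taking True/False. On the Pi it could wrap
    gpiozero (e.g. `LED(18).on` / `.off`), while tests can pass a
    recording function, keeping the timing logic hardware-independent.
    """
    for _ in range(flashes):
        set_led(True)   # MOSFET gate high -> LED on via external supply
        time.sleep(on_s)
        set_led(False)  # gate low -> LED off
        time.sleep(off_s)
```

Because the callback is injected, the same pattern logic can be exercised on the Mac before the MOSFET wiring is even finished.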
From the Mac:
```bash
ssh <pi-user>@<pi-hostname>.local
```

Or:

```bash
ssh <pi-user>@<pi-ip>
```

Then on the Pi:

```bash
sudo apt update
sudo apt upgrade -y
```

On the Pi:

```bash
sudo raspi-config
```

Set:
- login shell over serial: No
- serial hardware: Yes
Then reboot:
```bash
sudo reboot
```

On the Pi:
```bash
sudo apt install -y \
  python3-venv \
  python3-pip \
  python3-opencv \
  python3-gpiozero \
  python3-picamera2 \
  git \
  v4l-utils
```

Create the runtime environment:

```bash
mkdir -p ~/aura_guard
cd ~/aura_guard
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
```

Copy the repo from the Mac:
```bash
rsync -av \
  --exclude '.git' \
  --exclude '.conda-train' \
  --exclude '.venv' \
  --exclude 'datasets' \
  --exclude '__pycache__' \
  --exclude '.DS_Store' \
  /Users/desmondknunoo/Desktop/Achendo/Achendo\ Software/aura_guard/ \
  <pi-user>@<pi-hostname>.local:~/aura_guard/
```

Copy the trained model:

```bash
scp runs/detect/deer-day-v1/weights/best.pt \
  <pi-user>@<pi-hostname>.local:~/aura_guard/models/deer-best.pt
```

What should exist on the Pi:
- repo source code
- `config/runtime.toml`
- `models/deer-best.pt`
On the Pi:
```bash
cd ~/aura_guard
source .venv/bin/activate
pip install -r requirements/pi-runtime.txt
```

For Pi Camera 3:

```bash
cp config/runtime.pi.example.toml config/runtime.toml
```

For FLIR PureThermal:

```bash
cp config/runtime.pi.thermal.example.toml config/runtime.toml
```

Only one of these should be active as `config/runtime.toml` per run.
Before running, edit config/runtime.toml and confirm:
- `detector.model_path = "models/deer-best.pt"`
- `strobe.enabled = true` only after camera/model loading works
- `gps.enabled = true` when GPS is wired and serial is enabled
- `alerting.enabled = false` unless you have a backend stub ready
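For orientation, those four settings would look roughly like this in TOML. This is an illustrative shape only; always start from the copied example file, whose actual table names and surrounding keys may differ:

```toml
# Illustrative shape only -- keep the structure of the copied example file.
[detector]
model_path = "models/deer-best.pt"

[strobe]
enabled = false   # flip to true only after camera + model load cleanly

[gps]
enabled = false   # enable once wiring and raspi-config serial are done

[alerting]
enabled = false   # leave off unless a backend stub is ready
```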
Check the Pi camera:
```bash
rpicam-hello
```

If using the FLIR config, list UVC devices:

```bash
v4l2-ctl --list-devices
```

Check the model file:

```bash
ls -lah ~/aura_guard/models/deer-best.pt
```

From the Pi repo:

```bash
python aura_guard_main.py --config config/runtime.toml
```

If the display window is enabled, press `q` to exit.
Use this order for the first full validation:
- boot the Pi and confirm SSH access
- verify the camera path
- verify the model file path
- run the app with camera only
- enable GPS
- enable LED strobe
- run the full daytime test
- test the FLIR path later as a separate run
There is no automated pytest suite in the repo yet. Use these checks instead:
```bash
python3 -m compileall aura_guard training webapp/app.py aura_guard_main.py
./.conda-train/bin/python -m training.train --help
./.conda-train/bin/python -m training.export --help
python3 -m training.prepare_dataset --help
python3 aura_guard_main.py --help
```

The minimum required files for deployment are:
- repo source
- `config/runtime.toml`
- `models/deer-best.pt`
If the current training run is the one you deploy, the source model will be:
runs/detect/deer-day-v1/weights/best.pt
Symptom:
- training fails early with a tensor stride/view error on Apple `mps`

Fix:
- rerun with `--device cpu --workers 0`
- or use the Anaconda Navigator manual training flow above
Symptom:
- training takes hours
Reason:
- the current stable path is CPU, not MPS
Checks:
- reseat the camera ribbon
- verify ribbon orientation
- run `rpicam-hello`
- confirm you copied the Pi Camera config, not the thermal config
Checks:
- run `v4l2-ctl --list-devices`
- confirm PureThermal 2 is plugged in over USB
- confirm the thermal config is active
Checks:
- indoor testing may never lock
- confirm `raspi-config` serial settings
- check `/dev/serial0`
Checks:
- verify external LED power
- verify common ground with the Pi
- verify MOSFET orientation
- verify `GPIO 18` wiring
- verify `strobe.enabled = true`
Checks:
- confirm `models/deer-best.pt` exists
- confirm `config/runtime.toml` was copied from the correct example
- confirm only one camera path is active per run
Checks:
- install `webapp/requirements.txt`
- start it with `./.conda-train/bin/streamlit run webapp/app.py`
- confirm the model path in the sidebar exists
- if you want a stable checkpoint while training is still running, copy the current best model to `models/deer-best.pt`
Checks:
- lower the confidence threshold in the sidebar
- confirm the model path is pointing at a real deer detector checkpoint
- test with `runs/detect/deer-day-v1/weights/best.pt` first
- use an image with clear daytime deer visibility before testing harder scenes
```bash
./.conda-train/bin/python -m training.train --help
./.conda-train/bin/python -m training.export --help
python3 -m training.prepare_dataset --help
./.conda-train/bin/python -m training.add_coco_negatives --help
python3 aura_guard_main.py --help
./.conda-train/bin/streamlit run webapp/app.py
```