desmondknunoo/aura_guard

Aura-Guard

Aura-Guard is a Raspberry Pi wildlife-detection prototype. The first target is daytime deer detection; thermal support is planned as a later path.

Engineering Team

The current working split is:

  • MacBook Pro M1 Pro: dataset prep, training, validation, and model export
  • Raspberry Pi 5: live camera inference, LED trigger, GPS capture, and runtime testing

Important limitations right now:

  • the runtime supports one camera source per run
  • there is no simultaneous Pi Camera 3 + FLIR fusion yet
  • the current prepared dataset is visible-light only, not thermal

Dependency inventory: see DEPENDENCIES.md.

Supported browser test path: the Streamlit web app under webapp/ (section 10 below).

Current Project Status

As of April 3, 2026:

  • the deer dataset has been prepared inside datasets/deer
  • all deer-like subclasses were collapsed into a single deer class
  • the local Mac training environment is ./.conda-train
  • the completed training run is runs/detect/deer-day-v1
  • final weights exist at runs/detect/deer-day-v1/weights/best.pt
  • Apple mps failed on the first backward pass, so the stable training path is now cpu

Final training outcome:

  • the run stopped at epoch 94
  • early stopping triggered as expected: patience=20 means no improvement for 20 epochs after the best epoch (74)
  • best epoch: 74
  • best mAP50: 0.99098
  • best mAP50-95: 0.84945
  • last epoch: 94
  • last mAP50: 0.98414
  • last mAP50-95: 0.83016

Early stopping already kept the best checkpoint, so the current model is ready for testing and deployment; there is no need to force another run to the full 100 epochs.

Project Layout

.
├── aura_guard/                  # Runtime package
├── config/                      # Example runtime configs
├── datasets/                    # Prepared dataset and notes
├── models/                      # Deployment model destination
├── requirements/                # Mac and Pi dependencies
├── training/                    # Dataset prep, training, export
├── webapp/                      # Separate browser-based test harness
├── aura_guard_main.py           # Runtime CLI entrypoint
├── DEPENDENCIES.md              # Dependency inventory, reasons, versions
├── LICENSE.md                   # Code license status and dataset attribution
├── hardware_prototype_spec.md   # Product and hardware concept
└── raspberry_pi_setup_guide.md  # Pi guide (README now includes the same flow)

End-To-End Process

This is the full process from start to finish for the current prototype.

1. Choose The First Scope

The first scope is intentionally narrow:

  1. use Pi Camera 3 for daytime detection
  2. detect deer
  3. trigger an LED strobe
  4. capture GPS location
  5. defer cloud/network and FLIR fusion until the daytime path is stable

You can still wire the FLIR Lepton 3.5 + PureThermal 2, but for the first demo, Pi Camera 3 + GPS + LED is the simpler path.

2. Hardware Needed

For the first full bring-up, have these parts ready:

  • Raspberry Pi 5
  • microSD card
  • stable Pi 5 USB-C power supply
  • Raspberry Pi Camera Module 3
  • FLIR Lepton 3.5 + PureThermal 2
  • GPS module
  • 10W LED
  • MOSFET for LED switching
  • 220 ohm resistor for the MOSFET gate
  • external power source for the LED
  • jumper wires and camera ribbon

3. Prepare The Dataset On The Mac

The current repo expects the deer dataset under datasets/deer.

Current source dataset:

  • name: deer open dataset Computer Vision Dataset
  • source: Roboflow Universe
  • task: Object Detection
  • license: CC BY 4.0
  • original source classes: Deer, deer, alageyik, Fallow, kgeyik, MuskDeer, RoeDeer, SikaDeer, spotted deer
  • user-reported snapshot on April 3, 2026: 627 views, 42 downloads

The local training copy in this repo does not keep those 9 classes separate. For this prototype, they were collapsed into a single target class:

  • deer

If you import a fresh Roboflow-style YOLOv8 ZIP, prepare it with:

python3 -m training.prepare_dataset \
  --zip "deer open dataset.v1i.yolov8.zip" \
  --output datasets/deer \
  --collapse-to-one-class

The --collapse-to-one-class rewrite matters because this prototype targets exactly one class: deer.
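
For reference, the collapse step amounts to rewriting every YOLO label line so its class index becomes 0. A minimal sketch of that rewrite, assuming standard YOLO txt labels (the real logic lives in training/prepare_dataset.py):

```python
from pathlib import Path

def collapse_labels(labels_dir: str) -> int:
    """Rewrite every YOLO label file so all class indices become 0 ('deer')."""
    changed = 0
    for label_file in Path(labels_dir).rglob("*.txt"):
        out = []
        for line in label_file.read_text().splitlines():
            parts = line.split()
            if parts:                       # "class cx cy w h" -> "0 cx cy w h"
                parts[0] = "0"
                out.append(" ".join(parts))
        label_file.write_text("\n".join(out) + ("\n" if out else ""))
        changed += 1
    return changed
```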

If you want to add hard negatives from the Kaggle COCO 2017 bundle, use the helper below. It downloads the dataset through kagglehub, selects images that contain the requested non-deer categories, and writes empty YOLO label files into the existing deer dataset:

./.conda-train/bin/python -m training.add_coco_negatives \
  --dataset-root datasets/deer \
  --source-split val2017 \
  --target-split val \
  --limit 250

Recommended negative categories for this project are:

  • person
  • dog
  • cat
  • horse
  • sheep
  • cow
  • elephant
  • bear
  • zebra
  • giraffe
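
Conceptually, a hard negative is just an image paired with an empty label file, which teaches the detector that such scenes contain no deer. A minimal sketch of that final write step (the real selection logic lives in training/add_coco_negatives.py; the layout below assumes the usual YOLO labels/<split>/ tree):

```python
from pathlib import Path

def add_negative(image_path: str, dataset_root: str, split: str) -> Path:
    """Register an image as a hard negative by writing an empty YOLO
    label file into the dataset's label tree for the given split."""
    stem = Path(image_path).stem
    label_path = Path(dataset_root) / "labels" / split / f"{stem}.txt"
    label_path.parent.mkdir(parents=True, exist_ok=True)
    label_path.write_text("")   # empty file = "no objects in this image"
    return label_path
```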

Dataset licensing and citation notes are in LICENSE.md.

4. Install The Mac Training Environment

Use the project-local training environment:

./.conda-train/bin/python -m pip install -r requirements/mac-training.txt

If you want to use Anaconda Navigator instead, see the manual training section below.

5. Train The Model On The Mac

Recommended stable training command on the M1 Pro:

./.conda-train/bin/python -m training.train \
  --data datasets/deer/data.yaml \
  --model yolo11n.pt \
  --epochs 100 \
  --imgsz 640 \
  --batch 8 \
  --workers 0 \
  --device cpu \
  --name deer-day-v1

Why this is the stable path:

  • mps on this machine already failed once during backward propagation
  • cpu is slower, but it is the most reliable path so far
  • workers 0 avoids extra multiprocessing issues on this setup

If you want the script to try mps first and fall back if it crashes:

./.conda-train/bin/python -m training.train \
  --data datasets/deer/data.yaml \
  --model yolo11n.pt \
  --epochs 100 \
  --imgsz 640 \
  --batch 8 \
  --workers 0 \
  --name deer-day-v1-auto
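
The auto path boils down to: try mps when PyTorch reports it available, and rerun on cpu if the accelerator throws. A hedged sketch of that selection and fallback logic (the real version lives in training/train.py; `train_fn` stands in for the actual training call):

```python
def pick_device(requested: str = "") -> str:
    """Return an explicit device string, preferring Apple MPS when present."""
    if requested:
        return requested
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.backends.mps.is_available():
        return "mps"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

def train_with_fallback(train_fn, device: str) -> str:
    """Run train_fn(device); if the accelerator crashes, retry once on cpu."""
    try:
        train_fn(device)
        return device
    except RuntimeError:
        if device != "cpu":
            train_fn("cpu")   # e.g. the MPS stride/view failure seen on this Mac
            return "cpu"
        raise
```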

6. Manual Training In Anaconda Navigator

If the project-local env is inconvenient, train manually in Anaconda Navigator:

  1. Open Anaconda Navigator
  2. Open Environments
  3. Create a new environment named aura-guard-train
  4. Choose Python 3.11
  5. Select the environment and click Open Terminal

Then run:

cd "/Users/desmondknunoo/Desktop/Achendo/Achendo Software/aura_guard"
python -m pip install --upgrade pip
python -m pip install -r requirements/mac-training.txt
python -m training.train \
  --data datasets/deer/data.yaml \
  --model yolo11n.pt \
  --epochs 100 \
  --imgsz 640 \
  --batch 8 \
  --workers 0 \
  --device cpu \
  --name deer-day-v1-manual

If you remove --device cpu, the script will try Apple mps, but that backend has already failed once on this project.

7. Monitor Training

Watch the metrics:

tail -f runs/detect/deer-day-v1/results.csv

Check the weights:

ls -lah runs/detect/deer-day-v1/weights

Check that the training process is still running:

ps -ax -o pid= -o state= -o %cpu= -o %mem= -o command= | rg "training\.train|deer-day-v1"

Training time expectation on the MacBook Pro M1 Pro:

  • if mps runs cleanly, often 25 to 60 minutes
  • on the current stable cpu path, expect several hours
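
To check the best epoch without scanning the CSV by hand, a small helper can parse results.csv. The column names here are assumed from the usual Ultralytics results format (e.g. metrics/mAP50(B), with whitespace-padded headers); adjust if your version differs:

```python
import csv

def best_epoch(results_csv: str, metric: str = "metrics/mAP50(B)"):
    """Return (epoch, value) for the best row of an Ultralytics results.csv."""
    best = None
    with open(results_csv, newline="") as f:
        for row in csv.DictReader(f):
            # Ultralytics pads headers and values with spaces, so strip both.
            clean = {k.strip(): v.strip() for k, v in row.items()}
            value = float(clean[metric])
            epoch = int(float(clean["epoch"]))
            if best is None or value > best[1]:
                best = (epoch, value)
    return best
```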

8. Export The Model

The stable runtime model for the current Pi app is still:

  • models/deer-best.pt

An exported side artifact was also generated as:

  • models/deer-best.onnx

Export command used successfully in the current environment:

./.conda-train/bin/python -m training.export \
  --model runs/detect/deer-day-v1/weights/best.pt \
  --format onnx \
  --imgsz 640

For the first Pi test, the most important file is still:

  • models/deer-best.pt

Note:

  • ncnn export is blocked in the current Python 3.11 training environment because the available ncnn wheel does not match it
  • the current runtime already supports the .pt model directly, so deployment can proceed without ncnn

9. Recommended Sequence After Training

Use this sequence now that the training run is complete.

Step 1. Freeze The Best Checkpoint

Copy the finished checkpoint into the stable runtime model location:

cp runs/detect/deer-day-v1/weights/best.pt models/deer-best.pt

This gives the project one predictable deployment path:

  • models/deer-best.pt

Step 2. Export The Deployment Model

Generate the deployment export:

./.conda-train/bin/python -m training.export \
  --model models/deer-best.pt \
  --format onnx \
  --imgsz 640

Step 3. Test The Finished Model In The Web App

Install the web app dependencies if needed:

./.conda-train/bin/python -m pip install -r webapp/requirements.txt

Run the web app:

./.conda-train/bin/streamlit run webapp/app.py

In the sidebar, point the model path at:

  • models/deer-best.pt

Use image upload, video upload, or browser camera snapshot mode to confirm the finished checkpoint behaves as expected.

Step 4. Copy The Repo And Model To The Pi

From the Mac:

rsync -av \
  --exclude '.git' \
  --exclude '.conda-train' \
  --exclude '.venv' \
  --exclude 'datasets' \
  --exclude '__pycache__' \
  --exclude '.DS_Store' \
  /Users/desmondknunoo/Desktop/Achendo/Achendo\ Software/aura_guard/ \
  <pi-user>@<pi-hostname>.local:~/aura_guard/
scp models/deer-best.pt \
  <pi-user>@<pi-hostname>.local:~/aura_guard/models/deer-best.pt

Step 5. Start With The Pi Camera Runtime

On the Pi:

cp config/runtime.pi.example.toml config/runtime.toml
python aura_guard_main.py --config config/runtime.toml

Step 6. Enable Hardware In Order

Bring the system up in this order:

  1. camera only
  2. detector only
  3. GPS
  4. LED strobe
  5. full daytime validation

Step 7. Test The FLIR Path Separately Later

When daytime Pi Camera testing is stable, switch to:

cp config/runtime.pi.thermal.example.toml config/runtime.toml

Then validate the FLIR/PureThermal path as a separate run.

10. Optional Web App Testing On The Mac

If you want a quick browser-based test harness before touching the Pi runtime, use the separate web app in webapp/README.md.

Install the web app dependencies:

./.conda-train/bin/python -m pip install -r webapp/requirements.txt

Run the web app:

./.conda-train/bin/streamlit run webapp/app.py

What it supports:

  • image upload
  • video upload
  • browser camera snapshots
  • adjustable confidence threshold
  • adjustable class IDs
  • direct model-path testing against the same detector class used by the runtime
  • JSON download of detections for image and video runs

The default model path is the current best available candidate:

  • models/deer-best.pt
  • then runs/detect/deer-day-v1/weights/best.pt
  • then yolo11n.pt

10A. Hosted Web Testing With Streamlit

The supported hosted option for the current web tester is Streamlit Community Cloud.

Use this sequence:

  1. push the repo to GitHub
  2. make sure models/deer-best.pt is present in the repo
  3. create a Streamlit app from that repo
  4. set the app entrypoint to webapp/app.py
  5. deploy it

This repo is already prepared for that path:

  • webapp/app.py is the Streamlit entrypoint
  • webapp/requirements.txt contains the web app dependencies
  • .streamlit/config.toml contains the app theme and upload-size settings
  • models/deer-best.pt is the stable default model path

Practical note:

  • use Streamlit for hosted testing
  • do not use Vercel as the main deployment target for this Streamlit app

11. Prepare The Raspberry Pi 5 SD Card

On your Mac:

  1. open Raspberry Pi Imager
  2. choose Raspberry Pi 5
  3. choose Raspberry Pi OS (64-bit) Bookworm
  4. choose the microSD card
  5. open the advanced settings and set:
    • hostname
    • username and password
    • Wi-Fi SSID and password
    • enable SSH
  6. flash the card
  7. insert it into the Pi and boot

12. Wire The Hardware

Power the Pi down before wiring.

Pi Camera 3

  • connect the camera ribbon to a Pi 5 camera connector
  • make sure the ribbon orientation is correct
  • fully seat the ribbon before locking the connector

FLIR Lepton 3.5 + PureThermal 2

  • connect the Lepton to the PureThermal 2 board
  • connect PureThermal 2 to a Pi USB port
  • this path behaves like a USB/UVC camera in the current runtime

GPS Module

Wire:

  • GPS VIN -> Pi 3.3V or 5V depending on the module
  • GPS GND -> Pi GND
  • GPS RX -> Pi TX (GPIO 14)
  • GPS TX -> Pi RX (GPIO 15)
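
Once wired, the GPS module streams NMEA sentences over the Pi's serial port (/dev/serial0). A minimal sketch of decoding latitude/longitude from a GPGGA sentence; this is a pure-Python illustration, not the runtime's actual GPS reader, and on the Pi the lines would be read via a serial library:

```python
def parse_gga(sentence: str):
    """Extract (lat, lon) in decimal degrees from a $GPGGA NMEA sentence.
    Returns None when there is no fix (empty lat/lon fields)."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or not fields[2] or not fields[4]:
        return None

    def to_degrees(value: str, hemisphere: str) -> float:
        # NMEA packs degrees and minutes together: ddmm.mmmm / dddmm.mmmm
        head, minutes = divmod(float(value), 100.0)
        degrees = head + minutes / 60.0
        return -degrees if hemisphere in ("S", "W") else degrees

    return (to_degrees(fields[2], fields[3]), to_degrees(fields[4], fields[5]))
```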

LED Strobe Through MOSFET

Wire:

  • Pi GPIO 18 -> 220 ohm resistor -> MOSFET gate
  • Pi GND -> MOSFET source
  • LED negative -> MOSFET drain
  • LED positive -> external LED power rail

Important:

  • do not power a 10W LED directly from the Pi
  • the external LED supply ground must be tied to Pi ground
  • if there is no common ground, the LED switching path will not behave correctly
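
On the software side, GPIO 18 only needs a simple on/off schedule. A sketch of a strobe pattern generator; on the Pi the same timings would typically be handed to gpiozero (for example, blinking an LED object bound to pin 18) rather than replayed in a pure-Python loop:

```python
def strobe_schedule(freq_hz: float, duration_s: float):
    """Return a list of (state, seconds) steps for an LED strobe:
    alternating True/False at freq_hz for duration_s total."""
    if freq_hz <= 0 or duration_s <= 0:
        return []
    half = 1.0 / (2.0 * freq_hz)            # on-time == off-time
    steps = round(duration_s / half)
    return [(i % 2 == 0, half) for i in range(steps)]
```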

13. SSH Into The Pi And Update It

From the Mac:

ssh <pi-user>@<pi-hostname>.local

Or:

ssh <pi-user>@<pi-ip>

Then on the Pi:

sudo apt update
sudo apt upgrade -y

14. Enable Serial For GPS

On the Pi:

sudo raspi-config

Set:

  • login shell over serial: No
  • serial hardware: Yes

Then reboot:

sudo reboot

15. Install Pi Runtime Dependencies

On the Pi:

sudo apt install -y \
  python3-venv \
  python3-pip \
  python3-opencv \
  python3-gpiozero \
  python3-picamera2 \
  git \
  v4l-utils

Create the runtime environment:

mkdir -p ~/aura_guard
cd ~/aura_guard
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip

16. Copy The Repo And Model To The Pi

Copy the repo from the Mac:

rsync -av \
  --exclude '.git' \
  --exclude '.conda-train' \
  --exclude '.venv' \
  --exclude 'datasets' \
  --exclude '__pycache__' \
  --exclude '.DS_Store' \
  /Users/desmondknunoo/Desktop/Achendo/Achendo\ Software/aura_guard/ \
  <pi-user>@<pi-hostname>.local:~/aura_guard/

Copy the trained model:

scp runs/detect/deer-day-v1/weights/best.pt \
  <pi-user>@<pi-hostname>.local:~/aura_guard/models/deer-best.pt

What should exist on the Pi:

  • repo source code
  • config/runtime.toml
  • models/deer-best.pt

17. Install Python Requirements On The Pi

On the Pi:

cd ~/aura_guard
source .venv/bin/activate
pip install -r requirements/pi-runtime.txt

18. Choose The Runtime Config

For Pi Camera 3:

cp config/runtime.pi.example.toml config/runtime.toml

For FLIR PureThermal:

cp config/runtime.pi.thermal.example.toml config/runtime.toml

Only one of these should be active as config/runtime.toml per run.

Before running, edit config/runtime.toml and confirm:

  • detector.model_path = "models/deer-best.pt"
  • strobe.enabled = true only after camera/model loading works
  • gps.enabled = true when GPS is wired and serial is enabled
  • alerting.enabled = false unless you have a backend stub ready
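
Put together, a daytime-first runtime.toml would look roughly like this. The key names come from the bullets above; any other fields in the shipped example configs should be left at their defaults:

```toml
[detector]
model_path = "models/deer-best.pt"

[strobe]
enabled = false   # flip to true once camera + model loading works

[gps]
enabled = false   # flip to true once wiring and serial are verified

[alerting]
enabled = false   # keep off unless a backend stub is ready
```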

19. Verify The Hardware On The Pi

Check the Pi camera:

rpicam-hello

If using the FLIR config, list UVC devices:

v4l2-ctl --list-devices

Check the model file:

ls -lah ~/aura_guard/models/deer-best.pt

20. Run The App On The Pi

From the Pi repo:

python aura_guard_main.py --config config/runtime.toml

If the display window is enabled, press q to exit.

21. First Test Order

Use this order for the first full validation:

  1. boot the Pi and confirm SSH access
  2. verify the camera path
  3. verify the model file path
  4. run the app with camera only
  5. enable GPS
  6. enable LED strobe
  7. run the full daytime test
  8. test the FLIR path later as a separate run

22. Basic Smoke Checks

There is no automated pytest suite in the repo yet. Use these checks instead:

python3 -m compileall aura_guard training webapp/app.py aura_guard_main.py
./.conda-train/bin/python -m training.train --help
./.conda-train/bin/python -m training.export --help
python3 -m training.prepare_dataset --help
python3 aura_guard_main.py --help

What To Copy To The Pi

The minimum required files for deployment are:

  • repo source
  • config/runtime.toml
  • models/deer-best.pt

If the current training run is the one you deploy, the source model will be:

  • runs/detect/deer-day-v1/weights/best.pt

Troubleshooting

MPS Training Crash On Mac

Symptom:

  • training fails early with a tensor stride/view error on Apple mps

Fix:

  • rerun with --device cpu --workers 0
  • or use the Anaconda Navigator manual training flow above

Training Seems Too Slow

Symptom:

  • training takes hours

Reason:

  • the current stable path is CPU, not MPS

Pi Camera Not Found

Checks:

  • reseat the camera ribbon
  • verify ribbon orientation
  • run rpicam-hello
  • confirm you copied the Pi Camera config, not the thermal config

FLIR Camera Not Found

Checks:

  • run v4l2-ctl --list-devices
  • confirm PureThermal 2 is plugged in over USB
  • confirm the thermal config is active

No GPS Fix

Checks:

  • indoor testing may never lock
  • confirm raspi-config serial settings
  • check /dev/serial0

LED Does Not Fire

Checks:

  • verify external LED power
  • verify common ground with the Pi
  • verify MOSFET orientation
  • verify GPIO 18 wiring
  • verify strobe.enabled = true

Runtime Fails Immediately

Checks:

  • confirm models/deer-best.pt exists
  • confirm config/runtime.toml was copied from the correct example
  • confirm only one camera path is active per run

Web App Does Not Start

Checks:

  • install webapp/requirements.txt
  • start it with ./.conda-train/bin/streamlit run webapp/app.py
  • confirm the model path in the sidebar exists
  • if you want a stable checkpoint while training is still running, copy the current best model to models/deer-best.pt

Web App Loads But Shows No Detections

Checks:

  • lower the confidence threshold in the sidebar
  • confirm the model path is pointing at a real deer detector checkpoint
  • test with runs/detect/deer-day-v1/weights/best.pt first
  • use an image with clear daytime deer visibility before testing harder scenes

Command Reference

Training CLI Help

./.conda-train/bin/python -m training.train --help

Export CLI Help

./.conda-train/bin/python -m training.export --help

Dataset Prep CLI Help

python3 -m training.prepare_dataset --help

COCO Hard-Negative CLI Help

./.conda-train/bin/python -m training.add_coco_negatives --help

Runtime CLI Help

python3 aura_guard_main.py --help

Run The Web App

./.conda-train/bin/streamlit run webapp/app.py
