Registration Plate Detector

This repository provides a complete pipeline for detecting vehicle registration plates using Ultralytics' YOLOv11 framework. It automates downloading, unzipping, and converting the raw dataset into YOLO-formatted train/validation splits with normalized annotations. The notebook demonstrates visualizing ground-truth bounding boxes, training and validating a YOLOv11s model, and evaluating its performance using COCO metrics. Finally, it includes helper functions for running inference on both images and videos, producing annotated outputs to showcase real-time plate detection.


Example Results

Below are a few sample predictions from the trained model, showcasing its performance on validation images. Each image shows the predicted bounding boxes with confidence scores alongside the ground-truth boxes for comparison.

Sample Predictions:

Inference over static images

Example 1
Example 2
Example 3

Inference over a video

Watch the demo

Training Performance

Training and validation loss/accuracy curves:

What it does

  • Dataset Preparation: Automatically downloads and extracts the raw vehicle registration plate dataset, then converts the annotations into YOLO-compatible format with normalized bounding boxes.

  • Data Organization: Creates a structured directory hierarchy (train/, valid/, labels/, images/) and sets up symlinks for easy training and validation splits.

  • Visualization: Samples images at random to display ground-truth bounding boxes and, later, side-by-side comparisons of predictions vs. annotations.

  • Training & Validation: Generates a YAML configuration for Ultralytics’ YOLOv11s model, kicks off training for a specified number of epochs, and evaluates performance on the held-out validation set.

  • Evaluation: Converts predictions and ground-truths into COCO format and runs the standard COCO evaluation script to compute AP metrics.

  • Inference: Provides helper functions to run detection on individual images or entire videos, drawing and saving annotated outputs for real-time plate detection demos.
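The corner-to-YOLO annotation conversion described above can be sketched as follows. The function name matches the one mentioned later in this README, but its exact signature is an assumption:

```python
def two_corner_boxes_to_normalized_bbox(x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert two-corner coordinates into YOLO's [x_center, y_center, width, height],
    normalized to the image size and rounded to six decimals (signature assumed)."""
    x_center = round((x_min + x_max) / 2 / img_w, 6)
    y_center = round((y_min + y_max) / 2 / img_h, 6)
    width = round((x_max - x_min) / img_w, 6)
    height = round((y_max - y_min) / img_h, 6)
    return x_center, y_center, width, height
```

Each converted row is written to a per-image `.txt` label file, as YOLO expects.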


Run Instructions

Clone the repository and set up a Python environment:

git clone https://github.com/sancho11/registration_plate_detector.git
cd registration_plate_detector
python -m venv .venv
source .venv/bin/activate  # On Windows, use: .venv\Scripts\activate
pip install -r requirements.txt

To run the project:

# Run the notebook pipeline with Jupyter
jupyter notebook
# Train a model
python train.py
# Run evaluation metrics on a trained model
python evaluate.py
# Run detection on a single image or video
python infer.py

Training

python train.py --epochs 100 --batch 16
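Training relies on a minimal Ultralytics dataset YAML like the one the notebook generates (`nc`, `names`, paths). A sketch, with the file paths and class name assumed:

```yaml
# yolo_dataset/dataset.yaml (illustrative)
path: yolo_dataset
train: train/images
val: valid/images
nc: 1
names: ["registration_plate"]
```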

Evaluating

Generate a confusion matrix and run inference over the whole dataset:

python evaluate.py --model model/path/checkpoint --output-dir results/directory/path
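The COCO AP metrics reported here match predictions to ground truth by IoU threshold. As a self-contained illustration of that matching criterion (not code from this repo), IoU between two `[x_min, y_min, x_max, y_max]` boxes:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x_min, y_min, x_max, y_max]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0
```

COCO's main AP figure averages over IoU thresholds from 0.50 to 0.95 in steps of 0.05.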

Inferring

Run inference on a single image or video using a trained model:

python infer.py --model path/to/model /path/to/image/or/video output/file/path/name

Pipeline Overview

Pipeline Diagram


Key Techniques & Notes

  • Annotation Conversion: Uses a custom two_corner_boxes_to_normalized_bbox function to convert raw corner-based annotations into YOLO’s [x_center, y_center, width, height] format, ensuring six-decimal precision for stable training.
  • Symlinked Data Splits: Leverages filesystem symlinks to avoid duplicating large image folders, while organizing train/ and valid/ directories under a single yolo_dataset/ root.
  • Ultralytics YOLO Integration: Generates a minimal YAML config (nc, names, paths), toggles TensorBoard logging via !yolo settings tensorboard=True, and executes yolo task=detect mode=train … for streamlined training.
  • Visualization Utilities: Implements plot_box, show_random_samples, and show_random_predictions to render ground-truth and predicted boxes side by side using OpenCV and Matplotlib, aiding qualitative analysis.
  • COCO-Style Evaluation: Converts annotations and detections into COCO JSON format with generate_dataset_and_preditions_cocoeval_format, then runs pycocotools.COCOeval to report standard AP metrics.
  • Video Inference Pipeline: Defines video_read_write to iterate over video frames, apply YOLO detection, draw boxes, and write annotated output—demonstrating real-time plate detection on arbitrary video files.
  • Modular, Well-Documented Code: All steps are wrapped in self-contained functions with clear docstrings, making it easy to adapt or extend for different datasets or model architectures.
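The symlinked data-split technique noted above can be sketched with pathlib; the yolo_dataset/ layout follows the repo's convention, while the function name and glob pattern are assumptions:

```python
from pathlib import Path

def link_split(src_images, root, split):
    """Symlink every image from src_images into root/<split>/images/ without copying."""
    dst = Path(root) / split / "images"
    dst.mkdir(parents=True, exist_ok=True)
    for img in Path(src_images).glob("*.jpg"):
        link = dst / img.name
        if not link.exists():
            link.symlink_to(img.resolve())
```

Because only symlinks are created, regenerating or reshuffling splits is cheap even for large image folders.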
