This project is based on S.M. Silva and C.R. Jung's implementation of ECCV 2018 paper "License Plate Detection and Recognition in Unconstrained Scenarios".
- Paper webpage: http://sergiomsilva.com/pubs/alpr-unconstrained/
The original authors request that you cite the following paper if results produced by this code are published in any publication:
@INPROCEEDINGS{silva2018a,
  author={S. M. Silva and C. R. Jung},
  booktitle={2018 European Conference on Computer Vision (ECCV)},
  title={License Plate Detection and Recognition in Unconstrained Scenarios},
  year={2018},
  pages={580-596},
  doi={10.1007/978-3-030-01258-8_36},
  month={Sep},
}
I am further developing this project for personal use: to run it in real time on a Jetson Nano and open my gate when I arrive home :-). After porting the code to recent Python versions, I am aiming to use the Nano's GPU acceleration to speed things up. The idea is to send the resulting number plates via MQTT or REST to my OpenHAB home automation system.
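The MQTT side of that idea can be sketched as follows. This is a minimal, untested illustration assuming the paho-mqtt package; the host name, topic, and JSON payload schema below are hypothetical placeholders, not part of this project:

```python
import json
import time


def plate_payload(plate, confidence):
    """Build a JSON payload for a recognized plate (hypothetical schema)."""
    return json.dumps({
        "plate": plate,
        "confidence": confidence,
        "timestamp": int(time.time()),
    })


def publish_plate(host, topic, plate, confidence):
    """Publish one recognized plate to an MQTT broker.

    Requires the paho-mqtt package; the paho-mqtt 1.x constructor is used
    here (2.x additionally takes a CallbackAPIVersion argument).
    """
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.connect(host, 1883)
    client.publish(topic, plate_payload(plate, confidence))
    client.disconnect()
```

A call such as `publish_plate("openhab.local", "alpr/gate/plate", "AB-123-C", 0.97)` (hypothetical broker and topic) could then feed an OpenHAB MQTT item; the same payload could just as easily be POSTed to the OpenHAB REST API instead.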
To run the code, you must have the Keras framework with the TensorFlow backend installed. The Darknet framework is self-contained in the "darknet" folder and must be compiled before running the tests. To build Darknet, just type "make" in the "darknet" folder:
$ cd darknet && make
The current version was tested on an Ubuntu 20.04 machine with Keras 2.2.4, TensorFlow 1.5.0, OpenCV 4.2.0, NumPy 1.17 and Python 3.8.
After building the Darknet framework, you must execute the "get-networks.sh" script. This will download all the trained models:
$ bash get-networks.sh
Use the script "run.sh" to run the ALPR pipeline. It requires 3 arguments:
- Input directory (-i): should contain at least 1 image in JPG or PNG format;
- Output directory (-o): many temporary files are generated in this directory during recognition and erased at the end. The remaining files are the automatically annotated images;
- CSV file (-c): specify an output CSV file.
$ bash get-networks.sh && bash run.sh -i samples/test -o /tmp/output -c /tmp/output/results.csv
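For downstream automation (e.g. forwarding plates to a home automation system), the results CSV can be loaded with a few lines of Python. The exact column layout of the CSV is not documented here, so this sketch deliberately returns plain row lists instead of assuming named fields:

```python
import csv


def read_results(csv_path):
    """Load the ALPR results CSV into a list of rows.

    Rows are returned as plain lists of strings, since the column layout
    depends on the pipeline version producing the file.
    """
    with open(csv_path, newline="") as f:
        return [row for row in csv.reader(f)]


# Example (path from the run.sh invocation above):
# rows = read_results("/tmp/output/results.csv")
```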
To train the LP detector network from scratch, or to fine-tune it on new samples, use the train-detector.py script. The folder samples/train-detector contains 3 annotated samples, provided for demonstration purposes only. To reproduce the original experiments, this folder must be filled with all the annotations provided in the training set, along with their respective images from the original datasets.
The following command can be used to train the network from scratch considering the data inside the train-detector folder:
$ mkdir models
$ python create-model.py eccv models/eccv-model-scratch
$ python train-detector.py --model models/eccv-model-scratch --name my-trained-model --train-dir samples/train-detector --output-dir models/my-trained-model/ -op Adam -lr .001 -its 300000 -bs 64
For fine-tuning, pass your own pre-trained model via the --model option.