
Commit

Update contents
ozora-ogino committed Dec 1, 2021
1 parent 8936795 commit 9d49bac
Showing 8 changed files with 79 additions and 10 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -22,4 +22,4 @@ repos:
rev: v4.0.1
hooks:
- id: check-added-large-files
args: ["--maxkb=100"]
args: ["--maxkb=500"]
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2020 Ozora Ogino (ozora-ogino)
Copyright (c) 2021 Ozora Ogino (ozora-ogino)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
57 changes: 52 additions & 5 deletions README.md
@@ -1,10 +1,57 @@
<h1 align="center">
TensorFlow Lite Human Tracking
</h1>

## SORT
![](./outputs/trim10s.mp4_yolov5l-fp16.tflite.mp4)

> A simple online and realtime tracking algorithm for 2D multiple object tracking in video sequences.
The motivation of TensorFlow Lite Human Tracking is developing a person tracking system for edge cameras.
For example, counting the number of visitors at a location.
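A visitor counter of that kind boils down to a line-crossing check on tracked centroids. The sketch below is purely illustrative (the function and its inputs are hypothetical, not code from this repository):

```python
# Illustrative sketch: count tracks whose centroid crosses a vertical line.
# `tracks` and `count_crossings` are hypothetical names, not from this repo.

def count_crossings(tracks, line_x):
    """Count how many tracks cross the vertical line x = line_x.

    tracks: dict mapping track_id -> list of (x, y) centroids over frames.
    """
    count = 0
    for centroids in tracks.values():
        xs = [x for x, _ in centroids]
        # A crossing happens when two consecutive centroids straddle the line.
        if any((a - line_x) * (b - line_x) < 0 for a, b in zip(xs, xs[1:])):
            count += 1
    return count
```

For example, a track moving from x=10 to x=60 crosses the line x=50, while one staying between x=80 and x=90 does not.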

https://github.com/abewley/sort
To detect and track people across frames, DeepSORT is adopted.

## Dataset
For details about DeepSORT, refer to [this great article](https://medium.com/augmented-startups/deepsort-deep-learning-applied-to-object-tracking-924f59f99104).
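At its core, this family of trackers associates each existing track with the detection that overlaps it most. A greedy IoU matching sketch conveys the idea (SORT/DeepSORT actually use Hungarian assignment on Kalman-predicted boxes plus appearance features; this simplification is for illustration only):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def greedy_match(tracks, detections, iou_threshold=0.3):
    """Greedily pair each track with its best-IoU unused detection."""
    matches, used = [], set()
    for t_idx, t_box in enumerate(tracks):
        best, best_iou = None, iou_threshold
        for d_idx, d_box in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches.append((t_idx, best))
            used.add(best)
    return matches
```

Unmatched detections would start new tracks, and tracks unmatched for several frames would be dropped.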

https://www.kaggle.com/ashayajbani/oxford-town-centre/version/4?select=TownCentreXVID.mp4

Currently, [YOLOv5](https://github.com/ultralytics/yolov5) models are supported as the object detection model.
To get a YOLOv5 tflite model, see [`models/README.md`](./models/README.md).
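YOLOv5 tflite models take a fixed square input, so frames are typically letterboxed (scaled to fit, then padded). The scale-and-pad arithmetic can be sketched as follows; this is an assumption about the usual YOLOv5 preprocessing, and the repository's own code may differ:

```python
def letterbox_params(width, height, target=320):
    """Scale factor and (left, top) padding that fit a width x height
    frame into a target x target square, preserving aspect ratio."""
    scale = target / max(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    pad_left = (target - new_w) // 2
    pad_top = (target - new_h) // 2
    return scale, pad_left, pad_top
```

For a 640x480 frame and a 320x320 model input, this gives a scale of 0.5 with 40 pixels of padding above and below; the same parameters are reused to map detected boxes back to frame coordinates.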

## <div align="center">Quick Start Example</div>

```bash
git clone [email protected]:ozora-ogino/tflite-human-tracking.git
cd tflite-human-tracking
python main.py --src ./data/<YOUR_VIDEO_FILE>.mp4 --model ./models/<YOLOV5_MODEL>.tflite
```

### Docker

```bash
./build_image.sh
./run.sh ./data/<YOUR_VIDEO_FILE>.mp4 ./models/<YOLOV5_MODEL>.tflite
```

Then you can see the results in the `outputs` folder.


### Dataset
The example video at the top of this README is [TownCentreXVID](https://www.kaggle.com/ashayajbani/oxford-town-centre/version/4?select=TownCentreXVID.mp4).
You can download it from the link (Kaggle).

I recommend trimming it to about 10 seconds because the full video is too large for testing.


## <div align="center">Citations</div>

### SORT

```
@inproceedings{Bewley2016_sort,
author={Bewley, Alex and Ge, Zongyuan and Ott, Lionel and Ramos, Fabio and Upcroft, Ben},
booktitle={2016 IEEE International Conference on Image Processing (ICIP)},
title={Simple online and realtime tracking},
year={2016},
pages={3464-3468},
keywords={Benchmark testing;Complexity theory;Detectors;Kalman filters;Target tracking;Visualization;Computer Vision;Data Association;Detection;Multiple Object Tracking},
doi={10.1109/ICIP.2016.7533003}
}
```
15 changes: 15 additions & 0 deletions models/README.md
@@ -0,0 +1,15 @@
## YOLOv5 TFLite

To get YOLOv5 tflite models, you can use [`yolov5/export.py`](https://github.com/ultralytics/yolov5/blob/master/export.py).

For example:

```bash
git clone [email protected]:ultralytics/yolov5.git
cd yolov5
pip install -r requirements.txt

python export.py --weights yolov5n.pt --include tflite
```

For more details, see the [official release notes](https://github.com/ultralytics/yolov5/releases).
4 changes: 4 additions & 0 deletions outputs/.gitignore
@@ -0,0 +1,4 @@
**
!.gitignore
!trim10s.mp4_yolov5l-fp16.tflite.jpg
!trim10s.mp4_yolov5l-fp16.tflite.mp4
Binary file added outputs/trim10s.mp4_yolov5l-fp16.tflite.jpg
Binary file not shown.
Binary file added outputs/trim10s.mp4_yolov5l-fp16.tflite.mp4
Binary file not shown.
9 changes: 6 additions & 3 deletions src/main.py
@@ -98,8 +98,11 @@ def main(
# Executed only the first time.
if writer is None:
# Initialize video writer.
model_name = os.path.basename(model).split(".")[0]
video_name = os.path.basename(src).split(".")[0]
codecs = {"mp4": "MP4V", "avi": "MJPG"}
output_video = os.path.join(dest, f"result.{video_fmt}")
basename = f"{video_name}_{model_name}"
output_video = os.path.join(dest, f"{basename}.{video_fmt}")
fourcc = cv2.VideoWriter_fourcc(*codecs[video_fmt])
writer = cv2.VideoWriter(output_video, fourcc, 30, (frame.shape[1], frame.shape[0]), True)

@@ -109,7 +112,7 @@ def main(
print(f"Estimated total time: {second_per_frame * total_frames:.4f}")

# Save frame as an image and video.
cv2.imwrite(os.path.join(dest, "detect.jpg"), frame)
cv2.imwrite(os.path.join(dest, f"{basename}.jpg"), frame)
writer.write(frame)

writer.release()
@@ -123,7 +126,7 @@ def main(
parser.add_argument("--src", help="Path to video source.", default="./data/TownCentreXVID.mp4")
parser.add_argument("--dest", help="Path to output directory", default="./outputs/")
parser.add_argument("--model", help="Path to YOLOv5 tflite file", default="./models/yolov5n6-fp16.tflite")
parser.add_argument("--video-fmt", help="Format of output video file.", choices=["mp4", "avi"], default="avi")
parser.add_argument("--video-fmt", help="Format of output video file.", choices=["mp4", "avi"], default="mp4")
parser.add_argument("--confidence", type=float, default=0.2, help="Confidence threshold.")
parser.add_argument("--iou-threshold", type=float, default=0.2, help="IoU threshold for NMS.")
args = vars(parser.parse_args())
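The naming change in the `src/main.py` diff above derives the output filenames from the source video and model filenames instead of the fixed `result`/`detect` names. Restated in isolation (a standalone paraphrase of the diffed lines; the paths are examples only):

```python
import os

def output_paths(src, model, dest, video_fmt="mp4"):
    """Build output video/image paths from the source video and model names,
    mirroring the writer-initialization logic in the diffed main.py."""
    model_name = os.path.basename(model).split(".")[0]  # e.g. "yolov5l-fp16"
    video_name = os.path.basename(src).split(".")[0]    # e.g. "trim10s"
    basename = f"{video_name}_{model_name}"
    video_path = os.path.join(dest, f"{basename}.{video_fmt}")
    image_path = os.path.join(dest, f"{basename}.jpg")
    return video_path, image_path
```

This way, runs with different videos or models no longer overwrite each other's results in the output directory.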
