Semantic segmentation is a computer vision task in which every pixel of an image is classified into one of a set of classes. The classes a model can predict are strictly determined by the data it was trained on.
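To make "classifying every pixel" concrete, here is a minimal NumPy sketch (not from this repo's notebooks; the class names are made up for illustration) of what a segmentation mask is: an array the same height and width as the image, holding one class ID per pixel.

```python
import numpy as np

# Hypothetical 3-class setup: 0 = background, 1 = pet, 2 = border.
CLASSES = {0: "background", 1: "pet", 2: "border"}

mask = np.zeros((4, 6), dtype=np.uint8)  # tiny 4x6 "image" for illustration
mask[1:3, 2:5] = 1                       # a small "pet" region
mask[0, :] = 2                           # a "border" strip along the top row

# Count how many pixels fall into each class
ids, counts = np.unique(mask, return_counts=True)
for class_id, n in zip(ids, counts):
    print(CLASSES[int(class_id)], n)
# background 12
# pet 6
# border 6
```

A real mask has the same structure, just at image resolution (e.g. 480x640), and is usually stored as a grayscale PNG whose pixel values are the class IDs.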
*Demo of semantic segmentation (GIF from PyImageSearch)*
There are three notebooks in this GitHub repo:

1. `Image collection.ipynb` — shows how to label your data with Label Studio to prepare it for semantic segmentation.
2. `Training - Oxford-IIIT Pet Dataset.ipynb` — shows how to preprocess your data and build a U-Net model from scratch in Keras for semantic segmentation. Training was done in Google Colab, where the notebook can also be opened.
3. `Training with Pre-built Model - Brain MRI Segmentation.ipynb` — shows how to use the `segmentation_models` library (GitHub link) to easily use pre-built architectures such as U-Net and LinkNet. This notebook was also run in Google Colab for faster training, and can likewise be opened there directly.
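A common preprocessing step in this kind of training pipeline is converting an integer mask into the one-hot `(H, W, num_classes)` tensor that a Keras U-Net typically trains against with categorical cross-entropy. This is a hedged sketch, not the notebooks' exact code:

```python
import numpy as np

def one_hot_mask(mask, num_classes):
    """Convert an (H, W) integer class-ID mask to (H, W, num_classes) float32.

    Integer array indexing into an identity matrix picks out one row
    per pixel, which is exactly the one-hot vector for that class.
    """
    return np.eye(num_classes, dtype=np.float32)[mask]

mask = np.array([[0, 1],
                 [2, 1]])
encoded = one_hot_mask(mask, num_classes=3)
print(encoded.shape)  # (2, 2, 3)
```

Libraries such as Keras also offer `to_categorical` for this, but the identity-matrix trick keeps the example dependency-free.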
You only need to install the requirements specified in the `requirements.txt` file using the command below:

```
pip install -r requirements.txt
```
You also need to install the COCO API, which is used to create masks. Refer to the instructions below.

The COCO API loads the COCO JSON annotation format and creates masks from it when necessary. If you label your masks with Label Studio, you will need this library to generate the mask images used for training.
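For orientation, this is roughly what the COCO JSON annotation format looks like and how annotations map back to images. The field names follow the COCO spec; the values here are made up for illustration, and the actual polygon-to-pixels rasterisation is what the COCO API (e.g. `pycocotools`' `annToMask`) handles for you:

```python
import json

# Minimal COCO-style annotation file, inlined as a dict for the example.
coco = {
    "images": [{"id": 1, "file_name": "cat_001.jpg", "width": 640, "height": 480}],
    "categories": [{"id": 1, "name": "pet"}],
    "annotations": [{
        "id": 10,
        "image_id": 1,
        "category_id": 1,
        # Polygon segmentation: a flat [x1, y1, x2, y2, ...] coordinate list.
        "segmentation": [[100.0, 100.0, 200.0, 100.0, 200.0, 180.0, 100.0, 180.0]],
    }],
}
data = json.loads(json.dumps(coco))  # stands in for json.load(open("annotations.json"))

# Group annotations by image_id: when generating mask images you iterate
# over the annotations belonging to each image and draw them into one mask.
by_image = {}
for ann in data["annotations"]:
    by_image.setdefault(ann["image_id"], []).append(ann)
print(len(by_image[1]))  # 1 annotation for image 1
```

Everything above uses only the standard library; the COCO API itself is still needed to turn the polygon coordinates into actual mask pixels.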
For Windows:

- Download Visual C++ 2015 Build Tools from this Microsoft Link and install it with the default selection
- Also install the full Visual C++ 2015 Build Tools from here to make sure everything works
- Go to `C:\Program Files (x86)\Microsoft Visual C++ Build Tools` and run the `vcbuildtools_msbuild.bat` file
- In an Anaconda prompt, run:

```
pip install cython
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
```
For Linux:

```
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
make
cp -r pycocotools <PATH_TO_TF>/TensorFlow/models/research/
```