Custom Object Detection for Road Damage Detection using YOLOv4
This is a tutorial article on how you can train your custom object detector using YOLOv4. The code used in this article can be found on GitHub.
Let's directly dive into the tutorial in a stepwise fashion.
Section 1: Dataset
- Dataset is taken from RDD2020: https://data.mendeley.com/datasets/5ty2wb6gvg/1
- The RDD2020 dataset comprises 26,336 road images from India, Japan, and the Czech Republic, with more than 31,000 instances of road damage.
- Here we consider a subset of the dataset, i.e. the images from India alone, to reduce complexity.
- There are four types of road damage: longitudinal cracks (D00), transverse cracks (D10), alligator cracks (D20), and potholes (D40).
- The data is present in the PascalVOC format as bounding boxes labeled as xmin, ymin, xmax and ymax, stored as XML files.
Section 2: Model
About the YOLOv4 Model
At the root of the project clone the darknet repository using the command:
git clone https://github.com/AlexeyAB/darknet
This will create a folder called ‘darknet’ at the root of the project.
Section 3: Configurations
1. Collect the images and XML annotation files from RDD2020 into a single folder ‘Data_India’.
2. The bounding-box annotations are currently in an XML format, for example:
3. This has to be converted into the YOLO format. The YOLO format for the above XML file is:
0 0.6694444444444445 0.8347222222222223 0.17500000000000002 0.3138888888888889
4. To perform this, open the script pascal_yolo_conversion.py and edit the variables below. Here 'dir_path' denotes the folder where all images and XML files are stored, and 'classes' denotes the names of the object classes to detect.
dir_path = 'Data_India/'
classes = ['D00', 'D10', 'D20', 'D40', 'D44']
5. Open a command line (cmd) at the root of the repository.
6. Run the command:
7. A new folder called ‘YOLO’ is created inside the ‘dir_path’ folder. Copy the contents of the ‘YOLO’ folder into the ‘dir_path’ folder.
8. You can now remove all the XML files.
9. Finally, your data for training the YOLOv4 model is now ready.
- Create a configs folder at the root. This will contain all config files related to configuring the YOLO model.
obj.data: Change the number of classes to the number of classes you are working on. Create a training folder at the root, and this will store your training weights. Make sure to check all other paths. It is best to provide a path relative to the root. Check the sample provided.
obj.names: On every new line mention the names or labels of the objects to be detected. Check the sample provided.
yolov4-custom.cfg: This is a very important file and requires several important parameter changes. This file is also present in the 'darknet/cfg' folder as 'yolov4-custom.cfg'.
- It is recommended to have batch = 64 and subdivisions = 16 for best results. If you run into any issues, increase subdivisions to 32.
- Set max_batches = (# of classes) * 2000 (but no less than 6000). So if you are training for 1, 2, or 3 classes it will be 6000, however, the detector for 5 classes would have max_batches=10000.
- Set steps = (80% of max_batches), (90% of max_batches) (so if your max_batches = 10000, then steps = 8000, 9000).
- Search for classes and set it to the number of classes you are training for. (You should find it 3 times.)
- A few lines above each of these classes entries you will find a filters parameter. Set filters = (# of classes + 5) * 3 (so if you are training for one class then filters = 18, but if you are training for 4 classes then filters = 27).
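The numeric rules above can be wrapped in a small helper for convenience. This is just a sketch applying the formulas quoted above; the function name is my own.

```python
def yolo_cfg_params(num_classes):
    """Return (max_batches, steps, filters) for a custom YOLOv4 config,
    following the rules described above."""
    max_batches = max(6000, num_classes * 2000)       # never below 6000
    steps = (int(max_batches * 0.8), int(max_batches * 0.9))
    filters = (num_classes + 5) * 3                   # conv layer before each [yolo]
    return max_batches, steps, filters

# For the 5 road-damage classes used in this tutorial:
print(yolo_cfg_params(5))  # (10000, (8000, 9000), 30)
```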
Generate train.txt and test.txt
These files contain the path of the images to be used for training and testing relative to the darknet folder.
- Inside the folder ‘Data_India/’ create two folders ‘train’ and ‘test’.
- Divide all the images and their YOLO label (.txt) files present in the folder ‘Data_India/’ into the ‘train’ and ‘test’ folders in a 90:10 ratio (or 80:20, if there are sufficient training images). You can do this manually or write a script for the same. The Python package split-folders on PyPI can also be helpful in this case.
- Once the folders ‘train’ and ‘test’ are created, run the commands at the root of the project:
- Make sure that the paths present in the scripts are relative to the root of the darknet folder.
- Sample ‘train.txt’ and ‘test.txt’ files are provided in the ‘configs’ folder.
Pre-trained YOLO weights
Download the pre-trained weights and save them in the configs folder using the command:
Section 4: Model Training
- Open the notebook Training_Notebook.ipynb to follow all the steps for training the model. The command for training the YOLO model is:
!./darknet detector train [path to obj.data] [path to yolov4-obj.cfg] [path to pre-trained weights] -dont_show -map
- Every 100 iterations the weights are stored in the folder specified by the backup parameter in ‘obj.data’, and every 1000 iterations a new, separately named weights file is created.
- The training can be stopped once the average loss drops to around 1 and stabilizes (this varies from application to application). The training graph is shown in the next section.
- In case your notebook crashes or your server is down while training, you can resume your training with the following command:
!./darknet detector train [path to obj.data] [path to yolov4-obj.cfg] [path to training/backup/yolov4-obj_last.weights] -dont_show
- To check the final mAP value of your model, enter the command:
!./darknet detector map [path to obj.data] [path to yolov4-obj.cfg] [path to weights file which you want to check MAP for]
Section 5: Results
- The training curve of mAP (mean average precision) vs. the iteration number is stored in the darknet folder as ‘chart.png’.
- This can be used to decide when to stop the training while evaluating the mAP value.
- mAP is an evaluation metric commonly used in computer vision for object detection (i.e. localization and classification).
- The training at the start and towards the end is visualized in the graph below.
Section 6: Testing
Some samples of images tested with our model are shown below.
As we can see from the samples below, the model is not at its best yet: it needs to be trained for a longer duration (more iterations), and data from the remaining countries, i.e. Japan and the Czech Republic, could also be used.
- Reference: The model for YOLOv4 is taken from the repository of AlexeyAB.
- Reference: The RDD2020 dataset: Deeksha Arya, Hiroya Maeda, Sanjay Kumar Ghosh, Durga Toshniwal, Hiroshi Omata, Takehiro Kashiyama, Toshikazu Seto, Alexander Mraz, Yoshihide Sekimoto
- The whole project is developed with Python version 3.7.7 and pip version