Robust Real-Time Traffic Sign and Light Detection With Improved YOLOv7
Abstract
In this project, we seek to provide a computer vision model that can detect and classify traffic signs and lights for autonomous vehicles. Such a model enables proper information transmission to the controller so that suitable longitudinal/lateral actions can be taken based on the traffic conditions. Achieving a high-accuracy model usually requires a large set of clear, high-resolution images for training. However, this requirement can be hard to satisfy, because image quality depends on the resolution of the camera lens, weather conditions, and other factors.
Thus, the main objective of this paper is to propose an improved YOLOv7 model that outperforms the original in real time even when trained on a limited amount of random-quality data. We then discuss the limitations of the proposed methods after applying and evaluating them on both images and videos of real-world driving, using recorded rosbag data. The evaluation measures precision, recall, F-1 score, and mAP@0.5 for both the original and modified models, and these values are compared to quantify the improvement. Finally, a conclusion is drawn from the metric comparison and the real-time detection results: the modified model performs better than the original model for real-time autonomous driving.
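The abstract evaluates both models with precision, recall, and F-1 score. As a minimal sketch (not code from the thesis; the counts below are hypothetical), these metrics can be computed from true-positive, false-positive, and false-negative detection counts:

```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall, and F-1 score from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical counts for detections matched at an IoU threshold
# of 0.5 (the same threshold used in mAP@0.5)
p, r, f = detection_metrics(tp=80, fp=20, fn=20)
```

mAP@0.5 additionally averages the area under the precision-recall curve over all classes, with a detection counted as a true positive when its box overlaps the ground truth with IoU ≥ 0.5.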
Citation
Song, Youhan (2023). Robust Real-Time Traffic Sign and Light Detection With Improved YOLOv7. Master's thesis, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/199872.