TY - GEN
T1 - Experimental Evaluation of Pothole Detection and Its Dimension Estimation Using YOLOv8 and Depth Camera for Road Surface Analysis
AU - Widodo, Henry
AU - Taufiqurrohman, Heru
AU - Muis, Abdul
AU - Wijayanto, Yusuf Nur
AU - Prihantoro, Galuh
AU - Dwiyanti, Hanifah
AU - Cahya, Zaid
AU - Widaryanto, Afif
AU - Nugroho, Tsani Hendro
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Pothole detection and dimension estimation are essential for improving the safety and comfort of autonomous vehicles. This paper presents a new approach to pothole detection and position estimation that combines the You Only Look Once (YOLO) method with a depth camera. Experimental results from detection runs on a real car show good pothole detection accuracy on both brightly lit roads and dimly lit roads shaded by trees. Object detection uses the YOLO version 8 nano model initialized with COCO pre-trained weights, and the training dataset contains only a single class, the pothole class. An Intel RealSense D455 depth camera is employed, and the trained model is deployed on a Jetson Orin Nano. During the field test, the system displays, in addition to the detected pothole objects, each pothole's coordinates in the pixel frame, its width, and its distance from the camera; the data for every detected pothole are shown in real time. The estimated width and length of each pothole are validated against direct measurements taken with a tape measure. The estimated distances and widths of potholes showed good agreement with the manual measurements, with R-squared values above 0.97 and gradients approaching unity.
AB - Pothole detection and dimension estimation are essential for improving the safety and comfort of autonomous vehicles. This paper presents a new approach to pothole detection and position estimation that combines the You Only Look Once (YOLO) method with a depth camera. Experimental results from detection runs on a real car show good pothole detection accuracy on both brightly lit roads and dimly lit roads shaded by trees. Object detection uses the YOLO version 8 nano model initialized with COCO pre-trained weights, and the training dataset contains only a single class, the pothole class. An Intel RealSense D455 depth camera is employed, and the trained model is deployed on a Jetson Orin Nano. During the field test, the system displays, in addition to the detected pothole objects, each pothole's coordinates in the pixel frame, its width, and its distance from the camera; the data for every detected pothole are shown in real time. The estimated width and length of each pothole are validated against direct measurements taken with a tape measure. The estimated distances and widths of potholes showed good agreement with the manual measurements, with R-squared values above 0.97 and gradients approaching unity.
KW - autonomous vehicle
KW - pothole detection
KW - pothole distance
KW - pothole width
KW - road surface
KW - unstructured path
UR - http://www.scopus.com/inward/record.url?scp=85215943526&partnerID=8YFLogxK
U2 - 10.1109/ICRAMET62801.2024.10809331
DO - 10.1109/ICRAMET62801.2024.10809331
M3 - Conference contribution
AN - SCOPUS:85215943526
T3 - Proceeding - 2024 International Conference on Radar, Antenna, Microwave, Electronics, and Telecommunications, ICRAMET 2024
SP - 339
EP - 344
BT - Proceeding - 2024 International Conference on Radar, Antenna, Microwave, Electronics, and Telecommunications, ICRAMET 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 13th International Conference on Radar, Antenna, Microwave, Electronics, and Telecommunications, ICRAMET 2024
Y2 - 12 November 2024 through 13 November 2024
ER -