Authors
College of Computer Science and Engineering, Taibah University, Madinah, Saudi Arabia
Associate Professor, Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah, Saudi Arabia
Abstract
There is a recognized need for explainable artificial intelligence (XAI) as AI systems become increasingly integrated into our lives, from everyday tasks to critical domains where their decisions can directly impact people. XAI aims to make model behavior and decisions understandable through dedicated methods. One such domain is autonomous vehicles (AVs), which rely on different models for many tasks; object detection is one of these tasks, as the vehicle must observe and assess its surrounding environment. Several studies have shown the importance of XAI for AVs and have explained their actions and detections. However, true explainability has not yet been reached: the complexity of deep neural networks (DNNs) and of AV tasks, together with the trade-off between performance and explainability, motivates further study. In this work, we train YOLOv11 and YOLOv12 models on the KITTI dataset and a cyclist dataset and apply the explainability techniques Grad-CAM++ and Eigen-CAM to explain the decisions behind their detections. The models showed strong performance, with YOLOv11 achieving 0.93 precision, 0.93 recall, 0.95 mAP@50, and 0.83 mAP@50-95, and YOLOv12 achieving similar results. The heatmaps generated by Grad-CAM++ and
Eigen-CAM show where the models focus for their detections; such explanations can increase trust in AVs and their safety.
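
The Eigen-CAM heatmaps mentioned above are computed by projecting a convolutional layer's activations onto their first principal component; no gradients are needed. A minimal NumPy sketch of that projection follows, assuming the activations have already been extracted from a detector backbone layer (the toy random activations here are purely illustrative, not a real YOLO feature map):

```python
import numpy as np

def eigen_cam(activations):
    """Eigen-CAM: project feature maps onto their first principal component.

    activations: array of shape (C, H, W) taken from a convolutional layer.
    Returns a (H, W) saliency map normalized to [0, 1].
    """
    C, H, W = activations.shape
    M = activations.reshape(C, H * W).T        # rows = spatial positions, cols = channels
    # First right singular vector = first principal direction in channel space.
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    heatmap = (M @ Vt[0]).reshape(H, W)        # project each position onto the 1st PC
    heatmap = np.maximum(heatmap, 0)           # keep positive saliency (sign of SVD is arbitrary)
    if heatmap.max() > 0:
        heatmap = heatmap / heatmap.max()      # normalize to [0, 1] for visualization
    return heatmap

# Toy example: 8 channels of 16x16 features stand in for a backbone activation.
rng = np.random.default_rng(0)
acts = rng.random((8, 16, 16)).astype(np.float32)
cam = eigen_cam(acts)
print(cam.shape)
```

In practice the resulting map is resized to the input image resolution and overlaid as a colored heatmap on the detection output; Grad-CAM++ differs in that it additionally weights the channels by (higher-order) gradients of the class score.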
