Sensor Fusion Desynchronization Attacks
Mauro Bellone
2025-01-01
Abstract
Environmental perception and 3D object detection are key factors for advancing autonomous driving and require robust security measures to ensure optimal performance and safety. However, established methods often focus only on protecting the involved data and overlook synchronization and timing aspects, which are equally crucial for comprehensive system security. For instance, multi-modal sensor fusion techniques for object detection can be affected by input desynchronization resulting from random communication delays or malicious cyber attacks, as these techniques combine various sensor inputs to extract shared features present simultaneously in their data streams. Current research acknowledges the importance of temporal alignment in this context. However, the presented studies typically assume benign system behavior and neglect the potential threat of malicious attacks, as the suggested solutions lack strategies to prevent intentional data misalignment. Additionally, they do not analyze in depth how sensor input desynchronization affects fusion performance. This paper investigates how desynchronization attacks impact sensor fusion algorithms for 3D object detection. We evaluate how varying sensor delays affect detection performance and link our findings to the internal architecture of the sensor fusion algorithms and to the influence of specific traffic scenarios and their dynamics. We compiled four datasets covering typical traffic scenarios for our empirical evaluation and tested them on four representative fusion algorithms. Our results show that all evaluated algorithms are vulnerable to input desynchronization, as performance declines with increasing sensor delays, highlighting the existing lack of resilience to desynchronization attacks. Furthermore, we observe that the Light Detection and Ranging (LiDAR) sensor is significantly more susceptible to delays than the camera. Finally, our experiments indicate that the chosen fusion architecture correlates with the system's resilience against desynchronization, as our results demonstrate that the early fusion approach provides greater robustness than the other approaches.
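
To make the attack model described in the abstract concrete, the sketch below shows one way a desynchronization attack on a camera-LiDAR fusion pipeline could be simulated: one modality's frames are delayed by a chosen offset, and each camera frame is then paired with the most recent LiDAR frame that has arrived, yielding the stale inputs a fusion detector would consume. This is a minimal illustration only, not the paper's implementation; the frame rates, delay values, and names (SensorFrame, desynchronize, pair_for_fusion) are assumptions made for this example.

```python
# Minimal sketch (hypothetical, not the paper's code): simulating a sensor
# input desynchronization attack by delaying one stream before fusion.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorFrame:
    timestamp: float  # arrival time in seconds
    data: object      # e.g. image tensor or LiDAR point cloud

def desynchronize(stream: List[SensorFrame], delay: float) -> List[SensorFrame]:
    """Model a delay attack: every frame of the stream arrives `delay` seconds late."""
    return [SensorFrame(f.timestamp + delay, f.data) for f in stream]

def pair_for_fusion(camera: List[SensorFrame],
                    lidar: List[SensorFrame]) -> List[Tuple[SensorFrame, SensorFrame]]:
    """Pair each camera frame with the latest LiDAR frame that has already
    arrived -- the temporal misalignment a delayed stream imposes on fusion."""
    pairs = []
    for cam in camera:
        available = [l for l in lidar if l.timestamp <= cam.timestamp]
        if available:
            pairs.append((cam, available[-1]))
    return pairs

# Assumed rates for illustration: 20 Hz camera, 10 Hz LiDAR.
camera_stream = [SensorFrame(t * 0.05, data=None) for t in range(20)]
lidar_stream = [SensorFrame(t * 0.10, data=None) for t in range(10)]

for delay in (0.0, 0.1, 0.3, 0.5):  # attack strengths in seconds
    delayed_lidar = desynchronize(lidar_stream, delay)
    fused_inputs = pair_for_fusion(camera_stream, delayed_lidar)
    staleness = [cam.timestamp - lid.timestamp for cam, lid in fused_inputs]
    avg = sum(staleness) / len(staleness) if staleness else float("nan")
    print(f"delay={delay:.1f}s -> {len(fused_inputs)} pairs, "
          f"mean LiDAR staleness {avg:.2f}s")
    # A full evaluation, as in the paper, would run each fusion detector on
    # the misaligned pairs and report 3D detection metrics per delay setting.
```

In such a setup, sweeping the delay over one modality at a time is what allows the per-sensor comparison reported in the abstract (LiDAR delays degrading detection more than camera delays), and repeating the sweep across fusion architectures is what exposes their differing resilience.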

