When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems

Abstract

Many recent research papers have focused on behavioral-based driver authentication systems in vehicles. Spurred by advances in Artificial Intelligence (AI), these works propose powerful models that identify drivers through their unique biometric behavior. However, these systems have yet to attract interest from practitioners. Indeed, several limitations and oversights make implementing the state of the art impractical, such as the computational resources required for training and the management of false positives. Furthermore, although these systems are proposed as security measures, researchers neglect possible attacks against them that can make them counterproductive. Driven by the significant gap between research and practical application, this paper seeks to connect these two domains. We develop two lightweight behavioral-based driver authentication systems based on Machine Learning (ML) and Deep Learning (DL) architectures designed for resource-constrained in-vehicle environments. For their implementation, we formalize a realistic system and threat model that reflects a real-world vehicle network. When evaluated on real driving data, our models outperform the state of the art, reaching an accuracy of up to 0.999 in both identification and authentication. Motivated by the inherent vulnerabilities of ML and DL models, we are the first to propose GAN-CAN, a class of novel evasion attacks, showing how an attacker can still exploit these systems with an attack success rate of up to 1.000. Our attacks are effective under different assumptions about the attacker's knowledge and allow stealing a vehicle in less than 22 minutes. Finally, we formalize the requirements for securely deploying driver authentication systems and avoiding attacks such as GAN-CAN. Through our contributions, we help practitioners safely adopt these systems, reduce car thefts, and enhance driver security.

Publication
arXiv
