02/21/2022
AI-enhanced fuzz testing via the CAN bus

Early security testing is an effective preventive measure to check the robustness and cybersecurity of vehicle systems. To prevent cyberattacks, unauthorized access, or manipulation, security testers in the automotive sector are increasingly turning to fuzz tests. These send randomly generated input data to a target system and monitor its response (Figure 1).
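To make the basic principle concrete, the following minimal sketch sends randomly generated CAN frames to a target and records its responses. It assumes the open-source python-can package and a SocketCAN interface named can0; neither is part of the ETAS tooling described in this article.

```python
import random
import can  # open-source python-can package (an assumption, not the ETAS tool)

def random_frame() -> can.Message:
    """Build a CAN frame with a random 11-bit identifier and random payload."""
    return can.Message(
        arbitration_id=random.randrange(0x800),
        data=bytes(random.randrange(256) for _ in range(random.randint(0, 8))),
        is_extended_id=False,
    )

# Send fuzzed frames and monitor the target's reaction (the loop shown in Figure 1)
with can.Bus(interface="socketcan", channel="can0") as bus:
    for _ in range(1000):
        frame = random_frame()
        bus.send(frame)
        response = bus.recv(timeout=0.1)  # None means the target stayed silent
        print(frame, "->", response)
```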

At every point along the product development lifecycle, fuzz testing is a suitable method for discovering memory corruption, denial of service (DoS), program crashes, and excessive resource usage. Fuzz testing is gaining importance especially in light of new regulations such as UN R155, which make proof of adequate security measures during the development process a condition for the approval of new vehicle types in many automotive markets worldwide. That makes it all the more necessary to systematically extend the limited number of specialized, protocol-based fuzzers available for a vehicle’s controller area network (CAN) bus.
AI helps expand test coverage and functionality

However, fuzz testing has fundamental technical constraints that have so far limited its test-case coverage as well as its failure discovery, dissection, and root-cause analysis. The key to remedying this is artificial intelligence (AI), which can quickly and efficiently extend the test coverage and functionality of fuzz tests via the CAN bus. AI brings automation to major aspects of the process, such as constructing test cases, applying smart mutations to them, and deriving the root cause of identified issues, which improves the quality of the results and significantly reduces overall run time.
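As an illustration of what "smart mutations" of test cases can mean in practice, the following sketch derives new candidates from a known test case by flipping individual bits of its payload. The function and seed value are purely illustrative; the article does not disclose the mutation strategy of the ETAS fuzzer.

```python
import random

def mutate(payload: bytes, n_mutations: int = 1) -> bytes:
    """Derive a new test case from a known payload by flipping random bits."""
    data = bytearray(payload)
    for _ in range(n_mutations):
        i = random.randrange(len(data))
        data[i] ^= 1 << random.randrange(8)  # flip a single bit
    return bytes(data)

seed = bytes.fromhex("0011223344556677")       # illustrative 8-byte CAN payload
candidates = [mutate(seed) for _ in range(5)]  # variants for the next fuzzing round
```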
ETAS’s new AI-enhanced fuzzer now makes it possible to systematically leverage this potential. Crucial prerequisites for efficient security testing with this fuzzer are preparing the data and training the AI: in a kind of learning process, the fuzzer must first be able to reliably identify and categorize passed and failed test cases. To this end, the data must be prepared so that the AI model can process it efficiently and effectively. For example, the data must be normalized before it is used to train the AI fuzzer model: the values are rescaled around a common mean so they can more easily be related to one another, which makes it easier for the model to discover the causes of failed test cases. In doing so, two key tasks must be implemented around the AI model: parsing the output log files and determining the (failure) verdict of each test case (Figure 2).
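The following sketch shows these preparation tasks in simplified form, assuming a hypothetical CSV log with one line per test case (columns can_id, dlc, response_time, error_flags): the log is parsed, a pass/fail verdict is derived, and the numeric features are normalized to zero mean and unit variance so the model can relate them to one another.

```python
import pandas as pd

def load_test_cases(log_path: str) -> pd.DataFrame:
    """Task 1: parse the fuzzer's output log into one row per test case."""
    df = pd.read_csv(log_path)  # hypothetical columns: can_id, dlc, response_time, error_flags
    # Task 2: derive the verdict; here a case counts as failed if the target
    # raised an error or stopped answering (assumed criteria).
    df["failed"] = (df["error_flags"] > 0) | df["response_time"].isna()
    return df

def normalize(df: pd.DataFrame, columns: list) -> pd.DataFrame:
    """Rescale numeric features to zero mean and unit variance."""
    df = df.copy()
    df[columns] = (df[columns] - df[columns].mean()) / df[columns].std()
    return df

cases = normalize(load_test_cases("fuzz_log.csv"), ["can_id", "dlc", "response_time"])
```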
Integrating artificial intelligence

After the data is preprocessed, the fuzzer model is built with a selected AI software library (e.g., TensorFlow, Keras, or PyTorch, with Pandas for data handling). It can then be trained and used to predict test-case outcomes. Depending on the number of planned learning iterations, the training functions, labels, and validation splits, the new ETAS solution has been able to make refined predictions and classify unlabeled data after an average of 2–3 hours and around 10,000 learning iterations in previous test projects.
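As a rough sketch of this step, the following uses Keras (one of the libraries named above) to build and train a small pass/fail classifier on prepared feature vectors. The network architecture, feature shapes, and hyperparameters are assumptions for illustration, not the ETAS configuration.

```python
import numpy as np
from tensorflow import keras

# Stand-in for the normalized per-test-case features and pass/fail labels
# produced in the data-preparation step (shapes and contents are assumed).
X = np.random.randn(5000, 3).astype("float32")
y = (np.random.rand(5000) < 0.1).astype("float32")  # 1 = failed, 0 = passed

model = keras.Sequential([
    keras.Input(shape=(X.shape[1],)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),     # probability that a case fails
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train with a validation split; the article reports refined predictions after
# roughly 10,000 learning iterations in previous projects.
model.fit(X, y, epochs=20, validation_split=0.2)

# Classify new, unlabeled test cases
predictions = model.predict(X[:10])
```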
The AI model is integrated into a traditional fuzzer by converting the fuzzer to an asynchronous model. Instead of generating all test cases in advance, as is usually done, the AI-enhanced fuzzer can then generate new CAN fuzz test cases on the fly. The fuzzer analyzes the pass/fail verdicts of test cases as they run. For passed test cases, it automatically derives and runs inverted variants of what has passed. If a previously run test case fails, the fuzzer runs similar test cases to validate the failure. All the test data flows into a database, which from then on serves the software library as a basis for calculating predictions (Figure 3).
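A simplified view of this feedback loop is sketched below, with CAN transmission, the trained model, and the database reduced to placeholders: passed cases are inverted into new candidates, failed cases are replayed with similar variants to validate the failure, and every result is stored as future training data.

```python
import random

def run_on_target(frame: bytes) -> bool:
    """Send the frame to the target and return True if the test case failed.
    Stubbed with a random verdict; in a real fuzzer this is CAN I/O plus monitoring."""
    return random.random() < 0.05

def invert(frame: bytes) -> bytes:
    """Derive a new candidate from a test case, e.g. by inverting one byte."""
    data = bytearray(frame)
    data[random.randrange(len(data))] ^= 0xFF
    return bytes(data)

results = []  # stand-in for the database that feeds the prediction model
queue = [bytes(random.randrange(256) for _ in range(8))]

for _ in range(1000):
    if not queue:
        break
    frame = queue.pop()
    failed = run_on_target(frame)
    results.append((frame, failed))                    # training data for the next model update
    if failed:
        queue.extend(invert(frame) for _ in range(3))  # similar cases to validate the failure
    else:
        queue.append(invert(frame))                    # explore variations of what passed
```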
More efficient fuzz testing thanks to AI
ETAS sees potential for additional AI fuzzer models tailored to specific protocol layers. AI-based models then enable, for example, qualified fuzzing of Unified Diagnostic Services (UDS), enriching the fuzzer’s knowledge of the target system’s architecture or protocol during fuzzing, or efficiently covering a wide variety of execution branches to discover previously undetected software errors and exploits. In this way, AI-enhanced fuzz tests will help provide the required proof of adequate security measures in the development process far more comprehensively and efficiently than was previously possible.
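As one example of what protocol-aware fuzzing could look like, the sketch below builds structurally valid UDS ReadDataByIdentifier (0x22) requests with random data identifiers instead of fully random frames. The arbitration ID, padding bytes, and use of python-can are assumptions for illustration, not details of the ETAS models described here.

```python
import random
import can  # open-source python-can package (assumption)

REQUEST_ID = 0x7E0  # commonly used diagnostic request identifier (assumed)

def uds_read_did_frame(did: int) -> can.Message:
    """Single-frame ISO-TP request: [PCI=0x03][SID=0x22][DID high][DID low] + padding."""
    payload = bytes([0x03, 0x22, (did >> 8) & 0xFF, did & 0xFF, 0x00, 0x00, 0x00, 0x00])
    return can.Message(arbitration_id=REQUEST_ID, data=payload, is_extended_id=False)

with can.Bus(interface="socketcan", channel="can0") as bus:
    for _ in range(100):
        bus.send(uds_read_did_frame(random.randrange(0x10000)))
        print(bus.recv(timeout=0.1))  # positive/negative response or silence
```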