Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems

by Xugui Zhou, et al.

The growing complexity of Cyber-Physical Systems (CPS) and the challenges of ensuring their safety and security have led to the increasing use of deep learning methods for accurate and scalable anomaly detection. However, machine learning (ML) models often perform poorly on unexpected data and are vulnerable to accidental or malicious perturbations. Although robustness testing of deep learning models has been extensively explored in applications such as image classification and speech recognition, less attention has been paid to ML-driven safety monitoring in CPS. This paper presents preliminary results on evaluating the robustness of ML-based anomaly detection methods in safety-critical CPS against two types of accidental and malicious input perturbations, generated using a Gaussian-based noise model and the Fast Gradient Sign Method (FGSM). We test the hypothesis that integrating domain knowledge (e.g., on unsafe system behavior) with ML models can improve the robustness of anomaly detection without sacrificing accuracy and transparency. Experimental results with two case studies of Artificial Pancreas Systems (APS) for diabetes management show that ML-based safety monitors trained with domain knowledge can reduce on average up to 54.2% of error and keep the average F1 scores high while improving transparency.
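The two perturbation models named above are standard: Gaussian noise simulates accidental sensor error, while FGSM perturbs the input in the direction of the sign of the loss gradient. The sketch below illustrates both on a toy logistic model (the model, weights, and feature values are illustrative assumptions, not taken from the paper); the paper's actual monitors and APS data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_perturb(x, sigma=0.05):
    """Accidental perturbation: zero-mean Gaussian noise on each input feature."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def fgsm_perturb(x, grad, eps=0.1):
    """Malicious perturbation (FGSM): x' = x + eps * sign(dL/dx)."""
    return x + eps * np.sign(grad)

# Toy logistic classifier so the input gradient has a closed form
# (hypothetical weights; stand-in for a learned anomaly detector).
w = np.array([0.5, -1.2, 0.8])

def loss_grad(x, y):
    """Gradient of binary cross-entropy w.r.t. the input for sigmoid(w @ x)."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

x = np.array([1.0, 2.0, -0.5])          # illustrative sensor feature vector
x_noisy = gaussian_perturb(x)           # accidental perturbation
x_adv = fgsm_perturb(x, loss_grad(x, y=1.0))  # adversarial perturbation
```

Note that FGSM bounds each feature's change by exactly `eps` (an L-infinity constraint), whereas the Gaussian model's perturbation magnitude is unbounded but concentrated near `sigma`.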




