Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems

12/05/2017
by Kexin Pei, et al.

Due to the increasing use of machine learning (ML) techniques in security- and safety-critical domains, such as autonomous systems and medical diagnosis, ensuring correct behavior of ML systems, especially for corner cases, is of growing importance. In this paper, we propose a generic framework for evaluating the security and robustness of ML systems against different real-world safety properties. We further design, implement, and evaluate VeriVis, a scalable methodology that can verify a diverse set of safety properties for state-of-the-art computer vision systems with only blackbox access. VeriVis leverages different input space reduction techniques for efficient verification of different safety properties. VeriVis is able to find thousands of safety violations, across twelve different safety properties, in fifteen state-of-the-art computer vision systems: ten Deep Neural Networks (DNNs) with thousands of neurons, such as Inception-v3 and Nvidia's Dave self-driving system, as well as five commercial third-party vision APIs, including Google Vision and Clarifai. Furthermore, VeriVis can successfully verify local safety properties while finding, on average, around 31.7x to 64.8x more violations than existing gradient-based methods, which, unlike VeriVis, cannot ensure the non-existence of violations. Finally, we show that retraining using the safety violations detected by VeriVis reduces the average number of violations by up to 60.2%.
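
The abstract only names the technique, but the enumeration idea behind VeriVis is straightforward to illustrate. Below is a minimal sketch, not the paper's implementation: it assumes a hypothetical model_predict callable with blackbox access and uses integer-degree rotation as an example safety property. Because the discretized parameter space is finite, exhaustively querying the model over it either certifies the property for that input or returns concrete violations.

```python
# Minimal sketch of blackbox safety verification in the spirit of VeriVis.
# Assumption: `model_predict` is a hypothetical callable mapping an image to
# a label; the rotation property and the +/- 10 degree range are illustrative
# choices, not the paper's exact setup.

from PIL import Image

def verify_rotation_property(model_predict, image, max_degrees=10):
    """Exhaustively check that the top label is stable under every
    integer rotation within +/- max_degrees degrees."""
    reference = model_predict(image)
    violations = []
    for deg in range(-max_degrees, max_degrees + 1):
        label = model_predict(image.rotate(deg))  # blackbox query only
        if label != reference:
            violations.append((deg, label))
    # An empty `violations` list is a proof over the enumerated parameter
    # space, a guarantee gradient-based attack methods cannot provide.
    return len(violations) == 0, violations

# Hypothetical usage:
# img = Image.open("stop_sign.png")
# verified, violations = verify_rotation_property(classifier, img)
```

Other properties described in the paper (e.g., brightness or contrast changes) would follow the same enumerate-and-check pattern over their own discretized parameter spaces.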

Related research

02/02/2021 · Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS)
Machine Learning (ML) is now used in a range of systems with results tha...

03/02/2020 · Towards Probability-based Safety Verification of Systems with Components from Machine Learning
Machine learning (ML) has recently created many new success stories. Hen...

01/18/2018 · Toward Scalable Verification for Safety-Critical Deep Networks
The increasing use of deep neural networks for safety-critical applicati...

04/20/2022 · Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems
The growing complexity of Cyber-Physical Systems (CPS) and challenges in...

05/28/2020 · QEBA: Query-Efficient Boundary-Based Blackbox Attack
Machine learning (ML), especially deep neural networks (DNNs) have been ...

10/16/2021 · TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks
Deep neural networks (DNNs) are now the de facto choice for computer vis...

02/08/2022 · If a Human Can See It, So Should Your System: Reliability Requirements for Machine Vision Components
Machine Vision Components (MVC) are becoming safety-critical. Assuring t...
