Learned Systems Security

by Roei Schuster, et al.

A learned system uses machine learning (ML) internally to improve performance. We can expect such systems to be vulnerable to some adversarial-ML attacks. Often, the learned component is shared between mutually distrusting users or processes, much like microarchitectural resources such as caches, potentially giving rise to highly realistic attacker models. However, compared to attacks on other ML-based systems, attackers face a level of indirection because they cannot interact directly with the learned model. Additionally, the difference between the attack surface of learned and non-learned versions of the same system is often subtle. These factors obscure the de facto risks that the incorporation of ML carries. We analyze the root causes of potentially increased attack surface in learned systems and develop a framework for identifying vulnerabilities that stem from the use of ML. We apply our framework to a broad set of learned systems under active development. To empirically validate the many vulnerabilities surfaced by our framework, we choose three of them and implement and evaluate exploits against prominent learned-system instances. We show that the use of ML caused leakage of past queries in a database, enabled a poisoning attack that causes exponential memory blowup in an index structure and crashes it in seconds, and enabled index users to snoop on each other's key distributions by timing queries over their own keys. We find that adversarial ML is a universal threat against learned systems, point to open research gaps in our understanding of learned-systems security, and conclude by discussing mitigations, while noting that data leakage is inherent in systems whose learned component is shared between multiple parties.
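The poisoning and timing findings can be pictured with a toy model. The sketch below is not one of the paper's actual target systems; it is a minimal, hypothetical learned index in the RMI style: a linear model predicts a key's position in a sorted array, and a local search bounded by the model's maximum error corrects the prediction. Adversarially placed keys inflate that error bound, blowing up the correction window; and because lookup cost scales with the window, query timing reflects the key distribution.

```python
import bisect

class ToyLearnedIndex:
    """Hypothetical learned index: linear model + error-bounded local search."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Fit position ~= slope * key + intercept by least squares on (key, rank).
        xs, ys = self.keys, list(range(n))
        mx, my = sum(xs) / n, sum(ys) / n
        var = sum((x - mx) ** 2 for x in xs) or 1.0
        self.slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
        self.intercept = my - self.slope * mx
        # Worst-case prediction error fixes the size of the correction window.
        self.err = max(abs(self._predict(k) - i) for i, k in enumerate(self.keys))

    def _predict(self, key):
        return self.slope * key + self.intercept

    def lookup(self, key):
        # Search only within [pred - err, pred + err]: the larger the error
        # bound, the more work per query (and the slower the lookup).
        p = round(self._predict(key))
        lo = max(0, p - int(self.err) - 1)
        hi = min(len(self.keys), p + int(self.err) + 2)
        j = bisect.bisect_left(self.keys, key, lo, hi)
        return j < hi and self.keys[j] == key

# Uniform keys fit the linear model almost perfectly: tiny error window.
benign = ToyLearnedIndex(range(1000))

# A small cluster of adversarial outlier keys skews the fit, inflating the
# error bound for every key in the index -- poisoning in miniature.
poisoned = ToyLearnedIndex(list(range(1000)) + [10**6 + i for i in range(10)])
```

In this sketch, `poisoned.err` is orders of magnitude larger than `benign.err`, so every lookup scans a far wider window. Real learned indexes use more elaborate model hierarchies and allocation strategies, which is what lets the paper's poisoning attack escalate from slowdown to exponential memory blowup, but the mechanism illustrated here, adversarial inputs degrading the model's error bound, is the same in spirit.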

