Suspicion-Free Adversarial Attacks on Clustering Algorithms

11/16/2019
by Anshuman Chhabra, et al.

Clustering algorithms are used in a large number of applications and play an important role in modern machine learning; yet, unlike in supervised learning, adversarial attacks on clustering algorithms have been broadly overlooked. In this paper, we seek to bridge this gap by proposing a black-box adversarial attack against clustering models with linearly separable clusters. Our attack perturbs a single sample close to the decision boundary, which leads to the misclustering of multiple unperturbed samples, which we name spill-over adversarial samples. We theoretically show the existence of such adversarial samples for K-Means clustering. Our attack is especially strong because (1) we ensure the perturbed sample is not an outlier, and hence not detectable, and (2) the exact metric used for clustering is not known to the attacker. We theoretically justify that the attack can indeed succeed without knowledge of the true metric. We conclude by providing empirical results on a number of datasets and clustering algorithms. To the best of our knowledge, this is the first work that generates spill-over adversarial samples without knowledge of the true metric, ensures the perturbed sample is not an outlier, and theoretically proves the above.
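The spill-over effect described above can be illustrated on a toy example. The sketch below is not the paper's attack algorithm; it is a minimal, hand-constructed demonstration (hypothetical data and initial centers) that perturbing a single near-boundary point can shift the K-Means centroids enough to flip the cluster assignment of an untouched neighbor:

```python
import numpy as np

def lloyd(X, centers, iters=50):
    """Plain Lloyd's K-Means iteration from given initial centers."""
    for _ in range(iters):
        # Squared Euclidean distance from every point to every center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        new = np.array([X[labels == j].mean(0) for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two linearly separable groups placed on a line (y fixed for clarity).
X = np.array([[0.0, 0.5], [0.0, 0.5], [2.2, 0.5], [2.4, 0.5],
              [4.5, 0.5], [4.5, 0.5]])
# Fixed initial centers so both runs use the same cluster labeling.
init = np.array([[1.0, 0.5], [4.5, 0.5]])

clean_labels, _ = lloyd(X, init)  # point 3 (x=2.4) lands in cluster 0

# Attacker perturbs ONLY point 2 (x: 2.2 -> 2.9), nudging it across the
# decision boundary while keeping it well inside the data range (no outlier).
X_adv = X.copy()
X_adv[2, 0] = 2.9
adv_labels, _ = lloyd(X_adv, init)

# Point 3 was never touched, yet its assignment changes: the perturbed
# point drags cluster 1's centroid left, capturing its unperturbed neighbor.
spill = [i for i in range(len(X)) if i != 2 and adv_labels[i] != clean_labels[i]]
print(spill)  # -> [3]
```

Note that the fixed initial centers keep the cluster labels aligned across the two runs; in general, comparing clusterings requires matching labels up to permutation.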


