KNN-DBSCAN: a DBSCAN in high dimensions

09/09/2020
by Youguang Chen, et al.

Clustering is a fundamental task in machine learning. One of the most successful and broadly used algorithms is DBSCAN, a density-based clustering algorithm. DBSCAN requires the ϵ-nearest neighbor graph of the input dataset, which is computed with range-search algorithms and spatial data structures like KD-trees. Despite many efforts to design scalable implementations of DBSCAN, existing work is limited to low-dimensional datasets, as constructing ϵ-nearest neighbor graphs is expensive in high dimensions. In this paper, we modify DBSCAN to enable the use of κ-nearest neighbor graphs of the input dataset. The κ-nearest neighbor graphs are constructed using approximate algorithms based on randomized projections. Although these algorithms can become inaccurate or expensive in high dimensions, they have a much lower memory overhead than constructing ϵ-nearest neighbor graphs (𝒪(nk) vs. 𝒪(n^2)). We delineate the conditions under which kNN-DBSCAN produces the same clustering as DBSCAN. We also present an efficient parallel implementation of the overall algorithm, using OpenMP for shared-memory and MPI for distributed-memory parallelism. We present results on up to 16 billion points in 20 dimensions, and perform weak and strong scaling studies using synthetic data. Our code is efficient in both low and high dimensions. We can cluster one billion points in 3D in less than one second on 28K cores of the Frontera system at the Texas Advanced Computing Center (TACC). In our largest run, we cluster 65 billion points in 20 dimensions in less than 40 seconds using 114,688 x86 cores on TACC's Frontera system. We also compare with a state-of-the-art parallel DBSCAN code; on a 20-dimensional dataset of 4 million points, our code is up to 37× faster.
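To make the core idea concrete, here is a minimal single-node sketch of DBSCAN driven by a k-nearest-neighbor graph rather than an ϵ-range search, as the abstract describes. This is an illustrative toy, not the authors' parallel implementation: the function names (`knn_graph`, `knn_dbscan`) are invented, the kNN graph is built by brute force instead of randomized projections, and connectivity simply follows the directed kNN edges, glossing over the symmetry conditions the paper analyzes.

```python
# Hypothetical sketch of kNN-DBSCAN: density connectivity is traced over a
# k-nearest-neighbor graph (O(nk) memory) instead of an epsilon-range graph.
import math
from collections import deque

def knn_graph(points, k):
    """Brute-force kNN graph: for each point, a list of (distance, index)."""
    n = len(points)
    graph = []
    for i in range(n):
        dists = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )
        graph.append(dists[:k])
    return graph

def knn_dbscan(points, k, eps, min_pts):
    graph = knn_graph(points, k)
    # A point is "core" if at least min_pts of its k neighbors lie within eps.
    core = [sum(d <= eps for d, _ in nbrs) >= min_pts for nbrs in graph]
    labels = [-1] * len(points)  # -1 marks noise / unassigned
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1 or not core[seed]:
            continue
        # Grow a cluster by BFS over core points along kNN edges within eps.
        labels[seed] = cluster
        queue = deque([seed])
        while queue:
            p = queue.popleft()
            for d, q in graph[p]:
                if d <= eps and labels[q] == -1:
                    labels[q] = cluster
                    if core[q]:  # only core points propagate the cluster
                        queue.append(q)
        cluster += 1
    return labels
```

For example, two well-separated groups of four 2D points each come out as two clusters:

```python
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (11, 11)]
knn_dbscan(pts, k=3, eps=2.0, min_pts=2)  # → [0, 0, 0, 0, 1, 1, 1, 1]
```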
