k-means++: few more steps yield constant approximation

02/18/2020
by Davin Choo, et al.

The k-means++ algorithm of Arthur and Vassilvitskii (SODA 2007) is a state-of-the-art algorithm for solving the k-means clustering problem and is known to give an O(log k)-approximation in expectation. Recently, Lattanzi and Sohler (ICML 2019) proposed augmenting k-means++ with O(k log log k) local search steps to yield a constant approximation (in expectation) to the k-means clustering problem. In this paper, we improve their analysis to show that, for any arbitrarily small constant ε > 0, with only εk additional local search steps, one can achieve a constant approximation guarantee (with high probability in k), resolving an open problem in their paper.
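The two-phase scheme the abstract describes can be sketched in Python. This is a minimal illustrative sketch, not the authors' implementation: it seeds k centers by D²-sampling (the k-means++ rule) and then runs a given number of local search steps, each of which D²-samples a candidate point and keeps the best single-center swap if it lowers the cost. The exact swap rule in Lattanzi and Sohler's algorithm differs in detail; the function names and toy 2D data are assumptions for illustration.

```python
import random

def cost(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points)

def d2_sample(points, centers, rng):
    """Sample a point with probability proportional to its squared
    distance to the nearest current center (D^2 sampling)."""
    weights = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for p, w in zip(points, weights):
        acc += w
        if acc >= r:
            return p
    return points[-1]

def kmeans_pp_local_search(points, k, extra_steps, seed=0):
    """k-means++ seeding followed by `extra_steps` local search swaps.

    Illustrative only: the analysis in the paper says O(eps * k) such
    steps already suffice for a constant-factor guarantee (w.h.p.).
    """
    rng = random.Random(seed)
    # Phase 1 -- k-means++ seeding: first center uniform, rest by D^2.
    centers = [rng.choice(points)]
    while len(centers) < k:
        centers.append(d2_sample(points, centers, rng))
    # Phase 2 -- local search: D^2-sample a candidate, try swapping it
    # with each existing center, keep the best swap if it improves cost.
    for _ in range(extra_steps):
        candidate = d2_sample(points, centers, rng)
        best_cost, best_centers = cost(points, centers), centers
        for i in range(len(centers)):
            trial = centers[:i] + centers[i + 1:] + [candidate]
            c = cost(points, trial)
            if c < best_cost:
                best_cost, best_centers = c, trial
        centers = best_centers
    return centers
```

Because a swap is only accepted when it strictly lowers the cost, each local search step can only improve the seeding's solution, which is the mechanism the improved analysis quantifies.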


research
04/24/2018

Improved Local Search Based Approximation Algorithm for Hard Uniform Capacitated k-Median Problem

In this paper, we study the hard uniform capacitated k-median problem u...
research
08/24/2017

A Fast Approximation Scheme for Low-Dimensional k-Means

We consider the popular k-means problem in d-dimensional Euclidean space...
research
07/25/2023

Noisy k-means++ Revisited

The k-means++ algorithm by Arthur and Vassilvitskii [SODA 2007] is a cla...
research
08/13/2020

Consistent k-Median: Simpler, Better and Robust

In this paper we introduce and study the online consistent k-clustering ...
research
03/29/2016

Local Search Yields a PTAS for k-Means in Doubling Metrics

The most well known and ubiquitous clustering problem encountered in nea...
research
02/08/2018

Peekaboo - Where are the Objects? Structure Adjusting Superpixels

This paper addresses the search for a fast and meaningful image segmenta...
research
02/25/2020

The Power of Recourse: Better Algorithms for Facility Location in Online and Dynamic Models

In this paper we study the facility location problem in the online with ...
