Streaming Algorithms with Large Approximation Factors

07/17/2022
by Yi Li, et al.

We initiate a broad study of classical problems in the streaming model with insertions and deletions, in the setting where the approximation factor α is allowed to be much larger than 1. Such algorithms can use significantly less memory than in the usual setting, where α = 1+ϵ for some ϵ ∈ (0,1). We study large approximation factors for a number of problems in sketching and streaming; the following are some of our results.

- For the ℓ_p norm/quasinorm ‖x‖_p of an n-dimensional vector x, 0 < p ≤ 2, we show that obtaining a poly(n)-approximation requires the same amount of memory as obtaining an O(1)-approximation, for any M = n^Θ(1).

- For estimating the ℓ_p norm, p > 2, we show an upper bound of O(n^{1-2/p} (log n log M)/α^2) bits for an α-approximation, and give a matching lower bound for almost the full range of α ≥ 1 for linear sketches.

- For the ℓ_2-heavy hitters problem, we show that the known lower bound of Ω(k log n log M) bits for identifying (1/k)-heavy hitters holds even if the algorithm is allowed to output items that are only 1/(α k)-heavy, for almost the full range of α, provided the algorithm succeeds with probability 1 − O(1/n). We also obtain a lower bound for linear sketches that is tight even for constant-probability algorithms.

- For estimating the number ℓ_0 of distinct elements, we give an n^{1/t}-approximation algorithm using O(t log log M) bits of space, as well as a lower bound of Ω(t) bits, both excluding the storage of random bits.
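The ℓ_0 result above trades a large approximation factor for very little space. As a loosely related illustration of that space regime (this is not the algorithm from the paper), the sketch below keeps a single Flajolet–Martin-style counter: its only persistent state is the maximum number of trailing zeros among hashed stream items, an integer at most 64, i.e., O(log log M) bits for a universe of size M = 2^64. It returns a coarse, constant-probability approximation of the number of distinct elements. The multiply-shift hash and all parameter choices here are assumptions made for the demo.

```python
import random
import statistics

def fm_estimate(stream, seed=0):
    """Coarse distinct-elements estimate from one Flajolet-Martin counter.

    The only state kept across the stream is max_tz, an integer in
    [0, 64], so it fits in O(log log M) bits for M = 2**64.
    """
    rng = random.Random(seed)
    a = rng.getrandbits(64) | 1        # random odd multiplier (multiply-shift hash)
    b = rng.getrandbits(64)
    mask = (1 << 64) - 1
    max_tz = 0
    for x in stream:
        h = (a * x + b) & mask         # hash the item into [0, 2^64)
        tz = 64 if h == 0 else (h & -h).bit_length() - 1  # trailing zeros of h
        max_tz = max(max_tz, tz)
    return 2 ** max_tz                 # estimate of the number of distinct items

# Usage: the median over a few independent hash seeds tightens the guarantee.
estimates = [fm_estimate(range(1000), seed=s) for s in range(9)]
print(statistics.median(estimates))
```

This demo only gives a fixed (large-constant) approximation factor; the paper's algorithm instead achieves an n^{1/t}-approximation with O(t log log M) bits, making the accuracy/space trade-off tunable.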
