Refining the Adaptivity Notion in the Huge Object Model

06/28/2023
by Tomer Adar, et al.

The Huge Object model for distribution testing, first defined by Goldreich and Ron in 2022, combines the features of classical string testing and distribution testing. In this model we are given access to independent samples from an unknown distribution P over the set of strings {0,1}^n, but are only allowed to query a few bits from the samples. The distinction between adaptive and non-adaptive algorithms, which is natural in the realm of string testing (but is not relevant for classical distribution testing), plays a substantial role in the Huge Object model as well. In this work we show that in fact, the full picture in the Huge Object model is much richer than just that of the “adaptive vs. non-adaptive” dichotomy. We define and investigate several models of adaptivity that lie between the fully-adaptive and the completely non-adaptive extremes. These models are naturally grounded by viewing the querying process of each sample independently, and considering the “algorithmic flow” of information between them. For example, if we allow no information at all to cross over between samples (up to the final decision), then we obtain the locally bounded adaptive model, arguably the “least adaptive” one apart from being completely non-adaptive. A slightly stronger model allows only a “one-way” information flow. Even stronger (but still far from being fully adaptive) models follow by taking inspiration from the setting of streaming algorithms. To show that we indeed have a hierarchy, we prove a chain of exponential separations encompassing most of the models that we define.
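To make the contrast between the two extremes of the hierarchy concrete, here is a minimal illustrative sketch (not taken from the paper): the `Sample` class, the `locally_bounded_tester` and `fully_adaptive_tester` functions, and the specific query strategies are all hypothetical, chosen only to show how information may or may not flow between the per-sample querying processes.

```python
# Illustrative sketch only: class and function names are hypothetical and not
# from the paper; they merely contrast two information-flow regimes in the
# Huge Object model (locally bounded adaptivity vs. full adaptivity).
import random
from typing import List


class Sample:
    """Oracle access to one sampled string x in {0,1}^n: bits are revealed
    only through query(), mirroring the model's query access to samples."""

    def __init__(self, bits: List[int]):
        self._bits = bits

    def query(self, i: int) -> int:
        return self._bits[i]


def locally_bounded_tester(samples: List[Sample], n: int, budget: int) -> bool:
    """Locally bounded adaptivity: querying each sample may be adaptive
    internally, but no information crosses between samples; only the final
    per-sample verdicts are combined into a decision."""
    verdicts = []
    for s in samples:
        # Per-sample adaptive probe: scan left to right, stop at the first 1.
        # The positions chosen depend only on this sample's own answers.
        found_one = False
        for i in range(min(budget, n)):
            if s.query(i) == 1:
                found_one = True
                break
        verdicts.append(found_one)
    # The decision rule sees only the verdicts, never the raw answers.
    return all(verdicts)


def fully_adaptive_tester(samples: List[Sample], n: int, budget: int) -> bool:
    """Full adaptivity: every query may depend on all answers seen so far,
    across all samples (no restriction on the information flow)."""
    pos = 0
    for s in samples:
        for _ in range(budget):
            if pos >= n:
                return True
            bit = s.query(pos)
            # The next queried position depends on answers gathered from
            # previous samples as well, which a locally bounded tester cannot do.
            pos = pos + 1 if bit == 1 else pos
    return pos >= n // 2


if __name__ == "__main__":
    random.seed(0)
    n, m, budget = 16, 5, 4
    samples = [Sample([random.randint(0, 1) for _ in range(n)]) for _ in range(m)]
    print("locally bounded decision:", locally_bounded_tester(samples, n, budget))
    print("fully adaptive decision: ", fully_adaptive_tester(samples, n, budget))
```

The intermediate models described in the abstract (the “one-way” and streaming-inspired variants) would sit between these two sketches, allowing only restricted forms of cross-sample information flow rather than none or all of it.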
