Distributed Evaluations: Ending Neural Point Metrics

06/11/2018
by Daniel Cohen, et al.

With the rise of neural models across the field of information retrieval, numerous publications have incrementally pushed the envelope of performance on a multitude of IR tasks. However, these networks often sample training data in random order, are initialized randomly, and have their success judged by a single evaluation score. These issues are aggravated by neural models achieving only incremental improvements over previous neural baselines, leading to multiple near-state-of-the-art models that are difficult to reproduce and quickly become deprecated. As neural methods are beginning to be applied to low-resource and noisy collections that further exacerbate this issue, we propose evaluating neural models both over multiple random seeds and over a set of hyperparameter configurations within ϵ distance of the chosen configuration for a given metric.
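In effect, the proposal replaces a single point metric with a distribution of scores. A minimal sketch of what that could look like in practice, assuming a user-supplied `train_and_evaluate(seed, config)` function (hypothetical, not from the paper) that trains a model under one seed and configuration and returns a metric such as MAP or nDCG:

```python
import itertools
import statistics
from typing import Callable, Dict, List


def distributed_evaluation(
    train_and_evaluate: Callable[[int, Dict[str, float]], float],
    base_config: Dict[str, float],
    seeds: List[int],
    epsilon: float = 0.1,
) -> List[float]:
    """Evaluate a model over multiple random seeds and over hyperparameter
    configurations within epsilon (relative) distance of base_config,
    returning the full distribution of scores rather than a point estimate."""
    # Perturb each hyperparameter by +/- epsilon around its chosen value.
    perturbed = {
        name: [value * (1 - epsilon), value, value * (1 + epsilon)]
        for name, value in base_config.items()
    }
    configs = [
        dict(zip(perturbed, combo))
        for combo in itertools.product(*perturbed.values())
    ]
    # One training/evaluation run per (seed, configuration) pair.
    return [
        train_and_evaluate(seed, config)
        for seed in seeds
        for config in configs
    ]


if __name__ == "__main__":
    import random

    # Placeholder standing in for real model training, so the script
    # runs end to end; it returns a noisy metric score.
    def fake_train_and_evaluate(seed: int, config: Dict[str, float]) -> float:
        rng = random.Random(seed)
        return 0.45 + 0.01 * rng.random() - abs(config["lr"] - 1e-3)

    scores = distributed_evaluation(
        fake_train_and_evaluate,
        base_config={"lr": 1e-3, "dropout": 0.2},
        seeds=[0, 1, 2, 3, 4],
    )
    print(f"mean={statistics.mean(scores):.4f} "
          f"stdev={statistics.stdev(scores):.4f} "
          f"min={min(scores):.4f} max={max(scores):.4f}")
```

Reporting the mean, variance, or full distribution over this grid makes comparisons between models robust to seed and configuration luck, rather than rewarding a single fortunate run.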
