Ranking State-of-the-art Papers via Incomplete Tournaments Induced by Citations from Performance Tables
How can we find state-of-the-art papers for a given task? Is it possible to automatically maintain leaderboards in the form of partial orders between papers, based on performance on standard benchmarks? Can we detect potential anomalies in papers where some metrics improve but others degrade? Is citation count of any use in early detection of top-performing papers? Here we answer these questions, while describing our experience building a new bibliometric system that robustly mines experimental performance from papers. We propose a novel performance tournament graph with papers as nodes, where edges encode noisy performance comparisons extracted from papers. These extractions resemble (noisy) outcomes of matches in an incomplete tournament. Had they been complete and perfectly reliable, compiling a ranking would have been trivial. In the face of noisy extractions, we propose several approaches to rank papers, identify the best of these approaches, and show that commercial academic search systems fail miserably at finding state-of-the-art papers. Contradicting faith in a steady march of science, we find that cycles are widespread in the performance tournament, exposing potential anomalies and reproducibility issues. Drawing on popular lists of state-of-the-art papers in 27 areas of Computer Science, we demonstrate that our system can effectively build reliable rankings. Our code and data sets will be placed in the public domain.
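To make the tournament-graph idea concrete, here is a minimal, self-contained sketch (not the paper's implementation): papers are nodes, a directed edge u → v records that u reports beating v on some benchmark, cycles flag potential anomalies, and a naive win-fraction score gives a first-cut ranking. The edge list and paper names are purely hypothetical.

```python
from collections import defaultdict

# Hypothetical mini-tournament: "A beats B", "B beats C", "C beats A" (a cycle),
# and "C beats D". Real edges would come from performance tables mined from papers.
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]

graph = defaultdict(list)
for winner, loser in edges:
    graph[winner].append(loser)

def has_cycle(graph):
    """Detect a cycle via iterative DFS with white/gray/black coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)  # default WHITE
    for start in list(graph):
        if color[start] != WHITE:
            continue
        color[start] = GRAY
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                color[node] = BLACK
                stack.pop()
            elif color[nxt] == GRAY:
                return True  # back edge: a beat-cycle exists
            elif color[nxt] == WHITE:
                color[nxt] = GRAY
                stack.append((nxt, iter(graph[nxt])))
    return False

# Naive ranking: fraction of reported wins per paper. The paper studies
# more robust rankings; this only illustrates the tournament abstraction.
wins, losses = defaultdict(int), defaultdict(int)
for w, l in edges:
    wins[w] += 1
    losses[l] += 1
papers = set(wins) | set(losses)
score = {p: wins[p] / (wins[p] + losses[p]) for p in papers}

print(has_cycle(graph))            # cycle A -> B -> C -> A is detected
print(max(score, key=score.get))   # paper with the highest win fraction
```

Note that the win-fraction score is ill-defined in the presence of cycles, which is precisely why cycle detection matters before trusting any induced ranking.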