Reproducibility in NLP: What Have We Learned from the Checklist?

06/16/2023
by Ian Magnusson, et al.

Scientific progress in NLP rests on the reproducibility of researchers' claims. The *CL conferences created the NLP Reproducibility Checklist in 2020 to be completed by authors at submission time to remind them of key information to include. We provide the first analysis of the Checklist by examining 10,405 anonymous responses to it. First, we find evidence of an increase in reporting of information on efficiency, validation performance, summary statistics, and hyperparameters after the Checklist's introduction. Further, we show that acceptance rate grows for submissions with more Yes responses. We find that the 44% of submissions that gather new data are 5% less likely to be accepted than those that did not; the average reviewer-rated reproducibility of these submissions is also 2% lower. Submissions that claim to open-source their code receive an 8% higher reproducibility score relative to those that do not, the most for any item. We discuss what can be inferred about the state of reproducibility in NLP, and provide a set of recommendations for future conferences, including: a) allowing submission of code and appendices one week after the deadline, and b) measuring dataset reproducibility with a checklist of data collection practices.
