Lessons learned from replicating a study on information-retrieval based test case prioritization
Objective: In this study, we aim to replicate an artefact-based study on software testing to address the scarcity of replication studies in software engineering research. We focus on (a) providing a step-by-step guide to the replication and reflecting on the challenges of replicating artefact-based testing research, and (b) evaluating the validity and robustness of the replicated study's findings. Method: We replicate the information-retrieval-based test case prioritization technique by Kwon et al. using four programs: two from the original study and two new ones. The replication was implemented in Python to support future replications. Results: We identify several general factors that facilitate replications: (1) the importance of documentation; (2) the need for assistance from the original authors; (3) issues in the maintenance of open-source repositories (e.g., concerning required software dependencies); and (4) the availability of scripts. We also raise several observations specific to the study and its context, such as insights from using different mutation tools and strategies for mutant generation. Conclusion: We conclude that the study by Kwon et al. is replicable for small and medium-sized programs and, given the availability of the required information, could be automated to support software practitioners.