Software Quality: A Historical and Synthetic Content Analysis
Interconnected computers and software systems have become an indispensable part of people's lives, and software quality research is therefore becoming increasingly important. There have been several attempts to synthesize the knowledge gained in software quality research; however, they focused mainly on single aspects of software quality and did not structure the knowledge in a holistic way. The aim of our study was to close this gap. Software quality publications were harvested from the Scopus bibliographic database. The metadata was exported first to CRExplorer, which was used to identify the historical roots of the field, and then to VOSviewer, which was used as part of the synthetic content analysis. In our study we defined synthetic content analysis as a triangulation of bibliometrics and content analysis. Our search resulted in 14,451 publications. The bibliometric performance analysis showed that the production of research publications on software quality currently follows an exponential growth trend and that the software quality research community is growing. The most productive country was the United States, and the most productive institution was Florida Atlantic University. The synthetic content analysis revealed that the published knowledge can be structured into 10 themes, the most important being software quality improvement through enhanced software engineering, advanced software testing, and improved defect and fault prediction with machine learning and data mining. The analysis of hot topics suggests that future research will be directed toward developing and using the full spectrum of new artificial intelligence tools (not just machine learning and data mining) and toward assuring software quality in agile development paradigms.