An Expert System for Learning Software Engineering Knowledge (with Case Studies in Understanding Static Code Warning)
Knowledge-based systems reason over a knowledge base. Hence, an important issue for such systems is how to acquire the knowledge needed for their inference. This paper assesses active learning methods for acquiring knowledge about "static code warnings". Static code analysis is a widely-used method for detecting bugs and security vulnerabilities in software systems. As software becomes more complex, analysis tools report increasingly long lists of complex warnings that developers need to address on a daily basis. Such static code analysis tools are often over-cautious; i.e., they report many warnings about spurious issues. Previous research shows that about 35% of the warnings reported as bugs by static analysis (SA) tools are actually unactionable (i.e., warnings that developers would not act on because they are falsely flagged as bugs). Experienced developers know which errors are important and which can be safely ignored. How can we capture that experience? This paper reports on an incremental AI tool that watches humans reading false alarm reports. Using an incremental support vector machine mechanism, this AI tool can quickly learn to distinguish spurious false alarms from more serious matters that deserve further attention. In this work, nine open-source projects are used to evaluate the proposed model on features extracted by previous researchers and to identify actionable warnings in the priority order given by our algorithm. We observe that the model can identify over 90% of the actionable warnings while allowing humans to ignore 70 to 80% of the spurious alarms.
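The incremental-learning loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes synthetic feature vectors standing in for the warning features, scikit-learn's `SGDClassifier` with hinge loss as the incremental linear SVM, and uncertainty sampling (the warning closest to the decision margin) as the policy for choosing which warning a human labels next.

```python
# Sketch of incremental active learning for static-warning triage.
# Assumptions (not from the paper): synthetic data, SGDClassifier with
# hinge loss as the incremental linear SVM, uncertainty sampling.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Synthetic "warnings": 200 examples, 5 features; label 1 = actionable.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = list(range(10))            # warnings a human has already reviewed
unlabeled = list(range(10, len(X)))  # warnings still awaiting review

clf = SGDClassifier(loss="hinge", random_state=0)  # incremental linear SVM
clf.partial_fit(X[labeled], y[labeled], classes=[0, 1])

for _ in range(30):
    # Uncertainty sampling: query the warning closest to the margin.
    scores = np.abs(clf.decision_function(X[unlabeled]))
    i = unlabeled.pop(int(np.argmin(scores)))
    # The human oracle supplies the true label for warning i (simulated here).
    clf.partial_fit(X[[i]], y[[i]])
    labeled.append(i)

# Rank the remaining warnings so likely-actionable ones surface first.
ranking = sorted(unlabeled, key=lambda j: -clf.decision_function(X[[j]])[0])
acc = clf.score(X, y)
```

After 30 query rounds, `ranking` orders the still-unreviewed warnings by decreasing classifier confidence that they are actionable, which is the priority-ordering behaviour the abstract describes: developers read from the top and can stop once the remaining tail is dominated by spurious alarms.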