Reducing the Effort for Systematic Reviews in Software Engineering
Abstract
Background. Systematic Reviews (SRs) are a means to collect and synthesize evidence through the identification, analysis, and interpretation of multiple sources, or primary studies. To this end, they follow a well-defined methodology that mitigates the risk of bias and ensures repeatability for later updates. SRs, however, involve significant effort.
Goal. The goal of this paper is to introduce a new methodology that, among other benefits, reduces the amount of manual, tedious work involved in SRs while still taking advantage of the value provided by human expertise.
Method. Starting from current methodologies for SRs, we replaced the steps of keywording and data extraction with an automatic methodology for generating a domain ontology and classifying the primary studies. This methodology was then applied in the software engineering sub-areas of software architecture and software quality, and evaluated with human annotators.
Results. The result is a novel expert-driven automatic methodology for performing SRs, which combines ontology-learning techniques and semantic technologies with a human in the loop. The former, thanks to automation, fosters scalability, objectivity, reproducibility, and granularity of the studies; the latter allows tailoring to the specific focus of the study at hand, as well as the reuse of knowledge from domain experts.
Conclusions. By automating the less creative steps of SRs, our methodology allows researchers to skip the tedious tasks of keywording and manually classifying primary studies, thus freeing effort for analysis and discussion.