Curating GitHub for Engineered Software Projects

Authors: Nuthan Munaiah, Steven Kroh, Craig Cabrey, and Meiyappan Nagappan

Venue: Empirical Software Engineering (EMSE), Vol. 22, No. 6, pp. 3219-3253, 2017

Year: 2017

Abstract: Software forges like GitHub host millions of repositories. Software engineering researchers have been able to take advantage of such a large corpus of potential study subjects with the help of tools like GHTorrent and Boa. However, the simplicity in querying comes with a caveat: there are limited means of separating the signal (e.g. repositories containing engineered software projects) from the noise (e.g. repositories containing homework assignments). The proportion of noise in a random sample of repositories could skew the study and may lead to researchers reaching unrealistic, potentially inaccurate, conclusions. We argue that it is imperative to have the ability to sieve out the noise in such large repository forges. We propose a framework, and present a reference implementation of the framework as a tool called reaper, to enable researchers to select GitHub repositories that contain evidence of an engineered software project. We identify software engineering practices (called dimensions) and propose means for validating their existence in a GitHub repository. We used reaper to measure the dimensions of 1,857,423 GitHub repositories. We then used manually classified data sets of repositories to train classifiers capable of predicting if a given GitHub repository contains an engineered software project. The performance of the classifiers was evaluated using a set of 200 repositories with known ground truth classification. We also compared the performance of the classifiers to other approaches to classification (e.g. number of GitHub Stargazers) and found our classifiers to outperform existing approaches. We found the stargazers-based classifier (with 10 as the threshold for number of stargazers) to exhibit high precision (97%) but much lower recall (32%). On the other hand, our best classifier exhibited both high precision (82%) and high recall (86%). The stargazer-based criterion offers precision but fails to recall a significant portion of the population.
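
Example (Python): for a concrete picture of the baseline the abstract compares against, the following is a minimal sketch of a stargazers-threshold classifier (using 10 stargazers as the cut-off, as in the abstract) together with a precision/recall evaluation against a manually labelled sample. The repository records and the evaluate helper are illustrative assumptions for this sketch and are not part of reaper itself.

    # Hedged sketch: stargazers-threshold baseline classifier (threshold = 10),
    # evaluated with precision and recall against a manually labelled sample.
    # The repository dictionaries and evaluate() helper are illustrative only.

    def stargazers_classifier(repository, threshold=10):
        """Label a repository as engineered if it has at least `threshold` stargazers."""
        return repository["stargazers"] >= threshold

    def evaluate(predictions, ground_truth):
        """Compute precision and recall of boolean predictions against ground-truth labels."""
        tp = sum(1 for p, t in zip(predictions, ground_truth) if p and t)
        fp = sum(1 for p, t in zip(predictions, ground_truth) if p and not t)
        fn = sum(1 for p, t in zip(predictions, ground_truth) if not p and t)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return precision, recall

    if __name__ == "__main__":
        # Toy sample: (repository, ground-truth label); True means engineered project.
        sample = [
            ({"stargazers": 250}, True),
            ({"stargazers": 3}, True),    # engineered project with few stargazers: missed by the baseline
            ({"stargazers": 1}, False),   # e.g. a homework assignment
            ({"stargazers": 40}, True),
        ]
        predictions = [stargazers_classifier(repo) for repo, _ in sample]
        labels = [label for _, label in sample]
        print(evaluate(predictions, labels))  # high precision, lower recall on this toy sample

On this toy sample the baseline never mislabels an unengineered repository, but it misses engineered projects with few stargazers, which mirrors the precision/recall trade-off reported in the abstract.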

BibTeX:

@article{nuthanmunaiah2017cgfesp,
    author = "Nuthan Munaiah and Steven Kroh and Craig Cabrey and Meiyappan Nagappan",
    title = "Curating GitHub for Engineered Software Projects",
    year = "2017",
    pages = "3219-3253",
    journal = "Empirical Software Engineering",
    volume = "22",
    number = "6"
}

Plain Text:

Nuthan Munaiah, Steven Kroh, Craig Cabrey, and Meiyappan Nagappan, "Curating GitHub for Engineered Software Projects," Empirical Software Engineering, vol. 22, no. 6, pp. 3219-3253, 2017.