A Full-fledged Commit Message Quality Checker Based on Machine Learning
Published in: Proceedings of the 47th Annual IEEE Computers, Software, and Applications Conference (COMPSAC'23)
Commit messages (CMs) are an essential part of version control. By providing important context regarding what has changed and why, they strongly support software maintenance and evolution. But writing good CMs is difficult and often neglected by developers. So far, there is no practice-ready tool that automatically assesses how well a CM is written, including its meaning and context. Since this task is challenging, we ask the research question: how well can CM quality, including semantics and context, be measured with machine learning methods? By considering all rules from the most popular CM quality guideline, creating datasets for those rules, and training and evaluating state-of-the-art machine learning models to check them, we answer the research question with: sufficiently well for practice, with the lowest F1 score, 82.9%, occurring on the most challenging task. We develop a full-fledged open-source framework that checks all these CM quality rules. It is useful for research, e.g., automatic CM generation, but most importantly for software practitioners, to raise the quality of CMs and thus the maintainability and evolution speed of their software.
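The mechanical subset of such guideline rules can be illustrated with a small hand-written checker. This is a minimal sketch under the assumption that the guideline includes common rules such as a short subject line, capitalization, no trailing period, and a blank line before the body; the function name and rule texts are illustrative, not taken from the paper, and the semantic and context checks described in the abstract require trained models and are not sketched here.

```python
def check_commit_message(message: str) -> list[str]:
    """Return a list of violated formatting rules (empty if all pass).

    Illustrative rules, assumed from a popular CM guideline:
    short subject, capitalized subject, no trailing period,
    blank line between subject and body, wrapped body lines.
    """
    lines = message.splitlines()
    violations = []
    subject = lines[0] if lines else ""

    if len(subject) > 50:
        violations.append("subject exceeds 50 characters")
    if subject.endswith("."):
        violations.append("subject ends with a period")
    if subject and not subject[0].isupper():
        violations.append("subject is not capitalized")
    if len(lines) > 1 and lines[1].strip() != "":
        violations.append("subject and body not separated by a blank line")
    if any(len(line) > 72 for line in lines[2:]):
        violations.append("body line exceeds 72 characters")
    return violations
```

Such surface checks cover only formatting; judging whether a message actually explains what changed and why is the part the paper addresses with machine learning.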