Large language models (LLMs) have recently achieved remarkable results on various software engineering (SE) tasks, owing to their exceptional ability to understand both natural language and source code; this suggests immense potential for test generation as well. While existing automated test generation tools have made significant progress in code coverage, they still struggle to comprehend the behavior of the programs under test. The test oracle problem remains a major hurdle: generating effective test assertions that determine whether program behavior meets expectations is still fundamentally difficult.
To address this issue, Ph.D. candidate Quanjun Zhang from the iSE Lab conducted extensive experiments to explore the practical performance of LLMs in generating unit test assertions. The study analyzes how well the generated assertions detect real software defects and proposes a retrieval-augmented assertion generation method that significantly improves the accuracy of assertions produced by existing LLMs. It also examines the effectiveness and limitations of LLM-based assertion generation from multiple perspectives, including the number of assertions, assertion types, and the size of the functions under test. The research offers several practical insights for future work on further improving the performance of LLMs in generating assertions.
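The general idea behind retrieval-augmented assertion generation is to retrieve a similar (test prefix, assertion) pair from a corpus and include it in the prompt given to the LLM. The sketch below illustrates this idea only in broad strokes; the corpus format, the Jaccard token-overlap similarity, and the prompt layout are illustrative assumptions, not the paper's actual implementation.

```python
import re

# NOTE: this is a minimal sketch of the retrieval-augmentation idea,
# not the method from the paper. All names and formats are assumptions.

def tokenize(code: str) -> set[str]:
    """Split code into a crude set of identifier-like tokens."""
    return set(re.findall(r"[A-Za-z_]\w*", code))

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_example(query: str, corpus: list[dict]) -> dict:
    """Return the corpus entry whose test prefix is most similar to the query."""
    q = tokenize(query)
    return max(corpus, key=lambda e: jaccard(q, tokenize(e["prefix"])))

def build_prompt(focal_method: str, test_prefix: str, corpus: list[dict]) -> str:
    """Prepend the most similar (prefix, assertion) pair to guide the LLM."""
    ex = retrieve_example(test_prefix, corpus)
    return (
        f"// Similar example:\n{ex['prefix']}\n{ex['assertion']}\n\n"
        f"// Focal method:\n{focal_method}\n"
        f"// Complete the assertion for this test:\n{test_prefix}\n"
    )

# Tiny hypothetical corpus of previously seen tests and their assertions.
corpus = [
    {"prefix": "int r = add(2, 3);", "assertion": "assertEquals(5, r);"},
    {"prefix": 'String s = trim(" hi ");', "assertion": 'assertEquals("hi", s);'},
]
prompt = build_prompt("int add(int a, int b) { return a + b; }",
                      "int r = add(1, 4);", corpus)
print(prompt)
```

The retrieved example gives the model a concrete template (here, an `assertEquals` on the result variable) that it can adapt to the new test prefix, which is what makes the retrieval step improve assertion accuracy.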
The study explores the application of LLMs in unit testing, helping developers generate high-quality test assertions that detect real-world software defects. The findings, presented in the paper "Exploring Automated Assertion Generation via Large Language Models," have been accepted by the ACM Transactions on Software Engineering and Methodology (TOSEM, CCF-A). The research is supported by the National Natural Science Foundation of China and the CCF-Huawei Poplar Fund for Software Engineering, and the technique is planned for integration into Huawei's developer toolset in the future.
Quanjun Zhang, jointly supervised by Professor Zhenyu Chen and Associate Professor Chunrong Fang, focuses on intelligent software testing and automated program repair. He has published several papers in top-tier software engineering journals and conferences, including ISSTA, ICSE, ASE, ACL, TSE, TDSC, and TOSEM.