Rethinking AI code generation: a one-shot correction approach based on user feedback

Bibliographic Details
Main Authors: Le, Kim Tuyen (Author), Andrzejak, Artur (Author)
Format: Article (Journal)
Language: English
Published: 12 July 2024
In: Automated Software Engineering
Year: 2024, Volume: 31, Issue: 2, Pages: 1-42
ISSN: 1573-7535
DOI: 10.1007/s10515-024-00451-y
Online Access: Publisher, free of charge, full text: https://doi.org/10.1007/s10515-024-00451-y
Description
Summary: Code generation has become an integral feature of modern IDEs, garnering significant attention. Notable approaches such as GitHub Copilot and TabNine have been proposed to tackle this task. However, these tools may shift code-writing tasks towards code reviewing, which involves modifications by users. Despite the advantages of user feedback, users' responses remain transient and do not persist across interaction sessions. This is attributable to an inherent characteristic of generative AI models: they require explicit re-training to integrate new data. Additionally, the non-deterministic and unpredictable nature of AI-powered models limits thorough examination of their unforeseen behaviors. We propose a methodology named One-shot Correction to mitigate these issues in natural-language-to-code translation models without additional re-training. We utilize decomposition techniques to break code translation down into sub-problems. The final code is constructed from code snippets for each query chunk, either extracted from user feedback or selectively generated by a generative model. Our evaluation indicates performance comparable to or better than that of other models. Moreover, the methodology is straightforward and interpretable, enabling in-depth examination of unexpected results and providing insights for potential enhancements. We also illustrate that user feedback can substantially improve code translation models without re-training. Finally, we develop a preliminary GUI application to demonstrate the utility of our methodology in simplifying the customization and assessment of suggested code for users.
Item Description: Viewed on 10 December 2024
Physical Description: Online resource