#1696 Efficient Text-to-Code Retrieval with Cascaded Fast and Slow Transformer Models



  • Michael Lyu

Accepted

[PDF] Submission (1.4MB) Feb 2, 2023, 9:54:40 PM AoE · 0636f91d75904403f6dd85aa61606baf22f34b3aab7a9cf4ec01af14f66ee7ce

The goal of semantic code search, or text-to-code search, is to retrieve a semantically relevant code snippet from an existing code database given a natural language query. Existing approaches are neither effective nor efficient enough for a practical semantic code search system. In this paper, we propose an efficient and accurate text-to-code search framework with cascaded fast and slow models: a fast transformer encoder model is learned to optimize a scalable index for fast retrieval, followed by a slow classification-based re-ranking model that improves the accuracy of the top-K results from the fast retrieval. To further reduce the high memory cost of deploying two separate models in practice, we propose to jointly train the fast and slow models on a single transformer encoder with shared parameters. Empirically, our cascaded method is not only efficient and scalable, but also achieves state-of-the-art results, with an average mean reciprocal rank (MRR) of 0.7795 across six programming languages on the CodeSearchNet benchmark, compared to the prior state-of-the-art of 0.740 MRR. Our codebase will be made publicly available.
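The two-stage cascade described in the abstract can be illustrated with a toy sketch: a fast stage scores the query embedding against a precomputed index of code embeddings, and a slow stage re-scores only the surviving top-K candidates with a more expensive pairwise scorer. All names, the hand-made 2-d "embeddings", and the token-overlap scorer below are hypothetical stand-ins for the paper's transformer encoders, chosen only to make the cascade runnable.

```python
import re

def fast_retrieve(query_vec, code_index, k):
    """Fast stage: score the query embedding against every precomputed
    code embedding by dot product and keep the top-k indices. (A real
    system would use a transformer bi-encoder and a scalable ANN index.)"""
    scores = [sum(q * c for q, c in zip(query_vec, row)) for row in code_index]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

def slow_rerank(query, candidate_ids, snippets, score_fn):
    """Slow stage: run an expensive pairwise scorer (standing in for the
    classification-based re-ranker) on only the top-k candidates."""
    return sorted(candidate_ids, key=lambda i: -score_fn(query, snippets[i]))

def token_overlap(query, snippet):
    # Toy stand-in for the slow cross-encoder score: count shared tokens.
    tokens = lambda s: set(re.findall(r"\w+", s.lower()))
    return len(tokens(query) & tokens(snippet))

# Toy code database: 4 snippets with hand-made 2-d "embeddings".
snippets = [
    "def add(a, b): return a + b",
    "def concat(a, b): return str(a) + str(b)",
    "def read_file(path): return open(path).read()",
    "def write_file(path, s): open(path, 'w').write(s)",
]
code_index = [[0.9, 0.1], [1.0, 0.0], [0.0, 1.0], [0.1, 0.9]]

query = "add numbers"
query_vec = [1.0, 0.05]

top2 = fast_retrieve(query_vec, code_index, k=2)              # fast stage keeps 2 of 4
reranked = slow_rerank(query, top2, snippets, token_overlap)  # slow stage reorders them
```

The point of the cascade is that the slow scorer, which must see the query and snippet jointly and is too expensive to run over the whole database, is applied only to the short candidate list produced by the indexable fast model; sharing one encoder between the two stages, as the paper proposes, removes the cost of hosting two separate models.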

A. Gotmare, J. Li, S. Joty, S. Hoi

  • Topics: Artificial intelligence and machine learning for software engineering
  • Compliance with Double Anonymous Submission Policy
  • Compliance with Open Science Policy
  • Compliance with ACM Policies
  • Compliance with ACM Publications Policy on Research Involving Human Participants and Subjects
