SE Factual Knowledge in Frozen Giant Code Model: A Study on FQN and its Retrieval

12/16/2022
by Qing Huang, et al.

Pre-trained giant code models (PCMs) are entering developers' daily practice. Understanding what types of software knowledge are packed into PCMs, and how much, is the foundation for incorporating PCMs into software engineering (SE) tasks and fully releasing their potential. In this work, we conduct the first systematic study of the SE factual knowledge in the state-of-the-art PCM Copilot, focusing on APIs' fully qualified names (FQNs), the fundamental knowledge for effective code analysis, search, and reuse. Driven by the data distribution properties of FQNs, we design a novel lightweight in-context learning method on Copilot for FQN inference, which requires neither code compilation, as traditional methods do, nor gradient updates, as recent FQN prompt-tuning does. We systematically experiment with five in-context learning design factors to identify the best in-context learning configuration that developers can adopt in practice. With this best configuration, we investigate the effects of the number of example prompts and of FQN data properties on Copilot's FQN inference capability. Our results confirm that Copilot stores diverse FQN knowledge and can be applied to FQN inference, owing to its high inference accuracy and non-reliance on code analysis. Based on our experience interacting with Copilot, we discuss various opportunities to improve human-Copilot interaction in the FQN inference task.
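To make the in-context learning setup concrete, the sketch below shows one plausible way a few-shot FQN-inference prompt could be assembled for a code model such as Copilot. The prompt format, the demonstration pairs, and the helper names are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of a few-shot prompt for FQN inference.
# The "API: <name> -> FQN: <fqn>" format and the example FQNs below are
# assumptions for illustration; the paper's actual prompt design may differ.

# Demonstration pairs: a simple API name as it appears in code,
# mapped to its fully qualified name (FQN).
EXAMPLES = [
    ("StringBuilder", "java.lang.StringBuilder"),
    ("HashMap", "java.util.HashMap"),
    ("FileReader", "java.io.FileReader"),
]

def build_prompt(simple_name: str) -> str:
    """Assemble a few-shot prompt: demonstration pairs followed by the query."""
    lines = ["Infer the fully qualified name (FQN) of each API:"]
    for name, fqn in EXAMPLES:
        lines.append(f"API: {name} -> FQN: {fqn}")
    # The model is expected to complete the FQN after the final arrow;
    # no compilation or gradient update is involved, only this text prompt.
    lines.append(f"API: {simple_name} -> FQN:")
    return "\n".join(lines)

if __name__ == "__main__":
    # The resulting string would be sent to the frozen code model as a
    # completion prompt, e.g. for the simple name "ArrayList".
    print(build_prompt("ArrayList"))
```

Under this kind of setup, varying the number and selection of demonstration pairs corresponds to the example-prompt factors the study experiments with.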

