On the Feasibility of Specialized Ability Extracting for Large Language Code Models

by Zongjie Li, et al.

Recent progress in large language code models (LLCMs) has led to a dramatic surge in their use for software development. Nevertheless, it is widely known that training a well-performing LLCM requires substantial human effort for data collection and high-quality annotation. Additionally, the training dataset may be proprietary (or only partially open to the public), and the training process is often conducted on a large-scale GPU cluster at high cost. Inspired by the recent success of imitation attacks in extracting computer vision and natural language models, this work launches the first imitation attack on LLCMs: by querying a target LLCM with carefully designed queries and collecting the outputs, the adversary can train an imitation model that closely mimics the behavior of the target LLCM. We systematically investigate the effectiveness of launching imitation attacks under different query schemes and different LLCM tasks. We also design novel methods to polish the LLCM outputs, resulting in an effective imitation training process. We summarize our findings and provide lessons harvested in this study that can help better depict the attack surface of LLCMs. Our research contributes to the growing body of knowledge on imitation attacks and defenses in deep neural models, particularly in the domain of code-related tasks.
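The attack loop the abstract describes (query the target, collect outputs, polish them, train an imitation model) can be sketched as follows. This is a toy illustration with hypothetical names, not the paper's actual pipeline: a real attack would call a black-box LLCM API and fine-tune a pretrained code model on the collected pairs, and the paper's output-polishing methods are stubbed here as a trivial filter.

```python
def query_target(prompt: str) -> str:
    """Stand-in for a black-box LLCM API call; the adversary only
    observes input/output pairs. Stub: return the code's first line."""
    return prompt.splitlines()[0]

def collect_dataset(prompts):
    """Step 1: query the target with carefully designed prompts and
    record its outputs as (input, output) training pairs."""
    return [(p, query_target(p)) for p in prompts]

def polish(pairs):
    """Step 2: clean the raw outputs. The paper proposes dedicated
    polishing methods; this stub only drops empty responses."""
    return [(p, o) for p, o in pairs if o.strip()]

def train_imitation(pairs):
    """Step 3: fit an imitation model on the pairs. A real attack would
    fine-tune a transformer; this stub simply memorizes the pairs."""
    memory = dict(pairs)
    return lambda prompt: memory.get(prompt, "")

# Assemble the full loop on two toy "query" programs.
prompts = ["def add(a, b):\n    return a + b", "x = 1\ny = 2"]
imitation = train_imitation(polish(collect_dataset(prompts)))
```

On queries it has seen, the stub imitation model reproduces the target's behavior exactly; the paper's experiments measure how closely a fine-tuned imitation model tracks the target on held-out inputs.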

