Research on gradient-optimization-based backdoor identification in large language models
Chen Jiahua1, Chen Yu2, Cao Qi3
1 School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610066, China; 2 School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China; 3 CAS Key Laboratory of AI Security, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Abstract: With the growing popularity of large language models (LLMs) and their adoption in ever more fields, security concerns about these models have also arisen. Training an LLM places extremely demanding requirements on datasets and computing resources, so most users directly adopt open-source datasets and models from the Internet, which provides a fertile breeding ground for backdoor attacks. In a backdoor attack, the model behaves normally on clean inputs, as if no backdoor had been injected, but produces abnormal outputs once the input contains the backdoor trigger. An effective way to defend against such attacks is backdoor identification. Gradient-based optimization methods are currently the most common approach, but the settings of their internal impact factors strongly influence identification performance. In this paper, the trigger token length, the number of nearest neighbors, and the noise scale are measured experimentally and their mechanisms of action are analyzed, providing a reference for researchers who apply these methods in the future.
Keywords: large language models; backdoor attack; gradient-based backdoor identification; impact factor
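To make the three impact factors concrete, the following minimal sketch illustrates one common style of gradient-based trigger inversion on a HuggingFace sequence-classification model: a continuous "soft" trigger is optimized by gradient descent toward an assumed target label, Gaussian noise smooths the optimization, and the result is projected onto nearest real tokens. The model name, the trigger insertion position, the optimizer settings, and all factor values are illustrative assumptions for exposition, not the exact procedure evaluated in this paper.

# Minimal sketch of gradient-based backdoor trigger inversion.
# All names and hyperparameter values below are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"   # assumed stand-in; its classification head is untrained here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()
for p in model.parameters():       # audit the model; do not update its weights
    p.requires_grad_(False)

# The three impact factors studied in the paper (placeholder values):
trigger_len = 3      # token length of the candidate trigger
num_neighbors = 10   # nearest-neighbor candidates kept per trigger position
noise_scale = 0.01   # scale of Gaussian noise added to the soft trigger

embedding_matrix = model.get_input_embeddings().weight        # (vocab_size, dim)
target_label = torch.tensor([1])                              # assumed attack target class

# Clean carrier sentence the candidate trigger is inserted into.
inputs = tokenizer("the movie was fine", return_tensors="pt")
carrier_embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()

# Initialize the soft trigger from random real-token embeddings,
# then optimize it as a continuous tensor.
init_ids = torch.randint(0, embedding_matrix.size(0), (1, trigger_len))
soft_trigger = embedding_matrix[init_ids].clone().detach().requires_grad_(True)
optimizer = torch.optim.Adam([soft_trigger], lr=0.1)

for step in range(100):
    optimizer.zero_grad()
    # Gaussian noise smooths the loss surface (the noise-scale factor).
    noisy_trigger = soft_trigger + noise_scale * torch.randn_like(soft_trigger)
    # Insert the trigger embeddings right after [CLS]; keep the rest of the carrier.
    full_embeds = torch.cat([carrier_embeds[:, :1],
                             noisy_trigger,
                             carrier_embeds[:, 1:]], dim=1)
    logits = model(inputs_embeds=full_embeds).logits
    loss = F.cross_entropy(logits, target_label)
    loss.backward()
    optimizer.step()

# Project each soft-trigger position onto its nearest real tokens;
# the nearest-neighbor factor controls how many candidates are kept.
with torch.no_grad():
    dists = torch.cdist(soft_trigger[0], embedding_matrix)    # (trigger_len, vocab_size)
    candidates = dists.topk(num_neighbors, largest=False).indices
print("candidate trigger tokens per position:")
for pos in range(trigger_len):
    print(tokenizer.convert_ids_to_tokens(candidates[pos].tolist()))

Under this framing, the trade-offs the paper measures are visible directly: a longer trigger_len enlarges the search space, a larger num_neighbors widens the discrete candidate set recovered from the continuous optimum, and noise_scale controls how aggressively the loss surface is smoothed during optimization.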