Editing Commonsense Knowledge in GPT

05/24/2023
by Anshita Gupta, et al.

Memory editing methods for updating encyclopedic knowledge in transformers have received increasing attention for their efficacy, specificity, and generalization advantages. However, it remains unclear if such methods can be adapted for the more nuanced domain of commonsense knowledge. We propose MEMIT_CSK, an adaptation of MEMIT to edit commonsense mistakes in GPT-2 Large and XL. We extend editing to various token locations and employ a robust layer selection strategy. Models edited by MEMIT_CSK outperform the fine-tuning baselines by 10.97% and 10.73% F1 scores on the PEP3k and 20Q datasets. We further propose a novel evaluation dataset, MEMIT-CSK-PROBE, that contains unaffected neighborhood, affected neighborhood, affected paraphrase, and affected reasoning challenges. MEMIT_CSK demonstrates favorable semantic generalization, outperforming fine-tuning baselines by 13.72% and 5.57% overall scores on MEMIT-CSK-PROBE. These results suggest a compelling future direction of incorporating context-specific user feedback concerning commonsense in GPT by direct model editing, rectifying and customizing model behaviors via human-in-the-loop systems.
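To make the editing mechanism concrete, the sketch below shows the kind of rank-one MLP weight update that MEMIT-style editors build on, applied to a single GPT-2 layer. This is a simplified illustration, not the authors' released implementation: the layer index, the key vector k (which would normally be derived from the edited subject's token representations, at the token locations the paper explores), and the target value v_star (normally found by optimization) are all stand-ins.

import torch
from transformers import GPT2LMHeadModel

# Simplified, MEMIT-style rank-one edit of one MLP projection matrix.
# The layer index, key k, and target value v_star are illustrative stand-ins;
# MEMIT_CSK's robust layer selection strategy would pick the layer empirically.
model = GPT2LMHeadModel.from_pretrained("gpt2-large")
layer = 20  # hypothetical edit layer
W = model.transformer.h[layer].mlp.c_proj.weight  # HF Conv1D weight, shape (4*d, d)

with torch.no_grad():
    k = torch.randn(W.shape[0])       # key: would come from the subject's tokens
    v_star = torch.randn(W.shape[1])  # target value: would come from gradient search
    v = k @ W                         # value the unedited layer assigns to this key
    # Rank-one update so the layer now maps k to v_star (k @ W_new == v_star):
    W += torch.outer(k, v_star - v) / (k @ k)

After such an update the chosen key produces the new value while other inputs are only minimally perturbed, which is the property behind the specificity and neighborhood results quoted above.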
