- KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment. (ACL 2023)
  Lingzhi Wang, Tong Chen, Wei Yuan, Xingshan Zeng, Kam-Fai Wong, and Hongzhi Yin. [paper] [code]
- Knowledge Unlearning for Mitigating Privacy Risks in Language Models. (ACL 2023)
  Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. [paper] [code]
- Unlearn What You Want to Forget: Efficient Unlearning for LLMs.
  Jiaao Chen, Diyi Yang. [paper] [code]
- Large Language Model Unlearning.
  Yuanshun Yao, Xiaojun Xu, and Yang Liu. [paper] [code]
- DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models.
  Xinwei Wu, Junzhuo Li, Minghui Xu, Weilong Dong, Shuangzhi Wu, Chao Bian, and Deyi Xiong. [paper] [code]
- Who's Harry Potter? Approximate Unlearning in LLMs.
  Ronen Eldan, Mark Russinovich. [paper]
- Unlearning Bias in Language Models by Partitioning Gradients. (ACL 2023)
  Charles Yu, Sullam Jeoung, Anish Kasi, Pengfei Yu, Heng Ji. [paper] [code]
- Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data.
  Xinzhe Li, Ming Liu, Shang Gao. [paper]
- Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation.
  Xinshuo Hu, Dongfang Li, Zihao Zheng, Zhenyu Liu, Baotian Hu, Min Zhang. [paper]
- Making Harmful Behaviors Unlearnable for Large Language Models.
  Xin Zhou, Yi Lu, Ruotian Ma, Tao Gui, Qi Zhang, Xuanjing Huang. [paper]
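Several of the entries above (e.g., Knowledge Unlearning for Mitigating Privacy Risks in Language Models; Large Language Model Unlearning) build on a gradient-ascent-style objective that pushes the model's likelihood down on a forget set. Below is a minimal sketch of that core idea, assuming a Hugging Face causal LM; the model interface, hyperparameters, and the plain negated loss are illustrative, and the papers themselves typically add retain-set or KL regularization on top.

```python
# A minimal sketch of gradient-ascent unlearning on a forget set.
# Illustrative only: the bare negated LM loss is not the exact recipe of
# any single paper above.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer


def unlearn_by_gradient_ascent(model_name: str, forget_texts: list[str],
                               steps: int = 10, lr: float = 1e-5):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.train()
    optimizer = AdamW(model.parameters(), lr=lr)

    for step in range(steps):
        text = forget_texts[step % len(forget_texts)]
        batch = tokenizer(text, return_tensors="pt")
        # Standard next-token LM loss on the sequence to be forgotten...
        outputs = model(input_ids=batch["input_ids"],
                        attention_mask=batch["attention_mask"],
                        labels=batch["input_ids"])
        # ...negated, so each update *increases* the loss on the forget data.
        (-outputs.loss).backward()
        optimizer.step()
        optimizer.zero_grad()
    return model
```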
- Editing Models with Task Arithmetic.
  Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. [paper] [code]
- Composing Parameter-Efficient Modules with Arithmetic Operations.
  Jinghan Zhang, Shiqi Chen, Junteng Liu, and Junxian He. [paper] [code]
- Fuse to Forget: Bias Reduction and Selective Memorization through Model Fusion.
  Kerem Zaman, Leshem Choshen, Shashank Srivastava. [paper] [code]
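The task-arithmetic line of work above removes a behavior by negating a task vector: the fine-tuned weights minus the pre-trained weights, scaled and subtracted back from the base model. Here is a minimal sketch under that reading; the model names and scaling coefficient are placeholders, and parameter-efficient variants (as in Composing Parameter-Efficient Modules with Arithmetic Operations) apply the same arithmetic to adapter weights instead of full weights.

```python
# A minimal sketch of "forgetting via negation" with task vectors.
# Placeholder model names; the scaling coefficient is tuned in practice.
import torch
from transformers import AutoModelForCausalLM


def negate_task_vector(pretrained_name: str, finetuned_name: str,
                       scale: float = 1.0):
    base = AutoModelForCausalLM.from_pretrained(pretrained_name)
    tuned = AutoModelForCausalLM.from_pretrained(finetuned_name)

    base_sd = base.state_dict()
    tuned_sd = tuned.state_dict()
    edited_sd = {}
    for name, base_param in base_sd.items():
        if not torch.is_floating_point(base_param):
            # Leave integer buffers (e.g., position ids) untouched.
            edited_sd[name] = base_param
            continue
        # Task vector = fine-tuned weights minus pre-trained weights;
        # subtracting it (scaled) from the base model suppresses the
        # behavior acquired during fine-tuning.
        task_vector = tuned_sd[name] - base_param
        edited_sd[name] = base_param - scale * task_vector

    base.load_state_dict(edited_sd)
    return base
```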
- In-Context Unlearning: Language Models as Few-Shot Unlearners.
  Martin Pawelczyk, Seth Neel, and Himabindu Lakkaraju. [paper]
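In-context unlearning removes the influence of specific training points at inference time only: the points to be forgotten are placed in the prompt with flipped labels alongside correctly labeled examples, with no parameter update. The sketch below assumes a binary sentiment-classification setup; the prompt template and label names are illustrative.

```python
# A minimal sketch of an in-context-unlearning prompt builder for a binary
# classification task. Template and labels are illustrative assumptions.
def build_icul_prompt(forget_examples, retain_examples, query_text,
                      labels=("negative", "positive")):
    lines = []
    for text, label in forget_examples:
        # Show each example that should be "forgotten" with its label flipped.
        flipped = labels[1 - labels.index(label)]
        lines.append(f"Review: {text}\nSentiment: {flipped}")
    for text, label in retain_examples:
        # Other in-context examples keep their correct labels.
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The query is appended last; the model completes the final label.
    lines.append(f"Review: {query_text}\nSentiment:")
    return "\n\n".join(lines)
```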