MBTSAD: Mitigating Backdoors in Language Models Based on Token Splitting and Attention Distillation
MLA
Ding, Yidong, et al. "MBTSAD: Mitigating Backdoors in Language Models Based on Token Splitting and Attention Distillation." arXiv, 2025, arXiv:2501.02754.
APA
Ding, Y., Niu, J., & Yi, P. (2025). MBTSAD: Mitigating backdoors in language models based on token splitting and attention distillation. arXiv. arXiv:2501.02754.
Chicago
Ding, Yidong, Jiafei Niu, and Ping Yi. 2025. "MBTSAD: Mitigating Backdoors in Language Models Based on Token Splitting and Attention Distillation." arXiv preprint. arXiv:2501.02754.