Cite
Model-tuning Via Prompts Makes NLP Models Adversarially Robust
MLA
Raman, Mrigank, et al. Model-Tuning Via Prompts Makes NLP Models Adversarially Robust. 2023. EBSCOhost, widgets.ebscohost.com/prod/customlink/proxify/proxify.php?count=1&encode=0&proxy=&find_1=&replace_1=&target=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsoai&AN=edsoai.on1381607884&authtype=sso&custid=ns315887.
APA
Raman, M., Maini, P., Kolter, J. Z., Lipton, Z. C., & Pruthi, D. (2023). Model-tuning via prompts makes NLP models adversarially robust.
Chicago
Raman, Mrigank, Pratyush Maini, J. Zico Kolter, Zachary C. Lipton, and Danish Pruthi. 2023. “Model-Tuning Via Prompts Makes NLP Models Adversarially Robust.” http://widgets.ebscohost.com/prod/customlink/proxify/proxify.php?count=1&encode=0&proxy=&find_1=&replace_1=&target=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsoai&AN=edsoai.on1381607884&authtype=sso&custid=ns315887.