KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation
MLA
Liu, Yongfei, et al. KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation. Sept. 2021. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsair&AN=edsair.doi.dedup.....71b3a544e19705e749a66315d22b3341.
APA
Liu, Y., Wu, C., Tseng, S., Lal, V., He, X., & Duan, N. (2021). KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation.
Chicago
Liu, Yongfei, Chenfei Wu, Shao-yen Tseng, Vasudev Lal, Xuming He, and Nan Duan. 2021. “KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation,” September. https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsair&AN=edsair.doi.dedup.....71b3a544e19705e749a66315d22b3341.