46 results for "Kassner, Nora"
Search Results
2. Kids on the Street: Queer Kinship and Religion in San Francisco's Tenderloin by Joseph Plaster (review)
- Author: Kassner, Nora
- Published: 2024
3. Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?
- Author: Yang, Sohee, Kassner, Nora, Gribovskaya, Elena, Riedel, Sebastian, and Geva, Mor
- Subjects: Computer Science - Computation and Language
- Abstract:
We evaluate how well Large Language Models (LLMs) latently recall and compose facts to answer multi-hop queries like "In the year Scarlett Johansson was born, the Summer Olympics were hosted in the country of". One major challenge in evaluating this ability is that LLMs may have developed shortcuts by encounters of the head entity "Scarlett Johansson" and the answer entity "United States" in the same training sequences or merely guess the answer based on frequency-based priors. To prevent shortcuts, we exclude test queries where the head and answer entities co-appear in pretraining corpora. Through careful selection of relations and facts and systematic removal of cases where models might guess answers or exploit partial matches, we construct an evaluation dataset SOCRATES (ShOrtCut-fRee lATent rEaSoning). We observe that LLMs demonstrate promising latent multi-hop reasoning abilities without exploiting shortcuts, but only for certain types of queries. For queries requiring latent recall of countries as the intermediate answer, the best models achieve 80% latent composability, but this drops to just 5% for the recall of years. Comparisons with Chain-of-Thought composability highlight a significant gap between the ability of models to reason latently versus explicitly. Analysis reveals that latent representations of the intermediate answer are constructed more often in queries with higher latent composability, and shows the emergence of latent multi-hop reasoning during pretraining.
- Published: 2024
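The shortcut-exclusion step described in the abstract above (dropping test queries whose head and answer entities co-appear in pretraining sequences) can be sketched as follows. This is an illustrative sketch only: the corpus, query format, and function names are hypothetical, not the paper's actual SOCRATES pipeline.

```python
# Illustrative sketch of shortcut-free filtering for multi-hop queries:
# a query is excluded if its head entity and answer entity co-appear in
# any single pretraining sequence. Data and names are hypothetical.

def cooccur(head: str, answer: str, corpus: list) -> bool:
    """True if both entities appear in the same training sequence."""
    return any(head in seq and answer in seq for seq in corpus)

def filter_shortcut_free(queries: list, corpus: list) -> list:
    """Keep only queries whose head/answer pair never co-occurs."""
    return [q for q in queries
            if not cooccur(q["head"], q["answer"], corpus)]

corpus = [
    "Scarlett Johansson was born in 1984 in New York.",
    "The 1984 Summer Olympics were hosted by the United States.",
]
queries = [
    # Kept: no single sequence mentions both entities, so the model
    # cannot have memorized the pair as a shortcut.
    {"head": "Scarlett Johansson", "answer": "United States"},
]
print(len(filter_shortcut_free(queries, corpus)))  # -> 1
```

A real implementation would operate over indexed pretraining corpora with entity linking rather than substring matching, but the filtering logic is the same.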
4. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
- Author:
Gemini Team, Georgiev, Petko, Lei, Ving Ian, Burnell, Ryan, Bai, Libin, Gulati, Anmol, Tanzer, Garrett, Vincent, Damien, Pan, Zhufeng, Wang, Shibo, Mariooryad, Soroosh, Ding, Yifan, Geng, Xinyang, Alcober, Fred, Frostig, Roy, Omernick, Mark, Walker, Lexi, Paduraru, Cosmin, Sorokin, Christina, Tacchetti, Andrea, Gaffney, Colin, Daruki, Samira, Sercinoglu, Olcan, Gleicher, Zach, Love, Juliette, Voigtlaender, Paul, Jain, Rohan, Surita, Gabriela, Mohamed, Kareem, Blevins, Rory, Ahn, Junwhan, Zhu, Tao, Kawintiranon, Kornraphop, Firat, Orhan, Gu, Yiming, Zhang, Yujing, Rahtz, Matthew, Faruqui, Manaal, Clay, Natalie, Gilmer, Justin, Co-Reyes, JD, Penchev, Ivo, Zhu, Rui, Morioka, Nobuyuki, Hui, Kevin, Haridasan, Krishna, Campos, Victor, Mahdieh, Mahdis, Guo, Mandy, Hassan, Samer, Kilgour, Kevin, Vezer, Arpi, Cheng, Heng-Tze, de Liedekerke, Raoul, Goyal, Siddharth, Barham, Paul, Strouse, DJ, Noury, Seb, Adler, Jonas, Sundararajan, Mukund, Vikram, Sharad, Lepikhin, Dmitry, Paganini, Michela, Garcia, Xavier, Yang, Fan, Valter, Dasha, Trebacz, Maja, Vodrahalli, Kiran, Asawaroengchai, Chulayuth, Ring, Roman, Kalb, Norbert, Soares, Livio Baldini, Brahma, Siddhartha, Steiner, David, Yu, Tianhe, Mentzer, Fabian, He, Antoine, Gonzalez, Lucas, Xu, Bibo, Kaufman, Raphael Lopez, Shafey, Laurent El, Oh, Junhyuk, Hennigan, Tom, Driessche, George van den, Odoom, Seth, Lucic, Mario, Roelofs, Becca, Lall, Sid, Marathe, Amit, Chan, Betty, Ontanon, Santiago, He, Luheng, Teplyashin, Denis, Lai, Jonathan, Crone, Phil, Damoc, Bogdan, Ho, Lewis, Riedel, Sebastian, Lenc, Karel, Yeh, Chih-Kuan, Chowdhery, Aakanksha, Xu, Yang, Kazemi, Mehran, Amid, Ehsan, Petrushkina, Anastasia, Swersky, Kevin, Khodaei, Ali, Chen, Gowoon, Larkin, Chris, Pinto, Mario, Yan, Geng, Badia, Adria Puigdomenech, Patil, Piyush, Hansen, Steven, Orr, Dave, Arnold, Sebastien M. 
R., Grimstad, Jordan, Dai, Andrew, Douglas, Sholto, Sinha, Rishika, Yadav, Vikas, Chen, Xi, Gribovskaya, Elena, Austin, Jacob, Zhao, Jeffrey, Patel, Kaushal, Komarek, Paul, Austin, Sophia, Borgeaud, Sebastian, Friso, Linda, Goyal, Abhimanyu, Caine, Ben, Cao, Kris, Chung, Da-Woon, Lamm, Matthew, Barth-Maron, Gabe, Kagohara, Thais, Olszewska, Kate, Chen, Mia, Shivakumar, Kaushik, Agarwal, Rishabh, Godhia, Harshal, Rajwar, Ravi, Snaider, Javier, Dotiwalla, Xerxes, Liu, Yuan, Barua, Aditya, Ungureanu, Victor, Zhang, Yuan, Batsaikhan, Bat-Orgil, Wirth, Mateo, Qin, James, Danihelka, Ivo, Doshi, Tulsee, Chadwick, Martin, Chen, Jilin, Jain, Sanil, Le, Quoc, Kar, Arjun, Gurumurthy, Madhu, Li, Cheng, Sang, Ruoxin, Liu, Fangyu, Lamprou, Lampros, Munoz, Rich, Lintz, Nathan, Mehta, Harsh, Howard, Heidi, Reynolds, Malcolm, Aroyo, Lora, Wang, Quan, Blanco, Lorenzo, Cassirer, Albin, Griffith, Jordan, Das, Dipanjan, Lee, Stephan, Sygnowski, Jakub, Fisher, Zach, Besley, James, Powell, Richard, Ahmed, Zafarali, Paulus, Dominik, Reitter, David, Borsos, Zalan, Joshi, Rishabh, Pope, Aedan, Hand, Steven, Selo, Vittorio, Jain, Vihan, Sethi, Nikhil, Goel, Megha, Makino, Takaki, May, Rhys, Yang, Zhen, Schalkwyk, Johan, Butterfield, Christina, Hauth, Anja, Goldin, Alex, Hawkins, Will, Senter, Evan, Brin, Sergey, Woodman, Oliver, Ritter, Marvin, Noland, Eric, Giang, Minh, Bolina, Vijay, Lee, Lisa, Blyth, Tim, Mackinnon, Ian, Reid, Machel, Sarvana, Obaid, Silver, David, Chen, Alexander, Wang, Lily, Maggiore, Loren, Chang, Oscar, Attaluri, Nithya, Thornton, Gregory, Chiu, Chung-Cheng, Bunyan, Oskar, Levine, Nir, Chung, Timothy, Eltyshev, Evgenii, Si, Xiance, Lillicrap, Timothy, Brady, Demetra, Aggarwal, Vaibhav, Wu, Boxi, Xu, Yuanzhong, McIlroy, Ross, Badola, Kartikeya, Sandhu, Paramjit, Moreira, Erica, Stokowiec, Wojciech, Hemsley, Ross, Li, Dong, Tudor, Alex, Shyam, Pranav, Rahimtoroghi, Elahe, Haykal, Salem, Sprechmann, Pablo, Zhou, Xiang, Mincu, Diana, Li, Yujia, Addanki, Ravi, Krishna, 
Kalpesh, Wu, Xiao, Frechette, Alexandre, Eyal, Matan, Dafoe, Allan, Lacey, Dave, Whang, Jay, Avrahami, Thi, Zhang, Ye, Taropa, Emanuel, Lin, Hanzhao, Toyama, Daniel, Rutherford, Eliza, Sano, Motoki, Choe, HyunJeong, Tomala, Alex, Safranek-Shrader, Chalence, Kassner, Nora, Pajarskas, Mantas, Harvey, Matt, Sechrist, Sean, Fortunato, Meire, Lyu, Christina, Elsayed, Gamaleldin, Kuang, Chenkai, Lottes, James, Chu, Eric, Jia, Chao, Chen, Chih-Wei, Humphreys, Peter, Baumli, Kate, Tao, Connie, Samuel, Rajkumar, Santos, Cicero Nogueira dos, Andreassen, Anders, Rakićević, Nemanja, Grewe, Dominik, Kumar, Aviral, Winkler, Stephanie, Caton, Jonathan, Brock, Andrew, Dalmia, Sid, Sheahan, Hannah, Barr, Iain, Miao, Yingjie, Natsev, Paul, Devlin, Jacob, Behbahani, Feryal, Prost, Flavien, Sun, Yanhua, Myaskovsky, Artiom, Pillai, Thanumalayan Sankaranarayana, Hurt, Dan, Lazaridou, Angeliki, Xiong, Xi, Zheng, Ce, Pardo, Fabio, Li, Xiaowei, Horgan, Dan, Stanton, Joe, Ambar, Moran, Xia, Fei, Lince, Alejandro, Wang, Mingqiu, Mustafa, Basil, Webson, Albert, Lee, Hyo, Anil, Rohan, Wicke, Martin, Dozat, Timothy, Sinha, Abhishek, Piqueras, Enrique, Dabir, Elahe, Upadhyay, Shyam, Boral, Anudhyan, Hendricks, Lisa Anne, Fry, Corey, Djolonga, Josip, Su, Yi, Walker, Jake, Labanowski, Jane, Huang, Ronny, Misra, Vedant, Chen, Jeremy, Skerry-Ryan, RJ, Singh, Avi, Rijhwani, Shruti, Yu, Dian, Castro-Ros, Alex, Changpinyo, Beer, Datta, Romina, Bagri, Sumit, Hrafnkelsson, Arnar Mar, Maggioni, Marcello, Zheng, Daniel, Sulsky, Yury, Hou, Shaobo, Paine, Tom Le, Yang, Antoine, Riesa, Jason, Rogozinska, Dominika, Marcus, Dror, Badawy, Dalia El, Zhang, Qiao, Wang, Luyu, Miller, Helen, Greer, Jeremy, Sjos, Lars Lowe, Nova, Azade, Zen, Heiga, Chaabouni, Rahma, Rosca, Mihaela, Jiang, Jiepu, Chen, Charlie, Liu, Ruibo, Sainath, Tara, Krikun, Maxim, Polozov, Alex, Lespiau, Jean-Baptiste, Newlan, Josh, Cankara, Zeyncep, Kwak, Soo, Xu, Yunhan, Chen, Phil, Coenen, Andy, Meyer, Clemens, Tsihlas, Katerina, Ma, Ada, 
Gottweis, Juraj, Xing, Jinwei, Gu, Chenjie, Miao, Jin, Frank, Christian, Cankara, Zeynep, Ganapathy, Sanjay, Dasgupta, Ishita, Hughes-Fitt, Steph, Chen, Heng, Reid, David, Rong, Keran, Fan, Hongmin, van Amersfoort, Joost, Zhuang, Vincent, Cohen, Aaron, Gu, Shixiang Shane, Mohananey, Anhad, Ilic, Anastasija, Tobin, Taylor, Wieting, John, Bortsova, Anna, Thacker, Phoebe, Wang, Emma, Caveness, Emily, Chiu, Justin, Sezener, Eren, Kaskasoli, Alex, Baker, Steven, Millican, Katie, Elhawaty, Mohamed, Aisopos, Kostas, Lebsack, Carl, Byrd, Nathan, Dai, Hanjun, Jia, Wenhao, Wiethoff, Matthew, Davoodi, Elnaz, Weston, Albert, Yagati, Lakshman, Ahuja, Arun, Gao, Isabel, Pundak, Golan, Zhang, Susan, Azzam, Michael, Sim, Khe Chai, Caelles, Sergi, Keeling, James, Sharma, Abhanshu, Swing, Andy, Li, YaGuang, Liu, Chenxi, Bostock, Carrie Grimes, Bansal, Yamini, Nado, Zachary, Anand, Ankesh, Lipschultz, Josh, Karmarkar, Abhijit, Proleev, Lev, Ittycheriah, Abe, Yeganeh, Soheil Hassas, Polovets, George, Faust, Aleksandra, Sun, Jiao, Rrustemi, Alban, Li, Pen, Shivanna, Rakesh, Liu, Jeremiah, Welty, Chris, Lebron, Federico, Baddepudi, Anirudh, Krause, Sebastian, Parisotto, Emilio, Soricut, Radu, Xu, Zheng, Bloxwich, Dawn, Johnson, Melvin, Neyshabur, Behnam, Mao-Jones, Justin, Wang, Renshen, Ramasesh, Vinay, Abbas, Zaheer, Guez, Arthur, Segal, Constant, Nguyen, Duc Dung, Svensson, James, Hou, Le, York, Sarah, Milan, Kieran, Bridgers, Sophie, Gworek, Wiktor, Tagliasacchi, Marco, Lee-Thorp, James, Chang, Michael, Guseynov, Alexey, Hartman, Ale Jakse, Kwong, Michael, Zhao, Ruizhe, Kashem, Sheleem, Cole, Elizabeth, Miech, Antoine, Tanburn, Richard, Phuong, Mary, Pavetic, Filip, Cevey, Sebastien, Comanescu, Ramona, Ives, Richard, Yang, Sherry, Du, Cosmo, Li, Bo, Zhang, Zizhao, Iinuma, Mariko, Hu, Clara Huiyi, Roy, Aurko, Bijwadia, Shaan, Zhu, Zhenkai, Martins, Danilo, Saputro, Rachel, Gergely, Anita, Zheng, Steven, Jia, Dawei, Antonoglou, Ioannis, Sadovsky, Adam, Gu, Shane, Bi, Yingying, 
Andreev, Alek, Samangooei, Sina, Khan, Mina, Kocisky, Tomas, Filos, Angelos, Kumar, Chintu, Bishop, Colton, Yu, Adams, Hodkinson, Sarah, Mittal, Sid, Shah, Premal, Moufarek, Alexandre, Cheng, Yong, Bloniarz, Adam, Lee, Jaehoon, Pejman, Pedram, Michel, Paul, Spencer, Stephen, Feinberg, Vladimir, Xiong, Xuehan, Savinov, Nikolay, Smith, Charlotte, Shakeri, Siamak, Tran, Dustin, Chesus, Mary, Bohnet, Bernd, Tucker, George, von Glehn, Tamara, Muir, Carrie, Mao, Yiran, Kazawa, Hideto, Slone, Ambrose, Soparkar, Kedar, Shrivastava, Disha, Cobon-Kerr, James, Sharman, Michael, Pavagadhi, Jay, Araya, Carlos, Misiunas, Karolis, Ghelani, Nimesh, Laskin, Michael, Barker, David, Li, Qiujia, Briukhov, Anton, Houlsby, Neil, Glaese, Mia, Lakshminarayanan, Balaji, Schucher, Nathan, Tang, Yunhao, Collins, Eli, Lim, Hyeontaek, Feng, Fangxiaoyu, Recasens, Adria, Lai, Guangda, Magni, Alberto, De Cao, Nicola, Siddhant, Aditya, Ashwood, Zoe, Orbay, Jordi, Dehghani, Mostafa, Brennan, Jenny, He, Yifan, Xu, Kelvin, Gao, Yang, Saroufim, Carl, Molloy, James, Wu, Xinyi, Arnold, Seb, Chang, Solomon, Schrittwieser, Julian, Buchatskaya, Elena, Radpour, Soroush, Polacek, Martin, Giordano, Skye, Bapna, Ankur, Tokumine, Simon, Hellendoorn, Vincent, Sottiaux, Thibault, Cogan, Sarah, Severyn, Aliaksei, Saleh, Mohammad, Thakoor, Shantanu, Shefey, Laurent, Qiao, Siyuan, Gaba, Meenu, Chang, Shuo-yiin, Swanson, Craig, Zhang, Biao, Lee, Benjamin, Rubenstein, Paul Kishan, Song, Gan, Kwiatkowski, Tom, Koop, Anna, Kannan, Ajay, Kao, David, Schuh, Parker, Stjerngren, Axel, Ghiasi, Golnaz, Gibson, Gena, Vilnis, Luke, Yuan, Ye, Ferreira, Felipe Tiengo, Kamath, Aishwarya, Klimenko, Ted, Franko, Ken, Xiao, Kefan, Bhattacharya, Indro, Patel, Miteyan, Wang, Rui, Morris, Alex, Strudel, Robin, Sharma, Vivek, Choy, Peter, Hashemi, Sayed Hadi, Landon, Jessica, Finkelstein, Mara, Jhakra, Priya, Frye, Justin, Barnes, Megan, Mauger, Matthew, Daun, Dennis, Baatarsukh, Khuslen, Tung, Matthew, Farhan, Wael, Michalewski, Henryk, 
Viola, Fabio, Quitry, Felix de Chaumont, Lan, Charline Le, Hudson, Tom, Wang, Qingze, Fischer, Felix, Zheng, Ivy, White, Elspeth, Dragan, Anca, Alayrac, Jean-baptiste, Ni, Eric, Pritzel, Alexander, Iwanicki, Adam, Isard, Michael, Bulanova, Anna, Zilka, Lukas, Dyer, Ethan, Sachan, Devendra, Srinivasan, Srivatsan, Muckenhirn, Hannah, Cai, Honglong, Mandhane, Amol, Tariq, Mukarram, Rae, Jack W., Wang, Gary, Ayoub, Kareem, FitzGerald, Nicholas, Zhao, Yao, Han, Woohyun, Alberti, Chris, Garrette, Dan, Krishnakumar, Kashyap, Gimenez, Mai, Levskaya, Anselm, Sohn, Daniel, Matak, Josip, Iturrate, Inaki, Chang, Michael B., Xiang, Jackie, Cao, Yuan, Ranka, Nishant, Brown, Geoff, Hutter, Adrian, Mirrokni, Vahab, Chen, Nanxin, Yao, Kaisheng, Egyed, Zoltan, Galilee, Francois, Liechty, Tyler, Kallakuri, Praveen, Palmer, Evan, Ghemawat, Sanjay, Liu, Jasmine, Tao, David, Thornton, Chloe, Green, Tim, Jasarevic, Mimi, Lin, Sharon, Cotruta, Victor, Tan, Yi-Xuan, Fiedel, Noah, Yu, Hongkun, Chi, Ed, Neitz, Alexander, Heitkaemper, Jens, Sinha, Anu, Zhou, Denny, Sun, Yi, Kaed, Charbel, Hulse, Brice, Mishra, Swaroop, Georgaki, Maria, Kudugunta, Sneha, Farabet, Clement, Shafran, Izhak, Vlasic, Daniel, Tsitsulin, Anton, Ananthanarayanan, Rajagopal, Carin, Alen, Su, Guolong, Sun, Pei, V, Shashank, Carvajal, Gabriel, Broder, Josef, Comsa, Iulia, Repina, Alena, Wong, William, Chen, Warren Weilun, Hawkins, Peter, Filonov, Egor, Loher, Lucia, Hirnschall, Christoph, Wang, Weiyi, Ye, Jingchen, Burns, Andrea, Cate, Hardie, Wright, Diana Gage, Piccinini, Federico, Zhang, Lei, Lin, Chu-Cheng, Gog, Ionel, Kulizhskaya, Yana, Sreevatsa, Ashwin, Song, Shuang, Cobo, Luis C., Iyer, Anand, Tekur, Chetan, Garrido, Guillermo, Xiao, Zhuyun, Kemp, Rupert, Zheng, Huaixiu Steven, Li, Hui, Agarwal, Ananth, Ngani, Christel, Goshvadi, Kati, Santamaria-Fernandez, Rebeca, Fica, Wojciech, Chen, Xinyun, Gorgolewski, Chris, Sun, Sean, Garg, Roopal, Ye, Xinyu, Eslami, S. M. 
Ali, Hua, Nan, Simon, Jon, Joshi, Pratik, Kim, Yelin, Tenney, Ian, Potluri, Sahitya, Thiet, Lam Nguyen, Yuan, Quan, Luisier, Florian, Chronopoulou, Alexandra, Scellato, Salvatore, Srinivasan, Praveen, Chen, Minmin, Koverkathu, Vinod, Dalibard, Valentin, Xu, Yaming, Saeta, Brennan, Anderson, Keith, Sellam, Thibault, Fernando, Nick, Huot, Fantine, Jung, Junehyuk, Varadarajan, Mani, Quinn, Michael, Raul, Amit, Le, Maigo, Habalov, Ruslan, Clark, Jon, Jalan, Komal, Bullard, Kalesha, Singhal, Achintya, Luong, Thang, Wang, Boyu, Rajayogam, Sujeevan, Eisenschlos, Julian, Jia, Johnson, Finchelstein, Daniel, Yakubovich, Alex, Balle, Daniel, Fink, Michael, Agarwal, Sameer, Li, Jing, Dvijotham, Dj, Pal, Shalini, Kang, Kai, Konzelmann, Jaclyn, Beattie, Jennifer, Dousse, Olivier, Wu, Diane, Crocker, Remi, Elkind, Chen, Jonnalagadda, Siddhartha Reddy, Lee, Jong, Holtmann-Rice, Dan, Kallarackal, Krystal, Liu, Rosanne, Vnukov, Denis, Vats, Neera, Invernizzi, Luca, Jafari, Mohsen, Zhou, Huanjie, Taylor, Lilly, Prendki, Jennifer, Wu, Marcus, Eccles, Tom, Liu, Tianqi, Kopparapu, Kavya, Beaufays, Francoise, Angermueller, Christof, Marzoca, Andreea, Sarcar, Shourya, Dib, Hilal, Stanway, Jeff, Perbet, Frank, Trdin, Nejc, Sterneck, Rachel, Khorlin, Andrey, Li, Dinghua, Wu, Xihui, Goenka, Sonam, Madras, David, Goldshtein, Sasha, Gierke, Willi, Zhou, Tong, Liu, Yaxin, Liang, Yannie, White, Anais, Li, Yunjie, Singh, Shreya, Bahargam, Sanaz, Epstein, Mark, Basu, Sujoy, Lao, Li, Ozturel, Adnan, Crous, Carl, Zhai, Alex, Lu, Han, Tung, Zora, Gaur, Neeraj, Walton, Alanna, Dixon, Lucas, Zhang, Ming, Globerson, Amir, Uy, Grant, Bolt, Andrew, Wiles, Olivia, Nasr, Milad, Shumailov, Ilia, Selvi, Marco, Piccinno, Francesco, Aguilar, Ricardo, McCarthy, Sara, Khalman, Misha, Shukla, Mrinal, Galic, Vlado, Carpenter, John, Villela, Kevin, Zhang, Haibin, Richardson, Harry, Martens, James, Bosnjak, Matko, Belle, Shreyas Rammohan, Seibert, Jeff, Alnahlawi, Mahmoud, McWilliams, Brian, Singh, Sankalp, Louis, 
Annie, Ding, Wen, Popovici, Dan, Simicich, Lenin, Knight, Laura, Mehta, Pulkit, Gupta, Nishesh, Shi, Chongyang, Fatehi, Saaber, Mitrovic, Jovana, Grills, Alex, Pagadora, Joseph, Munkhdalai, Tsendsuren, Petrova, Dessie, Eisenbud, Danielle, Zhang, Zhishuai, Yates, Damion, Mittal, Bhavishya, Tripuraneni, Nilesh, Assael, Yannis, Brovelli, Thomas, Jain, Prateek, Velimirovic, Mihajlo, Akbulut, Canfer, Mu, Jiaqi, Macherey, Wolfgang, Kumar, Ravin, Xu, Jun, Qureshi, Haroon, Comanici, Gheorghe, Wiesner, Jeremy, Gong, Zhitao, Ruddock, Anton, Bauer, Matthias, Felt, Nick, GP, Anirudh, Arnab, Anurag, Zelle, Dustin, Rothfuss, Jonas, Rosgen, Bill, Shenoy, Ashish, Seybold, Bryan, Li, Xinjian, Mudigonda, Jayaram, Erdogan, Goker, Xia, Jiawei, Simsa, Jiri, Michi, Andrea, Yao, Yi, Yew, Christopher, Kan, Steven, Caswell, Isaac, Radebaugh, Carey, Elisseeff, Andre, Valenzuela, Pedro, McKinney, Kay, Paterson, Kim, Cui, Albert, Latorre-Chimoto, Eri, Kim, Solomon, Zeng, William, Durden, Ken, Ponnapalli, Priya, Sosea, Tiberiu, Choquette-Choo, Christopher A., Manyika, James, Robenek, Brona, Vashisht, Harsha, Pereira, Sebastien, Lam, Hoi, Velic, Marko, Owusu-Afriyie, Denese, Lee, Katherine, Bolukbasi, Tolga, Parrish, Alicia, Lu, Shawn, Park, Jane, Venkatraman, Balaji, Talbert, Alice, Rosique, Lambert, Cheng, Yuchung, Sozanschi, Andrei, Paszke, Adam, Kumar, Praveen, Austin, Jessica, Li, Lu, Salama, Khalid, Perz, Bartek, Kim, Wooyeol, Dukkipati, Nandita, Baryshnikov, Anthony, Kaplanis, Christos, Sheng, XiangHai, Chervonyi, Yuri, Unlu, Caglar, Casas, Diego de Las, Askham, Harry, Tunyasuvunakool, Kathryn, Gimeno, Felix, Poder, Siim, Kwak, Chester, Miecnikowski, Matt, Dimitriev, Alek, Parisi, Aaron, Liu, Dangyi, Tsai, Tomy, Shevlane, Toby, Kouridi, Christina, Garmon, Drew, Goedeckemeyer, Adrian, Brown, Adam R., Vijayakumar, Anitha, Elqursh, Ali, Jazayeri, Sadegh, Huang, Jin, Carthy, Sara Mc, Hoover, Jay, Kim, Lucy, Kumar, Sandeep, Chen, Wei, Biles, Courtney, Bingham, Garrett, Rosen, Evan, Wang, 
Lisa, Tan, Qijun, Engel, David, Pongetti, Francesco, de Cesare, Dario, Hwang, Dongseong, Yu, Lily, Pullman, Jennifer, Narayanan, Srini, Levin, Kyle, Gopal, Siddharth, Li, Megan, Aharoni, Asaf, Trinh, Trieu, Lo, Jessica, Casagrande, Norman, Vij, Roopali, Matthey, Loic, Ramadhana, Bramandia, Matthews, Austin, Carey, CJ, Johnson, Matthew, Goranova, Kremena, Shah, Rohin, Ashraf, Shereen, Dasgupta, Kingshuk, Larsen, Rasmus, Wang, Yicheng, Vuyyuru, Manish Reddy, Jiang, Chong, Ijazi, Joana, Osawa, Kazuki, Smith, Celine, Boppana, Ramya Sree, Bilal, Taylan, Koizumi, Yuma, Xu, Ying, Altun, Yasemin, Shabat, Nir, Bariach, Ben, Korchemniy, Alex, Choo, Kiam, Ronneberger, Olaf, Iwuanyanwu, Chimezie, Zhao, Shubin, Soergel, David, Hsieh, Cho-Jui, Cai, Irene, Iqbal, Shariq, Sundermeyer, Martin, Chen, Zhe, Bursztein, Elie, Malaviya, Chaitanya, Biadsy, Fadi, Shroff, Prakash, Dhillon, Inderjit, Latkar, Tejasi, Dyer, Chris, Forbes, Hannah, Nicosia, Massimo, Nikolaev, Vitaly, Greene, Somer, Georgiev, Marin, Wang, Pidong, Martin, Nina, Sedghi, Hanie, Zhang, John, Banzal, Praseem, Fritz, Doug, Rao, Vikram, Wang, Xuezhi, Zhang, Jiageng, Patraucean, Viorica, Du, Dayou, Mordatch, Igor, Jurin, Ivan, Liu, Lewis, Dubey, Ayush, Mohan, Abhi, Nowakowski, Janek, Ion, Vlad-Doru, Wei, Nan, Tojo, Reiko, Raad, Maria Abi, Hudson, Drew A., Keshava, Vaishakh, Agrawal, Shubham, Ramirez, Kevin, Wu, Zhichun, Nguyen, Hoang, Liu, Ji, Sewak, Madhavi, Petrini, Bryce, Choi, DongHyun, Philips, Ivan, Wang, Ziyue, Bica, Ioana, Garg, Ankush, Wilkiewicz, Jarek, Agrawal, Priyanka, Guo, Danhao, Xue, Emily, Shaik, Naseer, Leach, Andrew, Khan, Sadh MNM, Wiesinger, Julia, Jerome, Sammy, Chakladar, Abhishek, Wang, Alek Wenjiao, Ornduff, Tina, Abu, Folake, Ghaffarkhah, Alireza, Wainwright, Marcus, Cortes, Mario, Liu, Frederick, Maynez, Joshua, Terzis, Andreas, Samangouei, Pouya, Mansour, Riham, Kępa, Tomasz, Aubet, François-Xavier, Algymr, Anton, Banica, Dan, Weisz, Agoston, Orban, Andras, Senges, Alexandre, Andrejczuk, Ewa, 
Geller, Mark, Santo, Niccolo Dal, Anklin, Valentin, Merey, Majd Al, Baeuml, Martin, Strohman, Trevor, Bai, Junwen, Petrov, Slav, Wu, Yonghui, Hassabis, Demis, Kavukcuoglu, Koray, Dean, Jeff, and Vinyals, Oriol
- Subjects: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract:
In this report, we introduce the Gemini 1.5 family of models, representing the next generation of highly compute-efficient multimodal models capable of recalling and reasoning over fine-grained information from millions of tokens of context, including multiple long documents and hours of video and audio. The family includes two new models: (1) an updated Gemini 1.5 Pro, which exceeds the February version on the great majority of capabilities and benchmarks; (2) Gemini 1.5 Flash, a more lightweight variant designed for efficiency with minimal regression in quality. Gemini 1.5 models achieve near-perfect recall on long-context retrieval tasks across modalities, improve the state-of-the-art in long-document QA, long-video QA and long-context ASR, and match or surpass Gemini 1.0 Ultra's state-of-the-art performance across a broad set of benchmarks. Studying the limits of Gemini 1.5's long-context ability, we find continued improvement in next-token prediction and near-perfect retrieval (>99%) up to at least 10M tokens, a generational leap over existing models such as Claude 3.0 (200k) and GPT-4 Turbo (128k). Finally, we highlight real-world use cases, such as Gemini 1.5 collaborating with professionals on completing their tasks achieving 26 to 75% time savings across 10 different job categories, as well as surprising new capabilities of large language models at the frontier; when given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person who learned from the same content.
- Published: 2024
5. Do Large Language Models Latently Perform Multi-Hop Reasoning?
- Author: Yang, Sohee, Gribovskaya, Elena, Kassner, Nora, Geva, Mor, and Riedel, Sebastian
- Subjects: Computer Science - Computation and Language
- Abstract:
We study whether Large Language Models (LLMs) latently perform multi-hop reasoning with complex prompts such as "The mother of the singer of 'Superstition' is". We look for evidence of a latent reasoning pathway where an LLM (1) latently identifies "the singer of 'Superstition'" as Stevie Wonder, the bridge entity, and (2) uses its knowledge of Stevie Wonder's mother to complete the prompt. We analyze these two hops individually and consider their co-occurrence as indicative of latent multi-hop reasoning. For the first hop, we test if changing the prompt to indirectly mention the bridge entity instead of any other entity increases the LLM's internal recall of the bridge entity. For the second hop, we test if increasing this recall causes the LLM to better utilize what it knows about the bridge entity. We find strong evidence of latent multi-hop reasoning for the prompts of certain relation types, with the reasoning pathway used in more than 80% of the prompts. However, the utilization is highly contextual, varying across different types of prompts. Also, on average, the evidence for the second hop and the full multi-hop traversal is rather moderate and only substantial for the first hop. Moreover, we find a clear scaling trend with increasing model size for the first hop of reasoning but not for the second hop. Our experimental findings suggest potential challenges and opportunities for future development and applications of LLMs.
- Published: 2024
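The first-hop test in the abstract above compares a model's internal recall of the bridge entity between a prompt that indirectly mentions it and a control prompt. The sketch below is a stand-in only: the recall scorer here is a toy keyword probe, not the paper's actual hidden-state analysis, and the prompts and entity names are taken from the abstract's example.

```python
# Hypothetical sketch of the first-hop comparison: does a prompt that
# indirectly mentions the bridge entity ("Stevie Wonder") score higher
# on a bridge-recall probe than an unrelated control prompt? The probe
# here is a toy keyword check standing in for an internal-recall metric.

def bridge_recall_score(prompt: str, bridge: str) -> float:
    """Stand-in for a probe over model hidden states.

    Scores the fraction of descriptive cues for the bridge entity that
    the prompt contains; a real probe would read LLM activations.
    """
    cue_words = {"singer", "Superstition"}
    hits = sum(w in prompt for w in cue_words)
    return hits / len(cue_words)

test_prompt = "The mother of the singer of 'Superstition' is"
control_prompt = "The mother of the composer of 'Imagine' is"

delta = (bridge_recall_score(test_prompt, "Stevie Wonder")
         - bridge_recall_score(control_prompt, "Stevie Wonder"))
print(delta > 0)  # -> True: the indirect mention raises bridge recall
```

The second-hop test would then intervene to raise this recall and check whether the model's use of its knowledge about the bridge entity improves accordingly.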
6. Gemini: A Family of Highly Capable Multimodal Models
- Author:
Gemini Team, Anil, Rohan, Borgeaud, Sebastian, Alayrac, Jean-Baptiste, Yu, Jiahui, Soricut, Radu, Schalkwyk, Johan, Dai, Andrew M., Hauth, Anja, Millican, Katie, Silver, David, Johnson, Melvin, Antonoglou, Ioannis, Schrittwieser, Julian, Glaese, Amelia, Chen, Jilin, Pitler, Emily, Lillicrap, Timothy, Lazaridou, Angeliki, Firat, Orhan, Molloy, James, Isard, Michael, Barham, Paul R., Hennigan, Tom, Lee, Benjamin, Viola, Fabio, Reynolds, Malcolm, Xu, Yuanzhong, Doherty, Ryan, Collins, Eli, Meyer, Clemens, Rutherford, Eliza, Moreira, Erica, Ayoub, Kareem, Goel, Megha, Krawczyk, Jack, Du, Cosmo, Chi, Ed, Cheng, Heng-Tze, Ni, Eric, Shah, Purvi, Kane, Patrick, Chan, Betty, Faruqui, Manaal, Severyn, Aliaksei, Lin, Hanzhao, Li, YaGuang, Cheng, Yong, Ittycheriah, Abe, Mahdieh, Mahdis, Chen, Mia, Sun, Pei, Tran, Dustin, Bagri, Sumit, Lakshminarayanan, Balaji, Liu, Jeremiah, Orban, Andras, Güra, Fabian, Zhou, Hao, Song, Xinying, Boffy, Aurelien, Ganapathy, Harish, Zheng, Steven, Choe, HyunJeong, Weisz, Ágoston, Zhu, Tao, Lu, Yifeng, Gopal, Siddharth, Kahn, Jarrod, Kula, Maciej, Pitman, Jeff, Shah, Rushin, Taropa, Emanuel, Merey, Majd Al, Baeuml, Martin, Chen, Zhifeng, Shafey, Laurent El, Zhang, Yujing, Sercinoglu, Olcan, Tucker, George, Piqueras, Enrique, Krikun, Maxim, Barr, Iain, Savinov, Nikolay, Danihelka, Ivo, Roelofs, Becca, White, Anaïs, Andreassen, Anders, von Glehn, Tamara, Yagati, Lakshman, Kazemi, Mehran, Gonzalez, Lucas, Khalman, Misha, Sygnowski, Jakub, Frechette, Alexandre, Smith, Charlotte, Culp, Laura, Proleev, Lev, Luan, Yi, Chen, Xi, Lottes, James, Schucher, Nathan, Lebron, Federico, Rrustemi, Alban, Clay, Natalie, Crone, Phil, Kocisky, Tomas, Zhao, Jeffrey, Perz, Bartek, Yu, Dian, Howard, Heidi, Bloniarz, Adam, Rae, Jack W., Lu, Han, Sifre, Laurent, Maggioni, Marcello, Alcober, Fred, Garrette, Dan, Barnes, Megan, Thakoor, Shantanu, Austin, Jacob, Barth-Maron, Gabriel, Wong, William, Joshi, Rishabh, Chaabouni, Rahma, Fatiha, Deeni, Ahuja, Arun, Tomar, Gaurav 
Singh, Senter, Evan, Chadwick, Martin, Kornakov, Ilya, Attaluri, Nithya, Iturrate, Iñaki, Liu, Ruibo, Li, Yunxuan, Cogan, Sarah, Chen, Jeremy, Jia, Chao, Gu, Chenjie, Zhang, Qiao, Grimstad, Jordan, Hartman, Ale Jakse, Garcia, Xavier, Pillai, Thanumalayan Sankaranarayana, Devlin, Jacob, Laskin, Michael, Casas, Diego de Las, Valter, Dasha, Tao, Connie, Blanco, Lorenzo, Badia, Adrià Puigdomènech, Reitter, David, Chen, Mianna, Brennan, Jenny, Rivera, Clara, Brin, Sergey, Iqbal, Shariq, Surita, Gabriela, Labanowski, Jane, Rao, Abhi, Winkler, Stephanie, Parisotto, Emilio, Gu, Yiming, Olszewska, Kate, Addanki, Ravi, Miech, Antoine, Louis, Annie, Teplyashin, Denis, Brown, Geoff, Catt, Elliot, Balaguer, Jan, Xiang, Jackie, Wang, Pidong, Ashwood, Zoe, Briukhov, Anton, Webson, Albert, Ganapathy, Sanjay, Sanghavi, Smit, Kannan, Ajay, Chang, Ming-Wei, Stjerngren, Axel, Djolonga, Josip, Sun, Yuting, Bapna, Ankur, Aitchison, Matthew, Pejman, Pedram, Michalewski, Henryk, Yu, Tianhe, Wang, Cindy, Love, Juliette, Ahn, Junwhan, Bloxwich, Dawn, Han, Kehang, Humphreys, Peter, Sellam, Thibault, Bradbury, James, Godbole, Varun, Samangooei, Sina, Damoc, Bogdan, Kaskasoli, Alex, Arnold, Sébastien M. 
R., Vasudevan, Vijay, Agrawal, Shubham, Riesa, Jason, Lepikhin, Dmitry, Tanburn, Richard, Srinivasan, Srivatsan, Lim, Hyeontaek, Hodkinson, Sarah, Shyam, Pranav, Ferret, Johan, Hand, Steven, Garg, Ankush, Paine, Tom Le, Li, Jian, Li, Yujia, Giang, Minh, Neitz, Alexander, Abbas, Zaheer, York, Sarah, Reid, Machel, Cole, Elizabeth, Chowdhery, Aakanksha, Das, Dipanjan, Rogozińska, Dominika, Nikolaev, Vitaliy, Sprechmann, Pablo, Nado, Zachary, Zilka, Lukas, Prost, Flavien, He, Luheng, Monteiro, Marianne, Mishra, Gaurav, Welty, Chris, Newlan, Josh, Jia, Dawei, Allamanis, Miltiadis, Hu, Clara Huiyi, de Liedekerke, Raoul, Gilmer, Justin, Saroufim, Carl, Rijhwani, Shruti, Hou, Shaobo, Shrivastava, Disha, Baddepudi, Anirudh, Goldin, Alex, Ozturel, Adnan, Cassirer, Albin, Xu, Yunhan, Sohn, Daniel, Sachan, Devendra, Amplayo, Reinald Kim, Swanson, Craig, Petrova, Dessie, Narayan, Shashi, Guez, Arthur, Brahma, Siddhartha, Landon, Jessica, Patel, Miteyan, Zhao, Ruizhe, Villela, Kevin, Wang, Luyu, Jia, Wenhao, Rahtz, Matthew, Giménez, Mai, Yeung, Legg, Keeling, James, Georgiev, Petko, Mincu, Diana, Wu, Boxi, Haykal, Salem, Saputro, Rachel, Vodrahalli, Kiran, Qin, James, Cankara, Zeynep, Sharma, Abhanshu, Fernando, Nick, Hawkins, Will, Neyshabur, Behnam, Kim, Solomon, Hutter, Adrian, Agrawal, Priyanka, Castro-Ros, Alex, Driessche, George van den, Wang, Tao, Yang, Fan, Chang, Shuo-yiin, Komarek, Paul, McIlroy, Ross, Lučić, Mario, Zhang, Guodong, Farhan, Wael, Sharman, Michael, Natsev, Paul, Michel, Paul, Bansal, Yamini, Qiao, Siyuan, Cao, Kris, Shakeri, Siamak, Butterfield, Christina, Chung, Justin, Rubenstein, Paul Kishan, Agrawal, Shivani, Mensch, Arthur, Soparkar, Kedar, Lenc, Karel, Chung, Timothy, Pope, Aedan, Maggiore, Loren, Kay, Jackie, Jhakra, Priya, Wang, Shibo, Maynez, Joshua, Phuong, Mary, Tobin, Taylor, Tacchetti, Andrea, Trebacz, Maja, Robinson, Kevin, Katariya, Yash, Riedel, Sebastian, Bailey, Paige, Xiao, Kefan, Ghelani, Nimesh, Aroyo, Lora, Slone, Ambrose, Houlsby, 
Neil, Xiong, Xuehan, Yang, Zhen, Gribovskaya, Elena, Adler, Jonas, Wirth, Mateo, Lee, Lisa, Li, Music, Kagohara, Thais, Pavagadhi, Jay, Bridgers, Sophie, Bortsova, Anna, Ghemawat, Sanjay, Ahmed, Zafarali, Liu, Tianqi, Powell, Richard, Bolina, Vijay, Iinuma, Mariko, Zablotskaia, Polina, Besley, James, Chung, Da-Woon, Dozat, Timothy, Comanescu, Ramona, Si, Xiance, Greer, Jeremy, Su, Guolong, Polacek, Martin, Kaufman, Raphaël Lopez, Tokumine, Simon, Hu, Hexiang, Buchatskaya, Elena, Miao, Yingjie, Elhawaty, Mohamed, Siddhant, Aditya, Tomasev, Nenad, Xing, Jinwei, Greer, Christina, Miller, Helen, Ashraf, Shereen, Roy, Aurko, Zhang, Zizhao, Ma, Ada, Filos, Angelos, Besta, Milos, Blevins, Rory, Klimenko, Ted, Yeh, Chih-Kuan, Changpinyo, Soravit, Mu, Jiaqi, Chang, Oscar, Pajarskas, Mantas, Muir, Carrie, Cohen, Vered, Lan, Charline Le, Haridasan, Krishna, Marathe, Amit, Hansen, Steven, Douglas, Sholto, Samuel, Rajkumar, Wang, Mingqiu, Austin, Sophia, Lan, Chang, Jiang, Jiepu, Chiu, Justin, Lorenzo, Jaime Alonso, Sjösund, Lars Lowe, Cevey, Sébastien, Gleicher, Zach, Avrahami, Thi, Boral, Anudhyan, Srinivasan, Hansa, Selo, Vittorio, May, Rhys, Aisopos, Konstantinos, Hussenot, Léonard, Soares, Livio Baldini, Baumli, Kate, Chang, Michael B., Recasens, Adrià, Caine, Ben, Pritzel, Alexander, Pavetic, Filip, Pardo, Fabio, Gergely, Anita, Frye, Justin, Ramasesh, Vinay, Horgan, Dan, Badola, Kartikeya, Kassner, Nora, Roy, Subhrajit, Dyer, Ethan, Campos, Víctor Campos, Tomala, Alex, Tang, Yunhao, Badawy, Dalia El, White, Elspeth, Mustafa, Basil, Lang, Oran, Jindal, Abhishek, Vikram, Sharad, Gong, Zhitao, Caelles, Sergi, Hemsley, Ross, Thornton, Gregory, Feng, Fangxiaoyu, Stokowiec, Wojciech, Zheng, Ce, Thacker, Phoebe, Ünlü, Çağlar, Zhang, Zhishuai, Saleh, Mohammad, Svensson, James, Bileschi, Max, Patil, Piyush, Anand, Ankesh, Ring, Roman, Tsihlas, Katerina, Vezer, Arpi, Selvi, Marco, Shevlane, Toby, Rodriguez, Mikel, Kwiatkowski, Tom, Daruki, Samira, Rong, Keran, Dafoe, Allan, 
FitzGerald, Nicholas, Gu-Lemberg, Keren, Khan, Mina, Hendricks, Lisa Anne, Pellat, Marie, Feinberg, Vladimir, Cobon-Kerr, James, Sainath, Tara, Rauh, Maribeth, Hashemi, Sayed Hadi, Ives, Richard, Hasson, Yana, Noland, Eric, Cao, Yuan, Byrd, Nathan, Hou, Le, Wang, Qingze, Sottiaux, Thibault, Paganini, Michela, Lespiau, Jean-Baptiste, Moufarek, Alexandre, Hassan, Samer, Shivakumar, Kaushik, van Amersfoort, Joost, Mandhane, Amol, Joshi, Pratik, Goyal, Anirudh, Tung, Matthew, Brock, Andrew, Sheahan, Hannah, Misra, Vedant, Li, Cheng, Rakićević, Nemanja, Dehghani, Mostafa, Liu, Fangyu, Mittal, Sid, Oh, Junhyuk, Noury, Seb, Sezener, Eren, Huot, Fantine, Lamm, Matthew, De Cao, Nicola, Chen, Charlie, Mudgal, Sidharth, Stella, Romina, Brooks, Kevin, Vasudevan, Gautam, Liu, Chenxi, Chain, Mainak, Melinkeri, Nivedita, Cohen, Aaron, Wang, Venus, Seymore, Kristie, Zubkov, Sergey, Goel, Rahul, Yue, Summer, Krishnakumaran, Sai, Albert, Brian, Hurley, Nate, Sano, Motoki, Mohananey, Anhad, Joughin, Jonah, Filonov, Egor, Kępa, Tomasz, Eldawy, Yomna, Lim, Jiawern, Rishi, Rahul, Badiezadegan, Shirin, Bos, Taylor, Chang, Jerry, Jain, Sanil, Padmanabhan, Sri Gayatri Sundara, Puttagunta, Subha, Krishna, Kalpesh, Baker, Leslie, Kalb, Norbert, Bedapudi, Vamsi, Kurzrok, Adam, Lei, Shuntong, Yu, Anthony, Litvin, Oren, Zhou, Xiang, Wu, Zhichun, Sobell, Sam, Siciliano, Andrea, Papir, Alan, Neale, Robby, Bragagnolo, Jonas, Toor, Tej, Chen, Tina, Anklin, Valentin, Wang, Feiran, Feng, Richie, Gholami, Milad, Ling, Kevin, Liu, Lijuan, Walter, Jules, Moghaddam, Hamid, Kishore, Arun, Adamek, Jakub, Mercado, Tyler, Mallinson, Jonathan, Wandekar, Siddhinita, Cagle, Stephen, Ofek, Eran, Garrido, Guillermo, Lombriser, Clemens, Mukha, Maksim, Sun, Botu, Mohammad, Hafeezul Rahman, Matak, Josip, Qian, Yadi, Peswani, Vikas, Janus, Pawel, Yuan, Quan, Schelin, Leif, David, Oana, Garg, Ankur, He, Yifan, Duzhyi, Oleksii, Älgmyr, Anton, Lottaz, Timothée, Li, Qi, Yadav, Vikas, Xu, Luyao, Chinien, Alex, Shivanna, 
Rakesh, Chuklin, Aleksandr, Li, Josie, Spadine, Carrie, Wolfe, Travis, Mohamed, Kareem, Das, Subhabrata, Dai, Zihang, He, Kyle, von Dincklage, Daniel, Upadhyay, Shyam, Maurya, Akanksha, Chi, Luyan, Krause, Sebastian, Salama, Khalid, Rabinovitch, Pam G, M, Pavan Kumar Reddy, Selvan, Aarush, Dektiarev, Mikhail, Ghiasi, Golnaz, Guven, Erdem, Gupta, Himanshu, Liu, Boyi, Sharma, Deepak, Shtacher, Idan Heimlich, Paul, Shachi, Akerlund, Oscar, Aubet, François-Xavier, Huang, Terry, Zhu, Chen, Zhu, Eric, Teixeira, Elico, Fritze, Matthew, Bertolini, Francesco, Marinescu, Liana-Eleonora, Bölle, Martin, Paulus, Dominik, Gupta, Khyatti, Latkar, Tejasi, Chang, Max, Sanders, Jason, Wilson, Roopa, Wu, Xuewei, Tan, Yi-Xuan, Thiet, Lam Nguyen, Doshi, Tulsee, Lall, Sid, Mishra, Swaroop, Chen, Wanming, Luong, Thang, Benjamin, Seth, Lee, Jasmine, Andrejczuk, Ewa, Rabiej, Dominik, Ranjan, Vipul, Styrc, Krzysztof, Yin, Pengcheng, Simon, Jon, Harriott, Malcolm Rose, Bansal, Mudit, Robsky, Alexei, Bacon, Geoff, Greene, David, Mirylenka, Daniil, Zhou, Chen, Sarvana, Obaid, Goyal, Abhimanyu, Andermatt, Samuel, Siegler, Patrick, Horn, Ben, Israel, Assaf, Pongetti, Francesco, Chen, Chih-Wei "Louis", Selvatici, Marco, Silva, Pedro, Wang, Kathie, Tolins, Jackson, Guu, Kelvin, Yogev, Roey, Cai, Xiaochen, Agostini, Alessandro, Shah, Maulik, Nguyen, Hung, Donnaile, Noah Ó, Pereira, Sébastien, Friso, Linda, Stambler, Adam, Kuang, Chenkai, Romanikhin, Yan, Geller, Mark, Yan, ZJ, Jang, Kane, Lee, Cheng-Chun, Fica, Wojciech, Malmi, Eric, Tan, Qijun, Banica, Dan, Balle, Daniel, Pham, Ryan, Huang, Yanping, Avram, Diana, Shi, Hongzhi, Singh, Jasjot, Hidey, Chris, Ahuja, Niharika, Saxena, Pranab, Dooley, Dan, Potharaju, Srividya Pranavi, O'Neill, Eileen, Gokulchandran, Anand, Foley, Ryan, Zhao, Kai, Dusenberry, Mike, Liu, Yuan, Mehta, Pulkit, Kotikalapudi, Ragha, Safranek-Shrader, Chalence, Goodman, Andrew, Kessinger, Joshua, Globen, Eran, Kolhar, Prateek, Gorgolewski, Chris, Ibrahim, Ali, Song, Yang, 
Eichenbaum, Ali, Brovelli, Thomas, Potluri, Sahitya, Lahoti, Preethi, Baetu, Cip, Ghorbani, Ali, Chen, Charles, Crawford, Andy, Pal, Shalini, Sridhar, Mukund, Gurita, Petru, Mujika, Asier, Petrovski, Igor, Cedoz, Pierre-Louis, Li, Chenmei, Chen, Shiyuan, Santo, Niccolò Dal, Goyal, Siddharth, Punjabi, Jitesh, Kappaganthu, Karthik, Kwak, Chester, LV, Pallavi, Velury, Sarmishta, Choudhury, Himadri, Hall, Jamie, Shah, Premal, Figueira, Ricardo, Thomas, Matt, Lu, Minjie, Zhou, Ting, Kumar, Chintu, Jurdi, Thomas, Chikkerur, Sharat, Ma, Yenai, Yu, Adams, Kwak, Soo, Ähdel, Victor, Rajayogam, Sujeevan, Choma, Travis, Liu, Fei, Barua, Aditya, Ji, Colin, Park, Ji Ho, Hellendoorn, Vincent, Bailey, Alex, Bilal, Taylan, Zhou, Huanjie, Khatir, Mehrdad, Sutton, Charles, Rzadkowski, Wojciech, Macintosh, Fiona, Shagin, Konstantin, Medina, Paul, Liang, Chen, Zhou, Jinjing, Shah, Pararth, Bi, Yingying, Dankovics, Attila, Banga, Shipra, Lehmann, Sabine, Bredesen, Marissa, Lin, Zifan, Hoffmann, John Eric, Lai, Jonathan, Chung, Raynald, Yang, Kai, Balani, Nihal, Bražinskas, Arthur, Sozanschi, Andrei, Hayes, Matthew, Alcalde, Héctor Fernández, Makarov, Peter, Chen, Will, Stella, Antonio, Snijders, Liselotte, Mandl, Michael, Kärrman, Ante, Nowak, Paweł, Wu, Xinyi, Dyck, Alex, Vaidyanathan, Krishnan, R, Raghavender, Mallet, Jessica, Rudominer, Mitch, Johnston, Eric, Mittal, Sushil, Udathu, Akhil, Christensen, Janara, Verma, Vishal, Irving, Zach, Santucci, Andreas, Elsayed, Gamaleldin, Davoodi, Elnaz, Georgiev, Marin, Tenney, Ian, Hua, Nan, Cideron, Geoffrey, Leurent, Edouard, Alnahlawi, Mahmoud, Georgescu, Ionut, Wei, Nan, Zheng, Ivy, Scandinaro, Dylan, Jiang, Heinrich, Snoek, Jasper, Sundararajan, Mukund, Wang, Xuezhi, Ontiveros, Zack, Karo, Itay, Cole, Jeremy, Rajashekhar, Vinu, Tumeh, Lara, Ben-David, Eyal, Jain, Rishub, Uesato, Jonathan, Datta, Romina, Bunyan, Oskar, Wu, Shimu, Zhang, John, Stanczyk, Piotr, Zhang, Ye, Steiner, David, Naskar, Subhajit, Azzam, Michael, Johnson, Matthew, 
Paszke, Adam, Chiu, Chung-Cheng, Elias, Jaume Sanchez, Mohiuddin, Afroz, Muhammad, Faizan, Miao, Jin, Lee, Andrew, Vieillard, Nino, Park, Jane, Zhang, Jiageng, Stanway, Jeff, Garmon, Drew, Karmarkar, Abhijit, Dong, Zhe, Lee, Jong, Kumar, Aviral, Zhou, Luowei, Evens, Jonathan, Isaac, William, Irving, Geoffrey, Loper, Edward, Fink, Michael, Arkatkar, Isha, Chen, Nanxin, Shafran, Izhak, Petrychenko, Ivan, Chen, Zhe, Jia, Johnson, Levskaya, Anselm, Zhu, Zhenkai, Grabowski, Peter, Mao, Yu, Magni, Alberto, Yao, Kaisheng, Snaider, Javier, Casagrande, Norman, Palmer, Evan, Suganthan, Paul, Castaño, Alfonso, Giannoumis, Irene, Kim, Wooyeol, Rybiński, Mikołaj, Sreevatsa, Ashwin, Prendki, Jennifer, Soergel, David, Goedeckemeyer, Adrian, Gierke, Willi, Jafari, Mohsen, Gaba, Meenu, Wiesner, Jeremy, Wright, Diana Gage, Wei, Yawen, Vashisht, Harsha, Kulizhskaya, Yana, Hoover, Jay, Le, Maigo, Li, Lu, Iwuanyanwu, Chimezie, Liu, Lu, Ramirez, Kevin, Khorlin, Andrey, Cui, Albert, LIN, Tian, Wu, Marcus, Aguilar, Ricardo, Pallo, Keith, Chakladar, Abhishek, Perng, Ginger, Abellan, Elena Allica, Zhang, Mingyang, Dasgupta, Ishita, Kushman, Nate, Penchev, Ivo, Repina, Alena, Wu, Xihui, van der Weide, Tom, Ponnapalli, Priya, Kaplan, Caroline, Simsa, Jiri, Li, Shuangfeng, Dousse, Olivier, Piper, Jeff, Ie, Nathan, Pasumarthi, Rama, Lintz, Nathan, Vijayakumar, Anitha, Andor, Daniel, Valenzuela, Pedro, Lui, Minnie, Paduraru, Cosmin, Peng, Daiyi, Lee, Katherine, Zhang, Shuyuan, Greene, Somer, Nguyen, Duc Dung, Kurylowicz, Paula, Hardin, Cassidy, Dixon, Lucas, Janzer, Lili, Choo, Kiam, Feng, Ziqiang, Zhang, Biao, Singhal, Achintya, Du, Dayou, McKinnon, Dan, Antropova, Natasha, Bolukbasi, Tolga, Keller, Orgad, Reid, David, Finchelstein, Daniel, Raad, Maria Abi, Crocker, Remi, Hawkins, Peter, Dadashi, Robert, Gaffney, Colin, Franko, Ken, Bulanova, Anna, Leblond, Rémi, Chung, Shirley, Askham, Harry, Cobo, Luis C., Xu, Kelvin, Fischer, Felix, Xu, Jun, Sorokin, Christina, Alberti, Chris, Lin, 
Chu-Cheng, Evans, Colin, Dimitriev, Alek, Forbes, Hannah, Banarse, Dylan, Tung, Zora, Omernick, Mark, Bishop, Colton, Sterneck, Rachel, Jain, Rohan, Xia, Jiawei, Amid, Ehsan, Piccinno, Francesco, Wang, Xingyu, Banzal, Praseem, Mankowitz, Daniel J., Polozov, Alex, Krakovna, Victoria, Brown, Sasha, Bateni, MohammadHossein, Duan, Dennis, Firoiu, Vlad, Thotakuri, Meghana, Natan, Tom, Geist, Matthieu, Girgin, Ser tan, Li, Hui, Ye, Jiayu, Roval, Ofir, Tojo, Reiko, Kwong, Michael, Lee-Thorp, James, Yew, Christopher, Sinopalnikov, Danila, Ramos, Sabela, Mellor, John, Sharma, Abhishek, Wu, Kathy, Miller, David, Sonnerat, Nicolas, Vnukov, Denis, Greig, Rory, Beattie, Jennifer, Caveness, Emily, Bai, Libin, Eisenschlos, Julian, Korchemniy, Alex, Tsai, Tomy, Jasarevic, Mimi, Kong, Weize, Dao, Phuong, Zheng, Zeyu, Liu, Frederick, Zhu, Rui, Teh, Tian Huey, Sanmiya, Jason, Gladchenko, Evgeny, Trdin, Nejc, Toyama, Daniel, Rosen, Evan, Tavakkol, Sasan, Xue, Linting, Elkind, Chen, Woodman, Oliver, Carpenter, John, Papamakarios, George, Kemp, Rupert, Kafle, Sushant, Grunina, Tanya, Sinha, Rishika, Talbert, Alice, Wu, Diane, Owusu-Afriyie, Denese, Thornton, Chloe, Pont-Tuset, Jordi, Narayana, Pradyumna, Li, Jing, Fatehi, Saaber, Wieting, John, Ajmeri, Omar, Uria, Benigno, Ko, Yeongil, Knight, Laura, Héliou, Amélie, Niu, Ning, Gu, Shane, Pang, Chenxi, Li, Yeqing, Levine, Nir, Stolovich, Ariel, Santamaria-Fernandez, Rebeca, Goenka, Sonam, Yustalim, Wenny, Strudel, Robin, Elqursh, Ali, Deck, Charlie, Lee, Hyo, Li, Zonglin, Levin, Kyle, Hoffmann, Raphael, Holtmann-Rice, Dan, Bachem, Olivier, Arora, Sho, Koh, Christy, Yeganeh, Soheil Hassas, Põder, Siim, Tariq, Mukarram, Sun, Yanhua, Ionita, Lucian, Seyedhosseini, Mojtaba, Tafti, Pouya, Liu, Zhiyu, Gulati, Anmol, Liu, Jasmine, Ye, Xinyu, Chrzaszcz, Bart, Wang, Lily, Sethi, Nikhil, Li, Tianrun, Brown, Ben, Singh, Shreya, Fan, Wei, Parisi, Aaron, Stanton, Joe, Koverkathu, Vinod, Choquette-Choo, Christopher A., Li, Yunjie, Lu, TJ, Shroff, 
Prakash, Varadarajan, Mani, Bahargam, Sanaz, Willoughby, Rob, Gaddy, David, Desjardins, Guillaume, Cornero, Marco, Robenek, Brona, Mittal, Bhavishya, Albrecht, Ben, Shenoy, Ashish, Moiseev, Fedor, Jacobsson, Henrik, Ghaffarkhah, Alireza, Rivière, Morgane, Walton, Alanna, Crepy, Clément, Parrish, Alicia, Zhou, Zongwei, Farabet, Clement, Radebaugh, Carey, Srinivasan, Praveen, van der Salm, Claudia, Fidjeland, Andreas, Scellato, Salvatore, Latorre-Chimoto, Eri, Klimczak-Plucińska, Hanna, Bridson, David, de Cesare, Dario, Hudson, Tom, Mendolicchio, Piermaria, Walker, Lexi, Morris, Alex, Mauger, Matthew, Guseynov, Alexey, Reid, Alison, Odoom, Seth, Loher, Lucia, Cotruta, Victor, Yenugula, Madhavi, Grewe, Dominik, Petrushkina, Anastasia, Duerig, Tom, Sanchez, Antonio, Yadlowsky, Steve, Shen, Amy, Globerson, Amir, Webb, Lynette, Dua, Sahil, Li, Dong, Bhupatiraju, Surya, Hurt, Dan, Qureshi, Haroon, Agarwal, Ananth, Shani, Tomer, Eyal, Matan, Khare, Anuj, Belle, Shreyas Rammohan, Wang, Lei, Tekur, Chetan, Kale, Mihir Sanjay, Wei, Jinliang, Sang, Ruoxin, Saeta, Brennan, Liechty, Tyler, Sun, Yi, Zhao, Yao, Lee, Stephan, Nayak, Pandu, Fritz, Doug, Vuyyuru, Manish Reddy, Aslanides, John, Vyas, Nidhi, Wicke, Martin, Ma, Xiao, Eltyshev, Evgenii, Martin, Nina, Cate, Hardie, Manyika, James, Amiri, Keyvan, Kim, Yelin, Xiong, Xi, Kang, Kai, Luisier, Florian, Tripuraneni, Nilesh, Madras, David, Guo, Mandy, Waters, Austin, Wang, Oliver, Ainslie, Joshua, Baldridge, Jason, Zhang, Han, Pruthi, Garima, Bauer, Jakob, Yang, Feng, Mansour, Riham, Gelman, Jason, Xu, Yang, Polovets, George, Liu, Ji, Cai, Honglong, Chen, Warren, Sheng, XiangHai, Xue, Emily, Ozair, Sherjil, Angermueller, Christof, Li, Xiaowei, Sinha, Anoop, Wang, Weiren, Wiesinger, Julia, Koukoumidis, Emmanouil, Tian, Yuan, Iyer, Anand, Gurumurthy, Madhu, Goldenson, Mark, Shah, Parashar, Blake, MK, Yu, Hongkun, Urbanowicz, Anthony, Palomaki, Jennimaria, Fernando, Chrisantha, Durden, Ken, Mehta, Harsh, Momchev, Nikola, 
Rahimtoroghi, Elahe, Georgaki, Maria, Raul, Amit, Ruder, Sebastian, Redshaw, Morgan, Lee, Jinhyuk, Zhou, Denny, Jalan, Komal, Li, Dinghua, Hechtman, Blake, Schuh, Parker, Nasr, Milad, Milan, Kieran, Mikulik, Vladimir, Franco, Juliana, Green, Tim, Nguyen, Nam, Kelley, Joe, Mahendru, Aroma, Hu, Andrea, Howland, Joshua, Vargas, Ben, Hui, Jeffrey, Bansal, Kshitij, Rao, Vikram, Ghiya, Rakesh, Wang, Emma, Ye, Ke, Sarr, Jean Michel, Preston, Melanie Moranski, Elish, Madeleine, Li, Steve, Kaku, Aakash, Gupta, Jigar, Pasupat, Ice, Juan, Da-Cheng, Someswar, Milan, M., Tejvi, Chen, Xinyun, Amini, Aida, Fabrikant, Alex, Chu, Eric, Dong, Xuanyi, Muthal, Amruta, Buthpitiya, Senaka, Jauhari, Sarthak, Khandelwal, Urvashi, Hitron, Ayal, Ren, Jie, Rinaldi, Larissa, Drath, Shahar, Dabush, Avigail, Jiang, Nan-Jiang, Godhia, Harshal, Sachs, Uli, Chen, Anthony, Fan, Yicheng, Taitelbaum, Hagai, Noga, Hila, Dai, Zhuyun, Wang, James, Hamer, Jenny, Ferng, Chun-Sung, Elkind, Chenel, Atias, Aviel, Lee, Paulina, Listík, Vít, Carlen, Mathias, van de Kerkhof, Jan, Pikus, Marcin, Zaher, Krunoslav, Müller, Paul, Zykova, Sasha, Stefanec, Richard, Gatsko, Vitaly, Hirnschall, Christoph, Sethi, Ashwin, Xu, Xingyu Federico, Ahuja, Chetan, Tsai, Beth, Stefanoiu, Anca, Feng, Bo, Dhandhania, Keshav, Katyal, Manish, Gupta, Akshay, Parulekar, Atharva, Pitta, Divya, Zhao, Jing, Bhatia, Vivaan, Bhavnani, Yashodha, Alhadlaq, Omar, Li, Xiaolin, Danenberg, Peter, Tu, Dennis, Pine, Alex, Filippova, Vera, Ghosh, Abhipso, Limonchik, Ben, Urala, Bhargava, Lanka, Chaitanya Krishna, Clive, Derik, Li, Edward, Wu, Hao, Hongtongsak, Kevin, Li, Ianna, Thakkar, Kalind, Omarov, Kuanysh, Majmundar, Kushal, Alverson, Michael, Kucharski, Michael, Patel, Mohak, Jain, Mudit, Zabelin, Maksim, Pelagatti, Paolo, Kohli, Rohan, Kumar, Saurabh, Kim, Joseph, Sankar, Swetha, Shah, Vineet, Ramachandruni, Lakshmi, Zeng, Xiangkai, Bariach, Ben, Weidinger, Laura, Vu, Tu, Andreev, Alek, He, Antoine, Hui, Kevin, Kashem, Sheleem, Subramanya, 
Amar, Hsiao, Sissie, Hassabis, Demis, Kavukcuoglu, Koray, Sadovsky, Adam, Le, Quoc, Strohman, Trevor, Wu, Yonghui, Petrov, Slav, Dean, Jeffrey, and Vinyals, Oriol
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Computer Vision and Pattern Recognition - Abstract
This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use cases. Evaluation on a broad range of benchmarks shows that our most capable Gemini Ultra model advances the state of the art in 30 of the 32 benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of the Gemini family in cross-modal reasoning and language understanding will enable a wide variety of use cases. We discuss our approach toward post-training and deploying Gemini models responsibly to users through services including Gemini, Gemini Advanced, Google AI Studio, and Cloud Vertex AI.
- Published
- 2023
7. Multilingual End to End Entity Linking
- Author
-
Plekhanov, Mikhail, Kassner, Nora, Popat, Kashyap, Martin, Louis, Merello, Simone, Kozlovskii, Borislav, Dreyer, Frédéric A., and Cancedda, Nicola
- Subjects
Computer Science - Computation and Language - Abstract
Entity Linking is one of the most common Natural Language Processing tasks in practical applications, but so far efficient end-to-end solutions with multilingual coverage have been lacking, leading to complex model stacks. To fill this gap, we release and open source BELA, the first fully end-to-end multilingual entity linking model that efficiently detects and links entities in texts in any of 97 languages. We provide here a detailed description of the model and report BELA's performance on four entity linking datasets covering high- and low-resource languages.
- Published
- 2023
8. Language Models with Rationality
- Author
-
Kassner, Nora, Tafjord, Oyvind, Sabharwal, Ashish, Richardson, Kyle, Schütze, Hinrich, and Clark, Peter
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
While large language models (LLMs) are proficient at question-answering (QA), it is not always clear how (or even if) an answer follows from their latent "beliefs". This lack of interpretability is a growing impediment to widespread use of LLMs. To address this, our goals are to make model beliefs and their inferential relationships explicit, and to resolve inconsistencies that may exist, so that answers are supported by interpretable chains of reasoning drawn from a consistent network of beliefs. Our approach, which we call REFLEX, is to add a rational, self-reflecting layer on top of the LLM. First, given a question, we construct a belief graph using a backward-chaining process to materialize relevant model beliefs (including beliefs about answer candidates) and their inferential relationships. Second, we identify and minimize contradictions in that graph using a formal constraint reasoner. We find that REFLEX significantly improves consistency (by 8%-11% absolute) without harming overall answer accuracy, resulting in answers supported by faithful chains of reasoning drawn from a more consistent belief system. This suggests a new style of system architecture in which an LLM extended with a rational layer can provide an interpretable window into system beliefs, add a systematic reasoning capability, and repair latent inconsistencies present in the LLM.
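The contradiction-minimization step described above can be illustrated with a toy sketch. This is not the paper's actual constraint reasoner: the belief names, confidence weights, and constraint are made up, and a brute-force search stands in for a real weighted MaxSAT solver, which is only feasible here because the belief set is tiny.

```python
from itertools import product

# Hypothetical toy beliefs with model confidences (made-up values).
beliefs = {"is_bird": 0.9, "is_penguin": 0.8, "can_fly": 0.6}
# Hard constraints as (a, b) pairs: holding both a and b is a contradiction.
constraints = [("is_penguin", "can_fly")]

def most_consistent(beliefs, constraints):
    """Brute-force stand-in for a weighted MaxSAT solver: find the truth
    assignment that violates no constraint while minimizing the total
    confidence of beliefs flipped from True to False."""
    names = list(beliefs)
    best, best_cost = None, float("inf")
    for assignment in product([True, False], repeat=len(names)):
        truth = dict(zip(names, assignment))
        if any(truth[a] and truth[b] for a, b in constraints):
            continue  # violates a hard constraint
        # Cost = total confidence of the beliefs we gave up.
        cost = sum(w for n, w in beliefs.items() if not truth[n])
        if cost < best_cost:
            best, best_cost = truth, cost
    return best

# The cheapest repair drops the lowest-confidence clashing belief, can_fly.
print(most_consistent(beliefs, constraints))
```

Because flipping `can_fly` (weight 0.6) is cheaper than flipping `is_penguin` (weight 0.8), the reasoner resolves the contradiction by revising the weaker belief, which mirrors how the paper's formal reasoner repairs the belief graph.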
- Published
- 2023
9. Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages
- Author
-
Imani, Ayyoob, Lin, Peiqin, Kargaran, Amir Hossein, Severini, Silvia, Sabet, Masoud Jalili, Kassner, Nora, Ma, Chunlan, Schmid, Helmut, Martins, André F. T., Yvon, François, and Schütze, Hinrich
- Subjects
Computer Science - Computation and Language - Abstract
The NLP community has mainly focused on scaling Large Language Models (LLMs) vertically, i.e., making them better for about 100 languages. We instead scale LLMs horizontally: we create, through continued pretraining, Glot500-m, an LLM that covers 511 predominantly low-resource languages. An important part of this effort is to collect and clean Glot500-c, a corpus that covers these 511 languages and allows us to train Glot500-m. We evaluate Glot500-m on five diverse tasks across these languages. We observe large improvements for both high-resource and low-resource languages compared to an XLM-R baseline. Our analysis shows that no single factor explains the quality of multilingual LLM representations. Rather, quality is determined by a combination of factors, including corpus size, script, "help" from related languages, and the total capacity of the model. Our work addresses an important goal of NLP research: we should not limit NLP to a small fraction of the world's languages and instead strive to support as many languages as possible to bring the benefits of NLP technology to all languages and cultures. Code, data and models are available at https://github.com/cisnlp/Glot500., Comment: ACL 2023
- Published
- 2023
- Full Text
- View/download PDF
10. Polar Ducks and Where to Find Them: Enhancing Entity Linking with Duck Typing and Polar Box Embeddings
- Author
-
Atzeni, Mattia, Plekhanov, Mikhail, Dreyer, Frédéric A., Kassner, Nora, Merello, Simone, Martin, Louis, and Cancedda, Nicola
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
Entity linking methods based on dense retrieval are an efficient and widely used solution in large-scale applications, but they fall short of the performance of generative models, as they are sensitive to the structure of the embedding space. In order to address this issue, this paper introduces DUCK, an approach to infusing structural information in the space of entity representations, using prior knowledge of entity types. Inspired by duck typing in programming languages, we propose to define the type of an entity based on the relations that it has with other entities in a knowledge graph. Then, porting the concept of box embeddings to spherical polar coordinates, we propose to represent relations as boxes on the hypersphere. We optimize the model to cluster entities of similar type by placing them inside the boxes corresponding to their relations. Our experiments show that our method sets new state-of-the-art results on standard entity-disambiguation benchmarks, improves the performance of the model by up to 7.9 F1 points, outperforms other type-aware approaches, and matches the results of generative models with 18 times more parameters., Comment: Accepted at EMNLP 2023
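The core geometric idea, relations as boxes in spherical polar coordinates, reduces to a simple membership test: an entity's angular coordinates must fall inside one interval per angle. The sketch below is a minimal illustration of that test; the relation name, interval bounds, and entity coordinates are all made up, and the real model learns these boxes rather than hand-specifying them.

```python
def in_polar_box(angles, box):
    """Check whether a point on the hypersphere, given by its angular
    coordinates, falls inside a 'box' defined by one interval per angle."""
    return all(lo <= a <= hi for a, (lo, hi) in zip(angles, box))

# Hypothetical box for a 'capital_of' relation: entities carrying this
# relation should have angle 0 in [0.2, 0.9] and angle 1 in [1.0, 2.0].
capital_of_box = [(0.2, 0.9), (1.0, 2.0)]
paris = [0.5, 1.5]   # made-up angular embedding inside the box
banana = [2.0, 0.1]  # made-up angular embedding outside the box

print(in_polar_box(paris, capital_of_box))   # True
print(in_polar_box(banana, capital_of_box))  # False
```

During training, the model would push entities that hold a relation inside that relation's box and others outside, so that entities of similar type end up clustered in the same region of the hypersphere.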
- Published
- 2023
11. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
- Author
-
Workshop, BigScience, Scao, Teven Le, Fan, Angela, Akiki, Christopher, Pavlick, Ellie, Ilić, Suzana, Hesslow, Daniel, Castagné, Roman, Luccioni, Alexandra Sasha, Yvon, François, Gallé, Matthias, Tow, Jonathan, Rush, Alexander M., Biderman, Stella, Webson, Albert, Ammanamanchi, Pawan Sasanka, Wang, Thomas, Sagot, Benoît, Muennighoff, Niklas, del Moral, Albert Villanova, Ruwase, Olatunji, Bawden, Rachel, Bekman, Stas, McMillan-Major, Angelina, Beltagy, Iz, Nguyen, Huu, Saulnier, Lucile, Tan, Samson, Suarez, Pedro Ortiz, Sanh, Victor, Laurençon, Hugo, Jernite, Yacine, Launay, Julien, Mitchell, Margaret, Raffel, Colin, Gokaslan, Aaron, Simhi, Adi, Soroa, Aitor, Aji, Alham Fikri, Alfassy, Amit, Rogers, Anna, Nitzav, Ariel Kreisberg, Xu, Canwen, Mou, Chenghao, Emezue, Chris, Klamm, Christopher, Leong, Colin, van Strien, Daniel, Adelani, David Ifeoluwa, Radev, Dragomir, Ponferrada, Eduardo González, Levkovizh, Efrat, Kim, Ethan, Natan, Eyal Bar, De Toni, Francesco, Dupont, Gérard, Kruszewski, Germán, Pistilli, Giada, Elsahar, Hady, Benyamina, Hamza, Tran, Hieu, Yu, Ian, Abdulmumin, Idris, Johnson, Isaac, Gonzalez-Dios, Itziar, de la Rosa, Javier, Chim, Jenny, Dodge, Jesse, Zhu, Jian, Chang, Jonathan, Frohberg, Jörg, Tobing, Joseph, Bhattacharjee, Joydeep, Almubarak, Khalid, Chen, Kimbo, Lo, Kyle, Von Werra, Leandro, Weber, Leon, Phan, Long, allal, Loubna Ben, Tanguy, Ludovic, Dey, Manan, Muñoz, Manuel Romero, Masoud, Maraim, Grandury, María, Šaško, Mario, Huang, Max, Coavoux, Maximin, Singh, Mayank, Jiang, Mike Tian-Jian, Vu, Minh Chien, Jauhar, Mohammad A., Ghaleb, Mustafa, Subramani, Nishant, Kassner, Nora, Khamis, Nurulaqilla, Nguyen, Olivier, Espejel, Omar, de Gibert, Ona, Villegas, Paulo, Henderson, Peter, Colombo, Pierre, Amuok, Priscilla, Lhoest, Quentin, Harliman, Rheza, Bommasani, Rishi, López, Roberto Luis, Ribeiro, Rui, Osei, Salomey, Pyysalo, Sampo, Nagel, Sebastian, Bose, Shamik, Muhammad, Shamsuddeen Hassan, Sharma, Shanya, Longpre, Shayne, Nikpoor, Somaieh, 
Silberberg, Stanislav, Pai, Suhas, Zink, Sydney, Torrent, Tiago Timponi, Schick, Timo, Thrush, Tristan, Danchev, Valentin, Nikoulina, Vassilina, Laippala, Veronika, Lepercq, Violette, Prabhu, Vrinda, Alyafeai, Zaid, Talat, Zeerak, Raja, Arun, Heinzerling, Benjamin, Si, Chenglei, Taşar, Davut Emre, Salesky, Elizabeth, Mielke, Sabrina J., Lee, Wilson Y., Sharma, Abheesht, Santilli, Andrea, Chaffin, Antoine, Stiegler, Arnaud, Datta, Debajyoti, Szczechla, Eliza, Chhablani, Gunjan, Wang, Han, Pandey, Harshit, Strobelt, Hendrik, Fries, Jason Alan, Rozen, Jos, Gao, Leo, Sutawika, Lintang, Bari, M Saiful, Al-shaibani, Maged S., Manica, Matteo, Nayak, Nihal, Teehan, Ryan, Albanie, Samuel, Shen, Sheng, Ben-David, Srulik, Bach, Stephen H., Kim, Taewoon, Bers, Tali, Fevry, Thibault, Neeraj, Trishala, Thakker, Urmish, Raunak, Vikas, Tang, Xiangru, Yong, Zheng-Xin, Sun, Zhiqing, Brody, Shaked, Uri, Yallow, Tojarieh, Hadar, Roberts, Adam, Chung, Hyung Won, Tae, Jaesung, Phang, Jason, Press, Ofir, Li, Conglong, Narayanan, Deepak, Bourfoune, Hatim, Casper, Jared, Rasley, Jeff, Ryabinin, Max, Mishra, Mayank, Zhang, Minjia, Shoeybi, Mohammad, Peyrounette, Myriam, Patry, Nicolas, Tazi, Nouamane, Sanseviero, Omar, von Platen, Patrick, Cornette, Pierre, Lavallée, Pierre François, Lacroix, Rémi, Rajbhandari, Samyam, Gandhi, Sanchit, Smith, Shaden, Requena, Stéphane, Patil, Suraj, Dettmers, Tim, Baruwa, Ahmed, Singh, Amanpreet, Cheveleva, Anastasia, Ligozat, Anne-Laure, Subramonian, Arjun, Névéol, Aurélie, Lovering, Charles, Garrette, Dan, Tunuguntla, Deepak, Reiter, Ehud, Taktasheva, Ekaterina, Voloshina, Ekaterina, Bogdanov, Eli, Winata, Genta Indra, Schoelkopf, Hailey, Kalo, Jan-Christoph, Novikova, Jekaterina, Forde, Jessica Zosa, Clive, Jordan, Kasai, Jungo, Kawamura, Ken, Hazan, Liam, Carpuat, Marine, Clinciu, Miruna, Kim, Najoung, Cheng, Newton, Serikov, Oleg, Antverg, Omer, van der Wal, Oskar, Zhang, Rui, Zhang, Ruochen, Gehrmann, Sebastian, Mirkin, Shachar, Pais, Shani, Shavrina, 
Tatiana, Scialom, Thomas, Yun, Tian, Limisiewicz, Tomasz, Rieser, Verena, Protasov, Vitaly, Mikhailov, Vladislav, Pruksachatkun, Yada, Belinkov, Yonatan, Bamberger, Zachary, Kasner, Zdeněk, Rueda, Alice, Pestana, Amanda, Feizpour, Amir, Khan, Ammar, Faranak, Amy, Santos, Ana, Hevia, Anthony, Unldreaj, Antigona, Aghagol, Arash, Abdollahi, Arezoo, Tammour, Aycha, HajiHosseini, Azadeh, Behroozi, Bahareh, Ajibade, Benjamin, Saxena, Bharat, Ferrandis, Carlos Muñoz, McDuff, Daniel, Contractor, Danish, Lansky, David, David, Davis, Kiela, Douwe, Nguyen, Duong A., Tan, Edward, Baylor, Emi, Ozoani, Ezinwanne, Mirza, Fatima, Ononiwu, Frankline, Rezanejad, Habib, Jones, Hessie, Bhattacharya, Indrani, Solaiman, Irene, Sedenko, Irina, Nejadgholi, Isar, Passmore, Jesse, Seltzer, Josh, Sanz, Julio Bonis, Dutra, Livia, Samagaio, Mairon, Elbadri, Maraim, Mieskes, Margot, Gerchick, Marissa, Akinlolu, Martha, McKenna, Michael, Qiu, Mike, Ghauri, Muhammed, Burynok, Mykola, Abrar, Nafis, Rajani, Nazneen, Elkott, Nour, Fahmy, Nour, Samuel, Olanrewaju, An, Ran, Kromann, Rasmus, Hao, Ryan, Alizadeh, Samira, Shubber, Sarmad, Wang, Silas, Roy, Sourav, Viguier, Sylvain, Le, Thanh, Oyebade, Tobi, Le, Trieu, Yang, Yoyo, Nguyen, Zach, Kashyap, Abhinav Ramesh, Palasciano, Alfredo, Callahan, Alison, Shukla, Anima, Miranda-Escalada, Antonio, Singh, Ayush, Beilharz, Benjamin, Wang, Bo, Brito, Caio, Zhou, Chenxi, Jain, Chirag, Xu, Chuxin, Fourrier, Clémentine, Periñán, Daniel León, Molano, Daniel, Yu, Dian, Manjavacas, Enrique, Barth, Fabio, Fuhrimann, Florian, Altay, Gabriel, Bayrak, Giyaseddin, Burns, Gully, Vrabec, Helena U., Bello, Imane, Dash, Ishani, Kang, Jihyun, Giorgi, John, Golde, Jonas, Posada, Jose David, Sivaraman, Karthik Rangasai, Bulchandani, Lokesh, Liu, Lu, Shinzato, Luisa, de Bykhovetz, Madeleine Hahn, Takeuchi, Maiko, Pàmies, Marc, Castillo, Maria A, Nezhurina, Marianna, Sänger, Mario, Samwald, Matthias, Cullan, Michael, Weinberg, Michael, De Wolf, Michiel, Mihaljcic, Mina, Liu, 
Minna, Freidank, Moritz, Kang, Myungsun, Seelam, Natasha, Dahlberg, Nathan, Broad, Nicholas Michio, Muellner, Nikolaus, Fung, Pascale, Haller, Patrick, Chandrasekhar, Ramya, Eisenberg, Renata, Martin, Robert, Canalli, Rodrigo, Su, Rosaline, Su, Ruisi, Cahyawijaya, Samuel, Garda, Samuele, Deshmukh, Shlok S, Mishra, Shubhanshu, Kiblawi, Sid, Ott, Simon, Sang-aroonsiri, Sinee, Kumar, Srishti, Schweter, Stefan, Bharati, Sushil, Laud, Tanmay, Gigant, Théo, Kainuma, Tomoya, Kusa, Wojciech, Labrak, Yanis, Bajaj, Yash Shailesh, Venkatraman, Yash, Xu, Yifan, Xu, Yingxin, Xu, Yu, Tan, Zhe, Xie, Zhongli, Ye, Zifan, Bras, Mathilde, Belkada, Younes, and Wolf, Thomas
- Subjects
Computer Science - Computation and Language - Abstract
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
- Published
- 2022
12. Measuring Causal Effects of Data Statistics on Language Model's 'Factual' Predictions
- Author
-
Elazar, Yanai, Kassner, Nora, Ravfogel, Shauli, Feder, Amir, Ravichander, Abhilasha, Mosbach, Marius, Belinkov, Yonatan, Schütze, Hinrich, and Goldberg, Yoav
- Subjects
Computer Science - Computation and Language - Abstract
Large amounts of training data are one of the major reasons for the high performance of state-of-the-art NLP models. But what exactly in the training data causes a model to make a certain prediction? We seek to answer this question by providing a language for describing how training data influences predictions, through a causal framework. Importantly, our framework bypasses the need to retrain expensive models and allows us to estimate causal effects based on observational data alone. Addressing the problem of extracting factual knowledge from pretrained language models (PLMs), we focus on simple data statistics such as co-occurrence counts and show that these statistics do influence the predictions of PLMs, suggesting that such models rely on shallow heuristics. Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models., Comment: We received a criticism regarding the validity of the causal formulation in this paper. We will address it in an upcoming version
- Published
- 2022
13. EDIN: An End-to-end Benchmark and Pipeline for Unknown Entity Discovery and Indexing
- Author
-
Kassner, Nora, Petroni, Fabio, Plekhanov, Mikhail, Riedel, Sebastian, and Cancedda, Nicola
- Subjects
Computer Science - Computation and Language - Abstract
Existing work on Entity Linking mostly assumes that the reference knowledge base is complete, and therefore all mentions can be linked. In practice this is hardly ever the case, as knowledge bases are incomplete and novel concepts arise constantly. This paper introduces the Unknown Entity Discovery and Indexing (EDIN) benchmark, in which unknown entities, that is, entities without a description in the knowledge base and without labeled mentions, have to be integrated into an existing entity linking system. By contrasting EDIN with zero-shot entity linking, we provide insight on the additional challenges it poses. Building on dense-retrieval based entity linking, we introduce the end-to-end EDIN pipeline that detects, clusters, and indexes mentions of unknown entities in context. Experiments show that indexing a single embedding per entity, unifying the information of multiple mentions, works better than indexing mentions independently.
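The abstract's final finding, that a single embedding unifying multiple mentions beats per-mention indexing, can be sketched as follows. This is a minimal illustration under stated assumptions: the mention vectors are random stand-ins, not real encoder outputs, and the pipeline's clustering and detection stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for encoder embeddings of one unknown entity's mentions,
# observed in three different contexts (unit-normalized random vectors).
mentions = rng.normal(size=(3, 8))
mentions /= np.linalg.norm(mentions, axis=1, keepdims=True)

# EDIN-style indexing: unify all clustered mentions of the unknown entity
# into a single entity embedding (here, the normalized mean).
entity_embedding = mentions.mean(axis=0)
entity_embedding /= np.linalg.norm(entity_embedding)

# At query time, a new mention is scored against the index by dot product
# (cosine similarity, since both vectors are unit-normalized).
query = mentions[0] + 0.1 * rng.normal(size=8)
query /= np.linalg.norm(query)
score = float(entity_embedding @ query)
print(round(score, 3))
```

The averaged vector pools contextual evidence from every mention, which is one intuition for why a single unified entry can retrieve new mentions more reliably than any individual mention embedding indexed on its own.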
- Published
- 2022
14. Language Models As or For Knowledge Bases
- Author
-
Razniewski, Simon, Yates, Andrew, Kassner, Nora, and Weikum, Gerhard
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Databases - Abstract
Pre-trained language models (LMs) have recently gained attention for their potential as an alternative to (or proxy for) explicit knowledge bases (KBs). In this position paper, we examine this hypothesis, identify strengths and limitations of both LMs and KBs, and discuss the complementary nature of the two paradigms. In particular, we offer qualitative arguments that latent LMs are not suitable as a substitute for explicit KBs, but could play a major role for augmenting and curating KBs.
- Published
- 2021
15. BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
- Author
-
Kassner, Nora, Tafjord, Oyvind, Schütze, Hinrich, and Clark, Peter
- Subjects
Computer Science - Computation and Language - Abstract
Although pretrained language models (PTLMs) contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after specialized training. As a result, it can be hard to identify what the model actually "believes" about the world, making it susceptible to inconsistent behavior and simple errors. Our goal is to reduce these problems. Our approach is to embed a PTLM in a broader system that also includes an evolving, symbolic memory of beliefs -- a BeliefBank -- that records but then may modify the raw PTLM answers. We describe two mechanisms to improve belief consistency in the overall system. First, a reasoning component -- a weighted MaxSAT solver -- revises beliefs that significantly clash with others. Second, a feedback component issues future queries to the PTLM using known beliefs as context. We show that, in a controlled experimental setting, these two mechanisms result in more consistent beliefs in the overall system, improving both the accuracy and consistency of its answers over time. This is significant as it is a first step towards PTLM-based architectures with a systematic notion of belief, enabling them to construct a more coherent picture of the world, and improve over time without model retraining., Comment: EMNLP 2021 Camera Ready. arXiv admin note: substantial text overlap with arXiv:2104.08401
- Published
- 2021
16. Enriching a Model's Notion of Belief using a Persistent Memory
- Author
-
Kassner, Nora, Tafjord, Oyvind, Schütze, Hinrich, and Clark, Peter
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
Although pretrained language models (PTLMs) have been shown to contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after using specialized training techniques to reduce inconsistency. As a result, it can be hard to identify what the model actually "believes" about the world. Our goal is to reduce this problem, so systems are more globally consistent and accurate in their answers. Our approach is to add a memory component -- a BeliefBank -- that records a model's answers, and two mechanisms that use it to improve consistency among beliefs. First, a reasoning component -- a weighted SAT solver -- improves consistency by flipping answers that significantly clash with others. Second, a feedback component re-queries the model but using known beliefs as context. We show that, in a controlled experimental setting, these two mechanisms improve both accuracy and consistency. This is significant as it is a first step towards endowing models with an evolving memory, allowing them to construct a more coherent picture of the world., Comment: This is an old and now obsolete draft. See arXiv:2109.14723 ("BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief") for the final paper
- Published
- 2021
17. Static Embeddings as Efficient Knowledge Bases?
- Author
-
Dufter, Philipp, Kassner, Nora, and Schütze, Hinrich
- Subjects
Computer Science - Computation and Language - Abstract
Recent research investigates factual knowledge stored in large pretrained language models (PLMs). Instead of structural knowledge base (KB) queries, masked sentences such as "Paris is the capital of [MASK]" are used as probes. The good performance on this analysis task has been interpreted as PLMs becoming potential repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study knowledge contained in static embeddings. We show that, when restricting the output space to a candidate set, simple nearest neighbor matching using static embeddings performs better than PLMs. E.g., static embeddings perform 1.6 percentage points better than BERT while using just 0.3% of the energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive ability to compose meaningful representations from a much smaller subword vocabulary., Comment: NAACL2021 CRV; first two authors contributed equally
- Published
- 2021
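The candidate-restricted nearest-neighbor matching described in the abstract above can be sketched as follows. The three-dimensional vectors and the query embedding are toy stand-ins, not real fastText or word2vec embeddings.

```python
# Sketch: answer a cloze probe by picking, from a fixed candidate set, the
# entity whose static embedding is closest (cosine similarity) to the query.
import numpy as np

def predict(query_vec, candidates, emb):
    """Return the candidate whose embedding has highest cosine similarity."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(candidates, key=lambda c: cos(query_vec, emb[c]))

# Toy embedding table (illustrative values only).
emb = {
    "France":  np.array([0.9, 0.1, 0.0]),
    "Germany": np.array([0.1, 0.9, 0.0]),
    "Japan":   np.array([0.0, 0.1, 0.9]),
}
# Toy query vector standing in for "Paris is the capital of [MASK]".
query = np.array([0.8, 0.2, 0.1])
answer = predict(query, ["France", "Germany", "Japan"], emb)
```

Restricting the output space to a candidate set is what makes this cheap baseline competitive: the embedding only needs to rank a handful of options, not generate over a full vocabulary.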
18. Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models
- Author
-
Kassner, Nora, Dufter, Philipp, and Schütze, Hinrich
- Subjects
Computer Science - Computation and Language - Abstract
Recently, it has been found that monolingual English language models can be used as knowledge bases. Instead of structural knowledge base queries, masked sentences such as "Paris is the capital of [MASK]" are used as probes. We translate the established benchmarks TREx and GoogleRE into 53 languages. Working with mBERT, we investigate three questions. (i) Can mBERT be used as a multilingual knowledge base? Most prior work only considers English. Extending research to multiple languages is important for diversity and accessibility. (ii) Is mBERT's performance as knowledge base language-independent or does it vary from language to language? (iii) A multilingual model is trained on more text, e.g., mBERT is trained on 104 Wikipedias. Can mBERT leverage this for better performance? We find that using mBERT as a knowledge base yields varying performance across languages and pooling predictions across languages improves performance. Conversely, mBERT exhibits a language bias; e.g., when queried in Italian, it tends to predict Italy as the country of origin., Comment: Accepted to EACL 2021
- Published
- 2021
19. Measuring and Improving Consistency in Pretrained Language Models
- Author
-
Elazar, Yanai, Kassner, Nora, Ravfogel, Shauli, Ravichander, Abhilasha, Hovy, Eduard, Schütze, Hinrich, and Goldberg, Yoav
- Subjects
Computer Science - Computation and Language - Abstract
Consistency of a model -- that is, the invariance of its behavior under meaning-preserving alternations in its input -- is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create ParaRel, a high-quality resource of cloze-style query English paraphrases. It contains a total of 328 paraphrases for 38 relations. Using ParaRel, we show that the consistency of all PLMs we experiment with is poor -- though with high variance between relations. Our analysis of the representational spaces of PLMs suggests that they have a poor structure and are currently not suitable for representing knowledge robustly. Finally, we propose a method for improving model consistency and experimentally demonstrate its effectiveness., Comment: Accepted to the TACL journal, pre-MIT Press publication version
- Published
- 2021
20. Dirichlet-Smoothed Word Embeddings for Low-Resource Settings
- Author
-
Jungmaier, Jakob, Kassner, Nora, and Roth, Benjamin
- Subjects
Computer Science - Computation and Language - Abstract
Nowadays, classical count-based word embeddings using positive pointwise mutual information (PPMI) weighted co-occurrence matrices have been widely superseded by machine-learning-based methods like word2vec and GloVe. But these methods are usually applied using very large amounts of text data. In many cases, however, there is not much text data available, for example for specific domains or low-resource languages. This paper revisits PPMI by adding Dirichlet smoothing to correct its bias towards rare words. We evaluate on standard word similarity data sets and compare to word2vec and the recent state of the art for low-resource settings: Positive and Unlabeled (PU) Learning for word embeddings. The proposed method outperforms PU-Learning for low-resource settings and obtains competitive results for Maltese and Luxembourgish.
- Published
- 2020
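The core idea of the abstract above, adding Dirichlet smoothing to PPMI to correct its bias towards rare words, can be sketched in a few lines. The smoothing constant and the toy count matrix are illustrative choices, not the paper's configuration.

```python
# Sketch: PPMI over a word-context co-occurrence matrix with add-alpha
# (Dirichlet) smoothing of the raw counts before computing probabilities.
import numpy as np

def smoothed_ppmi(counts, alpha=0.1):
    """Positive PMI matrix from add-alpha-smoothed co-occurrence counts."""
    c = counts + alpha                      # Dirichlet smoothing of counts
    total = c.sum()
    p_wc = c / total                        # joint word-context probability
    p_w = c.sum(axis=1, keepdims=True) / total
    p_c = c.sum(axis=0, keepdims=True) / total
    pmi = np.log2(p_wc / (p_w * p_c))
    return np.maximum(pmi, 0.0)             # keep only the positive part

# Toy 2x2 count matrix: each word co-occurs only with "its own" context.
counts = np.array([[8.0, 0.0],
                   [0.0, 2.0]])
ppmi = smoothed_ppmi(counts)
```

Without smoothing, zero counts make the PMI undefined and rare words get inflated scores; adding alpha to every cell dampens exactly those rare-event estimates.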
21. Are Pretrained Language Models Symbolic Reasoners Over Knowledge?
- Author
-
Kassner, Nora, Krojer, Benno, and Schütze, Hinrich
- Subjects
Computer Science - Computation and Language - Abstract
How can pretrained language models (PLMs) learn factual knowledge from the training set? We investigate the two most important mechanisms: reasoning and memorization. Prior work has attempted to quantify the number of facts PLMs learn, but we present, using synthetic data, the first study that investigates the causal relation between facts present in training and facts learned by the PLM. For reasoning, we show that PLMs seem to learn to apply some symbolic reasoning rules correctly but struggle with others, including two-hop reasoning. Further analysis suggests that even the application of learned reasoning rules is flawed. For memorization, we identify schema conformity (facts systematically supported by other facts) and frequency as key factors for its success., Comment: Accepted to CoNLL 2020
- Published
- 2020
22. BERT-kNN: Adding a kNN Search Component to Pretrained Language Models for Better QA
- Author
-
Kassner, Nora and Schütze, Hinrich
- Subjects
Computer Science - Computation and Language - Abstract
Khandelwal et al. (2020) use a k-nearest-neighbor (kNN) component to improve language model performance. We show that this idea is beneficial for open-domain question answering (QA). To improve the recall of facts encountered during training, we combine BERT (Devlin et al., 2019) with a traditional information retrieval step (IR) and a kNN search over a large datastore of an embedded text collection. Our contributions are as follows: i) BERT-kNN outperforms BERT on cloze-style QA by large margins without any further training. ii) We show that BERT often identifies the correct response category (e.g., US city), but only kNN recovers the factually correct answer (e.g., "Miami"). iii) Compared to BERT, BERT-kNN excels for rare facts. iv) BERT-kNN can easily handle facts not covered by BERT's training set, e.g., recent events., Comment: to appear in EMNLP Findings
- Published
- 2020
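The retrieval step described in the abstract above can be illustrated with a minimal sketch: a datastore maps embedded text passages to answer strings, and at query time the answers of the nearest stored embeddings are returned. The two-dimensional vectors stand in for real BERT embeddings and are invented for the example.

```python
# Sketch of kNN lookup over an embedded datastore: rank stored entries by
# cosine similarity to the query embedding and return the top-k answers.
import numpy as np

# Toy datastore of (passage embedding, answer string) pairs.
datastore = [
    (np.array([0.9, 0.1]), "Miami"),
    (np.array([0.1, 0.9]), "Paris"),
]

def knn_answers(query_vec, store, k=1):
    """Return answers of the k nearest datastore entries (cosine similarity)."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    ranked = sorted(store, key=lambda kv: cos(query_vec, kv[0]), reverse=True)
    return [ans for _, ans in ranked[:k]]

# Toy query embedding standing in for "He was born in the US city of [MASK]."
preds = knn_answers(np.array([0.8, 0.2]), datastore)
```

In the full system the datastore answers are combined with BERT's own distribution; this sketch only shows why retrieval helps, since the factually correct string is looked up rather than generated, so rare and unseen facts become reachable.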
23. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
- Author
-
Kassner, Nora and Schütze, Hinrich
- Subjects
Computer Science - Computation and Language - Abstract
Building on Petroni et al. (2019), we propose two new probing tasks analyzing factual knowledge stored in Pretrained Language Models (PLMs). (1) Negation. We find that PLMs do not distinguish between negated ("Birds cannot [MASK]") and non-negated ("Birds can [MASK]") cloze questions. (2) Mispriming. Inspired by priming methods in human psychology, we add "misprimes" to cloze questions ("Talk? Birds can [MASK]"). We find that PLMs are easily distracted by misprimes. These results suggest that PLMs still have a long way to go to adequately learn human-like factual knowledge., Comment: ACL 2020
- Published
- 2019
24. Hard to Place: Gay and Lesbian Foster Families and the Remaking of U.S. Family Policy
- Author
-
Kassner, Nora
- Subjects
Public policy ,American history ,LGBTQ studies ,family policy ,foster care ,gay ,lesbian ,queer families ,street-level bureaucracy - Abstract
In 1970, gays and lesbians across the United States routinely lost custody of children—even those they had birthed—in custody disputes. By the end of the twentieth century, not only did gay people stand a good chance of keeping custody of their children, they also had won the right to parent other people’s children through foster care and adoption. This dissertation examines the role of foster care in the remaking of lesbian and gay family rights and U.S. family policy more broadly. Between the 1970s and the 1990s, foster care systems were tasked with solving two unanticipated family crises: a perceived rise in the number of “runaway” or “street” youth in the nation’s cities, and “boarder babies,” children exposed to HIV-AIDS thought to be languishing in hospitals. As these crises strained foster care systems, social workers turned to white, affluent gay men and lesbians as foster parents. These individual, day-to-day decisions about where to place a child added up to a revolutionary impact on U.S. family policy. For many white gay men and lesbians, foster care became a venue for the expansion of gay parenting rights. For the children’s predominantly Black birth families, however, the expansion of gay and lesbian parenthood was a revolution that turned back to old ideas about race and ability to deny these families access to their children. By the turn of the twenty-first century, gay foster parents put parenting rights at the forefront of the national policy agenda, transforming the meaning of family in the United States.
- Published
- 2023
25. Language Models with Rationality
- Author
-
Kassner, Nora, primary, Tafjord, Oyvind, additional, Sabharwal, Ashish, additional, Richardson, Kyle, additional, Schuetze, Hinrich, additional, and Clark, Peter, additional
- Published
- 2023
- Full Text
- View/download PDF
26. "They Just Handed Me Somebody's Baby": Gay Foster Parents, Children with HIV-AIDS, and the "Colorblind" Family
- Author
-
Kassner, Nora, primary
- Published
- 2023
- Full Text
- View/download PDF
27. Polar Ducks and Where to Find Them: Enhancing Entity Linking with Duck Typing and Polar Box Embeddings
- Author
-
Atzeni, Mattia, primary, Plekhanov, Mikhail, additional, Dreyer, Frederic, additional, Kassner, Nora, additional, Merello, Simone, additional, Martin, Louis, additional, and Cancedda, Nicola, additional
- Published
- 2023
- Full Text
- View/download PDF
28. Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages
- Author
-
ImaniGooghari, Ayyoob, primary, Lin, Peiqin, additional, Kargaran, Amir Hossein, additional, Severini, Silvia, additional, Jalili Sabet, Masoud, additional, Kassner, Nora, additional, Ma, Chunlan, additional, Schmid, Helmut, additional, Martins, André, additional, Yvon, François, additional, and Schütze, Hinrich, additional
- Published
- 2023
- Full Text
- View/download PDF
29. Measuring Causal Effects of Data Statistics on Language Model's 'Factual' Predictions
- Author
-
Elazar, Yanai, Kassner, Nora, Ravfogel, Shauli, Feder, Amir, Ravichander, Abhilasha, Mosbach, Marius, Belinkov, Yonatan, Schütze, Hinrich, and Goldberg, Yoav
- Subjects
FOS: Computer and information sciences ,Computer Science - Computation and Language ,Computation and Language (cs.CL) - Abstract
Large amounts of training data are one of the major reasons for the high performance of state-of-the-art NLP models. But what exactly in the training data causes a model to make a certain prediction? We seek to answer this question by providing a language for describing how training data influences predictions, through a causal framework. Importantly, our framework bypasses the need to retrain expensive models and allows us to estimate causal effects based on observational data alone. Addressing the problem of extracting factual knowledge from pretrained language models (PLMs), we focus on simple data statistics such as co-occurrence counts and show that these statistics do influence the predictions of PLMs, suggesting that such models rely on shallow heuristics. Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models., Comment: We received criticism regarding the validity of the causal formulation in this paper; we will address it in an upcoming version
- Published
- 2022
30. Anisotropic cerebral vascular architecture causes orientation dependency in cerebral blood flow and volume measured with dynamic susceptibility contrast magnetic resonance imaging
- Author
-
Hernández-Torres, Enedino, Kassner, Nora, Forkert, Nils Daniel, Wei, Luxi, Wiggermann, Vanessa, Daemen, Madeleine, Machan, Lindsay, Traboulsee, Anthony, Li, David, and Rauscher, Alexander
- Published
- 2017
- Full Text
- View/download PDF
31. EDIN: An End-to-end Benchmark and Pipeline for Unknown Entity Discovery and Indexing
- Author
-
Kassner, Nora, primary, Petroni, Fabio, additional, Plekhanov, Mikhail, additional, Riedel, Sebastian, additional, and Cancedda, Nicola, additional
- Published
- 2022
- Full Text
- View/download PDF
32. Review: Kids on the Street: Queer Kinship and Religion in San Francisco’s Tenderloin, by Joseph Plaster
- Author
-
Kassner, Nora
- Published
- 2024
- Full Text
- View/download PDF
33. Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models
- Author
-
Kassner, Nora, primary, Dufter, Philipp, additional, and Schütze, Hinrich, additional
- Published
- 2021
- Full Text
- View/download PDF
34. Static Embeddings as Efficient Knowledge Bases?
- Author
-
Dufter, Philipp, primary, Kassner, Nora, additional, and Schütze, Hinrich, additional
- Published
- 2021
- Full Text
- View/download PDF
35. BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
- Author
-
Kassner, Nora, primary, Tafjord, Oyvind, additional, Schütze, Hinrich, additional, and Clark, Peter, additional
- Published
- 2021
- Full Text
- View/download PDF
36. Measuring and Improving Consistency in Pretrained Language Models
- Author
-
Elazar, Yanai, primary, Kassner, Nora, additional, Ravfogel, Shauli, additional, Ravichander, Abhilasha, additional, Hovy, Eduard, additional, Schütze, Hinrich, additional, and Goldberg, Yoav, additional
- Published
- 2021
- Full Text
- View/download PDF
37. Erratum: Measuring and Improving Consistency in Pretrained Language Models
- Author
-
Elazar, Yanai, primary, Kassner, Nora, additional, Ravfogel, Shauli, additional, Ravichander, Abhilasha, additional, Hovy, Eduard, additional, Schütze, Hinrich, additional, and Goldberg, Yoav, additional
- Published
- 2021
- Full Text
- View/download PDF
38. BERT-kNN: Adding a kNN Search Component to Pretrained Language Models for Better QA
- Author
-
Kassner, Nora, primary and Schütze, Hinrich, additional
- Published
- 2020
- Full Text
- View/download PDF
39. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
- Author
-
Kassner, Nora, primary and Schütze, Hinrich, additional
- Published
- 2020
- Full Text
- View/download PDF
40. Are Pretrained Language Models Symbolic Reasoners over Knowledge?
- Author
-
Kassner, Nora, primary, Krojer, Benno, additional, and Schütze, Hinrich, additional
- Published
- 2020
- Full Text
- View/download PDF
41. Histogram based analysis of lung perfusion of children after congenital diaphragmatic hernia repair
- Author
-
Kassner, Nora, primary, Weis, Meike, additional, Zahn, Katrin, additional, Schaible, Thomas, additional, Schoenberg, Stefan O., additional, Schad, Lothar R., additional, and Zöllner, Frank G., additional
- Published
- 2018
- Full Text
- View/download PDF
42. Anisotropic cerebral vascular architecture causes orientation dependency in cerebral blood flow and volume measured with dynamic susceptibility contrast magnetic resonance imaging
- Author
-
Hernández-Torres, Enedino, primary, Kassner, Nora, additional, Forkert, Nils Daniel, additional, Wei, Luxi, additional, Wiggermann, Vanessa, additional, Daemen, Madeleine, additional, Machan, Lindsay, additional, Traboulsee, Anthony, additional, Li, David, additional, and Rauscher, Alexander, additional
- Published
- 2016
- Full Text
- View/download PDF
43. Hic Est Uxor Mihei: How Roman Funerary Portraits Carve the Ideal Freedwoman
- Author
-
Kassner, Nora
- Abstract
This paper examines the depiction of Roman freedwomen (former slaves) in thirty-five late Republican and Augustan funerary portraits. Extant portraits utilize a complex visual and written vocabulary to reveal a wide variety of views of freedwomen’s status and agency. This paper relies upon analyses of the cultural climates of the late Republican and Augustan period, careful interrogation of the material evidence through the lens of both post-structuralist and affective theory, and the use of case studies. Ultimately, it argues that funerary portraits create diverse representations of the ideal freedwoman that become part of an ongoing cultural dialogue concerning the place of freedwomen in Roman society.
- Published
- 2014
44. Whose Town? The Rise of the Elite in Augustan Pompeii
- Author
-
Kassner, Nora
- Abstract
During the Augustan period, Pompeii’s elite restructured city landmarks to augment their own power. This paper studies the intersection of class and urban geography in this key moment of Pompeii’s history, identifying how changing physical landmarks benefits or disadvantages multiple classes of Pompeian residents. Although the impact of the rise of Augustus on the city of Rome has been studied extensively, this paper supplements that research by studying physical changes within the south Italian setting of Pompeii. In the Augustan period, Pompeii’s urban environment increasingly emphasized major public spaces and elite-dominated monumental architecture over earlier neighborhood landmarks that gave prestige to multiple classes. Due to this shift, the power of Pompeii’s many non-elite classes decreased throughout the town while the elite capitalized on urban changes to increase their influence over Pompeii. Augustan Pompeii transitioned from a mixed-power to an elite-dominated city.
- Published
- 2013