1,156,490 results on '"Lawrence, A."'
Search Results
2. Search for gravitational waves emitted from SN 2023ixf
- Author
-
The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration, Abac, A. G., Abbott, R., Abouelfettouh, I., Acernese, F., Ackley, K., Adhicary, S., Adhikari, N., Adhikari, R. X., Adkins, V. K., Agarwal, D., Agathos, M., Abchouyeh, M. Aghaei, Aguiar, O. D., Aguilar, I., Aiello, L., Ain, A., Akutsu, T., Albanesi, S., Alfaidi, R. A., Al-Jodah, A., Alléné, C., Allocca, A., Al-Shammari, S., Altin, P. A., Alvarez-Lopez, S., Amato, A., Amez-Droz, L., Amorosi, A., Amra, C., Ananyeva, A., Anderson, S. B., Anderson, W. G., Andia, M., Ando, M., Andrade, T., Andres, N., Andrés-Carcasona, M., Andrić, T., Anglin, J., Ansoldi, S., Antelis, J. M., Antier, S., Aoumi, M., Appavuravther, E. Z., Appert, S., Apple, S. K., Arai, K., Araya, A., Araya, M. C., Areeda, J. S., Argianas, L., Aritomi, N., Armato, F., Arnaud, N., Arogeti, M., Aronson, S. M., Ashton, G., Aso, Y., Assiduo, M., Melo, S. Assis de Souza, Aston, S. M., Astone, P., Attadio, F., Aubin, F., AultONeal, K., Avallone, G., Babak, S., Badaracco, F., Badger, C., Bae, S., Bagnasco, S., Bagui, E., Baier, J. G., Baiotti, L., Bajpai, R., Baka, T., Ball, M., Ballardin, G., Ballmer, S. W., Banagiri, S., Banerjee, B., Bankar, D., Baral, P., Barayoga, J. C., Barish, B. C., Barker, D., Barneo, P., Barone, F., Barr, B., Barsotti, L., Barsuglia, M., Barta, D., Bartoletti, A. M., Barton, M. A., Bartos, I., Basak, S., Basalaev, A., Bassiri, R., Basti, A., Bates, D. E., Bawaj, M., Baxi, P., Bayley, J. C., Baylor, A. C., Baynard II, P. A., Bazzan, M., Bedakihale, V. M., Beirnaert, F., Bejger, M., Belardinelli, D., Bell, A. S., Benedetto, V., Benoit, W., Bentley, J. D., Yaala, M. Ben, Bera, S., Berbel, M., Bergamin, F., Berger, B. K., Bernuzzi, S., Beroiz, M., Bersanetti, D., Bertolini, A., Betzwieser, J., Beveridge, D., Bevins, N., Bhandare, R., Bhardwaj, U., Bhatt, R., Bhattacharjee, D., Bhaumik, S., Bhowmick, S., Bianchi, A., Bilenko, I. A., Billingsley, G., Binetti, A., Bini, S., Birnholtz, O., Biscoveanu, S., Bisht, A., Bitossi, M., Bizouard, M. -A., Blackburn, J. K., Blagg, L. A., Blair, C. D., Blair, D. G., Bobba, F., Bode, N., Boileau, G., Boldrini, M., Bolingbroke, G. N., Bolliand, A., Bonavena, L. D., Bondarescu, R., Bondu, F., Bonilla, E., Bonilla, M. S., Bonino, A., Bonnand, R., Booker, P., Borchers, A., Boschi, V., Bose, S., Bossilkov, V., Boudart, V., Boudon, A., Bozzi, A., Bradaschia, C., Brady, P. R., Braglia, M., Branch, A., Branchesi, M., Brandt, J., Braun, I., Breschi, M., Briant, T., Brillet, A., Brinkmann, M., Brockill, P., Brockmueller, E., Brooks, A. F., Brown, B. C., Brown, D. D., Brozzetti, M. L., Brunett, S., Bruno, G., Bruntz, R., Bryant, J., Bucci, F., Buchanan, J., Bulashenko, O., Bulik, T., Bulten, H. J., Buonanno, A., Burtnyk, K., Buscicchio, R., Buskulic, D., Buy, C., Byer, R. L., Davies, G. S. Cabourn, Cabras, G., Cabrita, R., Cáceres-Barbosa, V., Cadonati, L., Cagnoli, G., Cahillane, C., Bustillo, J. Calderón, Callister, T. A., Calloni, E., Camp, J. B., Canepa, M., Santoro, G. Caneva, Cannon, K. C., Cao, H., Capistran, L. A., Capocasa, E., Capote, E., Carapella, G., Carbognani, F., Carlassara, M., Carlin, J. B., Carpinelli, M., Carrillo, G., Carter, J. J., Carullo, G., Diaz, J. Casanueva, Casentini, C., Castro-Lucas, S. Y., Caudill, S., Cavaglià, M., Cavalieri, R., Cella, G., Cerdá-Durán, P., Cesarini, E., Chaibi, W., Chakraborty, P., Subrahmanya, S. Chalathadka, Chan, J. C. L., Chan, M., Chandra, K., Chang, R. -J., Chao, S., Charlton, E. L., Charlton, P., Chassande-Mottin, E., Chatterjee, C., Chatterjee, Debarati, Chatterjee, Deep, Chaturvedi, M., Chaty, S., Chen, A., Chen, A. H. -Y., Chen, D., Chen, H., Chen, H. Y., Chen, J., Chen, K. H., Chen, Y., Chen, Yanbei, Chen, Yitian, Cheng, H. P., Chessa, P., Cheung, H. T., Cheung, S. Y., Chiadini, F., Chiarini, G., Chierici, R., Chincarini, A., Chiofalo, M. L., Chiummo, A., Chou, C., Choudhary, S., Christensen, N., Chua, S. S. Y., Chugh, P., Ciani, G., Ciecielag, P., Cieślar, M., Cifaldi, M., Ciolfi, R., Clara, F., Clark, J. A., Clarke, J., Clarke, T. A., Clearwater, P., Clesse, S., Coccia, E., Codazzo, E., Cohadon, P. -F., Colace, S., Colleoni, M., Collette, C. G., Collins, J., Colloms, S., Colombo, A., Colpi, M., Compton, C. M., Connolly, G., Conti, L., Corbitt, T. R., Cordero-Carrión, I., Corezzi, S., Cornish, N. J., Corsi, A., Cortese, S., Costa, C. A., Cottingham, R., Coughlin, M. W., Couineaux, A., Coulon, J. -P., Countryman, S. T., Coupechoux, J. -F., Couvares, P., Coward, D. M., Cowart, M. J., Coyne, R., Craig, K., Creed, R., Creighton, J. D. E., Creighton, T. D., Cremonese, P., Criswell, A. W., Crockett-Gray, J. C. G., Crook, S., Crouch, R., Csizmazia, J., Cudell, J. R., Cullen, T. J., Cumming, A., Cuoco, E., Cusinato, M., Dabadie, P., Canton, T. Dal, Dall'Osso, S., Pra, S. Dal, Dálya, G., D'Angelo, B., Danilishin, S., D'Antonio, S., Danzmann, K., Darroch, K. E., Dartez, L. P., Dasgupta, A., Datta, S., Dattilo, V., Daumas, A., Davari, N., Dave, I., Davenport, A., Davier, M., Davies, T. F., Davis, D., Davis, L., Davis, M. C., Davis, P. J., Dax, M., De Bolle, J., Deenadayalan, M., Degallaix, J., De Laurentis, M., Deléglise, S., De Lillo, F., Dell'Aquila, D., Del Pozzo, W., De Marco, F., De Matteis, F., D'Emilio, V., Demos, N., Dent, T., Depasse, A., DePergola, N., De Pietri, R., De Rosa, R., De Rossi, C., DeSalvo, R., De Simone, R., Dhani, A., Diab, R., Díaz, M. C., Di Cesare, M., Dideron, G., Didio, N. A., Dietrich, T., Di Fiore, L., Di Fronzo, C., Di Giovanni, M., Di Girolamo, T., Diksha, D., Di Michele, A., Ding, J., Di Pace, S., Di Palma, I., Di Renzo, F., Divyajyoti, Dmitriev, A., Doctor, Z., Dohmen, E., Doleva, P. P., Dominguez, D., D'Onofrio, L., Donovan, F., Dooley, K. L., Dooney, T., Doravari, S., Dorosh, O., Drago, M., Driggers, J. C., Ducoin, J. -G., Dunn, L., Dupletsa, U., D'Urso, D., Duval, H., Duverne, P. -A., Dwyer, S. E., Eassa, C., Ebersold, M., Eckhardt, T., Eddolls, G., Edelman, B., Edo, T. B., Edy, O., Effler, A., Eichholz, J., Einsle, H., Eisenmann, M., Eisenstein, R. A., Ejlli, A., Eleveld, R. M., Emma, M., Endo, K., Engl, A. J., Enloe, E., Errico, L., Essick, R. C., Estellés, H., Estevez, D., Etzel, T., Evans, M., Evstafyeva, T., Ewing, B. E., Ezquiaga, J. M., Fabrizi, F., Faedi, F., Fafone, V., Fairhurst, S., Farah, A. M., Farr, B., Farr, W. M., Favaro, G., Favata, M., Fays, M., Fazio, M., Feicht, J., Fejer, M. M., Felicetti, R., Fenyvesi, E., Ferguson, D. L., Ferraiuolo, S., Ferrante, I., Ferreira, T. A., Fidecaro, F., Figura, P., Fiori, A., Fiori, I., Fishbach, M., Fisher, R. P., Fittipaldi, R., Fiumara, V., Flaminio, R., Fleischer, S. M., Fleming, L. S., Floden, E., Foley, E. M., Fong, H., Font, J. A., Fornal, B., Forsyth, P. W. F., Franceschetti, K., Franchini, N., Frasca, S., Frasconi, F., Mascioli, A. Frattale, Frei, Z., Freise, A., Freitas, O., Frey, R., Frischhertz, W., Fritschel, P., Frolov, V. V., Fronzé, G. G., Fuentes-Garcia, M., Fujii, S., Fujimori, T., Fulda, P., Fyffe, M., Gadre, B., Gair, J. R., Galaudage, S., Galdi, V., Gallagher, H., Gallardo, S., Gallego, B., Gamba, R., Gamboa, A., Ganapathy, D., Ganguly, A., Garaventa, B., García-Bellido, J., Núñez, C. García, García-Quirós, C., Gardner, J. W., Gardner, K. A., Gargiulo, J., Garron, A., Garufi, F., Gasbarra, C., Gateley, B., Gayathri, V., Gemme, G., Gennai, A., Gennari, V., George, J., George, R., Gerberding, O., Gergely, L., Ghosh, Archisman, Ghosh, Sayantan, Ghosh, Shaon, Ghosh, Shrobana, Ghosh, Suprovo, Ghosh, Tathagata, Giacoppo, L., Giaime, J. A., Giardina, K. D., Gibson, D. R., Gibson, D. T., Gier, C., Giri, P., Gissi, F., Gkaitatzis, S., Glanzer, J., Glotin, F., Godfrey, J., Godwin, P., Goebbels, N. L., Goetz, E., Golomb, J., Lopez, S. Gomez, Goncharov, B., Gong, Y., González, G., Goodarzi, P., Goode, S., Goodwin-Jones, A. W., Gosselin, M., Göttel, A. S., Gouaty, R., Gould, D. W., Govorkova, K., Goyal, S., Grace, B., Grado, A., Graham, V., Granados, A. E., Granata, M., Granata, V., Gras, S., Grassia, P., Gray, A., Gray, C., Gray, R., Greco, G., Green, A. C., Green, S. M., Green, S. R., Gretarsson, A. M., Gretarsson, E. M., Griffith, D., Griffiths, W. L., Griggs, H. L., Grignani, G., Grimaldi, A., Grimaud, C., Grote, H., Guerra, D., Guetta, D., Guidi, G. M., Guimaraes, A. R., Gulati, H. K., Gulminelli, F., Gunny, A. M., Guo, H., Guo, W., Guo, Y., Gupta, Anchal, Gupta, Anuradha, Gupta, Ish, Gupta, N. C., Gupta, P., Gupta, S. K., Gupta, T., Gupte, N., Gurs, J., Gutierrez, N., Guzman, F., H, H. -Y., Haba, D., Haberland, M., Haino, S., Hall, E. D., Hamilton, E. Z., Hammond, G., Han, W. -B., Haney, M., Hanks, J., Hanna, C., Hannam, M. D., Hannuksela, O. A., Hanselman, A. G., Hansen, H., Hanson, J., Harada, R., Hardison, A. R., Haris, K., Harmark, T., Harms, J., Harry, G. M., Harry, I. W., Hart, J., Haskell, B., Haster, C. -J., Hathaway, J. S., Haughian, K., Hayakawa, H., Hayama, K., Hayes, R., Heffernan, A., Heidmann, A., Heintze, M. C., Heinze, J., Heinzel, J., Heitmann, H., Hellman, F., Hello, P., Helmling-Cornell, A. F., Hemming, G., Henderson-Sapir, O., Hendry, M., Heng, I. S., Hennes, E., Henshaw, C., Hertog, T., Heurs, M., Hewitt, A. L., Heyns, J., Higginbotham, S., Hild, S., Hill, S., Himemoto, Y., Hirata, N., Hirose, C., Hoang, S., Hochheim, S., Hofman, D., Holland, N. A., Holley-Bockelmann, K., Holmes, Z. J., Holz, D. E., Honet, L., Hong, C., Hornung, J., Hoshino, S., Hough, J., Hourihane, S., Howell, E. J., Hoy, C. G., Hrishikesh, C. A., Hsieh, H. -F., Hsiung, C., Hsu, H. C., Hsu, W. -F., Hu, P., Hu, Q., Huang, H. Y., Huang, Y. -J., Huddart, A. D., Hughey, B., Hui, D. C. Y., Hui, V., Husa, S., Huxford, R., Huynh-Dinh, T., Iampieri, L., Iandolo, G. A., Ianni, M., Iess, A., Imafuku, H., Inayoshi, K., Inoue, Y., Iorio, G., Iqbal, M. H., Irwin, J., Ishikawa, R., Isi, M., Ismail, M. A., Itoh, Y., Iwanaga, H., Iwaya, M., Iyer, B. R., JaberianHamedan, V., Jacquet, C., Jacquet, P. -E., Jadhav, S. J., Jadhav, S. P., Jain, T., James, A. L., James, P. A., Jamshidi, R., Janquart, J., Janssens, K., Janthalur, N. N., Jaraba, S., Jaranowski, P., Jaume, R., Javed, W., Jennings, A., Jia, W., Jiang, J., Kubisz, J., Johanson, C., Johns, G. R., Johnson, N. A., Johnston, M. C., Johnston, R., Johny, N., Jones, D. H., Jones, D. I., Jones, R., Jose, S., Joshi, P., Ju, L., Jung, K., Junker, J., Juste, V., Kajita, T., Kaku, I., Kalaghatgi, C., Kalogera, V., Kamiizumi, M., Kanda, N., Kandhasamy, S., Kang, G., Kanner, J. B., Kapadia, S. J., Kapasi, D. P., Karat, S., Karathanasis, C., Kashyap, R., Kasprzack, M., Kastaun, W., Kato, T., Katsavounidis, E., Katzman, W., Kaushik, R., Kawabe, K., Kawamoto, R., Kazemi, A., Keitel, D., Kelley-Derzon, J., Kennington, J., Kesharwani, R., Key, J. S., Khadela, R., Khadka, S., Khalili, F. Y., Khan, F., Khan, I., Khanam, T., Khursheed, M., Khusid, N. M., Kiendrebeogo, W., Kijbunchoo, N., Kim, C., Kim, J. C., Kim, K., Kim, M. H., Kim, S., Kim, Y. -M., Kimball, C., Kinley-Hanlon, M., Kinnear, M., Kissel, J. S., Klimenko, S., Knee, A. M., Knust, N., Kobayashi, K., Obergaulinger, M., Koch, P., Koehlenbeck, S. M., Koekoek, G., Kohri, K., Kokeyama, K., Koley, S., Kolitsidou, P., Kolstein, M., Komori, K., Kong, A. K. H., Kontos, A., Korobko, M., Kossak, R. V., Kou, X., Koushik, A., Kouvatsos, N., Kovalam, M., Kozak, D. B., Kranzhoff, S. L., Kringel, V., Krishnendu, N. V., Królak, A., Kruska, K., Kuehn, G., Kuijer, P., Kulkarni, S., Ramamohan, A. Kulur, Kumar, A., Kumar, Praveen, Kumar, Prayush, Kumar, Rahul, Kumar, Rakesh, Kume, J., Kuns, K., Kuntimaddi, N., Kuroyanagi, S., Kurth, N. J., Kuwahara, S., Kwak, K., Kwan, K., Kwok, J., Lacaille, G., Lagabbe, P., Laghi, D., Lai, S., Laity, A. H., Lakkis, M. H., Lalande, E., Lalleman, M., Lalremruati, P. C., Landry, M., Lane, B. B., Lang, R. N., Lange, J., Lantz, B., La Rana, A., La Rosa, I., Lartaux-Vollard, A., Lasky, P. D., Lawrence, J., Lawrence, M. N., Laxen, M., Lazzarini, A., Lazzaro, C., Leaci, P., Lecoeuche, Y. K., Lee, H. M., Lee, H. W., Lee, K., Lee, R. -K., Lee, R., Lee, S., Lee, Y., Legred, I. N., Lehmann, J., Lehner, L., Jean, M. Le, Lemaître, A., Lenti, M., Leonardi, M., Lequime, M., Leroy, N., Lesovsky, M., Letendre, N., Lethuillier, M., Levin, S. E., Levin, Y., Leyde, K., Li, A. K. Y., Li, K. L., Li, T. G. F., Li, X., Li, Z., Lihos, A., Lin, C-Y., Lin, C. -Y., Lin, E. T., Lin, F., Lin, H., Lin, L. C. -C., Lin, Y. -C., Linde, F., Linker, S. D., Littenberg, T. B., Liu, A., Liu, G. C., Liu, Jian, Villarreal, F. Llamas, Llobera-Querol, J., Lo, R. K. L., Locquet, J. -P., London, L. T., Longo, A., Lopez, D., Portilla, M. Lopez, Lorenzini, M., Lorenzo-Medina, A., Loriette, V., Lormand, M., Losurdo, G., Lott IV, T. P., Lough, J. D., Loughlin, H. A., Lousto, C. O., Lowry, M. J., Lu, N., Lück, H., Lumaca, D., Lundgren, A. P., Lussier, A. W., Ma, L. -T., Ma, S., Ma'arif, M., Macas, R., Macedo, A., MacInnis, M., Maciy, R. R., Macleod, D. M., MacMillan, I. A. O., Macquet, A., Macri, D., Maeda, K., Maenaut, S., Hernandez, I. Magaña, Magare, S. S., Magazzù, C., Magee, R. M., Maggio, E., Maggiore, R., Magnozzi, M., Mahesh, M., Mahesh, S., Maini, M., Majhi, S., Majorana, E., Makarem, C. N., Makelele, E., Malaquias-Reis, J. A., Mali, U., Maliakal, S., Malik, A., Man, N., Mandic, V., Mangano, V., Mannix, B., Mansell, G. L., Mansingh, G., Manske, M., Mantovani, M., Mapelli, M., Marchesoni, F., Pina, D. Marín, Marion, F., Márka, S., Márka, Z., Markosyan, A. S., Markowitz, A., Maros, E., Marsat, S., Martelli, F., Martin, I. W., Martin, R. M., Martinez, B. B., Martinez, M., Martinez, V., Martini, A., Martinovic, K., Martins, J. C., Martynov, D. V., Marx, E. J., Massaro, L., Masserot, A., Masso-Reid, M., Mastrodicasa, M., Mastrogiovanni, S., Matcovich, T., Matiushechkina, M., Matsuyama, M., Mavalvala, N., Maxwell, N., McCarrol, G., McCarthy, R., McClelland, D. E., McCormick, S., McCuller, L., McEachin, S., McElhenny, C., McGhee, G. I., McGinn, J., McGowan, K. B. M., McIver, J., McLeod, A., McRae, T., Meacher, D., Meijer, Q., Melatos, A., Mellaerts, S., Menendez-Vazquez, A., Menoni, C. S., Mera, F., Mercer, R. A., Mereni, L., Merfeld, K., Merilh, E. L., Mérou, J. R., Merritt, J. D., Merzougui, M., Messenger, C., Messick, C., Meyer-Conde, M., Meylahn, F., Mhaske, A., Miani, A., Miao, H., Michaloliakos, I., Michel, C., Michimura, Y., Middleton, H., Miller, A. L., Miller, S., Millhouse, M., Milotti, E., Milotti, V., Minenkov, Y., Mio, N., Mir, Ll. M., Mirasola, L., Miravet-Tenés, M., Miritescu, C. -A., Mishra, A. K., Mishra, A., Mishra, C., Mishra, T., Mitchell, A. L., Mitchell, J. G., Mitra, S., Mitrofanov, V. P., Mittleman, R., Miyakawa, O., Miyamoto, S., Miyoki, S., Mo, G., Mobilia, L., Mohapatra, S. R. P., Mohite, S. R., Molina-Ruiz, M., Mondal, C., Mondin, M., Montani, M., Moore, C. J., Moraru, D., More, A., More, S., Moreno, G., Morgan, C., Morisaki, S., Moriwaki, Y., Morras, G., Moscatello, A., Mourier, P., Mours, B., Mow-Lowry, C. M., Muciaccia, F., Mukherjee, Arunava, Mukherjee, D., Mukherjee, Samanwaya, Mukherjee, Soma, Mukherjee, Subroto, Mukherjee, Suvodip, Mukund, N., Mullavey, A., Munch, J., Mundi, J., Mungioli, C. L., Oberg, W. R. Munn, Murakami, Y., Murakoshi, M., Murray, P. G., Muusse, S., Nabari, D., Nadji, S. L., Nagar, A., Nagarajan, N., Nagler, K. N., Nakagaki, K., Nakamura, K., Nakano, H., Nakano, M., Nandi, D., Napolano, V., Narayan, P., Nardecchia, I., Narikawa, T., Narola, H., Naticchioni, L., Nayak, R. K., Neilson, J., Nelson, A., Nelson, T. J. N., Nery, M., Neunzert, A., Ng, S., Quynh, L. Nguyen, Nichols, S. A., Nielsen, A. B., Nieradka, G., Niko, A., Nishino, Y., Nishizawa, A., Nissanke, S., Nitoglia, E., Niu, W., Nocera, F., Norman, M., North, C., Novak, J., Siles, J. F. Nuño, Nuttall, L. K., Obayashi, K., Oberling, J., O'Dell, J., Oertel, M., Offermans, A., Oganesyan, G., Oh, J. J., Oh, K., O'Hanlon, T., Ohashi, M., Ohkawa, M., Ohme, F., Oliveira, A. S., Oliveri, R., O'Neal, B., Oohara, K., O'Reilly, B., Ormsby, N. D., Orselli, M., O'Shaughnessy, R., O'Shea, S., Oshima, Y., Oshino, S., Ossokine, S., Osthelder, C., Ota, I., Ottaway, D. J., Ouzriat, A., Overmier, H., Owen, B. J., Pace, A. E., Pagano, R., Page, M. A., Pai, A., Pal, A., Pal, S., Palaia, M. A., Pálfi, M., Palma, P. P., Palomba, C., Palud, P., Pan, H., Pan, J., Pan, K. C., Panai, R., Panda, P. K., Pandey, S., Panebianco, L., Pang, P. T. H., Pannarale, F., Pannone, K. A., Pant, B. C., Panther, F. H., Paoletti, F., Paolone, A., Papalexakis, E. E., Papalini, L., Papigkiotis, G., Paquis, A., Parisi, A., Park, B. -J., Park, J., Parker, W., Pascale, G., Pascucci, D., Pasqualetti, A., Passaquieti, R., Passenger, L., Passuello, D., Patane, O., Pathak, D., Pathak, M., Patra, A., Patricelli, B., Patron, A. S., Paul, K., Paul, S., Payne, E., Pearce, T., Pedraza, M., Pegna, R., Pele, A., Arellano, F. E. Peña, Penn, S., Penuliar, M. D., Perego, A., Pereira, Z., Perez, J. J., Périgois, C., Perna, G., Perreca, A., Perret, J., Perriès, S., Perry, J. W., Pesios, D., Petracca, S., Petrillo, C., Pfeiffer, H. P., Pham, H., Pham, K. A., Phukon, K. S., Phurailatpam, H., Piarulli, M., Piccari, L., Piccinni, O. J., Pichot, M., Piendibene, M., Piergiovanni, F., Pierini, L., Pierra, G., Pierro, V., Pietrzak, M., Pillas, M., Pilo, F., Pinard, L., Pinto, I. M., Pinto, M., Piotrzkowski, B. J., Pirello, M., Pitkin, M. D., Placidi, A., Placidi, E., Planas, M. L., Plastino, W., Poggiani, R., Polini, E., Pompili, L., Poon, J., Porcelli, E., Porter, E. K., Posnansky, C., Poulton, R., Powell, J., Pracchia, M., Pradhan, B. K., Pradier, T., Prajapati, A. K., Prasai, K., Prasanna, R., Prasia, P., Pratten, G., Principe, G., Principe, M., Prodi, G. A., Prokhorov, L., Prosposito, P., Puecher, A., Pullin, J., Punturo, M., Puppo, P., Pürrer, M., Qi, H., Qin, J., Quéméner, G., Quetschke, V., Quigley, C., Quinonez, P. J., Raab, F. J., Raabith, S. S., Raaijmakers, G., Raja, S., Rajan, C., Rajbhandari, B., Ramirez, K. E., Vidal, F. A. Ramis, Ramos-Buades, A., Rana, D., Ranjan, S., Ransom, K., Rapagnani, P., Ratto, B., Rawat, S., Ray, A., Raymond, V., Razzano, M., Read, J., Payo, M. Recaman, Regimbau, T., Rei, L., Reid, S., Reitze, D. H., Relton, P., Renzini, A. I., Rettegno, P., Revenu, B., Reyes, R., Rezaei, A. S., Ricci, F., Ricci, M., Ricciardone, A., Richardson, J. W., Richardson, M., Rijal, A., Riles, K., Riley, H. K., Rinaldi, S., Rittmeyer, J., Robertson, C., Robinet, F., Robinson, M., Rocchi, A., Rolland, L., Rollins, J. G., Romano, A. E., Romano, R., Romero, A., Romero-Shaw, I. M., Romie, J. H., Ronchini, S., Roocke, T. J., Rosa, L., Rosauer, T. J., Rose, C. A., Rosińska, D., Ross, M. P., Rossello, M., Rowan, S., Roy, S. K., Roy, S., Rozza, D., Ruggi, P., Ruhama, N., Morales, E. Ruiz, Ruiz-Rocha, K., Sachdev, S., Sadecki, T., Sadiq, J., Saffarieh, P., Sah, M. R., Saha, S. S., Saha, S., Sainrat, T., Menon, S. Sajith, Sakai, K., Sakellariadou, M., Sakon, S., Salafia, O. S., Salces-Carcoba, F., Salconi, L., Saleem, M., Salemi, F., Sallé, M., Salvador, S., Sanchez, A., Sanchez, E. J., Sanchez, J. H., Sanchez, L. E., Sanchis-Gual, N., Sanders, J. R., Sänger, E. M., Santoliquido, F., Saravanan, T. R., Sarin, N., Sasaoka, S., Sasli, A., Sassi, P., Sassolas, B., Satari, H., Sato, R., Sato, Y., Sauter, O., Savage, R. L., Sawada, T., Sawant, H. L., Sayah, S., Scacco, V., Schaetzl, D., Scheel, M., Schiebelbein, A., Schiworski, M. G., Schmidt, P., Schmidt, S., Schnabel, R., Schneewind, M., Schofield, R. M. S., Schouteden, K., Schulte, B. W., Schutz, B. F., Schwartz, E., Scialpi, M., Scott, J., Scott, S. M., Seetharamu, T. C., Seglar-Arroyo, M., Sekiguchi, Y., Sellers, D., Sengupta, A. S., Sentenac, D., Seo, E. G., Seo, J. W., Sequino, V., Serra, M., Servignat, G., Sevrin, A., Shaffer, T., Shah, U. S., Shaikh, M. A., Shao, L., Sharma, A. K., Sharma, P., Sharma-Chaudhary, S., Shaw, M. R., Shawhan, P., Shcheblanov, N. S., Sheridan, E., Shikano, Y., Shikauchi, M., Shimode, K., Shinkai, H., Shiota, J., Shoemaker, D. H., Shoemaker, D. M., Short, R. W., ShyamSundar, S., Sider, A., Siegel, H., Sieniawska, M., Sigg, D., Silenzi, L., Simmonds, M., Singer, L. P., Singh, A., Singh, D., Singh, M. K., Singh, S., Singha, A., Sintes, A. M., Sipala, V., Skliris, V., Slagmolen, B. J. J., Slaven-Blair, T. J., Smetana, J., Smith, J. R., Smith, L., Smith, R. J. E., Smith, W. J., Soldateschi, J., Somiya, K., Song, I., Soni, K., Soni, S., Sordini, V., Sorrentino, F., Sorrentino, N., Sotani, H., Soulard, R., Southgate, A., Spagnuolo, V., Spencer, A. P., Spera, M., Spinicelli, P., Spoon, J. B., Sprague, C. A., Srivastava, A. K., Stachurski, F., Steer, D. A., Steinlechner, J., Steinlechner, S., Stergioulas, N., Stevens, P., StPierre, M., Stratta, G., Strong, M. D., Strunk, A., Sturani, R., Stuver, A. L., Suchenek, M., Sudhagar, S., Sueltmann, N., Suleiman, L., Sullivan, K. D., Sun, L., Sunil, S., Suresh, J., Sutton, P. J., Suzuki, T., Suzuki, Y., Swinkels, B. L., Syx, A., Szczepańczyk, M. J., Szewczyk, P., Tacca, M., Tagoshi, H., Tait, S. C., Takahashi, H., Takahashi, R., Takamori, A., Takase, T., Takatani, K., Takeda, H., Takeshita, K., Talbot, C., Tamaki, M., Tamanini, N., Tanabe, D., Tanaka, K., Tanaka, S. J., Tanaka, T., Tang, D., Tanioka, S., Tanner, D. B., Tao, L., Tapia, R. D., Martín, E. N. Tapia San, Tarafder, R., Taranto, C., Taruya, A., Tasson, J. D., Teloi, M., Tenorio, R., Themann, H., Theodoropoulos, A., Thirugnanasambandam, M. P., Thomas, L. M., Thomas, M., Thomas, P., Thompson, J. E., Thondapu, S. R., Thorne, K. A., Thrane, E., Tissino, J., Tiwari, A., Tiwari, P., Tiwari, S., Tiwari, V., Todd, M. R., Toivonen, A. M., Toland, K., Tolley, A. E., Tomaru, T., Tomita, K., Tomura, T., Tong-Yu, C., Toriyama, A., Toropov, N., Torres-Forné, A., Torrie, C. I., Toscani, M., Melo, I. Tosta e, Tournefier, E., Trapananti, A., Travasso, F., Traylor, G., Trevor, M., Tringali, M. C., Tripathee, A., Troian, G., Troiano, L., Trovato, A., Trozzo, L., Trudeau, R. J., Tsang, T. T. L., Tso, R., Tsuchida, S., Tsukada, L., Tsutsui, T., Turbang, K., Turconi, M., Turski, C., Ubach, H., Uchikata, N., Uchiyama, T., Udall, R. P., Uehara, T., Uematsu, M., Ueno, K., Ueno, S., Undheim, V., Ushiba, T., Vacatello, M., Vahlbruch, H., Vaidya, N., Vajente, G., Vajpeyi, A., Valdes, G., Valencia, J., Valentini, M., Vallejo-Peña, S. A., Vallero, S., Valsan, V., van Bakel, N., van Beuzekom, M., van Dael, M., Brand, J. F. J. van den, Broeck, C. Van Den, Vander-Hyde, D. C., van der Sluys, M., Van de Walle, A., van Dongen, J., Vandra, K., van Haevermaet, H., van Heijningen, J. V., Van Hove, P., VanKeuren, M., Vanosky, J., van Putten, M. H. P. M., van Ranst, Z., van Remortel, N., Vardaro, M., Vargas, A. F., Varghese, J. J., Varma, V., Vasúth, M., Vecchio, A., Vedovato, G., Veitch, J., Veitch, P. J., Venikoudis, S., Venneberg, J., Verdier, P., Verkindt, D., Verma, B., Verma, P., Verma, Y., Vermeulen, S. M., Vetrano, F., Veutro, A., Vibhute, A. M., Viceré, A., Vidyant, S., Viets, A. D., Vijaykumar, A., Vilkha, A., Villa-Ortega, V., Vincent, E. T., Vinet, J. -Y., Viret, S., Virtuoso, A., Vitale, S., Vives, A., Vocca, H., Voigt, D., von Reis, E. R. G., von Wrangel, J. S. A., Vyatchanin, S. P., Wade, L. E., Wade, M., Wagner, K. J., Wajid, A., Walker, M., Wallace, G. S., Wallace, L., Wang, H., Wang, J. Z., Wang, W. H., Wang, Z., Waratkar, G., Warner, J., Was, M., Washimi, T., Washington, N. Y., Watarai, D., Wayt, K. E., Weaver, B. R., Weaver, B., Weaving, C. R., Webster, S. A., Weinert, M., Weinstein, A. J., Weiss, R., Wellmann, F., Wen, L., Weßels, P., Wette, K., Whelan, J. T., Whiting, B. F., Whittle, C., Wildberger, J. B., Wilk, O. S., Wilken, D., Wilkin, A. T., Willadsen, D. J., Willetts, K., Williams, D., Williams, M. J., Williams, N. S., Willis, J. L., Willke, B., Wils, M., Winterflood, J., Wipf, C. C., Woan, G., Woehler, J., Wofford, J. K., Wolfe, N. E., Wong, H. T., Wong, H. W. Y., Wong, I. C. F., Wright, J. L., Wright, M., Wu, C., Wu, D. S., Wu, H., Wuchner, E., Wysocki, D. M., Xu, V. A., Xu, Y., Yadav, N., Yamamoto, H., Yamamoto, K., Yamamoto, T. S., Yamamoto, T., Yamamura, S., Yamazaki, R., Yan, S., Yan, T., Yang, F. W., Yang, F., Yang, K. Z., Yang, Y., Yarbrough, Z., Yasui, H., Yeh, S. -W., Yelikar, A. B., Yin, X., Yokoyama, J., Yokozawa, T., Yoo, J., Yu, H., Yuan, S., Yuzurihara, H., Zadrożny, A., Zanolin, M., Zeeshan, M., Zelenova, T., Zendri, J. -P., Zeoli, M., Zerrad, M., Zevin, M., Zhang, A. C., Zhang, L., Zhang, R., Zhang, T., Zhang, Y., Zhao, C., Zhao, Yue, Zhao, Yuhang, Zheng, Y., Zhong, H., Zhou, R., Zhu, X. -J., Zhu, Z. -H., Zimmerman, A. B., Zucker, M. E., and Zweizig, J.
- Subjects
Astrophysics - High Energy Astrophysical Phenomena - Abstract
We present the results of a search for gravitational-wave transients associated with core-collapse supernova SN 2023ixf, which was observed in the galaxy Messier 101 via optical emission on 2023 May 19th, during the LIGO-Virgo-KAGRA 15th Engineering Run. We define a five-day on-source window during which an accompanying gravitational-wave signal may have occurred. No gravitational waves have been identified in data when at least two gravitational-wave observatories were operating, which covered $\sim 14\%$ of this five-day window. We report the search detection efficiency for various possible gravitational-wave emission models. Considering the distance to M101 (6.7 Mpc), we derive constraints on the gravitational-wave emission mechanism of core-collapse supernovae across a broad frequency spectrum, ranging from 50 Hz to 2 kHz where we assume the GW emission occurred when coincident data are available in the on-source window. Considering an ellipsoid model for a rotating proto-neutron star, our search is sensitive to gravitational-wave energy $1 \times 10^{-5} M_{\odot} c^2$ and luminosity $4 \times 10^{-5} M_{\odot} c^2/\text{s}$ for a source emitting at 50 Hz. These constraints are around an order of magnitude more stringent than those obtained so far with gravitational-wave data. The constraint on the ellipticity of the proto-neutron star that is formed is as low as $1.04$, at frequencies above $1200$ Hz, surpassing results from SN 2019ejj., Comment: Main paper: 6 pages, 4 figures and 1 table. Total with appendices: 20 pages, 4 figures, and 1 table
- Published
- 2024
3. A search using GEO600 for gravitational waves coincident with fast radio bursts from SGR 1935+2154
- Author
-
The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration, Abac, A. G., Abbott, R., Abouelfettouh, I., Acernese, F., Ackley, K., Adhicary, S., Adhikari, N., Adhikari, R. X., Adkins, V. K., Agarwal, D., Agathos, M., Abchouyeh, M. Aghaei, Aguiar, O. D., Aguilar, I., Aiello, L., Ain, A., Ajith, P., Akutsu, T., Albanesi, S., Alfaidi, R. A., Al-Jodah, A., Alléné, C., Allocca, A., Al-Shammari, S., Altin, P. A., Alvarez-Lopez, S., Amato, A., Amez-Droz, L., Amorosi, A., Amra, C., Ananyeva, A., Anderson, S. B., Anderson, W. G., Andia, M., Ando, M., Andrade, T., Andres, N., Andrés-Carcasona, M., Andrić, T., Anglin, J., Ansoldi, S., Antelis, J. M., Antier, S., Aoumi, M., Appavuravther, E. Z., Appert, S., Apple, S. K., Arai, K., Araya, A., Araya, M. C., Areeda, J. S., Argianas, L., Aritomi, N., Armato, F., Arnaud, N., Arogeti, M., Aronson, S. M., Ashton, G., Aso, Y., Assiduo, M., Melo, S. Assis de Souza, Aston, S. M., Astone, P., Attadio, F., Aubin, F., AultONeal, K., Avallone, G., Azrad, D., Babak, S., Badaracco, F., Badger, C., Bae, S., Bagnasco, S., Bagui, E., Baier, J. G., Baiotti, L., Bajpai, R., Baka, T., Ball, M., Ballardin, G., Ballmer, S. W., Banagiri, S., Banerjee, B., Bankar, D., Baral, P., Barayoga, J. C., Barish, B. C., Barker, D., Barneo, P., Barone, F., Barr, B., Barsotti, L., Barsuglia, M., Barta, D., Bartoletti, A. M., Barton, M. A., Bartos, I., Basak, S., Basalaev, A., Bassiri, R., Basti, A., Bates, D. E., Bawaj, M., Baxi, P., Bayley, J. C., Baylor, A. C., Baynard II, P. A., Bazzan, M., Bedakihale, V. M., Beirnaert, F., Bejger, M., Belardinelli, D., Bell, A. S., Benedetto, V., Benoit, W., Bentley, J. D., Yaala, M. Ben, Bera, S., Berbel, M., Bergamin, F., Berger, B. K., Bernuzzi, S., Beroiz, M., Bersanetti, D., Bertolini, A., Betzwieser, J., Beveridge, D., Bevins, N., Bhandare, R., Bhardwaj, U., Bhatt, R., Bhattacharjee, D., Bhaumik, S., Bhowmick, S., Bianchi, A., Bilenko, I. A., Billingsley, G., Binetti, A., Bini, S., Birnholtz, O., Biscoveanu, S., Bisht, A., Bitossi, M., Bizouard, M. -A., Blackburn, J. K., Blagg, L. A., Blair, C. D., Blair, D. G., Bobba, F., Bode, N., Boileau, G., Boldrini, M., Bolingbroke, G. N., Bolliand, A., Bonavena, L. D., Bondarescu, R., Bondu, F., Bonilla, E., Bonilla, M. S., Bonino, A., Bonnand, R., Booker, P., Borchers, A., Boschi, V., Bose, S., Bossilkov, V., Boudart, V., Boudon, A., Bozzi, A., Bradaschia, C., Brady, P. R., Braglia, M., Branch, A., Branchesi, M., Brandt, J., Braun, I., Breschi, M., Briant, T., Brillet, A., Brinkmann, M., Brockill, P., Brockmueller, E., Brooks, A. F., Brown, B. C., Brown, D. D., Brozzetti, M. L., Brunett, S., Bruno, G., Bruntz, R., Bryant, J., Bucci, F., Buchanan, J., Bulashenko, O., Bulik, T., Bulten, H. J., Buonanno, A., Burtnyk, K., Buscicchio, R., Buskulic, D., Buy, C., Byer, R. L., Davies, G. S. Cabourn, Cabras, G., Cabrita, R., Cáceres-Barbosa, V., Cadonati, L., Cagnoli, G., Cahillane, C., Bustillo, J. Calderón, Callister, T. A., Calloni, E., Camp, J. B., Canepa, M., Santoro, G. Caneva, Cannon, K. C., Cao, H., Capistran, L. A., Capocasa, E., Capote, E., Carapella, G., Carbognani, F., Carlassara, M., Carlin, J. B., Carpinelli, M., Carrillo, G., Carter, J. J., Carullo, G., Diaz, J. Casanueva, Casentini, C., Castro-Lucas, S. Y., Caudill, S., Cavaglià, M., Cavalieri, R., Cella, G., Cerdá-Durán, P., Cesarini, E., Chaibi, W., Chakraborty, P., Subrahmanya, S. Chalathadka, Chan, J. C. L., Chan, M., Chandra, K., Chang, R. -J., Chao, S., Charlton, E. L., Charlton, P., Chassande-Mottin, E., Chatterjee, C., Chatterjee, Debarati, Chatterjee, Deep, Chaturvedi, M., Chaty, S., Chen, A., Chen, A. H. -Y., Chen, D., Chen, H., Chen, H. Y., Chen, J., Chen, K. H., Chen, Y., Chen, Yanbei, Chen, Yitian, Cheng, H. P., Chessa, P., Cheung, H. T., Cheung, S. Y., Chiadini, F., Chiarini, G., Chierici, R., Chincarini, A., Chiofalo, M. L., Chiummo, A., Chou, C., Choudhary, S., Christensen, N., Chua, S. S. Y., Chugh, P., Ciani, G., Ciecielag, P., Cieślar, M., Cifaldi, M., Ciolfi, R., Clara, F., Clark, J. A., Clarke, J., Clarke, T. A., Clearwater, P., Clesse, S., Coccia, E., Codazzo, E., Cohadon, P. -F., Colace, S., Colleoni, M., Collette, C. G., Collins, J., Colloms, S., Colombo, A., Colpi, M., Compton, C. M., Connolly, G., Conti, L., Corbitt, T. R., Cordero-Carrión, I., Corezzi, S., Cornish, N. J., Corsi, A., Cortese, S., Costa, C. A., Cottingham, R., Coughlin, M. W., Couineaux, A., Coulon, J. -P., Countryman, S. T., Coupechoux, J. -F., Couvares, P., Coward, D. M., Cowart, M. J., Coyne, R., Craig, K., Creed, R., Creighton, J. D. E., Creighton, T. D., Cremonese, P., Criswell, A. W., Crockett-Gray, J. C. G., Crook, S., Crouch, R., Csizmazia, J., Cudell, J. R., Cullen, T. J., Cumming, A., Cuoco, E., Cusinato, M., Dabadie, P., Canton, T. Dal, Dall'Osso, S., Pra, S. Dal, Dálya, G., D'Angelo, B., Danilishin, S., D'Antonio, S., Danzmann, K., Darroch, K. E., Dartez, L. P., Dasgupta, A., Datta, S., Dattilo, V., Daumas, A., Davari, N., Dave, I., Davenport, A., Davier, M., Davies, T. F., Davis, D., Davis, L., Davis, M. C., Davis, P. J., Dax, M., De Bolle, J., Deenadayalan, M., Degallaix, J., De Laurentis, M., Deléglise, S., De Lillo, F., Dell'Aquila, D., Del Pozzo, W., De Marco, F., De Matteis, F., D'Emilio, V., Demos, N., Dent, T., Depasse, A., DePergola, N., De Pietri, R., De Rosa, R., De Rossi, C., DeSalvo, R., De Simone, R., Dhani, A., Diab, R., Díaz, M. C., Di Cesare, M., Dideron, G., Didio, N. A., Dietrich, T., Di Fiore, L., Di Fronzo, C., Di Giovanni, M., Di Girolamo, T., Diksha, D., Di Michele, A., Ding, J., Di Pace, S., Di Palma, I., Di Renzo, F., Divyajyoti, Dmitriev, A., Doctor, Z., Dohmen, E., Doleva, P. P., Dominguez, D., D'Onofrio, L., Donovan, F., Dooley, K. L., Dooney, T., Doravari, S., Dorosh, O., Drago, M., Driggers, J. C., Ducoin, J. -G., Dunn, L., Dupletsa, U., D'Urso, D., Duval, H., Duverne, P. -A., Dwyer, S. E., Eassa, C., Ebersold, M., Eckhardt, T., Eddolls, G., Edelman, B., Edo, T. B., Edy, O., Effler, A., Eichholz, J., Einsle, H., Eisenmann, M., Eisenstein, R. A., Ejlli, A., Eleveld, R. M., Emma, M., Endo, K., Engl, A. J., Enloe, E., Errico, L., Essick, R. C., Estellés, H., Estevez, D., Etzel, T., Evans, M., Evstafyeva, T., Ewing, B. E., Ezquiaga, J. M., Fabrizi, F., Faedi, F., Fafone, V., Fairhurst, S., Farah, A. M., Farr, B., Farr, W. M., Favaro, G., Favata, M., Fays, M., Fazio, M., Feicht, J., Fejer, M. M., Felicetti, R. ., Fenyvesi, E., Ferguson, D. L., Ferraiuolo, S., Ferrante, I., Ferreira, T. A., Fidecaro, F., Figura, P., Fiori, A., Fiori, I., Fishbach, M., Fisher, R. P., Fittipaldi, R., Fiumara, V., Flaminio, R., Fleischer, S. M., Fleming, L. S., Floden, E., Foley, E. M., Fong, H., Font, J. A., Fornal, B., Forsyth, P. W. F., Franceschetti, K., Franchini, N., Frasca, S., Frasconi, F., Mascioli, A. Frattale, Frei, Z., Freise, A., Freitas, O., Frey, R., Frischhertz, W., Fritschel, P., Frolov, V. V., Fronzé, G. G., Fuentes-Garcia, M., Fujii, S., Fujimori, T., Fulda, P., Fyffe, M., Gadre, B., Gair, J. R., Galaudage, S., Galdi, V., Gallagher, H., Gallardo, S., Gallego, B., Gamba, R., Gamboa, A., Ganapathy, D., Ganguly, A., Garaventa, B., García-Bellido, J., Núñez, C. García, García-Quirós, C., Gardner, J. W., Gardner, K. A., Gargiulo, J., Garron, A., Garufi, F., Gasbarra, C., Gateley, B., Gayathri, V., Gemme, G., Gennai, A., Gennari, V., George, J., George, R., Gerberding, O., Gergely, L., Ghonge, S., Ghosh, Archisman, Ghosh, Sayantan, Ghosh, Shaon, Ghosh, Shrobana, Ghosh, Suprovo, Ghosh, Tathagata, Giacoppo, L., Giaime, J. A., Giardina, K. D., Gibson, D. R., Gibson, D. T., Gier, C., Giri, P., Gissi, F., Gkaitatzis, S., Glanzer, J., Glotin, F., Godfrey, J., Godwin, P., Goebbels, N. L., Goetz, E., Golomb, J., Lopez, S. Gomez, Goncharov, B., Gong, Y., González, G., Goodarzi, P., Goode, S., Goodwin-Jones, A. W., Gosselin, M., Göttel, A. S., Gouaty, R., Gould, D. W., Govorkova, K., Goyal, S., Grace, B., Grado, A., Graham, V., Granados, A. E., Granata, M., Granata, V., Gras, S., Grassia, P., Gray, A., Gray, C., Gray, R., Greco, G., Green, A. C., Green, S. M., Green, S. R., Gretarsson, A. M., Gretarsson, E. M., Griffith, D., Griffiths, W. L., Griggs, H. L., Grignani, G., Grimaldi, A., Grimaud, C., Grote, H., Guerra, D., Guetta, D., Guidi, G. M., Guimaraes, A. R., Gulati, H. K., Gulminelli, F., Gunny, A. M., Guo, H., Guo, W., Guo, Y., Gupta, Anchal, Gupta, Anuradha, Gupta, Ish, Gupta, N. C., Gupta, P., Gupta, S. K., Gupta, T., Gupte, N., Gurs, J., Gutierrez, N., Guzman, F., H, H. -Y., Haba, D., Haberland, M., Haino, S., Hall, E. D., Hamilton, E. Z., Hammond, G., Han, W. -B., Haney, M., Hanks, J., Hanna, C., Hannam, M. D., Hannuksela, O. A., Hanselman, A. G., Hansen, H., Hanson, J., Harada, R., Hardison, A. R., Haris, K., Harmark, T., Harms, J., Harry, G. M., Harry, I. W., Hart, J., Haskell, B., Haster, C. -J., Hathaway, J. S., Haughian, K., Hayakawa, H., Hayama, K., Hayes, R., Heffernan, A., Heidmann, A., Heintze, M. C., Heinze, J., Heinzel, J., Heitmann, H., Hellman, F., Hello, P., Helmling-Cornell, A. F., Hemming, G., Henderson-Sapir, O., Hendry, M., Heng, I. S., Hennes, E., Henshaw, C., Hertog, T., Heurs, M., Hewitt, A. L., Heyns, J., Higginbotham, S., Hild, S., Hill, S., Himemoto, Y., Hirata, N., Hirose, C., Ho, W. C. G., Hoang, S., Hochheim, S., Hofman, D., Holland, N. A., Holley-Bockelmann, K., Holmes, Z. J., Holz, D. E., Honet, L., Hong, C., Hornung, J., Hoshino, S., Hough, J., Hourihane, S., Howell, E. J., Hoy, C. G., Hrishikesh, C. A., Hsieh, H. -F., Hsiung, C., Hsu, H. C., Hsu, W. -F., Hu, P., Hu, Q., Huang, H. Y., Huang, Y. -J., Huddart, A. D., Hughey, B., Hui, D. C. Y., Hui, V., Husa, S., Huxford, R., Huynh-Dinh, T., Iampieri, L., Iandolo, G. A., Ianni, M., Iess, A., Imafuku, H., Inayoshi, K., Inoue, Y., Iorio, G., Iqbal, M. H., Irwin, J., Ishikawa, R., Isi, M., Ismail, M. A., Itoh, Y., Iwanaga, H., Iwaya, M., Iyer, B. R., JaberianHamedan, V., Jacquet, C., Jacquet, P. -E., Jadhav, S. J., Jadhav, S. P., Jain, T., James, A. L., James, P. A., Jamshidi, R., Janquart, J., Janssens, K., Janthalur, N. N., Jaraba, S., Jaranowski, P., Jaume, R., Javed, W., Jennings, A., Jia, W., Jiang, J., Kubisz, J., Johanson, C., Johns, G. R., Johnson, N. A., Johnston, M. C., Johnston, R., Johny, N., Jones, D. H., Jones, D. I., Jones, R., Jose, S., Joshi, P., Ju, L., Jung, K., Junker, J., Juste, V., Kajita, T., Kaku, I., Kalaghatgi, C., Kalogera, V., Kamiizumi, M., Kanda, N., Kandhasamy, S., Kang, G., Kanner, J. B., Kapadia, S. J., Kapasi, D. P., Karat, S., Karathanasis, C., Kashyap, R., Kasprzack, M., Kastaun, W., Kato, T., Katsavounidis, E., Katzman, W., Kaushik, R., Kawabe, K., Kawamoto, R., Kazemi, A., Keitel, D., Kelley-Derzon, J., Kennington, J., Kesharwani, R., Key, J. S., Khadela, R., Khadka, S., Khalili, F. Y., Khan, F., Khan, I., Khanam, T., Khursheed, M., Khusid, N. M., Kiendrebeogo, W., Kijbunchoo, N., Kim, C., Kim, J. C., Kim, K., Kim, M. H., Kim, S., Kim, Y. -M., Kimball, C., Kinley-Hanlon, M., Kinnear, M., Kissel, J. S., Klimenko, S., Knee, A. M., Knust, N., Kobayashi, K., Koch, P., Koehlenbeck, S. M., Koekoek, G., Kohri, K., Kokeyama, K., Koley, S., Kolitsidou, P., Kolstein, M., Komori, K., Kong, A. K. H., Kontos, A., Korobko, M., Kossak, R. V., Kou, X., Koushik, A., Kouvatsos, N., Kovalam, M., Kozak, D. B., Kranzhoff, S. L., Kringel, V., Krishnendu, N. V., Królak, A., Kruska, K., Kuehn, G., Kuijer, P., Kulkarni, S., Ramamohan, A. Kulur, Kumar, A., Kumar, Praveen, Kumar, Prayush, Kumar, Rahul, Kumar, Rakesh, Kume, J., Kuns, K., Kuntimaddi, N., Kuroyanagi, S., Kurth, N. J., Kuwahara, S., Kwak, K., Kwan, K., Kwok, J., Lacaille, G., Lagabbe, P., Laghi, D., Lai, S., Laity, A. H., Lakkis, M. H., Lalande, E., Lalleman, M., Lalremruati, P. C., Landry, M., Lane, B. B., Lang, R. N., Lange, J., Lantz, B., La Rana, A., La Rosa, I., Lartaux-Vollard, A., Lasky, P. D., Lawrence, J., Lawrence, M. N., Laxen, M., Lazzarini, A., Lazzaro, C., Leaci, P., Lecoeuche, Y. K., Lee, H. M., Lee, H. W., Lee, K., Lee, R. -K., Lee, R., Lee, S., Lee, Y., Legred, I. N., Lehmann, J., Lehner, L., Jean, M. Le, Lemaître, A., Lenti, M., Leonardi, M., Lequime, M., Leroy, N., Lesovsky, M., Letendre, N., Lethuillier, M., Levin, S. E., Levin, Y., Leyde, K., Li, A. K. Y., Li, K. L., Li, T. G. F., Li, X., Li, Z., Lihos, A., Lin, C-Y., Lin, C. -Y., Lin, E. T., Lin, F., Lin, H., Lin, L. C. -C., Lin, Y. -C., Linde, F., Linker, S. D., Littenberg, T. B., Liu, A., Liu, G. C., Liu, Jian, Villarreal, F. Llamas, Llobera-Querol, J., Lo, R. K. L., Locquet, J. -P., London, L. T., Longo, A., Lopez, D., Portilla, M. Lopez, Lorenzini, M., Lorenzo-Medina, A., Loriette, V., Lormand, M., Losurdo, G., Lott IV, T. P., Lough, J. D., Loughlin, H. A., Lousto, C. O., Lowry, M. J., Lu, N., Lück, H., Lumaca, D., Lundgren, A. P., Lussier, A. W., Ma, L. -T., Ma, S., Ma'arif, M., Macas, R., Macedo, A., MacInnis, M., Maciy, R. R., Macleod, D. M., MacMillan, I. A. O., Macquet, A., Macri, D., Maeda, K., Maenaut, S., Hernandez, I. Magaña, Magare, S. S., Magazzù, C., Magee, R. M., Maggio, E., Maggiore, R., Magnozzi, M., Mahesh, M., Mahesh, S., Maini, M., Majhi, S., Majorana, E., Makarem, C. N., Makelele, E., Malaquias-Reis, J. A., Mali, U., Maliakal, S., Malik, A., Man, N., Mandic, V., Mangano, V., Mannix, B., Mansell, G. L., Mansingh, G., Manske, M., Mantovani, M., Mapelli, M., Marchesoni, F., Pina, D. Marín, Marion, F., Márka, S., Márka, Z., Markosyan, A. S., Markowitz, A., Maros, E., Marsat, S., Martelli, F., Martin, I. W., Martin, R. M., Martinez, B. B., Martinez, M., Martinez, V., Martini, A., Martinovic, K., Martins, J. C., Martynov, D. V., Marx, E. J., Massaro, L., Masserot, A., Masso-Reid, M., Mastrodicasa, M., Mastrogiovanni, S., Matcovich, T., Matiushechkina, M., Matsuyama, M., Mavalvala, N., Maxwell, N., McCarrol, G., McCarthy, R., McCormick, S., McCuller, L., McEachin, S., McElhenny, C., McGhee, G. I., McGinn, J., McGowan, K. B. M., McIver, J., McLeod, A., McRae, T., Meacher, D., Meijer, Q., Melatos, A., Mellaerts, S., Menendez-Vazquez, A., Menoni, C. S., Mera, F., Mercer, R. A., Mereni, L., Merfeld, K., Merilh, E. L., Mérou, J. R., Merritt, J. D., Merzougui, M., Messenger, C., Messick, C., Meyer-Conde, M., Meylahn, F., Mhaske, A., Miani, A., Miao, H., Michaloliakos, I., Michel, C., Michimura, Y., Middleton, H., Miller, A. L., Miller, S., Millhouse, M., Milotti, E., Milotti, V., Minenkov, Y., Mio, N., Mir, Ll. M., Mirasola, L., Miravet-Tenés, M., Miritescu, C. -A., Mishra, A. K., Mishra, A., Mishra, C., Mishra, T., Mitchell, A. L., Mitchell, J. G., Mitra, S., Mitrofanov, V. P., Mittleman, R., Miyakawa, O., Miyamoto, S., Miyoki, S., Mo, G., Mobilia, L., Mohapatra, S. R. P., Mohite, S. R., Molina-Ruiz, M., Mondal, C., Mondin, M., Montani, M., Moore, C. J., Moraru, D., More, A., More, S., Moreno, G., Morgan, C., Morisaki, S., Moriwaki, Y., Morras, G., Moscatello, A., Mourier, P., Mours, B., Mow-Lowry, C. M., Muciaccia, F., Mukherjee, Arunava, Mukherjee, D., Mukherjee, Samanwaya, Mukherjee, Soma, Mukherjee, Subroto, Mukherjee, Suvodip, Mukund, N., Mullavey, A., Munch, J., Mundi, J., Mungioli, C. L., Oberg, W. R. Munn, Murakami, Y., Murakoshi, M., Murray, P. G., Muusse, S., Nabari, D., Nadji, S. L., Nagar, A., Nagarajan, N., Nagler, K. N., Nakagaki, K., Nakamura, K., Nakano, H., Nakano, M., Nandi, D., Napolano, V., Narayan, P., Nardecchia, I., Narola, H., Naticchioni, L., Nayak, R. K., Neilson, J., Nelson, A., Nelson, T. J. N., Nery, M., Neunzert, A., Ng, S., Quynh, L. Nguyen, Nichols, S. A., Nielsen, A. B., Nieradka, G., Niko, A., Nishino, Y., Nishizawa, A., Nissanke, S., Nitoglia, E., Niu, W., Nocera, F., Norman, M., North, C., Novak, J., Siles, J. F. Nuño, Nuttall, L. K., Obayashi, K., Oberling, J., O'Dell, J., Oertel, M., Offermans, A., Oganesyan, G., Oh, J. J., Oh, K., O'Hanlon, T., Ohashi, M., Ohkawa, M., Ohme, F., Oliveira, A. S., Oliveri, R., O'Neal, B., Oohara, K., O'Reilly, B., Ormsby, N. D., Orselli, M., O'Shaughnessy, R., O'Shea, S., Oshima, Y., Oshino, S., Ossokine, S., Osthelder, C., Ota, I., Ottaway, D. J., Ouzriat, A., Overmier, H., Owen, B. J., Pace, A. E., Pagano, R., Page, M. A., Pai, A., Pal, A., Pal, S., Palaia, M. A., Pálfi, M., Palma, P. P., Palomba, C., Palud, P., Pan, H., Pan, J., Pan, K. C., Panai, R., Panda, P. K., Pandey, S., Panebianco, L., Pang, P. T. H., Pannarale, F., Pannone, K. A., Pant, B. C., Panther, F. H., Paoletti, F., Paolone, A., Papalexakis, E. E., Papalini, L., Papigkiotis, G., Paquis, A., Parisi, A., Park, B. -J., Park, J., Parker, W., Pascale, G., Pascucci, D., Pasqualetti, A., Passaquieti, R., Passenger, L., Passuello, D., Patane, O., Pathak, D., Pathak, M., Patra, A., Patricelli, B., Patron, A. S., Paul, K., Paul, S., Payne, E., Pearce, T., Pedraza, M., Pegna, R., Pele, A., Arellano, F. E. Peña, Penn, S., Penuliar, M. D., Perego, A., Pereira, Z., Perez, J. J., Périgois, C., Perna, G., Perreca, A., Perret, J., Perriès, S., Perry, J. W., Pesios, D., Petracca, S., Petrillo, C., Pfeiffer, H. P., Pham, H., Pham, K. A., Phukon, K. S., Phurailatpam, H., Piarulli, M., Piccari, L., Piccinni, O. J., Pichot, M., Piendibene, M., Piergiovanni, F., Pierini, L., Pierra, G., Pierro, V., Pietrzak, M., Pillas, M., Pilo, F., Pinard, L., Pinto, I. M., Pinto, M., Piotrzkowski, B. J., Pirello, M., Pitkin, M. D., Placidi, A., Placidi, E., Planas, M. L., Plastino, W., Poggiani, R., Polini, E., Pompili, L., Poon, J., Porcelli, E., Porter, E. K., Posnansky, C., Poulton, R., Powell, J., Pracchia, M., Pradhan, B. K., Pradier, T., Prajapati, A. K., Prasai, K., Prasanna, R., Prasia, P., Pratten, G., Principe, G., Principe, M., Prodi, G. A., Prokhorov, L., Prosposito, P., Puecher, A., Pullin, J., Punturo, M., Puppo, P., Pürrer, M., Qi, H., Qin, J., Quéméner, G., Quetschke, V., Quigley, C., Quinonez, P. J., Quitzow-James, R., Raab, F. J., Raabith, S. S., Raaijmakers, G., Raja, S., Rajan, C., Rajbhandari, B., Ramirez, K. E., Vidal, F. A. Ramis, Ramos-Buades, A., Rana, D., Ranjan, S., Ransom, K., Rapagnani, P., Ratto, B., Rawat, S., Ray, A., Raymond, V., Razzano, M., Read, J., Payo, M. Recaman, Regimbau, T., Rei, L., Reid, S., Reitze, D. H., Relton, P., Renzini, A. I., Rettegno, P., Revenu, B., Reyes, R., Rezaei, A. S., Ricci, F., Ricci, M., Ricciardone, A., Richardson, J. W., Richardson, M., Rijal, A., Riles, K., Riley, H. K., Rinaldi, S., Rittmeyer, J., Robertson, C., Robinet, F., Robinson, M., Rocchi, A., Rolland, L., Rollins, J. G., Romano, A. E., Romano, R., Romero, A., Romero-Shaw, I. M., Romie, J. H., Ronchini, S., Roocke, T. J., Rosa, L., Rosauer, T. J., Rose, C. A., Rosińska, D., Ross, M. P., Rossello, M., Rowan, S., Roy, S. K., Roy, S., Rozza, D., Ruggi, P., Ruhama, N., Morales, E. Ruiz, Ruiz-Rocha, K., Sachdev, S., Sadecki, T., Sadiq, J., Saffarieh, P., Sah, M. R., Saha, S. S., Saha, S., Sainrat, T., Menon, S. Sajith, Sakai, K., Sakellariadou, M., Sakon, S., Salafia, O. S., Salces-Carcoba, F., Salconi, L., Saleem, M., Salemi, F., Sallé, M., Salvador, S., Sanchez, A., Sanchez, E. J., Sanchez, J. H., Sanchez, L. E., Sanchis-Gual, N., Sanders, J. R., Sänger, E. M., Santoliquido, F., Saravanan, T. R., Sarin, N., Sasaoka, S., Sasli, A., Sassi, P., Sassolas, B., Satari, H., Sato, R., Sato, Y., Sauter, O., Savage, R. L., Sawada, T., Sawant, H. L., Sayah, S., Scacco, V., Schaetzl, D., Scheel, M., Schiebelbein, A., Schiworski, M. G., Schmidt, P., Schmidt, S., Schnabel, R., Schneewind, M., Schofield, R. M. S., Schouteden, K., Schulte, B. W., Schutz, B. F., Schwartz, E., Scialpi, M., Scott, J., Scott, S. M., Seetharamu, T. C., Seglar-Arroyo, M., Sekiguchi, Y., Sellers, D., Sengupta, A. S., Sentenac, D., Seo, E. G., Seo, J. W., Sequino, V., Serra, M., Servignat, G., Sevrin, A., Shaffer, T., Shah, U. S., Shaikh, M. A., Shao, L., Sharma, A. K., Sharma, P., Sharma-Chaudhary, S., Shaw, M. R., Shawhan, P., Shcheblanov, N. S., Sheridan, E., Shikano, Y., Shikauchi, M., Shimode, K., Shinkai, H., Shiota, J., Shoemaker, D. H., Shoemaker, D. M., Short, R. W., ShyamSundar, S., Sider, A., Siegel, H., Sieniawska, M., Sigg, D., Silenzi, L., Simmonds, M., Singer, L. P., Singh, A., Singh, D., Singh, M. K., Singh, S., Singha, A., Sintes, A. M., Sipala, V., Skliris, V., Slagmolen, B. J. J., Slaven-Blair, T. J., Smetana, J., Smith, J. R., Smith, L., Smith, R. J. E., Smith, W. J., Soldateschi, J., Somiya, K., Song, I., Soni, K., Soni, S., Sordini, V., Sorrentino, F., Sorrentino, N., Sotani, H., Soulard, R., Southgate, A., Spagnuolo, V., Spencer, A. P., Spera, M., Spinicelli, P., Spoon, J. B., Sprague, C. A., Srivastava, A. K., Stachurski, F., Steer, D. A., Steinlechner, J., Steinlechner, S., Stergioulas, N., Stevens, P., StPierre, M., Stratta, G., Strong, M. D., Strunk, A., Sturani, R., Stuver, A. L., Suchenek, M., Sudhagar, S., Sueltmann, N., Suleiman, L., Sullivan, K. D., Sun, L., Sunil, S., Suresh, J., Sutton, P. J., Suzuki, T., Suzuki, Y., Swinkels, B. L., Syx, A., Szczepańczyk, M. J., Szewczyk, P., Tacca, M., Tagoshi, H., Tait, S. C., Takahashi, H., Takahashi, R., Takamori, A., Takase, T., Takatani, K., Takeda, H., Takeshita, K., Talbot, C., Tamaki, M., Tamanini, N., Tanabe, D., Tanaka, K., Tanaka, S. J., Tanaka, T., Tang, D., Tanioka, S., Tanner, D. B., Tao, L., Tapia, R. D., Martín, E. N. Tapia San, Tarafder, R., Taranto, C., Taruya, A., Tasson, J. D., Teloi, M., Tenorio, R., Themann, H., Theodoropoulos, A., Thirugnanasambandam, M. P., Thomas, L. M., Thomas, M., Thomas, P., Thompson, J. E., Thondapu, S. R., Thorne, K. A., Thrane, E., Tissino, J., Tiwari, A., Tiwari, P., Tiwari, S., Tiwari, V., Todd, M. R., Toivonen, A. M., Toland, K., Tolley, A. E., Tomaru, T., Tomita, K., Tomura, T., Tong-Yu, C., Toriyama, A., Toropov, N., Torres-Forné, A., Torrie, C. I., Toscani, M., Melo, I. Tosta e, Tournefier, E., Trapananti, A., Travasso, F., Traylor, G., Trevor, M., Tringali, M. C., Tripathee, A., Troian, G., Troiano, L., Trovato, A., Trozzo, L., Trudeau, R. J., Tsang, T. T. L., Tso, R., Tsuchida, S., Tsukada, L., Tsutsui, T., Turbang, K., Turconi, M., Turski, C., Ubach, H., Uchiyama, T., Udall, R. P., Uehara, T., Uematsu, M., Ueno, K., Ueno, S., Undheim, V., Ushiba, T., Vacatello, M., Vahlbruch, H., Vaidya, N., Vajente, G., Vajpeyi, A., Valdes, G., Valencia, J., Valentini, M., Vallejo-Peña, S. A., Vallero, S., Valsan, V., van Bakel, N., van Beuzekom, M., van Dael, M., Brand, J. F. J. van den, Broeck, C. Van Den, Vander-Hyde, D. C., van der Sluys, M., Van de Walle, A., van Dongen, J., Vandra, K., van Haevermaet, H., van Heijningen, J. V., Van Hove, P., VanKeuren, M., Vanosky, J., van Putten, M. H. P. M., van Ranst, Z., van Remortel, N., Vardaro, M., Vargas, A. F., Varghese, J. J., Varma, V., Vasúth, M., Vecchio, A., Vedovato, G., Veitch, J., Veitch, P. J., Venikoudis, S., Venneberg, J., Verdier, P., Verkindt, D., Verma, B., Verma, P., Verma, Y., Vermeulen, S. M., Vetrano, F., Veutro, A., Vibhute, A. M., Viceré, A., Vidyant, S., Viets, A. D., Vijaykumar, A., Vilkha, A., Villa-Ortega, V., Vincent, E. T., Vinet, J. -Y., Viret, S., Virtuoso, A., Vitale, S., Vives, A., Vocca, H., Voigt, D., von Reis, E. R. G., von Wrangel, J. S. A., Vyatchanin, S. P., Wade, L. E., Wade, M., Wagner, K. J., Wajid, A., Walker, M., Wallace, G. S., Wallace, L., Wang, H., Wang, J. Z., Wang, W. H., Wang, Z., Waratkar, G., Warner, J., Was, M., Washimi, T., Washington, N. Y., Watarai, D., Wayt, K. E., Weaver, B. R., Weaver, B., Weaving, C. R., Webster, S. A., Weinert, M., Weinstein, A. J., Weiss, R., Wellmann, F., Wen, L., Weßels, P., Wette, K., Whelan, J. T., Whiting, B. F., Whittle, C., Wildberger, J. B., Wilk, O. S., Wilken, D., Wilkin, A. T., Willadsen, D. J., Willetts, K., Williams, D., Williams, M. J., Williams, N. S., Willis, J. L., Willke, B., Wils, M., Winterflood, J., Wipf, C. C., Woan, G., Woehler, J., Wofford, J. K., Wolfe, N. E., Wong, H. T., Wong, H. W. Y., Wong, I. C. F., Wright, J. L., Wright, M., Wu, C., Wu, D. S., Wu, H., Wuchner, E., Wysocki, D. M., Xu, V. A., Xu, Y., Yadav, N., Yamamoto, H., Yamamoto, K., Yamamoto, T. S., Yamamoto, T., Yamamura, S., Yamazaki, R., Yan, S., Yan, T., Yang, F. W., Yang, F., Yang, K. Z., Yang, Y., Yarbrough, Z., Yasui, H., Yeh, S. -W., Yelikar, A. B., Yin, X., Yokoyama, J., Yokozawa, T., Yoo, J., Yu, H., Yuan, S., Yuzurihara, H., Zadrożny, A., Zanolin, M., Zeeshan, M., Zelenova, T., Zendri, J. -P., Zeoli, M., Zerrad, M., Zevin, M., Zhang, A. C., Zhang, L., Zhang, R., Zhang, T., Zhang, Y., Zhao, C., Zhao, Yue, Zhao, Yuhang, Zheng, Y., Zhong, H., Zhou, R., Zhu, X. -J., Zhu, Z. -H., Zucker, M. E., and Zweizig, J.
- Subjects
Astrophysics - High Energy Astrophysical Phenomena - Abstract
The magnetar SGR 1935+2154 is the only known Galactic source of fast radio bursts (FRBs). FRBs from SGR 1935+2154 were first detected by CHIME/FRB and STARE2 in 2020 April, after the conclusion of the LIGO, Virgo, and KAGRA Collaborations' O3 observing run. Here we analyze four periods of gravitational wave (GW) data from the GEO600 detector coincident with four periods of FRB activity detected by CHIME/FRB, as well as X-ray glitches and X-ray bursts detected by NICER and NuSTAR close to the time of one of the FRBs. We do not detect any significant GW emission from any of the events. Instead, using a short-duration GW search (for bursts $\leq$ 1 s) we derive 50\% (90\%) upper limits of $10^{48}$ ($10^{49}$) erg for GWs at 300 Hz and $10^{49}$ ($10^{50}$) erg at 2 kHz, and constrain the GW-to-radio energy ratio to $\leq 10^{14} - 10^{16}$. We also derive upper limits from a long-duration search for bursts with durations between 1 and 10 s. These represent the strictest upper limits on concurrent GW emission from FRBs., Comment: 15 pages of text including references, 4 figures, 5 tables
- Published
- 2024
4. Interim Report on the Implementation and Impact of Developmental Education Curricular Reform in California Community Colleges
- Author
-
Research for Action (RFA), Texas Education Research Center, Kri Burkander, Dae Kim, Lauren Schudde, Mark Duffy, Maja Pehrson, Nancy Lawrence, Taylor Stenley, Elizabeth Jackson, Wonsun Ryu, and Lindsey Liu
- Abstract
Research for Action (RFA) in partnership with the University of Texas at Austin is engaged in a five-year mixed-methods study of the reforms associated with California AB 705. Over the course of the study, our team will assess the implementation, impact, and cost effectiveness of reforms associated with the law. This report first offers a descriptive quantitative analysis of short-term outcome (enrollment and completion) trends in math and English at the state level. This descriptive analysis examines the relationship between AB 705 and course enrollment and completion, which will serve as the basis for the quasi-experimental study in subsequent project years. The second part of the report presents findings from institutional site visits aimed at understanding how institutions have implemented the reforms, who is involved in implementation, and how implementation is experienced by students.
- Published
- 2024
5. RE-Bench: Evaluating frontier AI R&D capabilities of language model agents against human experts
- Author
-
Wijk, Hjalmar, Lin, Tao, Becker, Joel, Jawhar, Sami, Parikh, Neev, Broadley, Thomas, Chan, Lawrence, Chen, Michael, Clymer, Josh, Dhyani, Jai, Ericheva, Elena, Garcia, Katharyn, Goodrich, Brian, Jurkovic, Nikola, Kinniment, Megan, Lajko, Aron, Nix, Seraphina, Sato, Lucas, Saunders, William, Taran, Maksym, West, Ben, and Barnes, Elizabeth
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
Frontier AI safety policies highlight automation of AI research and development (R&D) by AI agents as an important capability to anticipate. However, there exist few evaluations for AI R&D capabilities, and none that are highly realistic and have a direct comparison to human performance. We introduce RE-Bench (Research Engineering Benchmark, v1), which consists of 7 challenging, open-ended ML research engineering environments and data from 71 8-hour attempts by 61 distinct human experts. We confirm that our experts make progress in the environments given 8 hours, with 82% of expert attempts achieving a non-zero score and 24% matching or exceeding our strong reference solutions. We compare humans to several public frontier models through best-of-k with varying time budgets and agent designs, and find that the best AI agents achieve a score 4x higher than human experts when both are given a total time budget of 2 hours per environment. However, humans currently display better returns to increasing time budgets, narrowly exceeding the top AI agent scores given an 8-hour budget, and achieving 2x the score of the top AI agent when both are given 32 total hours (across different attempts). Qualitatively, we find that modern AI agents possess significant expertise in many ML topics -- e.g. an agent wrote a faster custom Triton kernel than any of our human experts' -- and can generate and test solutions over ten times faster than humans, at much lower cost. We open-source the evaluation environments, human expert data, analysis code and agent trajectories to facilitate future research.
- Published
- 2024
6. Bayesian 'Deep' Process Convolutions: An Application in Cosmology
- Author
-
Moran, Kelly R., Payne, Richard, Lawrence, Earl, Higdon, David, Walsh, Stephen A., Booth, Annie S., Kwan, Juliana, Day, Amber, Habib, Salman, and Heitmann, Katrin
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics ,Astrophysics - Instrumentation and Methods for Astrophysics ,Statistics - Applications - Abstract
The nonlinear matter power spectrum in cosmology describes how matter density fluctuations vary with scale in the universe, providing critical insights into large-scale structure formation. The matter power spectrum includes both smooth regions and highly oscillatory features. Cosmologists rely on noisy, multi-resolution realizations of large N-body simulations to study these phenomena, which require appropriate smoothing techniques to learn about underlying structures. We introduce a Bayesian Deep Process Convolution (DPC) model that flexibly adapts its smoothness parameter across the input space, enabling it to capture both smooth and variable structure within a single framework. The DPC model leverages common patterns across related functions to improve estimation in regions with sparse data. Compared to existing methods, the DPC model offers superior accuracy and uncertainty quantification in simulated data, and qualitatively superior performance with the cosmological data. This methodology will be useful in cosmology and other fields requiring flexible modeling of smooth nonstationary surfaces.
- Published
- 2024
7. A qualitative analysis of remote patient monitoring: how a paradox mindset can support balancing emotional tensions in the design of healthcare technologies
- Author
-
Jonassen, Zoe, Lawrence, Katharine, Wiesenfeld, Batia Mishan, Feuerriegel, Stefan, and Mann, Devin
- Subjects
Computer Science - Human-Computer Interaction - Abstract
Remote patient monitoring (RPM) is the use of digital technologies to improve patient care at a distance. However, current RPM solutions are often biased toward tech-savvy patients. To foster health equity, researchers have studied how to address the socio-economic and cognitive needs of diverse patient groups, but their emotional needs have remained largely neglected. We perform the first qualitative study to explore the emotional needs of diverse patients around RPM. Specifically, we conduct a thematic analysis of 18 interviews and 4 focus groups at a large US healthcare organization. We identify emotional needs that lead to four emotional tensions within and across stakeholder groups when applying an equity focus to the design and implementation of RPM technologies. The four emotional tensions are making diverse patients feel: (i) heard vs. exploited; (ii) seen vs. deprioritized for efficiency; (iii) empowered vs. anxious; and (iv) cared for vs. detached from care. To manage these emotional tensions across stakeholders, we develop design recommendations informed by a paradox mindset (i.e., "both-and" rather than "and-or" strategies)., Comment: Accepted at CSCW 2025
- Published
- 2024
8. CHANCES, The Chilean Cluster Galaxy Evolution Survey: selection and initial characterization of clusters and superclusters
- Author
-
Sifón, Cristóbal, Finoguenov, Alexis, Haines, Christopher P., Jaffé, Yara, Amrutha, B. M., Demarco, Ricardo, Lima, E. V. R., Lima-Dias, Ciria, Méndez-Hernández, Hugo, Merluzzi, Paola, Monachesi, Antonela, Teixeira, Gabriel S. M., Tejos, Nicolas, Araya-Araya, Pablo, Argudo-Fernández, Maria, Baier-Soto, Raúl, Bilton, Lawrence E., Bom, C. R., Calderón, Juan Pablo, Cassarà, Letizia P., Comparat, Johan, Courtois, H. M., D'Ago, Giuseppe, Dupuy, Alexandra, Fritz, Alexander, Haack, Rodrigo F., Herpich, Fabio R., Ibar, E., Kuchner, Ulrike, Lopes, Amanda R., Lopez, Sebastian, Lösch, Elismar, McGee, Sean, de Oliveira, C. Mendes, Morelli, Lorenzo, Moretti, Alessia, Pallero, Diego, Piraino-Cerda, Franco, Pompei, Emanuela, Rescigno, U., Smith, Rory, Castelli, Analía V. Smith, Sodré Jr, Laerte, and Tempel, Elmo
- Subjects
Astrophysics - Astrophysics of Galaxies - Abstract
CHANCES, the CHileAN Cluster galaxy Evolution Survey, will study the evolution of galaxies in and around ${\sim}$150 massive galaxy clusters, from the local universe out to z=0.45. CHANCES will use the new 4MOST Spectroscopic Survey Facility on the VISTA 4m telescope to obtain spectra for ${\sim}$500,000 galaxies with magnitudes $r_\mathrm{AB} < 20.5$, providing comprehensive spectroscopic coverage of each cluster out to $5r_{200}$. Its wide and deep scope will trace massive and dwarf galaxies from the surrounding filaments and groups to the cores of galaxy clusters, enabling the study of galaxy pre-processing and the role of the evolving environment on galaxy evolution. In this paper we present and characterize the sample of clusters and superclusters to be targeted by CHANCES. We used literature catalogues based on X-ray emission and Sunyaev-Zel'dovich effect to define the cluster sample in a homogeneous way, with attention to cluster mass and redshift, as well as the availability of ancillary data. We calibrated literature mass estimates from various surveys against each other and provide an initial mass estimate for each cluster, which we used to define the radial extent of the 4MOST coverage. We also present an initial assessment of the structure surrounding these clusters based on the redMaPPer red-sequence algorithm as a preview of some of the science CHANCES will enable., Comment: 11 pages, 9 figures, plus references and appendix containing catalog tables, submitted to A&A
- Published
- 2024
9. Open Catalyst Experiments 2024 (OCx24): Bridging Experiments and Computational Models
- Author
-
Abed, Jehad, Kim, Jiheon, Shuaibi, Muhammed, Wander, Brook, Duijf, Boris, Mahesh, Suhas, Lee, Hyeonseok, Gharakhanyan, Vahe, Hoogland, Sjoerd, Irtem, Erdem, Lan, Janice, Schouten, Niels, Vijayakumar, Anagha Usha, Hattrick-Simpers, Jason, Kitchin, John R., Ulissi, Zachary W., van Vugt, Aaike, Sargent, Edward H., Sinton, David, and Zitnick, C. Lawrence
- Subjects
Condensed Matter - Materials Science ,Physics - Chemical Physics - Abstract
The search for low-cost, durable, and effective catalysts is essential for green hydrogen production and carbon dioxide upcycling to help in the mitigation of climate change. Discovery of new catalysts is currently limited by the gap between what AI-accelerated computational models predict and what experimental studies produce. To make progress, large and diverse experimental datasets are needed that are reproducible and tested at industrially-relevant conditions. We address these needs by utilizing a comprehensive high-throughput characterization and experimental pipeline to create the Open Catalyst Experiments 2024 (OCX24) dataset. The dataset contains 572 samples synthesized using both wet and dry methods with X-ray fluorescence and X-ray diffraction characterization. We prepared 441 gas diffusion electrodes, including replicates, and evaluated them using zero-gap electrolysis for carbon dioxide reduction (CO$_2$RR) and hydrogen evolution reactions (HER) at current densities up to $300$ mA/cm$^2$. To find correlations with experimental outcomes and to perform computational screens, DFT-verified adsorption energies for six adsorbates were calculated on $\sim$20,000 inorganic materials requiring 685 million AI-accelerated relaxations. Remarkably from this large set of materials, a data driven Sabatier volcano independently identified Pt as being a top candidate for HER without having any experimental measurements on Pt or Pt-alloy samples. We anticipate the availability of experimental data generated specifically for AI training, such as OCX24, will significantly improve the utility of computational models in selecting materials for experimental screening., Comment: 38 pages, 22 figures
- Published
- 2024
10. Kepler frequency and moment of inertia of rotating neutron stars with chaotic magnetic field
- Author
-
Pattersons, Muhammad Lawrence and Zen, Freddy Permana
- Subjects
General Relativity and Quantum Cosmology ,Astrophysics - High Energy Astrophysical Phenomena - Abstract
Rotating neutron stars (NSs) are crucial objects of study, as our understanding of them relies significantly on observational data from these rotating stars. Observations suggest that the magnetic fields of NSs range from approximately $10^{8-15}$ G. In this work, we compute the Kepler frequency and moment of inertia for rotating NSs under the influence of a chaotic magnetic field. We utilize an equation of state (EOS) incorporating nuclei in the crust and hyperons in the core, with the Hartle-Thorne formalism applied to address the rotational aspects. A magnetic field ansatz is selected, in which the magnetic field is coupled to the energy density. To examine the impact of a chaotic magnetic field on the Kepler frequency and moment of inertia, we vary the magnetic field strength. Our results indicate that an increase in magnetic field strength enhances the Kepler frequency of rotating NSs. For the moment of inertia, the effect of magnetic field variation is minimal at lower masses but becomes more pronounced as the mass exceeds $M=0.5 M_\odot$, where moment of inertia increases with increasing magnetic field. Furthermore, our results for the moment of inertia comply with constraint derived from pulsar mass measurements, data from gravitational wave events GW170817 and GW190425, and X-ray observations of emission from hotspots on NS surfaces measured by NICER., Comment: 7 pages, 2 figures, to be submitted as a conference paper
- Published
- 2024
11. Overtones and Nonlinearities in Binary Black Hole Ringdowns
- Author
-
Giesler, Matthew, Ma, Sizheng, Mitman, Keefe, Oshita, Naritaka, Teukolsky, Saul A., Boyle, Michael, Deppe, Nils, Kidder, Lawrence E., Moxon, Jordan, Nelli, Kyle C., Pfeiffer, Harald P., Scheel, Mark A., Throwe, William, and Vu, Nils L.
- Subjects
General Relativity and Quantum Cosmology - Abstract
Using high-accuracy numerical relativity waveforms, we confirm the presence of numerous overtones of the $\ell=2$, $m=2$ quasinormal mode early in the ringdown of binary black hole mergers. We do this by demonstrating the stability of the mode amplitudes at different fit times, ruling out the possibility that a linear superposition of modes unphysically fits a highly nonlinear part of the waveform. We also find a number of previously unidentified subdominant second-order quasinormal modes in the $(2,2)$ mode. Even though these modes are mathematically nonlinear, they nevertheless confirm the validity of perturbation theory as a good approximation for describing much of the ringdown., Comment: 17 pages, 14 figures
- Published
- 2024
12. AI and the Future of Work in Africa White Paper
- Author
-
O'Neill, Jacki, Marivate, Vukosi, Glover, Barbara, Karanu, Winnie, Tadesse, Girmaw Abebe, Gyekye, Akua, Makena, Anne, Rosslyn-Smith, Wesley, Grollnek, Matthew, Wayua, Charity, Baguma, Rehema, Maduke, Angel, Spencer, Sarah, Kandie, Daniel, Maari, Dennis Ndege, Mutangana, Natasha, Axmed, Maxamed, Kamau, Nyambura, Adamu, Muhammad, Swaniker, Frank, Gatuguti, Brian, Donner, Jonathan, Graham, Mark, Mumo, Janet, Mbindyo, Caroline, N'Guessan, Charlette, Githinji, Irene, Makhafola, Lesego, Kruger, Sean, Etyang, Olivia, Onando, Mulang, Sevilla, Joe, Sambuli, Nanjira, Mbaya, Martin, Breloff, Paul, Anapey, Gideon M., Mogaleemang, Tebogo L., Nghonyama, Tiyani, Wanyoike, Muthoni, Mbuli, Bhekani, Nderu, Lawrence, Nyabero, Wambui, Alam, Uzma, Olaleye, Kayode, Njenga, Caroline, Sellen, Abigail, Kairo, David, Chabikwa, Rutendo, Abdulhamid, Najeeb G., Kubasu, Ketry, Okolo, Chinasa T., Akpo, Eugenia, Budu, Joel, Karambal, Issa, Berkoh, Joseph, Wasswa, William, Njagwi, Muchai, Burnet, Rob, Ochanda, Loise, de Bod, Hanlie, Ankrah, Elizabeth, Kinyunyu, Selemani, Kariuki, Mutembei, Kiyimba, Kizito, Eleshin, Farida, Madeje, Lillian Secelela, Muraga, Catherine, Nganga, Ida, Gichoya, Judy, Maina, Tabbz, Maina, Samuel, Mercy, Muchai, Ochieng, Millicent, and Nyairo, Stephanie
- Subjects
Computer Science - Human-Computer Interaction ,Computer Science - Artificial Intelligence - Abstract
This white paper is the output of a multidisciplinary workshop in Nairobi (Nov 2023). Led by a cross-organisational team including Microsoft Research, NEPAD, Lelapa AI, and University of Oxford. The workshop brought together diverse thought-leaders from various sectors and backgrounds to discuss the implications of Generative AI for the future of work in Africa. Discussions centred around four key themes: Macroeconomic Impacts; Jobs, Skills and Labour Markets; Workers' Perspectives and Africa-Centris AI Platforms. The white paper provides an overview of the current state and trends of generative AI and its applications in different domains, as well as the challenges and risks associated with its adoption and regulation. It represents a diverse set of perspectives to create a set of insights and recommendations which aim to encourage debate and collaborative action towards creating a dignified future of work for everyone across Africa.
- Published
- 2024
13. Deep Learning for Fetal Inflammatory Response Diagnosis in the Umbilical Cord
- Author
-
Ayad, Marina A., Nateghi, Ramin, Sharma, Abhishek, Chillrud, Lawrence, Seesillapachai, Tilly, Cooper, Lee A. D., and Goldstein, Jeffery A.
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Artificial Intelligence ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Inflammation of the umbilical cord can be seen as a result of ascending intrauterine infection or other inflammatory stimuli. Acute fetal inflammatory response (FIR) is characterized by infiltration of the umbilical cord by fetal neutrophils, and can be associated with neonatal sepsis or fetal inflammatory response syndrome. Recent advances in deep learning in digital pathology have demonstrated favorable performance across a wide range of clinical tasks, such as diagnosis and prognosis. In this study we classified FIR from whole slide images (WSI). We digitized 4100 histological slides of umbilical cord stained with hematoxylin and eosin(H&E) and extracted placental diagnoses from the electronic health record. We build models using attention-based whole slide learning models. We compared strategies between features extracted by a model (ConvNeXtXLarge) pretrained on non-medical images (ImageNet), and one pretrained using histopathology images (UNI). We trained multiple iterations of each model and combined them into an ensemble. The predictions from the ensemble of models trained using UNI achieved an overall balanced accuracy of 0.836 on the test dataset. In comparison, the ensembled predictions using ConvNeXtXLarge had a lower balanced accuracy of 0.7209. Heatmaps generated from top accuracy model appropriately highlighted arteritis in cases of FIR 2. In FIR 1, the highest performing model assigned high attention to areas of activated-appearing stroma in Wharton's Jelly. However, other high-performing models assigned attention to umbilical vessels. We developed models for diagnosis of FIR from placental histology images, helping reduce interobserver variability among pathologists. Future work may examine the utility of these models for identifying infants at risk of systemic inflammatory response or early onset neonatal sepsis.
- Published
- 2024
14. Developing a Foundation Model for Predicting Material Failure
- Author
-
Marcato, Agnese, Santos, Javier E., Pachalieva, Aleksandra, Gao, Kai, Hill, Ryley, Rougier, Esteban, Kang, Qinjun, Hyman, Jeffrey, Hunter, Abigail, Chua, Janel, Lawrence, Earl, Viswanathan, Hari, and O'Malley, Daniel
- Subjects
Physics - Geophysics - Abstract
Understanding material failure is critical for designing stronger and lighter structures by identifying weaknesses that could be mitigated. Existing full-physics numerical simulation techniques involve trade-offs between speed, accuracy, and the ability to handle complex features like varying boundary conditions, grid types, resolution, and physical models. We present the first foundation model specifically designed for predicting material failure, leveraging large-scale datasets and a high parameter count (up to 3B) to significantly improve the accuracy of failure predictions. In addition, a large language model provides rich context embeddings, enabling our model to make predictions across a diverse range of conditions. Unlike traditional machine learning models, which are often tailored to specific systems or limited to narrow simulation conditions, our foundation model is designed to generalize across different materials and simulators. This flexibility enables the model to handle a range of material properties and conditions, providing accurate predictions without the need for retraining or adjustments for each specific case. Our model is capable of accommodating diverse input formats, such as images and varying simulation conditions, and producing a range of outputs, from simulation results to effective properties. It supports both Cartesian and unstructured grids, with design choices that allow for seamless updates and extensions as new data and requirements emerge. Our results show that increasing the scale of the model leads to significant performance gains (loss scales as $N^{-1.6}$, compared to language models which often scale as $N^{-0.5}$)., Comment: Accepted at NeurIPS 2024 "Foundation Models for Science: Progress, Opportunities, and Challenges" Workshop
- Published
- 2024
15. The Systems Engineering Approach in Times of Large Language Models
- Author
-
Cabrera, Christian, Bastidas, Viviana, Schooling, Jennifer, and Lawrence, Neil D.
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Computers and Society ,Computer Science - Software Engineering - Abstract
Using Large Language Models (LLMs) to address critical societal problems requires adopting this novel technology into socio-technical systems. However, the complexity of such systems and the nature of LLMs challenge such a vision. It is unlikely that the solution to such challenges will come from the Artificial Intelligence (AI) community itself. Instead, the Systems Engineering approach is better equipped to facilitate the adoption of LLMs by prioritising the problems and their context before any other aspects. This paper introduces the challenges LLMs generate and surveys systems research efforts for engineering AI-based systems. We reveal how the systems engineering principles have supported addressing similar issues to the ones LLMs pose and discuss our findings to provide future directions for adopting LLMs., Comment: This paper has been accepted for the upcoming 58th Hawaii International Conference on System Sciences (HICSS-58)
- Published
- 2024
16. Optimizing Traffic Signal Control using High-Dimensional State Representation and Efficient Deep Reinforcement Learning
- Author
-
Francis, Lawrence, Guda, Blessed, and Biyabani, Ahmed
- Subjects
Electrical Engineering and Systems Science - Systems and Control ,Computer Science - Artificial Intelligence - Abstract
In reinforcement learning-based (RL-based) traffic signal control (TSC), decisions on the signal timing are made based on the available information on vehicles at a road intersection. This forms the state representation for the RL environment which can either be high-dimensional containing several variables or a low-dimensional vector. Current studies suggest that using high dimensional state representations does not lead to improved performance on TSC. However, we argue, with experimental results, that the use of high dimensional state representations can, in fact, lead to improved TSC performance with improvements up to 17.9% of the average waiting time. This high-dimensional representation is obtainable using the cost-effective vehicle-to-infrastructure (V2I) communication, encouraging its adoption for TSC. Additionally, given the large size of the state, we identified the need to have computational efficient models and explored model compression via pruning., Comment: Under Review
- Published
- 2024
17. CQUESST: A dynamical stochastic framework for predicting soil-carbon sequestration
- Author
-
Pagendam, Dan, Baldock, Jeff, Clifford, David, Farquharson, Ryan, Murray, Lawrence, Beare, Mike, Curtin, Denis, and Cressie, Noel
- Subjects
Statistics - Applications ,Statistics - Computation - Abstract
A statistical framework we call CQUESST (Carbon Quantification and Uncertainty from Evolutionary Soil STochastics), which models carbon sequestration and cycling in soils, is applied to a long-running agricultural experiment that controls for crop type, tillage, and season. The experiment, known as the Millenium Tillage Trial (MTT), ran on 42 field-plots for ten years from 2000-2010; here CQUESST is used to model soil carbon dynamically in six pools, in each of the 42 agricultural plots, and on a monthly time step for a decade. We show how CQUESST can be used to estimate soil-carbon cycling rates under different treatments. Our methods provide much-needed statistical tools for quantitatively inferring the effectiveness of different experimental treatments on soil-carbon sequestration. The decade-long data are of multiple observation types, and these interacting time series are ingested into a fully Bayesian model that has a dynamic stochastic model of multiple pools of soil carbon at its core. CQUESST's stochastic model is motivated by the deterministic RothC soil-carbon model based on nonlinear difference equations. We demonstrate how CQUESST can estimate soil-carbon fluxes for different experimental treatments while acknowledging uncertainties in soil-carbon dynamics, in physical parameters, and in observations. CQUESST is implemented efficiently in the probabilistic programming language Stan using its MapReduce parallelization, and it scales well for large numbers of field-plots, using software libraries that allow for computation to be shared over multiple nodes of high-performance computing clusters.
- Published
- 2024
18. Echoes from Beyond: Detecting Gravitational Wave Quantum Imprints with LISA
- Author
-
Deppe, Nils, Heisenberg, Lavinia, Inchauspé, Henri, Kidder, Lawrence E., Maibach, David, Ma, Sizheng, Moxon, Jordan, Nelli, Kyle C., Throwe, William, and Vu, Nils L.
- Subjects
General Relativity and Quantum Cosmology ,Astrophysics - Cosmology and Nongalactic Astrophysics ,High Energy Physics - Theory - Abstract
We assess the prospects for detecting gravitational wave echoes arising due to the quantum nature of black hole horizons with LISA. In a recent proposal, Bekenstein's black hole area quantization is connected to a discrete absorption spectrum for black holes in the context of gravitational radiation. Consequently, for incoming radiation at the black hole horizon, not all frequencies are absorbed, raising the possibility that the unabsorbed radiation is reflected, producing an echo-like signal closely following the binary coalescence waveform. In this work, we further develop this proposal by introducing a robust, phenomenologically motivated model for black hole reflectivity. Using this model, we calculate the resulting echoes for an ensemble of Numerical Relativity waveforms and examine their detectability with the LISA space-based interferometer. Our analysis demonstrates promising detection prospects and shows that, upon detection, LISA provides a direct probe of the Bekenstein-Hawking entropy. In addition, we find that the information extractable from LISA data offers valuable constraints on a wide range of quantum gravity theories., Comment: 9 pages, 8 Figures
- Published
- 2024
19. From Word Vectors to Multimodal Embeddings: Techniques, Applications, and Future Directions For Large Language Models
- Author
-
Zhang, Charles, Peng, Benji, Sun, Xintian, Niu, Qian, Liu, Junyu, Chen, Keyu, Li, Ming, Feng, Pohsun, Bi, Ziqian, Liu, Ming, Zhang, Yichao, Fei, Cheng, Yin, Caitlyn Heqi, Yan, Lawrence KQ, and Wang, Tianyang
- Subjects
Computer Science - Computation and Language - Abstract
Word embeddings and language models have transformed natural language processing (NLP) by facilitating the representation of linguistic elements in continuous vector spaces. This review visits foundational concepts such as the distributional hypothesis and contextual similarity, tracing the evolution from sparse representations like one-hot encoding to dense embeddings including Word2Vec, GloVe, and fastText. We examine both static and contextualized embeddings, underscoring advancements in models such as ELMo, BERT, and GPT and their adaptations for cross-lingual and personalized applications. The discussion extends to sentence and document embeddings, covering aggregation methods and generative topic models, along with the application of embeddings in multimodal domains, including vision, robotics, and cognitive science. Advanced topics such as model compression, interpretability, numerical encoding, and bias mitigation are analyzed, addressing both technical challenges and ethical implications. Additionally, we identify future research directions, emphasizing the need for scalable training techniques, enhanced interpretability, and robust grounding in non-textual modalities. By synthesizing current methodologies and emerging trends, this survey offers researchers and practitioners an in-depth resource to push the boundaries of embedding-based language models., Comment: 21 pages
- Published
- 2024
20. Astronomaly Protege: Discovery Through Human-Machine Collaboration
- Author
-
Lochner, Michelle and Rudnick, Lawrence
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics ,Astrophysics - Astrophysics of Galaxies - Abstract
Modern telescopes generate catalogs of millions of objects with the potential for new scientific discoveries, but this is beyond what can be examined visually. Here we introduce Astronomaly: Protege, an extension of the general purpose machine learning-based active anomaly detection framework Astronomaly. Protege is designed to provide well-selected recommendations for visual inspection, based on a small amount of optimized human labeling. The resulting sample contains rare or unusual sources which are simultaneously as diverse as the human trainer chooses, and of scientific interest to them. We train Protege on images from the MeerKAT Galaxy Cluster Legacy Survey, leveraging the self-supervised deep learning algorithm Bootstrap Your Own Latent to find a low-dimensional representation of the radio galaxy cutouts. By operating in this feature space, Protege is able to recommend interesting sources with completely different morphologies in image space to those it has been trained on. This provides important advantages over similarity searches, which can only find more examples of known sources, or blind anomaly detection, which selects unusual, but not necessarily scientifically interesting sources. Using an evaluation subset, we show that, with minimal training, Protege provides excellent recommendations and find that it is even able to recommend sources that the authors missed. We briefly highlight some of Protege's top recommendations, which include X- and circular-shaped sources, filamentary structures and one-sided structures. These results illustrate the power of an optimized human-machine collaboration such as Protege to make unexpected discoveries in samples beyond human-accessible scales., Comment: 36 pages, 25 figures. Submitted to AJ. Code available at https://github.com/MichelleLochner/astronomaly and catalogue of interesting radio sources and png cutouts available at https://github.com/MichelleLochner/mgcls.protege
- Published
- 2024
21. The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation
- Author
-
Stewart, Lawrence, Trager, Matthew, Gonugondla, Sujan Kumar, and Soatto, Stefano
- Subjects
Computer Science - Machine Learning - Abstract
Speculative decoding aims to speed up autoregressive generation of a language model by verifying in parallel the tokens generated by a smaller draft model.In this work, we explore the effectiveness of learning-free, negligible-cost draft strategies, namely $N$-grams obtained from the model weights and the context. While the predicted next token of the base model is rarely the top prediction of these simple strategies, we observe that it is often within their top-$k$ predictions for small $k$. Based on this, we show that combinations of simple strategies can achieve significant inference speedups over different tasks. The overall performance is comparable to more complex methods, yet does not require expensive preprocessing or modification of the base model, and allows for seamless `plug-and-play' integration into pipelines.
- Published
- 2024
22. Minkowski ideals and rings
- Author
-
Agnarsson, Geir and Lawrence, Jim
- Subjects
Mathematics - Combinatorics ,Mathematics - Commutative Algebra ,13B25, 13C05, 52B11 - Abstract
\emph{Minkowski rings} are certain rings of simple functions on the Euclidean space $W = {\mathbb{R}}^d$ with multiplicative structure derived from Minkowski addition of convex polytopes. When the ring is (finitely) generated by a set ${\cal{P}}$ of indicator functions of $n$ polytopes then the ring can be presented as ${\mathbb{C}}[x_1,\ldots,x_n]/I$ when viewed as a ${\mathbb{C}}$-algebra, where $I$ is the ideal describing all the relations implied by identities among Minkowski sums of elements of ${\cal{P}}$. We discuss in detail the $1$-dimensional case, the $d$-dimensional box case and the affine Coxeter arrangement in ${\mathbb{R}}^2$ where the convex sets are formed by closed half-planes with bounding lines making the regular triangular grid in ${\mathbb{R}}^2$. We also consider, for a given polytope $P$, the Minkowski ring $M^\pm_F(P)$ of the collection ${\cal{F}}(P)$ of the nonempty faces of $P$ and their multiplicative inverses. Finally we prove some general properties of identities in the Minkowski ring of ${\cal{F}}(P)$; in particular, we show that Minkowski rings behave well under Cartesian product, namely that $M^\pm_F(P\times Q) \cong M^{\pm}_F(P)\otimes M^{\pm}_F(Q)$ as ${\mathbb{C}}$-algebras where $P$ and $Q$ are polytopes., Comment: 39 pages, comments and related references welcomed
- Published
- 2024
23. MuCol Milestone Report No. 5: Preliminary Parameters
- Author
-
Accettura, Carlotta, Adrian, Simon, Agarwal, Rohit, Ahdida, Claudia, Aimé, Chiara, Aksoy, Avni, Alberghi, Gian Luigi, Alden, Siobhan, Alfonso, Luca, Amapane, Nicola, Amorim, David, Andreetto, Paolo, Anulli, Fabio, Appleby, Rob, Apresyan, Artur, Asadi, Pouya, Mahmoud, Mohammed Attia, Auchmann, Bernhard, Back, John, Badea, Anthony, Bae, Kyu Jung, Bahng, E. J., Balconi, Lorenzo, Balli, Fabrice, Bandiera, Laura, Barbagallo, Carmelo, Barlow, Roger, Bartoli, Camilla, Bartosik, Nazar, Barzi, Emanuela, Batsch, Fabian, Bauce, Matteo, Begel, Michael, Berg, J. Scott, Bersani, Andrea, Bertarelli, Alessandro, Bertinelli, Francesco, Bertolin, Alessandro, Bhat, Pushpalatha, Bianchi, Clarissa, Bianco, Michele, Bishop, William, Black, Kevin, Boattini, Fulvio, Bogacz, Alex, Bonesini, Maurizio, Bordini, Bernardo, de Sousa, Patricia Borges, Bottaro, Salvatore, Bottura, Luca, Boyd, Steven, Breschi, Marco, Broggi, Francesco, Brunoldi, Matteo, Buffat, Xavier, Buonincontri, Laura, Burrows, Philip Nicholas, Burt, Graeme Campbell, Buttazzo, Dario, Caiffi, Barbara, Calatroni, Sergio, Calviani, Marco, Calzaferri, Simone, Calzolari, Daniele, Cantone, Claudio, Capdevilla, Rodolfo, Carli, Christian, Carrelli, Carlo, Casaburo, Fausto, Casarsa, Massimo, Castelli, Luca, Catanesi, Maria Gabriella, Cavallucci, Lorenzo, Cavoto, Gianluca, Celiberto, Francesco Giovanni, Celona, Luigi, Cemmi, Alessia, Ceravolo, Sergio, Cerri, Alessandro, Cerutti, Francesco, Cesarini, Gianmario, Cesarotti, Cari, Chancé, Antoine, Charitonidis, Nikolaos, Chiesa, Mauro, Chiggiato, Paolo, Ciccarella, Vittoria Ludovica, Puviani, Pietro Cioli, Colaleo, Anna, Colao, Francesco, Collamati, Francesco, Costa, Marco, Craig, Nathaniel, Curtin, David, Damerau, Heiko, Da Molin, Giacomo, D'Angelo, Laura, Dasu, Sridhara, de Blas, Jorge, De Curtis, Stefania, De Gersem, Herbert, Delahaye, Jean-Pierre, Del Moro, Tommaso, Denisov, Dmitri, Denizli, Haluk, Dermisek, Radovan, Valdor, Paula Desiré, Desponds, Charlotte, Di Luzio, Luca, Di Meco, Elisa, Diociaiuti, Eleonora, Di Petrillo, Karri Folan, Di Sarcina, Ilaria, Dorigo, Tommaso, Dreimanis, Karlis, Pree, Tristan du, Yildiz, Hatice Duran, Edgecock, Thomas, Fabbri, Siara, Fabbrichesi, Marco, Farinon, Stefania, Ferrand, Guillaume, Somoza, Jose Antonio Ferreira, Fieg, Max, Filthaut, Frank, Fox, Patrick, Franceschini, Roberto, Ximenes, Rui Franqueira, Gallinaro, Michele, Garcia-Sciveres, Maurice, Garcia-Tabares, Luis, Gargiulo, Ruben, Garion, Cedric, Garzelli, Maria Vittoria, Gast, Marco, Generoso, Lisa, Gerber, Cecilia E., Giambastiani, Luca, Gianelle, Alessio, Gianfelice-Wendt, Eliana, Gibson, Stephen, Gilardoni, Simone, Giove, Dario Augusto, Giovinco, Valentina, Giraldin, Carlo, Glioti, Alfredo, Gorzawski, Arkadiusz, Greco, Mario, Grojean, Christophe, Grudiev, Alexej, Gschwendtner, Edda, Gueli, Emanuele, Guilhaudin, Nicolas, Han, Chengcheng, Han, Tao, Hauptman, John Michael, Herndon, Matthew, Hillier, Adrian D, Hillman, Micah, Holmes, Tova Ray, Homiller, Samuel, Jana, Sudip, Jindariani, Sergo, Johannesson, Sofia, Johnson, Benjamin, Jones, Owain Rhodri, Jurj, Paul-Bogdan, Kahn, Yonatan, Kamath, Rohan, Kario, Anna, Karpov, Ivan, Kelliher, David, Kilian, Wolfgang, Kitano, Ryuichiro, Kling, Felix, Kolehmainen, Antti, Kong, K. C., Kosse, Jaap, Krintiras, Georgios, Krizka, Karol, Kumar, Nilanjana, Kvikne, Erik, Kyle, Robert, Laface, Emanuele, Lane, Kenneth, Latina, Andrea, Lechner, Anton, Lee, Junghyun, Lee, Lawrence, Lee, Seh Wook, Lefevre, Thibaut, Leonardi, Emanuele, Lerner, Giuseppe, Li, Peiran, Li, Qiang, Li, Tong, Li, Wei, Lindroos, Mats, Lipton, Ronald, Liu, Da, Liu, Miaoyuan, Liu, Zhen, Voti, Roberto Li, Lombardi, Alessandra, Lomte, Shivani, Long, Kenneth, Longo, Luigi, Lorenzo, José, Losito, Roberto, Low, Ian, Lu, Xianguo, Lucchesi, Donatella, Luo, Tianhuan, Lupato, Anna, Ma, Yang, Machida, Shinji, Madlener, Thomas, Magaletti, Lorenzo, Maggi, Marcello, Durand, Helene Mainaud, Maltoni, Fabio, Manczak, Jerzy Mikolaj, Mandurrino, Marco, Marchand, Claude, Mariani, Francesco, Marin, Stefano, Mariotto, Samuele, Martin-Haugh, Stewart, Masullo, Maria Rosaria, Mauro, Giorgio Sebastiano, Mazzolari, Andrea, Mękała, Krzysztof, Mele, Barbara, Meloni, Federico, Meng, Xiangwei, Mentink, Matthias, Métral, Elias, Miceli, Rebecca, Milas, Natalia, Mohammadi, Abdollah, Moll, Dominik, Montella, Alessandro, Morandin, Mauro, Morrone, Marco, Mulder, Tim, Musenich, Riccardo, Nardecchia, Marco, Nardi, Federico, Nenna, Felice, Neuffer, David, Newbold, David, Novelli, Daniel, Olvegård, Maja, Onel, Yasar, Orestano, Domizia, Osborne, John, Otten, Simon, Torres, Yohan Mauricio Oviedo, Paesani, Daniele, Griso, Simone Pagan, Pagani, Davide, Pal, Kincso, Palmer, Mark, Pampaloni, Alessandra, Panci, Paolo, Pani, Priscilla, Papaphilippou, Yannis, Paparella, Rocco, Paradisi, Paride, Passeri, Antonio, Pasternak, Jaroslaw, Pastrone, Nadia, Pellecchia, Antonello, Piccinini, Fulvio, Piekarz, Henryk, Pieloni, Tatiana, Plouin, Juliette, Portone, Alfredo, Potamianos, Karolos, Potdevin, Joséphine, Prestemon, Soren, Puig, Teresa, Qiang, Ji, Quettier, Lionel, Rabemananjara, Tanjona Radonirina, Radicioni, Emilio, Radogna, Raffaella, Rago, Ilaria Carmela, Ratkus, Andris, Resseguie, Elodie, Reuter, Juergen, Ribani, Pier Luigi, Riccardi, Cristina, Ricciardi, Stefania, Robens, Tania, Robert, Youri, Rogers, Chris, Rojo, Juan, Romagnoni, Marco, Ronald, Kevin, Rosser, Benjamin, Rossi, Carlo, Rossi, Lucio, Rozanov, Leo, Ruhdorfer, Maximilian, Ruiz, Richard, Saini, Saurabh, Sala, Filippo, Salierno, Claudia, Salmi, Tiina, Salvini, Paola, Salvioni, Ennio, Sammut, Nicholas, Santini, Carlo, Saputi, Alessandro, Sarra, Ivano, Scarantino, Giuseppe, Schneider-Muntau, Hans, Schulte, Daniel, Scifo, Jessica, Sen, Tanaji, Senatore, Carmine, Senol, Abdulkadir, Sertore, Daniele, Sestini, Lorenzo, Rêgo, Ricardo César Silva, Simone, Federica Maria, Skoufaris, Kyriacos, Sorbello, Gino, Sorbi, Massimo, Sorti, Stefano, Soubirou, Lisa, Spataro, David, Queiroz, Farinaldo S., Stamerra, Anna, Stapnes, Steinar, Stark, Giordon, Statera, Marco, Stechauner, Bernd Michael, Su, Shufang, Su, Wei, Sun, Xiaohu, Sytov, Alexei, Tang, Jian, Tang, Jingyu, Taylor, Rebecca, Kate, Herman Ten, Testoni, Pietro, Thiele, Leonard Sebastian, Garcia, Rogelio Tomas, Topp-Mugglestone, Max, Torims, Toms, Torre, Riccardo, Tortora, Luca, Tortora, Ludovico, Trifinopoulos, Sokratis, Udongwo, Sosoho-Abasi, Vai, Ilaria, Valente, Riccardo Umberto, van Rienen, Ursula, Van Weelderen, Rob, Vanwelde, Marion, Velev, Gueorgui, Venditti, Rosamaria, Vendrasco, Adam, Verna, Adriano, Vernassa, Gianluca, Verweij, Arjan, Verwilligen, Piet, Villamizar, Yoxara, Vittorio, Ludovico, Vitulo, Paolo, Vojskovic, Isabella, Wang, Dayong, Wang, Lian-Tao, Wang, Xing, Wendt, Manfred, Widorski, Markus, Wozniak, Mariusz, Wu, Yongcheng, Wulzer, Andrea, Xie, Keping, Yang, Yifeng, Yap, Yee Chinn, Yonehara, Katsuya, Yoo, Hwi Dong, You, Zhengyun, Zanetti, Marco, Zaza, Angela, Zhang, Liang, Zhu, Ruihu, Zlobin, Alexander, Zuliani, Davide, and Zurita, José Francisco
- Subjects
Physics - Accelerator Physics - Abstract
This document is comprised of a collection of updated preliminary parameters for the key parts of the muon collider. The updated preliminary parameters follow on from the October 2023 Tentative Parameters Report. Particular attention has been given to regions of the facility that are believed to hold greater technical uncertainty in their design and that have a strong impact on the cost and power consumption of the facility. The data is collected from a collaborative spreadsheet and transferred to overleaf.
- Published
- 2024
- Full Text
- View/download PDF
24. Active Prompt Tuning Enables Gpt-40 To Do Efficient Classification Of Microscopy Images
- Author
-
Kandiyana, Abhiram, Mouton, Peter R., Kolinko, Yaroslav, Hall, Lawrence O., and Goldgof, Dmitry
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Artificial Intelligence ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Traditional deep learning-based methods for classifying cellular features in microscopy images require time- and labor-intensive processes for training models. Among the current limitations are major time commitments from domain experts for accurate ground truth preparation; and the need for a large amount of input image data. We previously proposed a solution that overcomes these challenges using OpenAI's GPT-4(V) model on a pilot dataset (Iba-1 immuno-stained tissue sections from 11 mouse brains). Results on the pilot dataset were equivalent in accuracy and with a substantial improvement in throughput efficiency compared to the baseline using a traditional Convolutional Neural Net (CNN)-based approach. The present study builds upon this framework using a second unique and substantially larger dataset of microscopy images. Our current approach uses a newer and faster model, GPT-4o, along with improved prompts. It was evaluated on a microscopy image dataset captured at low (10x) magnification from cresyl-violet-stained sections through the cerebellum of a total of 18 mouse brains (9 Lurcher mice, 9 wild-type controls). We used our approach to classify these images either as a control group or Lurcher mutant. Using 6 mice in the prompt set the results were correct classification for 11 out of the 12 mice (92%) with 96% higher efficiency, reduced image requirements, and lower demands on time and effort of domain experts compared to the baseline method (snapshot ensemble of CNN models). These results confirm that our approach is effective across multiple datasets from different brain regions and magnifications, with minimal overhead.
- Published
- 2024
25. Few-Class Arena: A Benchmark for Efficient Selection of Vision Models and Dataset Difficulty Measurement
- Author
-
Cao, Bryan Bo, O'Gorman, Lawrence, Coss, Michael, and Jain, Shubham
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,68T45 ,I.4.0 ,I.4.9 - Abstract
We propose Few-Class Arena (FCA), as a unified benchmark with focus on testing efficient image classification models for few classes. A wide variety of benchmark datasets with many classes (80-1000) have been created to assist Computer Vision architectural evolution. An increasing number of vision models are evaluated with these many-class datasets. However, real-world applications often involve substantially fewer classes of interest (2-10). This gap between many and few classes makes it difficult to predict performance of the few-class applications using models trained on the available many-class datasets. To date, little has been offered to evaluate models in this Few-Class Regime. We conduct a systematic evaluation of the ResNet family trained on ImageNet subsets from 2 to 1000 classes, and test a wide spectrum of Convolutional Neural Networks and Transformer architectures over ten datasets by using our newly proposed FCA tool. Furthermore, to aid an up-front assessment of dataset difficulty and a more efficient selection of models, we incorporate a difficulty measure as a function of class similarity. FCA offers a new tool for efficient machine learning in the Few-Class Regime, with goals ranging from a new efficient class similarity proposal, to lightweight model architecture design, to a new scaling law. FCA is user-friendly and can be easily extended to new models and datasets, facilitating future research work. Our benchmark is available at https://github.com/fewclassarena/fca., Comment: 9 pages, 27 pages including References and Appendix, 20 figures, 5 tables
- Published
- 2024
26. The 2024 Active Metamaterials Roadmap
- Author
-
Pope, Simon A., Roth, Diane J., Bansal, Aakash, Mousa, Mostafa, Rezanejad, Ashkan, Forte, Antonio E., Nash, Geoff. R., Singleton, Lawrence, Langfeldt, Felix, Cheer, Jordan, Henthorn, Stephen, Hooper, Ian R., Hendry, Euan, Powell, Alex W., Souslov, Anton, Plum, Eric, Sun, Kai, de Groot, C. H., Muskens, Otto L., Shields, Joe, De Galarreta, Carlota Ruiz, Wright, C. David, Kocabas, Coskun, Ergoktas, M. Said, Xiao, Jianling, Schulz, Sebastian A., Di Falco, Andrea, Krasavin, Alexey V., Zayats, Anatoly V., and Galiffi, Emanuele
- Subjects
Physics - Applied Physics ,Physics - Optics - Abstract
Active metamaterials are engineered structures that possess novel properties that can be changed after the point of manufacture. Their novel properties arise predominantly from their physical structure, as opposed to their chemical composition and can be changed through means such as direct energy addition into wave paths, or physically changing/morphing the structure in response to both a user or environmental input. Active metamaterials are currently of wide interest to the physics community and encompass a range of sub-domains in applied physics (e.g. photonic, microwave, acoustic, mechanical, etc.). They possess the potential to provide solutions that are more suitable to specific applications, or which allow novel properties to be produced which cannot be achieved with passive metamaterials, such as time-varying or gain enhancement effects. They have the potential to help solve some of the important current and future problems faced by the advancement of modern society, such as achieving net-zero, sustainability, healthcare and equality goals. Despite their huge potential, the added complexity of their design and operation, compared to passive metamaterials creates challenges to the advancement of the field, particularly beyond theoretical and lab-based experiments. This roadmap brings together experts in all types of active metamaterials and across a wide range of areas of applied physics. The objective is to provide an overview of the current state of the art and the associated current/future challenges, with the hope that the required advances identified create a roadmap for the future advancement and application of this field.
- Published
- 2024
27. EigenVI: score-based variational inference with orthogonal function expansions
- Author
-
Cai, Diana, Modi, Chirag, Margossian, Charles C., Gower, Robert M., Blei, David M., and Saul, Lawrence K.
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning ,Statistics - Computation - Abstract
We develop EigenVI, an eigenvalue-based approach for black-box variational inference (BBVI). EigenVI constructs its variational approximations from orthogonal function expansions. For distributions over $\mathbb{R}^D$, the lowest order term in these expansions provides a Gaussian variational approximation, while higher-order terms provide a systematic way to model non-Gaussianity. These approximations are flexible enough to model complex distributions (multimodal, asymmetric), but they are simple enough that one can calculate their low-order moments and draw samples from them. EigenVI can also model other types of random variables (e.g., nonnegative, bounded) by constructing variational approximations from different families of orthogonal functions. Within these families, EigenVI computes the variational approximation that best matches the score function of the target distribution by minimizing a stochastic estimate of the Fisher divergence. Notably, this optimization reduces to solving a minimum eigenvalue problem, so that EigenVI effectively sidesteps the iterative gradient-based optimizations that are required for many other BBVI algorithms. (Gradient-based methods can be sensitive to learning rates, termination criteria, and other tunable hyperparameters.) We use EigenVI to approximate a variety of target distributions, including a benchmark suite of Bayesian models from posteriordb. On these distributions, we find that EigenVI is more accurate than existing methods for Gaussian BBVI., Comment: 25 pages, 9 figures. Advances in Neural Information Processing Systems (NeurIPS), 2024
- Published
- 2024
28. Deep Learning and Machine Learning -- Natural Language Processing: From Theory to Application
- Author
-
Chen, Keyu, Fei, Cheng, Bi, Ziqian, Liu, Junyu, Peng, Benji, Zhang, Sen, Pan, Xuanhe, Xu, Jiawei, Wang, Jinlang, Yin, Caitlyn Heqi, Zhang, Yichao, Feng, Pohsun, Wen, Yizhu, Wang, Tianyang, Li, Ming, Ren, Jintao, Niu, Qian, Chen, Silin, Hsieh, Weiche, Yan, Lawrence K. Q., Liang, Chia Xin, Xu, Han, Tseng, Hong-Ming, Song, Xinyuan, and Liu, Ming
- Subjects
Computer Science - Computation and Language ,Computer Science - Human-Computer Interaction - Abstract
With a focus on natural language processing (NLP) and the role of large language models (LLMs), we explore the intersection of machine learning, deep learning, and artificial intelligence. As artificial intelligence continues to revolutionize fields from healthcare to finance, NLP techniques such as tokenization, text classification, and entity recognition are essential for processing and understanding human language. This paper discusses advanced data preprocessing techniques and the use of frameworks like Hugging Face for implementing transformer-based models. Additionally, it highlights challenges such as handling multilingual data, reducing bias, and ensuring model robustness. By addressing key aspects of data processing and model fine-tuning, this work aims to provide insights into deploying effective and ethically sound AI solutions., Comment: 255 pages
- Published
- 2024
29. The SOFIA Massive (SOMA) Star Formation Q-band follow-up I. Carbon-chain chemistry of intermediate-mass protostars
- Author
-
Taniguchi, Kotomi, Gorai, Prasanta, Tan, Jonathan C., Gomez-Garrido, Miguel, Fedriani, Ruben, Yang, Yao-Lun, Sridharan, T. K., Tanaka, Kei, Saito, Masao, Zhang, Yichen, Morgan, Lawrence, Cosentino, Giuliana, and Law, Chi-Yan
- Subjects
Astrophysics - Astrophysics of Galaxies ,Astrophysics - Solar and Stellar Astrophysics - Abstract
Evidence for similar chemical characteristics around low- and high-mass protostars has been found: in particular, a variety of carbon-chain species and complex organic molecules (COMs) are formed around them. On the other hand, the chemical compositions around intermediate-mass (IM; $2 M_{\odot} < m_* <8 M_{\odot}$) protostars have not been studied with large samples. In particular, it is unclear the extent to which carbon-chain species are formed around them. We aim to obtain the chemical compositions, particularly focusing on carbon-chain species, towards a sample of IM protostars. We have conducted Q-band (31.5-50 GHz) line survey observations towards eleven mainly intermediate-mass protostars with the Yebes 40 m radio telescope. The target protostars were selected from a sub-sample of the source list of the SOFIA Massive (SOMA) Star Formation project. Nine carbon-chain species (HC$_3$N, HC$_5$N, C$_3$H, C$_4$H, $linear-$H$_2$CCC, $cyclic-$C$_3$H$_2$, CCS, C$_3$S, and CH$_3$CCH), three COMs (CH$_3$OH, CH$_3$CHO, and CH$_3$CN), H$_2$CCO, HNCO, and four simple sulfur (S)-bearing species ($^{13}$CS, C$^{34}$S, HCS$^+$, H$_2$CS) have been detected. The rotational temperatures of HC$_5$N are derived to be $\sim20-30$ K in three IM protostars and they are very similar compared to those around low- and high-mass protostars. These results indicate that carbon-chain molecules are formed in lukewarm ($\sim20-30$ K) gas around the IM protostars by the Warm Carbon-Chain Chemistry (WCCC) process. Carbon-chain formation occurs ubiquitously in the warm gas around protostars across a wide range of stellar masses. Carbon-chain molecules and COMs coexist around most of the target IM protostars, which is similar to the situation in low- and high-mass protostars. The chemical characteristics around protostars are common in the low-, intermediate- and high-mass regimes., Comment: Accepted for publication in the Astronomy and Astrophysics (A&A)
- Published
- 2024
30. Batch, match, and patch: low-rank approximations for score-based variational inference
- Author
-
Modi, Chirag, Cai, Diana, and Saul, Lawrence K.
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning ,Statistics - Computation - Abstract
Black-box variational inference (BBVI) scales poorly to high dimensional problems when it is used to estimate a multivariate Gaussian approximation with a full covariance matrix. In this paper, we extend the batch-and-match (BaM) framework for score-based BBVI to problems where it is prohibitively expensive to store such covariance matrices, let alone to estimate them. Unlike classical algorithms for BBVI, which use gradient descent to minimize the reverse Kullback-Leibler divergence, BaM uses more specialized updates to match the scores of the target density and its Gaussian approximation. We extend the updates for BaM by integrating them with a more compact parameterization of full covariance matrices. In particular, borrowing ideas from factor analysis, we add an extra step to each iteration of BaM -- a patch -- that projects each newly updated covariance matrix into a more efficiently parameterized family of diagonal plus low rank matrices. We evaluate this approach on a variety of synthetic target distributions and real-world problems in high-dimensional inference.
- Published
- 2024
31. Relieving scale disparity in binary black hole simulations
- Author
-
Wittek, Nikolas A., Barack, Leor, Pfeiffer, Harald P., Pound, Adam, Deppe, Nils, Kidder, Lawrence E., Macedo, Alexandra, Nelli, Kyle C., Throwe, William, and Vu, Nils L.
- Subjects
General Relativity and Quantum Cosmology - Abstract
Worldtube excision is a method of reducing computational burden in Numerical Relativity simulations of binary black holes in situations where there is a good analytical model of the geometry around (one or both of) the objects. Two such scenarios of relevance in gravitational-wave astronomy are (1) the case of mass-disparate systems, and (2) the early inspiral when the separation is still large. Here we illustrate the utility and flexibility of this technique with simulations of the fully self-consistent radiative evolution in the model problem of a scalar charge orbiting a Schwarzschild black hole under the effect of scalar-field radiation reaction. We explore a range of orbital configurations, including inspirals with large eccentricity (which we follow through to the final plunge and ringdown) and hyperbolic scattering., Comment: 8 pages, 4 figures
- Published
- 2024
32. CCAT: LED Mapping and Characterization of the 280 GHz TiN KID Array
- Author
-
Middleton, Alicia, Choi, Steve K., Walker, Samantha, Austermann, Jason, Burgoyne, James R., Butler, Victoria, Chapman, Scott C., Crites, Abigail T., Duell, Cody J., Freundt, Rodrigo G., Huber, Anthony I., Huber, Zachary B., Hubmayr, Johannes, Keller, Ben, Lin, Lawrence T., Niemack, Michael D., Patel, Darshan, Sinclair, Adrian K., Smith, Ema, Vaskuri, Anna, Vavagiakis, Eve M., Vissers, Michael, Wang, Yuhan, and Wheeler, Jordan
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics ,Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
Prime-Cam, one of the primary instruments for the Fred Young Submillimeter Telescope (FYST) developed by the CCAT Collaboration, will house up to seven instrument modules, with the first operating at 280 GHz. Each module will include three arrays of superconducting microwave kinetic inductance detectors (KIDs). The first KID array fabricated for the 280 GHz module uses titanium-nitride (TiN) as the superconducting material and has 3,456 individual detectors, while the other two arrays use aluminum. This paper presents the design and laboratory characterization of the 280 GHz TiN array, which is cooled below its critical temperature to ~0.1 K and read out over six RF feedlines. LED mapping, a technique for matching the measured resonant frequency of a detector to its physical position, was performed on the array so that the results can be used to lithographically trim the KID capacitors and increase the yield of the array by reducing frequency collisions. We present the methods and results of LED mapping the 280 GHz TiN KID array before deployment on FYST., Comment: 4 pages, 4 figures, submitted to IEEE Transactions on Applied Superconductivity (IEEE TAS)
- Published
- 2024
33. Large Language Model Benchmarks in Medical Tasks
- Author
-
Yan, Lawrence K. Q., Li, Ming, Zhang, Yichao, Yin, Caitlyn Heqi, Fei, Cheng, Peng, Benji, Bi, Ziqian, Feng, Pohsun, Chen, Keyu, Liu, Junyu, and Niu, Qian
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
With the increasing application of large language models (LLMs) in the medical domain, evaluating these models' performance using benchmark datasets has become crucial. This paper presents a comprehensive survey of various benchmark datasets employed in medical LLM tasks. These datasets span multiple modalities including text, image, and multimodal benchmarks, focusing on different aspects of medical knowledge such as electronic health records (EHRs), doctor-patient dialogues, medical question-answering, and medical image captioning. The survey categorizes the datasets by modality, discussing their significance, data structure, and impact on the development of LLMs for clinical tasks such as diagnosis, report generation, and predictive decision support. Key benchmarks include MIMIC-III, MIMIC-IV, BioASQ, PubMedQA, and CheXpert, which have facilitated advancements in tasks like medical report generation, clinical summarization, and synthetic data generation. The paper summarizes the challenges and opportunities in leveraging these benchmarks for advancing multimodal medical intelligence, emphasizing the need for datasets with a greater degree of language diversity, structured omics data, and innovative approaches to synthesis. This work also provides a foundation for future research in the application of LLMs in medicine, contributing to the evolving field of medical artificial intelligence., Comment: 25 pages, 5 tables
- Published
- 2024
34. A Two-Week $IXPE$ Monitoring Campaign on Mrk 421
- Author
-
Maksym, W. Peter, Liodakis, Ioannis, Saade, M. Lynne, Kim, Dawoon E., Middei, Riccardo, Di Gesu, Laura, Kiehlmann, Sebastian, Matzeu, Gabriele, Agudo, Iván, Marscher, Alan P., Ehlert, Steven R., Jorstad, Svetlana G., Kaaret, Philip, Marshall, Herman L., Pacciani, Luigi, Perri, Matteo, Puccetti, Simonetta, Kouch, Pouya M., Lindfors, Elina, Aceituno, Francisco José, Bonnoli, Giacomo, Casanova, Víctor, Escudero, Juan, Agís-González, Beatriz, Husillos, César, Morcuende, Daniel, Otero-Santos, Jorge, Sota, Alfredo, Piirola, Vilppu, Imazawa, Ryo, Sasada, Mahito, Fukazawa, Yasushi, Kawabata, Koji S., Uemura, Makoto, Mizuno, Tsunefumi, Nakaoka, Tatsuya, Akitaya, Hiroshi, McCall, Callum, Jermak, Helen E., Steele, Iain A., Borman, George A., Grishina, Tatiana S., Hagen-Thorn, Vladimir A., Kopatskaya, Evgenia N., Larionova, Elena G., Morozova, Daria A., Savchenko, Sergey S., Shishkina, Ekaterina V., Troitskiy, Ivan S., Troitskaya, Yulia V., Vasilyev, Andrey A., Zhovtan, Alexey V., Myserlis, Ioannis, Gurwell, Mark, Keating, Garrett, Rao, Ramprasad, Pauley, Colt, Angelakis, Emmanouil, Kraus, Alexander, Berdyugin, Andrei V., Kagitani, Masato, Kravtsov, Vadim, Poutanen, Juri, Sakanoi, Takeshi, Kang, Sincheol, Lee, Sang-Sung, Kim, Sang-Hyun, Cheong, Whee Yeon, Jeong, Hyeon-Woo, Song, Chanwoo, Blinov, Dmitry, Shablovinskaya, Elena, Antonelli, Lucio Angelo, Bachetti, Matteo, Baldini, Luca, Baumgartner, Wayne H., Bellazzini, Ronaldo, Bianchi, Stefano, Bongiorno, Stephen D., Bonino, Raffaella, Brez, Alessandro, Bucciantini, Niccoló, Capitanio, Fiamma, Castellano, Simone, Cavazzuti, Elisabetta, Chen, Chien-Ting, Ciprini, Stefano, Costa, Enrico, De Rosa, Alessandra, Del Monte, Ettore, Di Lalla, Niccoló, Di Marco, Alessandro, Donnarumma, Immacolata, Doroshenko, Victor, Dovčiak, Michal, Enoto, Teruaki, Evangelista, Yuri, Fabiani, Sergio, Ferrazzoli, Riccardo, Garcia, Javier A., Gunji, Shuichi, Hayashida, Kiyoshi, Heyl, Jeremy, Iwakiri, Wataru, Karas, Vladimir, Kislat, Fabian, Kitaguchi, Takao, Kolodziejczak, Jeffery J., Krawczynski, Henric, La Monaca, Fabio, Latronico, Luca, Maldera, Simone, Manfreda, Alberto, Marin, Frédéric, Marinucci, Andrea, Massaro, Francesco, Matt, Giorgio, Mitsuishi, Ikuyuki, Muleri, Fabio, Negro, Michela, Ng, C. -Y., O'Dell, Stephen L., Omodei, Nicola, Oppedisano, Chiara, Papitto, Alessandro, Pavlov, George G., Peirson, Abel Lawrence, Pesce-Rollins, Melissa, Petrucci, Pierre-Olivier, Pilia, Maura, Possenti, Andrea, Ramsey, Brian D., Rankin, John, Ratheesh, Ajay, Roberts, Oliver J., Romani, Roger W., Sgró, Carmelo, Slane, Patrick, Soffitta, Paolo, Spandre, Gloria, Swartz, Douglas A., Tamagawa, Toru, Tavecchio, Fabrizio, Taverna, Roberto, Tawara, Yuzuru, Tennant, Allyn F., Thomas, Nicholas E., Tombesi, Francesco, Trois, Alessio, Tsygankov, Sergey S., Turolla, Roberto, Vink, Jacco, Weisskopf, Martin C., Wu, Kinwah, Xie, Fei, and Zane, Silvia
- Subjects
Astrophysics - High Energy Astrophysical Phenomena - Abstract
X-ray polarization is a unique new probe of the particle acceleration in astrophysical jets made possible through the Imaging X-ray Polarimetry Explorer. Here we report on the first dense X-ray polarization monitoring campaign on the blazar Mrk 421. Our observations were accompanied by an even denser radio and optical polarization campaign. We find significant short-timescale variability in both X-ray polarization degree and angle, including a $\sim90^\circ$ angle rotation about the jet axis. We attribute this to random variations of the magnetic field, consistent with the presence of turbulence but also unlikely to be explained by turbulence alone. At the same time, the degree of lower-energy polarization is significantly lower and shows no more than mild variability. Our campaign provides further evidence for a scenario in which energy-stratified shock-acceleration of relativistic electrons, combined with a turbulent magnetic field, is responsible for optical to X-ray synchrotron emission in blazar jets., Comment: 23 pages, including 8 pages of appendices. 12 figures, 3 tables. Submitted to ApJ
- Published
- 2024
35. VideoWebArena: Evaluating Long Context Multimodal Agents with Video Understanding Web Tasks
- Author
-
Jang, Lawrence, Li, Yinheng, Ding, Charles, Lin, Justin, Liang, Paul Pu, Zhao, Dan, Bonatti, Rogerio, and Koishida, Kazuhito
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence - Abstract
Videos are often used to learn or extract the necessary information to complete tasks in ways different than what text and static imagery alone can provide. However, many existing agent benchmarks neglect long-context video understanding, instead focusing on text or static image inputs. To bridge this gap, we introduce VideoWebArena (VideoWA), a benchmark for evaluating the capabilities of long-context multimodal agents for video understanding. VideoWA consists of 2,021 web agent tasks based on manually crafted video tutorials, which total almost four hours of content. For our benchmark, we define a taxonomy of long-context video-based agent tasks with two main areas of focus: skill retention and factual retention. While skill retention tasks evaluate whether an agent can use a given human demonstration to complete a task efficiently, the factual retention task evaluates whether an agent can retrieve instruction-relevant information from a video to complete a task. We find that the best model achieves 13.3% success on factual retention tasks and 45.8% on factual retention QA pairs, far below human performance at 73.9% and 79.3%, respectively. On skill retention tasks, long-context models perform worse with tutorials than without, exhibiting a 5% performance decrease in WebArena tasks and a 10.3% decrease in VisualWebArena tasks. Our work highlights the need to improve the agentic abilities of long-context multimodal models and provides a testbed for future development with long-context video agents.
- Published
- 2024
36. Guiding Reinforcement Learning with Incomplete System Dynamics
- Author
-
Wang, Shuyuan, Duan, Jingliang, Lawrence, Nathan P., Loewen, Philip D., Forbes, Michael G., Gopaluni, R. Bhushan, and Zhang, Lixian
- Subjects
Computer Science - Robotics ,Electrical Engineering and Systems Science - Systems and Control - Abstract
Model-free reinforcement learning (RL) is inherently a reactive method, operating under the assumption that it starts with no prior knowledge of the system and entirely depends on trial-and-error for learning. This approach faces several challenges, such as poor sample efficiency, generalization, and the need for well-designed reward functions to guide learning effectively. On the other hand, controllers based on complete system dynamics do not require data. This paper addresses the intermediate situation where there is not enough model information for complete controller design, but there is enough to suggest that a model-free approach is not the best approach either. By carefully decoupling known and unknown information about the system dynamics, we obtain an embedded controller guided by our partial model and thus improve the learning efficiency of an RL-enhanced approach. A modular design allows us to deploy mainstream RL algorithms to refine the policy. Simulation results show that our method significantly improves sample efficiency compared with standard RL methods on continuous control tasks, and also offers enhanced performance over traditional control approaches. Experiments on a real ground vehicle also validate the performance of our method, including generalization and robustness., Comment: Accepted to IROS 2024
- Published
- 2024
37. Graph Transformers Dream of Electric Flow
- Author
-
Cheng, Xiang, Carin, Lawrence, and Sra, Suvrit
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
We show theoretically and empirically that the linear Transformer, when applied to graph data, can implement algorithms that solve canonical problems such as electric flow and eigenvector decomposition. The input to the Transformer is simply the graph incidence matrix; no other explicit positional encoding information is provided. We present explicit weight configurations for implementing each such graph algorithm, and we bound the errors of the constructed Transformers by the errors of the underlying algorithms. Our theoretical findings are corroborated by experiments on synthetic data. Additionally, on a real-world molecular regression task, we observe that the linear Transformer is capable of learning a more effective positional encoding than the default one based on Laplacian eigenvectors. Our work is an initial step towards elucidating the inner-workings of the Transformer for graph data.
- Published
- 2024
38. From an attention economy to an ecology of attending. A manifesto
- Author
-
Bombaerts, Gunter, Hannes, Tom, Adam, Martin, Aloisi, Alessandra, Anderson, Joel, Berger, Lawrence, Bettera, Stefano Davide, Campo, Enrico, Candiotto, Laura, Panizza, Silvia Caprioglio, Citton, Yves, DâAngelo, Diego, Dennis, Matthew, Depraz, Nathalie, Doran, Peter, Drechsler, Wolfgang, Duane, Bill, Edelglass, William, Eisenberger, Iris, McGuire, Beverley Foulks, Fredriksson, Antony, Gill, Karamjit S., Hershock, Peter D., Hongladarom, Soraj, Jacobs, Beth, Karsai, Gábor, Lennerfors, Thomas, Lim, Jeanne, Lin, Chien-Te, Losoncz, Mark, Loy, David, Marin, Lavinia, Marosán, Bence Péter, Mascarello, Chiara, McMahan, David, Park, Jin Y., Petek, Nina, Puzio, Anna, Schaubroek, Katrien, Schlieter, Jens, Schroeder, Brian, Shakya, Shobhit, Shi, Juewei, Solomonova, Elizaveta, Tormen, Francesco, Uttam, Jitendra, Van Vugt, Marieke, Vörös, Sebastjan, Wehrle, Maren, Wellner, Galit, Wirth, Jason M., Witkowski, Olaf, Wongkitrungrueng, Apiradee, Wright, Dale S., and Zheng, Yutong
- Subjects
Computer Science - Computers and Society - Abstract
As the signatories of this manifesto, we denounce the attention economy as inhumane and a threat to our sociopolitical and ecological well-being. We endorse policymakers' efforts to address the negative consequences of the attention economy's technology, but add that these approaches are often limited in their criticism of the systemic context of human attention. Starting from Buddhist philosophy, we advocate a broader approach: an ecology of attending, that centers on conceptualizing, designing, and using attention (1) in an embedded way and (2) focused on the alleviating of suffering. With 'embedded' we mean that attention is not a neutral, isolated mechanism but a meaning-engendering part of an 'ecology' of bodily, sociotechnical and moral frameworks. With 'focused on the alleviation of suffering' we explicitly move away from the (often implicit) conception of attention as a tool for gratifying desires., Comment: 21 pages, 1 figure
- Published
- 2024
39. Discrepancies of spanning trees in dense graphs
- Author
-
Hollom, Lawrence, Lichev, Lyuben, Mond, Adva, and Portier, Julien
- Subjects
Mathematics - Combinatorics - Abstract
We address several related problems on combinatorial discrepancy of trees in a setting introduced by Erd\H{o}s, F\"{u}redi, Loebl and S\'{o}s. Given a fixed tree $T$ on $n$ vertices and an edge-colouring of the complete graph $K_n$, for every colour, we find a copy of $T$ in $K_n$ where the number of edges in that colour significantly exceeds its expected count in a uniformly random embedding. This resolves a problem posed by Erd\H{o}s, F\"{u}redi, Loebl and S\'{o}s by generalising their work from two to many colours. Furthermore, if $T$ has maximum degree $\Delta\leq\epsilon n$ for sufficiently small $\epsilon > 0$ and the edge-colouring of $K_n$ is both balanced and ``not too close'' to one particular instance, we show that, for every colour, there is a copy of $T$ in $K_n$ where that colour appears on linearly more edges than any other colour. Several related examples are provided to demonstrate the necessity of the introduced structural restrictions. Our proofs combine saturation arguments for the existence of particular coloured substructures and analysis of conveniently defined local exchanges. Using similar methods, we investigate the existence of copies of a graph $H$ with prescribed number of edges in each colour in $2$-edge-coloured dense host graphs. In particular, for a graph $H$ with bounded maximum degree and balanced $2$-edge-colourings $\mathbf{c}$ of a host graph $G$ with minimum degree at least $(1-\epsilon)n$ for some $\epsilon > 0$, we show that, for any sufficiently large $n$ and sufficiently small $\epsilon$, there exists a copy of $H$ where the number of edges in the two colours differ by at most $2$. Moreover, we completely characterise the pairs $(H,\mathbf{c})$ for which the difference of $2$ cannot be improved, refuting a conjecture by Mohr, Pardey, and Rautenbach.
- Published
- 2024
40. Jailbreaking and Mitigation of Vulnerabilities in Large Language Models
- Author
-
Peng, Benji, Bi, Ziqian, Niu, Qian, Liu, Ming, Feng, Pohsun, Wang, Tianyang, Yan, Lawrence K. Q., Wen, Yizhu, Zhang, Yichao, and Yin, Caitlyn Heqi
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Large Language Models (LLMs) have transformed artificial intelligence by advancing natural language understanding and generation, enabling applications across fields beyond healthcare, software engineering, and conversational systems. Despite these advancements in the past few years, LLMs have shown considerable vulnerabilities, particularly to prompt injection and jailbreaking attacks. This review analyzes the state of research on these vulnerabilities and presents available defense strategies. We roughly categorize attack approaches into prompt-based, model-based, multimodal, and multilingual, covering techniques such as adversarial prompting, backdoor injections, and cross-modality exploits. We also review various defense mechanisms, including prompt filtering, transformation, alignment techniques, multi-agent defenses, and self-regulation, evaluating their strengths and shortcomings. We also discuss key metrics and benchmarks used to assess LLM safety and robustness, noting challenges like the quantification of attack success in interactive contexts and biases in existing datasets. Identifying current research gaps, we suggest future directions for resilient alignment strategies, advanced defenses against evolving attacks, automation of jailbreak detection, and consideration of ethical and societal impacts. This review emphasizes the need for continued research and cooperation within the AI community to enhance LLM security and ensure their safe deployment.
- Published
- 2024
41. Elastic Shape Registration of Surfaces in 3D Space with Gradient Descent and Dynamic Programming
- Author
-
Bernal, Javier and Lawrence, Jim
- Subjects
Computer Science - Graphics - Abstract
Algorithms based on gradient descent for computing the elastic shape registration of two simple surfaces in 3-dimensional space and therefore the elastic shape distance between them have been proposed by Kurtek, Jermyn, et al., and more recently by Riseth. Their algorithms are designed to minimize a distance function between the surfaces by rotating and reparametrizing one of the surfaces, the minimization for reparametrizing based on a gradient descent approach that may terminate at a local solution. On the other hand, Bernal and Lawrence have proposed a similar algorithm, the minimization for reparametrizing based on dynamic programming thus producing a partial not necessarily optimal elastic shape registration of the surfaces. Accordingly, Bernal and Lawrence have proposed to use the rotation and reparametrization computed with their algorithm as the initial solution to any algorithm based on a gradient descent approach for reparametrizing. Here we present results from doing exactly that. We also describe and justify the gradient descent approach that is used for reparametrizing one of the surfaces., Comment: arXiv admin note: substantial text overlap with arXiv:2409.16462
- Published
- 2024
- Full Text
- View/download PDF
42. Movie Gen: A Cast of Media Foundation Models
- Author
-
Polyak, Adam, Zohar, Amit, Brown, Andrew, Tjandra, Andros, Sinha, Animesh, Lee, Ann, Vyas, Apoorv, Shi, Bowen, Ma, Chih-Yao, Chuang, Ching-Yao, Yan, David, Choudhary, Dhruv, Wang, Dingkang, Sethi, Geet, Pang, Guan, Ma, Haoyu, Misra, Ishan, Hou, Ji, Wang, Jialiang, Jagadeesh, Kiran, Li, Kunpeng, Zhang, Luxin, Singh, Mannat, Williamson, Mary, Le, Matt, Yu, Matthew, Singh, Mitesh Kumar, Zhang, Peizhao, Vajda, Peter, Duval, Quentin, Girdhar, Rohit, Sumbaly, Roshan, Rambhatla, Sai Saketh, Tsai, Sam, Azadi, Samaneh, Datta, Samyak, Chen, Sanyuan, Bell, Sean, Ramaswamy, Sharadh, Sheynin, Shelly, Bhattacharya, Siddharth, Motwani, Simran, Xu, Tao, Li, Tianhe, Hou, Tingbo, Hsu, Wei-Ning, Yin, Xi, Dai, Xiaoliang, Taigman, Yaniv, Luo, Yaqiao, Liu, Yen-Cheng, Wu, Yi-Chiao, Zhao, Yue, Kirstain, Yuval, He, Zecheng, He, Zijian, Pumarola, Albert, Thabet, Ali, Sanakoyeu, Artsiom, Mallya, Arun, Guo, Baishan, Araya, Boris, Kerr, Breena, Wood, Carleigh, Liu, Ce, Peng, Cen, Vengertsev, Dimitry, Schonfeld, Edgar, Blanchard, Elliot, Juefei-Xu, Felix, Nord, Fraylie, Liang, Jeff, Hoffman, John, Kohler, Jonas, Fire, Kaolin, Sivakumar, Karthik, Chen, Lawrence, Yu, Licheng, Gao, Luya, Georgopoulos, Markos, Moritz, Rashel, Sampson, Sara K., Li, Shikai, Parmeggiani, Simone, Fine, Steve, Fowler, Tara, Petrovic, Vladan, and Du, Yuming
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning ,Electrical Engineering and Systems Science - Image and Video Processing - Abstract
We present Movie Gen, a cast of foundation models that generates high-quality, 1080p HD videos with different aspect ratios and synchronized audio. We also show additional capabilities such as precise instruction-based video editing and generation of personalized videos based on a user's image. Our models set a new state-of-the-art on multiple tasks: text-to-video synthesis, video personalization, video editing, video-to-audio generation, and text-to-audio generation. Our largest video generation model is a 30B parameter transformer trained with a maximum context length of 73K video tokens, corresponding to a generated video of 16 seconds at 16 frames-per-second. We show multiple technical innovations and simplifications on the architecture, latent spaces, training objectives and recipes, data curation, evaluation protocols, parallelization techniques, and inference optimizations that allow us to reap the benefits of scaling pre-training data, model size, and training compute for training large scale media generation models. We hope this paper helps the research community to accelerate progress and innovation in media generation models. All videos from this paper are available at https://go.fb.me/MovieGenResearchVideos.
- Published
- 2024
43. Open Materials 2024 (OMat24) Inorganic Materials Dataset and Models
- Author
-
Barroso-Luque, Luis, Shuaibi, Muhammed, Fu, Xiang, Wood, Brandon M., Dzamba, Misko, Gao, Meng, Rizvi, Ammar, Zitnick, C. Lawrence, and Ulissi, Zachary W.
- Subjects
Condensed Matter - Materials Science ,Computer Science - Artificial Intelligence ,Physics - Computational Physics - Abstract
The ability to discover new materials with desirable properties is critical for numerous applications from helping mitigate climate change to advances in next generation computing hardware. AI has the potential to accelerate materials discovery and design by more effectively exploring the chemical space compared to other computational methods or by trial-and-error. While substantial progress has been made on AI for materials data, benchmarks, and models, a barrier that has emerged is the lack of publicly available training data and open pre-trained models. To address this, we present a Meta FAIR release of the Open Materials 2024 (OMat24) large-scale open dataset and an accompanying set of pre-trained models. OMat24 contains over 110 million density functional theory (DFT) calculations focused on structural and compositional diversity. Our EquiformerV2 models achieve state-of-the-art performance on the Matbench Discovery leaderboard and are capable of predicting ground-state stability and formation energies to an F1 score above 0.9 and an accuracy of 20 meV/atom, respectively. We explore the impact of model size, auxiliary denoising objectives, and fine-tuning on performance across a range of datasets including OMat24, MPtraj, and Alexandria. The open release of the OMat24 dataset and models enables the research community to build upon our efforts and drive further advancements in AI-assisted materials science., Comment: 19 pages
- Published
- 2024
44. Variational Inference in Location-Scale Families: Exact Recovery of the Mean and Correlation Matrix
- Author
-
Margossian, Charles C. and Saul, Lawrence K.
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning ,Statistics - Computation - Abstract
Given an intractable target density $p$, variational inference (VI) attempts to find the best approximation $q$ from a tractable family $Q$. This is typically done by minimizing the exclusive Kullback-Leibler divergence, $\text{KL}(q||p)$. In practice, $Q$ is not rich enough to contain $p$, and the approximation is misspecified even when it is a unique global minimizer of $\text{KL}(q||p)$. In this paper, we analyze the robustness of VI to these misspecifications when $p$ exhibits certain symmetries and $Q$ is a location-scale family that shares these symmetries. We prove strong guarantees for VI not only under mild regularity conditions but also in the face of severe misspecifications. Namely, we show that (i) VI recovers the mean of $p$ when $p$ exhibits an \textit{even} symmetry, and (ii) it recovers the correlation matrix of $p$ when in addition~$p$ exhibits an \textit{elliptical} symmetry. These guarantees hold for the mean even when $q$ is factorized and $p$ is not, and for the correlation matrix even when~$q$ and~$p$ behave differently in their tails. We analyze various regimes of Bayesian inference where these symmetries are useful idealizations, and we also investigate experimentally how VI behaves in their absence.
- Published
- 2024
45. Mastering AI: Big Data, Deep Learning, and the Evolution of Large Language Models -- Blockchain and Applications
- Author
-
Feng, Pohsun, Bi, Ziqian, Yan, Lawrence K. Q., Wen, Yizhu, Peng, Benji, Liu, Junyu, Yin, Caitlyn Heqi, Wang, Tianyang, Chen, Keyu, Zhang, Sen, Li, Ming, Xu, Jiawei, Liu, Ming, Pan, Xuanhe, Wang, Jinlang, and Niu, Qian
- Subjects
Computer Science - Cryptography and Security - Abstract
This article provides a detailed exploration of blockchain technology and its applications across various fields. It begins with an introduction to cryptography fundamentals, including symmetric and asymmetric encryption, and their roles in ensuring security and trust within blockchain systems. The article then delves into the structure and mechanics of Bitcoin and Ethereum, covering topics such as proof-of-work, proof-of-stake, and smart contracts. Additionally, it highlights practical applications of blockchain in industries like decentralized finance (DeFi), supply chain management, and identity authentication. The discussion also extends to consensus mechanisms and scalability challenges in blockchain, offering insights into emerging technologies like Layer 2 solutions and cross-chain interoperability. The article concludes by addressing the current state of academic research on blockchain and its potential future developments., Comment: This book contains 241 pages and 5 figures
- Published
- 2024
46. State of NLP in Kenya: A Survey
- Author
-
Amol, Cynthia Jayne, Chimoto, Everlyn Asiko, Gesicho, Rose Delilah, Gitau, Antony M., Etori, Naome A., Kinyanjui, Caringtone, Ndung'u, Steven, Moruye, Lawrence, Ooko, Samson Otieno, Kitonga, Kavengi, Muhia, Brian, Gitau, Catherine, Ndolo, Antony, Wanzare, Lilian D. A., Kahira, Albert Njoroge, and Tombe, Ronald
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
Kenya, known for its linguistic diversity, faces unique challenges and promising opportunities in advancing Natural Language Processing (NLP) technologies, particularly for its underrepresented indigenous languages. This survey provides a detailed assessment of the current state of NLP in Kenya, emphasizing ongoing efforts in dataset creation, machine translation, sentiment analysis, and speech recognition for local dialects such as Kiswahili, Dholuo, Kikuyu, and Luhya. Despite these advancements, the development of NLP in Kenya remains constrained by limited resources and tools, resulting in the underrepresentation of most indigenous languages in digital spaces. This paper uncovers significant gaps by critically evaluating the available datasets and existing NLP models, most notably the need for large-scale language models and the insufficient digital representation of Indigenous languages. We also analyze key NLP applications: machine translation, information retrieval, and sentiment analysis-examining how they are tailored to address local linguistic needs. Furthermore, the paper explores the governance, policies, and regulations shaping the future of AI and NLP in Kenya and proposes a strategic roadmap to guide future research and development efforts. Our goal is to provide a foundation for accelerating the growth of NLP technologies that meet Kenya's diverse linguistic demands., Comment: 21 pages
- Published
- 2024
47. Use of What-if Scenarios to Help Explain Artificial Intelligence Models for Neonatal Health
- Author
-
Mamun, Abdullah, Devoe, Lawrence D., Evans, Mark I., Britt, David W., Klein-Seetharaman, Judith, and Ghasemzadeh, Hassan
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
Early detection of intrapartum risk enables interventions to potentially prevent or mitigate adverse labor outcomes such as cerebral palsy. Currently, there is no accurate automated system to predict such events to assist with clinical decision-making. To fill this gap, we propose "Artificial Intelligence (AI) for Modeling and Explaining Neonatal Health" (AIMEN), a deep learning framework that not only predicts adverse labor outcomes from maternal, fetal, obstetrical, and intrapartum risk factors but also provides the model's reasoning behind the predictions made. The latter can provide insights into what modifications in the input variables of the model could have changed the predicted outcome. We address the challenges of imbalance and small datasets by synthesizing additional training data using Adaptive Synthetic Sampling (ADASYN) and Conditional Tabular Generative Adversarial Networks (CTGAN). AIMEN uses an ensemble of fully-connected neural networks as the backbone for its classification with the data augmentation supported by either ADASYN or CTGAN. AIMEN, supported by CTGAN, outperforms AIMEN supported by ADASYN in classification. AIMEN can predict a high risk for adverse labor outcomes with an average F1 score of 0.784. It also provides counterfactual explanations that can be achieved by changing 2 to 3 attributes on average. Resources available: https://github.com/ab9mamun/AIMEN., Comment: 17 pages, 8 figures
- Published
- 2024
48. Unravelling the multiscale surface mechanics of soft solids
- Author
-
Bain, Nicolas, Wilen, Lawrence A., Gerber, Dominic, Zu, Mengjie, Goodrich, Carl P., Duraivel, Senthilkumar, Varma, Kaarthik, Koganti, Harsha, Style, Robert W., and Dufresne, Eric R.
- Subjects
Condensed Matter - Soft Condensed Matter - Abstract
The softer a material is, the more its mechanics are sensitive to interfaces. In soft gels, an elastic polymeric network is filled with free-flowing molecules. In theory, either of these components could dominate the material interfacial properties. In practice, current measurements cannot distinguish between the two, nor can they rule out material inhomogeneities, which could modulate the apparent properties of the interfaces. Here, we introduce an experimental approach that elucidates the interfacial mechanics of soft solids. Coupling quantum dots, controlled deformations, and precise confocal measurements, we fully separate the material inhomogeneities of a silicone gel from its true interfacial properties. We quantify a gradient in bulk elastic properties near the surface, with a characteristic length scale of about 20 {\mu}m. In addition, we observe a surface excess elasticity, whose associated gradient is unresolvable with light microscopy. The composition of the external medium has a strong affect on the observed value of the surface elasticity. Thus, we conclude that the surface elasticity of this silicone network is an interfacial property., Comment: 8 pages, 3 figures
- Published
- 2024
49. Impact of chaotic magnetic field on mass-radius relation of rotating neutron stars
- Author
-
Pattersons, Muhammad Lawrence, Zen, Freddy Permana, and Hikmawan, Getbogi
- Subjects
Astrophysics - High Energy Astrophysical Phenomena ,General Relativity and Quantum Cosmology - Abstract
Observations reveal that magnetic fields on neutron stars (NSs) are in the range of $10^{8-15}$ G. Apart from being celestial bodies, NSs are normally rotating. In this work, we study the impact of a chaotic magnetic field on the mass-radius relation of the rotating NSs. We employ an equation of state of NSs with the nuclei in the crust and hyperons in the core. We use Hartle-Thorne formalism as an approximation of the rotating NSs. For the magnetic field ansatz, we use the one coupled to the energy density. We find that the magnetic field can decrease radius of NS. NSs formed with stronger chaotic magnetic fields exhibit a lower maximum mass compared to those formed with weaker chaotic magnetic fields. In contrast, the increment of the magnetic field can increase the compactness and deformation of rotating NSs., Comment: 7 pages, 4 figures, to be submitted as a conference paper
- Published
- 2024
50. More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing
- Author
-
Shaier, Sagi, Pereira, Francisco, von der Wense, Katharina, Hunter, Lawrence E, and Jones, Matt
- Subjects
Computer Science - Machine Learning - Abstract
The evolution of biological neural systems has led to both modularity and sparse coding, which enables efficiency in energy usage, and robustness across the diversity of tasks in the lifespan. In contrast, standard neural networks rely on dense, non-specialized architectures, where all model parameters are simultaneously updated to learn multiple tasks, leading to representation interference. Current sparse neural network approaches aim to alleviate this issue, but are often hindered by limitations such as 1) trainable gating functions that cause representation collapse; 2) non-overlapping experts that result in redundant computation and slow learning; and 3) reliance on explicit input or task IDs that impose significant constraints on flexibility and scalability. In this paper we propose Conditionally Overlapping Mixture of ExperTs (COMET), a general deep learning method that addresses these challenges by inducing a modular, sparse architecture with an exponential number of overlapping experts. COMET replaces the trainable gating function used in Sparse Mixture of Experts with a fixed, biologically inspired random projection applied to individual input representations. This design causes the degree of expert overlap to depend on input similarity, so that similar inputs tend to share more parameters. This facilitates positive knowledge transfer, resulting in faster learning and improved generalization. We demonstrate the effectiveness of COMET on a range of tasks, including image classification, language modeling, and regression, using several popular deep learning architectures.
- Published
- 2024
Catalog
Discovery Service for Jio Institute Digital Library
For full access to our library's resources, please sign in.