10,752,761 results for "Chen, Be"
Search Results
2. The Kennedy Krieger Curriculum: Equipping Frontline Clinicians to Improve Care for Children with Behavioral, Emotional, and Developmental Disorders
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
3. Cover
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
4. Step 3: Goals and Objectives: . . . focusing the curriculum
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
5. Dissemination: . . . making it count twice
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
6. Overview: A Six-Step Approach to Curriculum Development
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
7. Curriculum Development for Larger Programs
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
8. Step 2: Targeted Needs Assessment: . . . refining the foundation
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
9. Curriculum Maintenance and Enhancement: . . . keeping the curriculum vibrant
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
10. Step 4: Educational Strategies: . . . accomplishing educational objectives
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
11. Curricula That Address Community Needs and Health Equity
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
12. Topics in Interdisciplinary Medicine: High-Value Health Care
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
13. Step 5: Implementation: . . . making the curriculum a reality
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
14. Step 1: Problem Identification and General Needs Assessment: . . . building the foundation for meaningful objectives
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
15. Step 6: Evaluation and Feedback: . . . assessing the achievement of objectives and promoting continuous improvement
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
16. Title Page, Copyright, Dedication
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
17. Preface
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
18. Index
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
19. Appendix B. Curricular, Faculty Development, and Funding Resources
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
20. List of Contributors
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
21. Appendix A. Example Curricula
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
22. Neurology Graduate Training Program in Zambia
- Author: Chen, Belinda Y., Tackett, Sean A., Hughes, Mark T., Kern, David E., and Thomas, Patricia A.
- Published: 2022
23. A nomogram for predicting nutritional risk before gastric cancer surgery
- Author: Li, Changhua, Liu, Jinlu, Wang, Congjun, Luo, Yihuan, Qin, Lanhui, Chen, Peiyin, and Chen, Junqiang
- Published: 2024
24. The impact of tea consumption on the risk of depression: A Mendelian randomization and Bayesian weighting algorithm study
- Author: Zhuo, Guifeng, Chen, Wei, Zhang, Jinzhi, Su, Mingyang, Zhu, Xiaomin, Pu, Shanshan, Liao, Naibing, Huang, Deqing, Chen, Xiangyi, and Wu, Lin
- Published: 2024
25. Reliability and validity of five balance assessments battery in individuals with schizophrenia
- Author: Lin, I-Chen, Chen, Fu-Chen, Chen, Chia-Hsiang, and Chen, Ming-De
- Published: 2024
26. Breaking the Pre-Planning Barrier: Real-Time Adaptive Coordination of Mission and Charging UAVs Using Graph Reinforcement Learning
- Author: Hu, Yuhan, Sun, Yirong, Chen, Yanjun, and Chen, Xinghao
- Subjects: Computer Science - Multiagent Systems
- Abstract: Unmanned Aerial Vehicles (UAVs) are pivotal in applications such as search and rescue and environmental monitoring, excelling in intelligent perception tasks. However, their limited battery capacity hinders long-duration and long-distance missions. Charging UAVs (CUAVs) offer a potential solution by recharging mission UAVs (MUAVs), but existing methods rely on impractical pre-planned routes, failing to enable organic cooperation and limiting mission efficiency. We introduce a novel multi-agent deep reinforcement learning model named Heterogeneous Graph Attention Multi-agent Deep Deterministic Policy Gradient (HGAM), designed to dynamically coordinate MUAVs and CUAVs. This approach maximizes data collection, geographical fairness, and energy efficiency by allowing UAVs to adapt their routes in real time to current task demands and environmental conditions without pre-planning. Our model uses heterogeneous graph attention networks (GATs) to represent heterogeneous agents and facilitate efficient information exchange, and it operates within an actor-critic framework. Simulation results show that our model significantly improves cooperation among heterogeneous UAVs, outperforming existing methods on several metrics, including data collection rate and charging efficiency.
- Published: 2025
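The heterogeneous-attention idea summarized in the abstract above (type-specific projections of neighbor agents, then attention-weighted message aggregation) can be illustrated with a minimal numpy sketch. Everything here is hypothetical: the function name, the agent types, and the random projection matrices are illustrative, not HGAM's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def hetero_attention(query_feat, neigh_feats, neigh_types, W_by_type, Wq):
    """One attention step for a single agent over typed neighbors:
    each neighbor is projected by a type-specific matrix, then a scaled
    dot-product attention aggregates the messages. Illustrative sketch
    of the heterogeneous-GAT idea only, not the paper's model."""
    q = Wq @ query_feat
    keys = np.stack([W_by_type[t] @ f for t, f in zip(neigh_types, neigh_feats)])
    scores = keys @ q / np.sqrt(q.size)   # scaled dot-product scores
    alpha = softmax(scores)               # attention weights over neighbors
    return alpha @ keys                   # attention-weighted message

rng = np.random.default_rng(1)
d = 4
W_by_type = {"mission_uav": rng.normal(size=(d, d)),
             "charging_uav": rng.normal(size=(d, d))}
Wq = rng.normal(size=(d, d))
msg = hetero_attention(rng.normal(size=d),
                       [rng.normal(size=d) for _ in range(3)],
                       ["mission_uav", "charging_uav", "mission_uav"],
                       W_by_type, Wq)
# msg is a single d-dimensional aggregated message for the querying agent
```

The type-specific matrices are what make the graph "heterogeneous": mission and charging UAVs contribute messages through different learned projections before a shared attention mechanism mixes them.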
27. DeepFlow: Serverless Large Language Model Serving at Scale
- Author: Hu, Junhao, Xu, Jiang, He, Yulong, Chen, Yuetao, Dan, Gengyuan, Liu, Zhixia, Zhang, Baoquan, Wan, Shining, Dong, Zhiyu, Xu, Hao, Ren, Zhihao, Liu, Jiang, Meng, Jie, He, Chao, Xie, Tao, Lin, Dayun, Zhang, Qin, Yu, Yue, Feng, Hao, Chen, Xusheng, and Shan, Yizhou
- Subjects: Computer Science - Distributed, Parallel, and Cluster Computing
- Abstract: This paper introduces DeepFlow, a scalable and serverless AI platform designed to efficiently serve large language models (LLMs) at scale in cloud environments. DeepFlow addresses key challenges such as resource allocation, serving efficiency, and cold-start latencies through four main design components. First, it uses a simple serverless abstraction called the request-job-task model, which helps manage AI workloads across post-training and model-serving tasks. Second, it builds an in-house serving engine, FlowServe, using a microkernel-inspired design, NPU-centric execution, and SPMD-based parallelism to optimize LLM serving. The system also includes novel scheduling policies tailored for both PD-disaggregated and PD-colocated configurations. With optimizations like pre-warmed pods, DRAM pre-loading, and NPU-fork, DeepFlow can scale up to 64 instances in seconds. DeepFlow has been in production for over a year, operating on a large Ascend NPU cluster and providing industry-standard APIs for fine-tuning, agent serving, and model serving to our customers.
- Published: 2025
28. PAID: A Framework of Product-Centric Advertising Image Design
- Author: Chen, Hongyu, Zhou, Min, Jiang, Jing, Chen, Jiale, Lu, Yang, Xiao, Bo, Ge, Tiezheng, and Zheng, Bo
- Subjects: Computer Science - Computer Vision and Pattern Recognition
- Abstract: In E-commerce platforms, a full advertising image is composed of a background image and marketing taglines. Automatic ad image design reduces human costs and plays a crucial role. For the convenience of users, a novel automatic framework named Product-Centric Advertising Image Design (PAID) is proposed in this work. PAID takes the product foreground image, required taglines, and target size as input and creates an ad image automatically. PAID consists of four sequential stages: prompt generation, layout generation, background image generation, and graphics rendering. Different expert models are trained to conduct these sub-tasks. A visual language model (VLM) based prompt generation model is leveraged to produce a product-matching background prompt. The layout generation model jointly predicts text and image layout according to the background prompt, product, and taglines to achieve the best harmony. An SDXL-based layout-controlled inpainting model is trained to generate an aesthetic background image. Previous ad image design methods take a background image as input and then predict the layout of taglines, which limits the spatial layout due to fixed image content. Innovatively, our PAID adjusts the stages to produce an unrestricted layout. To complete the PAID framework, we created two high-quality datasets, PITA and PIL. Extensive experimental results show that PAID creates more visually pleasing advertising images than previous methods.
- Published: 2025
29. Humanity's Last Exam
- Author:
Phan, Long, Gatti, Alice, Han, Ziwen, Li, Nathaniel, Hu, Josephina, Zhang, Hugh, Shi, Sean, Choi, Michael, Agrawal, Anish, Chopra, Arnav, Khoja, Adam, Kim, Ryan, Hausenloy, Jason, Zhang, Oliver, Mazeika, Mantas, Anderson, Daron, Nguyen, Tung, Mahmood, Mobeen, Feng, Fiona, Feng, Steven Y., Zhao, Haoran, Yu, Michael, Gangal, Varun, Zou, Chelsea, Wang, Zihan, Wang, Jessica P., Kumar, Pawan, Pokutnyi, Oleksandr, Gerbicz, Robert, Popov, Serguei, Levin, John-Clark, Kazakov, Mstyslav, Schmitt, Johannes, Galgon, Geoff, Sanchez, Alvaro, Lee, Yongki, Yeadon, Will, Sauers, Scott, Roth, Marc, Agu, Chidozie, Riis, Søren, Giska, Fabian, Utpala, Saiteja, Giboney, Zachary, Goshu, Gashaw M., Xavier, Joan of Arc, Crowson, Sarah-Jane, Naiya, Mohinder Maheshbhai, Burns, Noah, Finke, Lennart, Cheng, Zerui, Park, Hyunwoo, Fournier-Facio, Francesco, Wydallis, John, Nandor, Mark, Singh, Ankit, Gehrunger, Tim, Cai, Jiaqi, McCarty, Ben, Duclosel, Darling, Nam, Jungbae, Zampese, Jennifer, Hoerr, Ryan G., Bacho, Aras, Loume, Gautier Abou, Galal, Abdallah, Cao, Hangrui, Garretson, Alexis C, Sileo, Damien, Ren, Qiuyu, Cojoc, Doru, Arkhipov, Pavel, Qazi, Usman, Li, Lianghui, Motwani, Sumeet, de Witt, Christian Schroeder, Taylor, Edwin, Veith, Johannes, Singer, Eric, Hartman, Taylor D., Rissone, Paolo, Jin, Jaehyeok, Shi, Jack Wei Lun, Willcocks, Chris G., Robinson, Joshua, Mikov, Aleksandar, Prabhu, Ameya, Tang, Longke, Alapont, Xavier, Uro, Justine Leon, Zhou, Kevin, Santos, Emily de Oliveira, Maksimov, Andrey Pupasov, Vendrow, Edward, Zenitani, Kengo, Guillod, Julien, Li, Yuqi, Vendrow, Joshua, Kuchkin, Vladyslav, Ze-An, Ng, Marion, Pierre, Efremov, Denis, Lynch, Jayson, Liang, Kaiqu, Gritsevskiy, Andrew, Martinez, Dakotah, Pageler, Ben, Crispino, Nick, Zvonkine, Dimitri, Fraga, Natanael Wildner, Soori, Saeed, Press, Ori, Tang, Henry, Salazar, Julian, Green, Sean R., Brüssel, Lina, Twayana, Moon, Dieuleveut, Aymeric, Rogers, T. 
Ryan, Zhang, Wenjin, Li, Bikun, Yang, Jinzhou, Rao, Arun, Loiseau, Gabriel, Kalinin, Mikhail, Lukas, Marco, Manolescu, Ciprian, Mishra, Subrata, Kamdoum, Ariel Ghislain Kemogne, Kreiman, Tobias, Hogg, Tad, Jin, Alvin, Bosio, Carlo, Sun, Gongbo, Coppola, Brian P, Tarver, Tim, Heidinger, Haline, Sayous, Rafael, Ivanov, Stefan, Cavanagh, Joseph M, Shen, Jiawei, Imperial, Joseph Marvin, Schwaller, Philippe, Senthilkuma, Shaipranesh, Bran, Andres M, Dehghan, Ali, Algaba, Andres, Verbeken, Brecht, Noever, David, P V, Ragavendran, Schut, Lisa, Sucholutsky, Ilia, Zheltonozhskii, Evgenii, Lim, Derek, Stanley, Richard, Sivarajan, Shankar, Yang, Tong, Maar, John, Wykowski, Julian, Oller, Martí, Sandlin, Jennifer, Sahu, Anmol, Hu, Yuzheng, Fish, Sara, Heydari, Nasser, Apronti, Archimedes, Rawal, Kaivalya, Vilchis, Tobias Garcia, Zu, Yuexuan, Lackner, Martin, Koppel, James, Nguyen, Jeremy, Antonenko, Daniil S., Chern, Steffi, Zhao, Bingchen, Arsene, Pierrot, Goldfarb, Alan, Ivanov, Sergey, Poświata, Rafał, Wang, Chenguang, Li, Daofeng, Crisostomi, Donato, Achilleos, Andrea, Myklebust, Benjamin, Sen, Archan, Perrella, David, Kaparov, Nurdin, Inlow, Mark H, Zang, Allen, Thornley, Elliott, Orel, Daniil, Poritski, Vladislav, Ben-David, Shalev, Berger, Zachary, Whitfill, Parker, Foster, Michael, Munro, Daniel, Ho, Linh, Hava, Dan Bar, Kuchkin, Aleksey, Lauff, Robert, Holmes, David, Sommerhage, Frank, Schneider, Keith, Kazibwe, Zakayo, Stambaugh, Nate, Singh, Mukhwinder, Magoulas, Ilias, Clarke, Don, Kim, Dae Hyun, Dias, Felipe Meneguitti, Elser, Veit, Agarwal, Kanu Priya, Vilchis, Victor Efren Guadarrama, Klose, Immo, Demian, Christoph, Anantheswaran, Ujjwala, Zweiger, Adam, Albani, Guglielmo, Li, Jeffery, Daans, Nicolas, Radionov, Maksim, Rozhoň, Václav, Ma, Ziqiao, Stump, Christian, Berkani, Mohammed, Platnick, Jacob, Nevirkovets, Volodymyr, Basler, Luke, Piccardo, Marco, Jeanplong, Ferenc, Cohen, Niv, Tkadlec, Josef, Rosu, Paul, Padlewski, Piotr, Barzowski, Stanislaw, Montgomery, 
Kyle, Menezes, Aline, Patel, Arkil, Wang, Zixuan, Tucker-Foltz, Jamie, Stade, Jack, Goertzen, Tom, Kazemi, Fereshteh, Milbauer, Jeremiah, Ambay, John Arnold, Shukla, Abhishek, Labrador, Yan Carlos Leyva, Givré, Alan, Wolff, Hew, Rossbach, Vivien, Aziz, Muhammad Fayez, Kaddar, Younesse, Chen, Yanxu, Zhang, Robin, Pan, Jiayi, Terpin, Antonio, Muennighoff, Niklas, Schoelkopf, Hailey, Zheng, Eric, Carmi, Avishy, Jones, Adam, Shah, Jainam, Brown, Ethan D. L., Zhu, Kelin, Bartolo, Max, Wheeler, Richard, Ho, Andrew, Barkan, Shaul, Wang, Jiaqi, Stehberger, Martin, Kretov, Egor, Sridhar, Kaustubh, EL-Wasif, Zienab, Zhang, Anji, Pyda, Daniel, Tam, Joanna, Cunningham, David M., Goryachev, Vladimir, Patramanis, Demosthenes, Krause, Michael, Redenti, Andrew, Bugas, Daniel, Aldous, David, Lai, Jesyin, Coleman, Shannon, Bahaloo, Mohsen, Xu, Jiangnan, Lee, Sangwon, Zhao, Sandy, Tang, Ning, Cohen, Michael K., Carroll, Micah, Paradise, Orr, Kirchner, Jan Hendrik, Steinerberger, Stefan, Ovchynnikov, Maksym, Matos, Jason O., Shenoy, Adithya, Junior, Benedito Alves de Oliveira, Wang, Michael, Nie, Yuzhou, Giordano, Paolo, Petersen, Philipp, Sztyber-Betley, Anna, Shukla, Priti, Crozier, Jonathan, Pinto, Antonella, Verma, Shreyas, Joshi, Prashant, Yong, Zheng-Xin, Tee, Allison, Andréoletti, Jérémy, Weller, Orion, Singhal, Raghav, Zhang, Gang, Ivanov, Alexander, Khoury, Seri, Mostaghimi, Hamid, Thaman, Kunvar, Chen, Qijia, Khánh, Tran Quoc, Loader, Jacob, Cavalleri, Stefano, Szlyk, Hannah, Brown, Zachary, Roberts, Jonathan, Alley, William, Sun, Kunyang, Stendall, Ryan, Lamparth, Max, Reuel, Anka, Wang, Ting, Xu, Hanmeng, Raparthi, Sreenivas Goud, Hernández-Cámara, Pablo, Martin, Freddie, Malishev, Dmitry, Preu, Thomas, Korbak, Tomek, Abramovitch, Marcus, Williamson, Dominic, Chen, Ziye, Bálint, Biró, Bari, M Saiful, Kassani, Peyman, Wang, Zihao, Ansarinejad, Behzad, Goswami, Laxman Prasad, Sun, Yewen, Elgnainy, Hossam, Tordera, Daniel, Balabanian, George, Anderson, Earth, Kvistad, Lynna, 
Moyano, Alejandro José, Maheshwari, Rajat, Sakor, Ahmad, Eron, Murat, McAlister, Isaac C., Gimenez, Javier, Enyekwe, Innocent, O., Andrew Favre D., Shah, Shailesh, Zhou, Xiaoxiang, Kamalov, Firuz, Clark, Ronald, Abdoli, Sherwin, Santens, Tim, Meer, Khalida, Wang, Harrison K, Ramakrishnan, Kalyan, Chen, Evan, Tomasiello, Alessandro, De Luca, G. Bruno, Looi, Shi-Zhuo, Le, Vinh-Kha, Kolt, Noam, Mündler, Niels, Semler, Avi, Rodman, Emma, Drori, Jacob, Fossum, Carl J, Jagota, Milind, Pradeep, Ronak, Fan, Honglu, Shah, Tej, Eicher, Jonathan, Chen, Michael, Thaman, Kushal, Merrill, William, Harris, Carter, Gross, Jason, Gusev, Ilya, Sharma, Asankhaya, Agnihotri, Shashank, Zhelnov, Pavel, Usawasutsakorn, Siranut, Mofayezi, Mohammadreza, Bogdanov, Sergei, Piperski, Alexander, Carauleanu, Marc, Zhang, David K., Ler, Dylan, Leventov, Roman, Soroko, Ignat, Jansen, Thorben, Lauer, Pascal, Duersch, Joshua, Taamazyan, Vage, Morak, Wiktor, Ma, Wenjie, Held, William, Huy, Tran Đuc, Xian, Ruicheng, Zebaze, Armel Randy, Mohamed, Mohanad, Leser, Julian Noah, Yuan, Michelle X, Yacar, Laila, Lengler, Johannes, Shahrtash, Hossein, Oliveira, Edson, Jackson, Joseph W., Gonzalez, Daniel Espinosa, Zou, Andy, Chidambaram, Muthu, Manik, Timothy, Haffenden, Hector, Stander, Dashiell, Dasouqi, Ali, Shen, Alexander, Duc, Emilien, Golshani, Bita, Stap, David, Uzhou, Mikalai, Zhidkovskaya, Alina Borisovna, Lewark, Lukas, Vincze, Mátyás, Wehr, Dustin, Tang, Colin, Hossain, Zaki, Phillips, Shaun, Muzhen, Jiang, Ekström, Fredrik, Hammon, Angela, Patel, Oam, Remy, Nicolas, Farhidi, Faraz, Medley, George, Mohammadzadeh, Forough, Peñaflor, Madellene, Kassahun, Haile, Friedrich, Alena, Sparrow, Claire, Sakal, Taom, Dhamane, Omkar, Mirabadi, Ali Khajegili, Hallman, Eric, Battaglia, Mike, Maghsoudimehrabani, Mohammad, Hoang, Hieu, Amit, Alon, Hulbert, Dave, Pereira, Roberto, Weber, Simon, Mensah, Stephen, Andre, Nathan, Peristyy, Anton, Harjadi, Chris, Gupta, Himanshu, Malina, Stephen, Albanie, Samuel, Cai, 
Will, Mehkary, Mustafa, Reidegeld, Frank, Dick, Anna-Katharina, Friday, Cary, Sidhu, Jasdeep, Kim, Wanyoung, Costa, Mariana, Gurdogan, Hubeyb, Weber, Brian, Kumar, Harsh, Jiang, Tong, Agarwal, Arunim, Ceconello, Chiara, Vaz, Warren S., Zhuang, Chao, Park, Haon, Tawfeek, Andrew R., Aggarwal, Daattavya, Kirchhof, Michael, Dai, Linjie, Kim, Evan, Ferret, Johan, Wang, Yuzhou, Yan, Minghao, Burdzy, Krzysztof, Zhang, Lixin, Franca, Antonio, Pham, Diana T., Loh, Kang Yong, Gul, Shreen, Chhablani, Gunjan, Du, Zhehang, Cosma, Adrian, White, Colin, Riblet, Robin, Saxena, Prajvi, Votava, Jacob, Vinnikov, Vladimir, Delaney, Ethan, Halasyamani, Shiv, Shahid, Syed M., Mourrat, Jean-Christophe, Vetoshkin, Lavr, Bacho, Renas, Ginis, Vincent, Maksapetyan, Aleksandr, de la Rosa, Florencia, Li, Xiuyu, Malod, Guillaume, Lang, Leon, Laurendeau, Julien, Adesanya, Fatimah, Portier, Julien, Hollom, Lawrence, Souza, Victor, Zhou, Yuchen Anna, Yalın, Yiğit, Obikoya, Gbenga Daniel, Arnaboldi, Luca, Rai, Bigi, Filippo, Bacho, Kaniuar, Clavier, Pierre, Recchia, Gabriel, Popescu, Mara, Shulga, Nikita, Tanwie, Ngefor Mildred, Lux, Thomas C. H., Rank, Ben, Ni, Colin, Yakimchyk, Alesia, Huanxu, Liu, Häggström, Olle, Verkama, Emil, Narayan, Himanshu, Gundlach, Hans, Brito-Santana, Leonor, Amaro, Brian, Vajipey, Vivek, Grover, Rynaa, Fan, Yiyang, Silva, Gabriel Poesia Reis e, Xin, Linwei, Kratish, Yosi, Łucki, Jakub, Li, Wen-Ding, Xu, Justin, Scaria, Kevin Joseph, Vargus, Freddie, Habibi, Farzad, Long, Lian, Rodolà, Emanuele, Robins, Jules, Cheng, Vincent, Grabb, Declan, Bosio, Ida, Fruhauff, Tony, Akov, Ido, Lo, Eve J. 
Y., Qi, Hao, Jiang, Xi, Segev, Ben, Fan, Jingxuan, Martinson, Sarah, Wang, Erik Y., Hausknecht, Kaylie, Brenner, Michael P., Mao, Mao, Jiang, Yibo, Zhang, Xinyu, Avagian, David, Scipio, Eshawn Jessica, Siddiqi, Muhammad Rehan, Ragoler, Alon, Tan, Justin, Patil, Deepakkumar, Plecnik, Rebeka, Kirtland, Aaron, Montecillo, Roselynn Grace, Durand, Stephane, Bodur, Omer Faruk, Adoul, Zahra, Zekry, Mohamed, Douville, Guillaume, Karakoc, Ali, Santos, Tania C. B., Shamseldeen, Samir, Karim, Loukmane, Liakhovitskaia, Anna, Resman, Nate, Farina, Nicholas, Gonzalez, Juan Carlos, Maayan, Gabe, Hoback, Sarah, Pena, Rodrigo De Oliveira, Sherman, Glen, Mariji, Hodjat, Pouriamanesh, Rasoul, Wu, Wentao, Demir, Gözdenur, Mendoza, Sandra, Alarab, Ismail, Cole, Joshua, Ferreira, Danyelle, Johnson, Bryan, Milliron, Hsiaoyun, Safdari, Mohammad, Dai, Liangti, Arthornthurasuk, Siriphan, Pronin, Alexey, Fan, Jing, Ramirez-Trinidad, Angel, Cartwright, Ashley, Pottmaier, Daphiny, Taheri, Omid, Outevsky, David, Stepanic, Stanley, Perry, Samuel, Askew, Luke, Rodríguez, Raúl Adrián Huerta, Dendane, Abdelkader, Ali, Sam, Lorena, Ricardo, Iyer, Krishnamurthy, Salauddin, Sk Md, Islam, Murat, Gonzalez, Juan, Ducey, Josh, Campbell, Russell, Somrak, Maja, Mavroudis, Vasilios, Vergo, Eric, Qin, Juehang, Borbás, Benjámin, Chu, Eric, Lindsey, Jack, Radhakrishnan, Anil, Jallon, Antoine, McInnis, I. M. J., Hoover, Alex, Möller, Sören, Bian, Song, Lai, John, Patwardhan, Tejal, Yue, Summer, Wang, Alexandr, and Hendrycks, Dan
- Subjects: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract: Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 3,000 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.
- Comment: 25 pages, 6 figures
- Published: 2025
30. Global Semantic-Guided Sub-image Feature Weight Allocation in High-Resolution Large Vision-Language Models
- Author: Liang, Yuxuan, Li, Xu, Chen, Xiaolei, Chen, Haotian, Zheng, Yi, Lai, Chenghang, Li, Bin, and Xue, Xiangyang
- Subjects: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
- Abstract: As the demand for high-resolution image processing in Large Vision-Language Models (LVLMs) grows, sub-image partitioning has become a popular approach for mitigating visual information loss associated with fixed-resolution processing. However, existing partitioning methods uniformly process sub-images, resulting in suboptimal image understanding. In this work, we reveal that the sub-images with higher semantic relevance to the entire image encapsulate richer visual information for preserving the model's visual understanding ability. Therefore, we propose the Global Semantic-guided Weight Allocator (GSWA) module, which dynamically allocates weights to sub-images based on their relative information density, emulating human visual attention mechanisms. This approach enables the model to focus on more informative regions, overcoming the limitations of uniform treatment. We integrate GSWA into the InternVL2-2B framework to create SleighVL, a lightweight yet high-performing model. Extensive experiments demonstrate that SleighVL outperforms models with comparable parameters and remains competitive with larger models. Our work provides a promising direction for more efficient and contextually aware high-resolution image processing in LVLMs, advancing multimodal system development.
- Comment: 10 pages, 10 figures and tables
- Published: 2025
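The core idea in the abstract above, weighting each sub-image by its semantic relevance to the whole image rather than treating all sub-images uniformly, can be sketched in a few lines of numpy. This is an illustrative reading only: the function name, the use of cosine similarity, and the softmax normalization are assumptions, not the paper's actual GSWA module.

```python
import numpy as np

def semantic_guided_weights(global_emb, sub_embs, temperature=1.0):
    """Weight sub-images by semantic relevance to the whole image:
    cosine similarity between each sub-image embedding and the global
    embedding, normalized with a softmax. Illustrative sketch only."""
    g = global_emb / np.linalg.norm(global_emb)
    s = sub_embs / np.linalg.norm(sub_embs, axis=1, keepdims=True)
    sims = s @ g                      # cosine similarity per sub-image
    logits = sims / temperature
    logits -= logits.max()            # numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Toy example: sub-image 0 nearly aligns with the global embedding,
# sub-image 1 is orthogonal to it.
global_emb = np.array([1.0, 0.0])
sub_embs = np.array([[0.9, 0.1],
                     [0.0, 1.0]])
w = semantic_guided_weights(global_emb, sub_embs)
# w[0] > w[1]: the semantically closer sub-image receives more weight
```

The resulting weights could then scale each sub-image's visual tokens before fusion, so regions that echo the global scene contribute more than uninformative crops.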
31. MambaQuant: Quantizing the Mamba Family with Variance Aligned Rotation Methods
- Author: Xu, Zukang, Yue, Yuxuan, Hu, Xing, Yuan, Zhihang, Jiang, Zixu, Chen, Zhixuan, Yu, Jiangyong, Xu, Chen, Zhou, Sifan, and Yang, Dawei
- Subjects: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract: Mamba is an efficient sequence model that rivals Transformers and demonstrates significant potential as a foundational architecture for various tasks. Quantization is commonly used in neural networks to reduce model size and computational latency. However, applying quantization to Mamba remains underexplored, and existing quantization methods, which have been effective for CNN and Transformer models, appear inadequate for Mamba models (e.g., Quarot suffers a 21% accuracy drop on Vim-T$^\dagger$ even under W8A8). We have pioneered the exploration of this issue and identified several key challenges. First, significant outliers are present in gate projections, output projections, and matrix multiplications. Second, Mamba's unique parallel scan further amplifies these outliers, leading to uneven and heavy-tailed data distributions. Third, even with the application of the Hadamard transform, the variance across channels in weights and activations still remains inconsistent. To these ends, we propose MambaQuant, a post-training quantization (PTQ) framework consisting of: 1) Karhunen-Loeve Transformation (KLT) enhanced rotation, rendering the rotation matrix adaptable to diverse channel distributions. 2) Smooth-Fused rotation, which equalizes channel variances and can merge additional parameters into model weights. Experiments show that MambaQuant can quantize both weights and activations into 8-bit with less than 1% accuracy loss for Mamba-based vision and language tasks. To the best of our knowledge, MambaQuant is the first comprehensive PTQ design for the Mamba family, paving the way for further advancements in its application.
- Published: 2025
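Why rotation helps the quantization problem described above can be demonstrated with a toy numpy experiment: a single outlier channel inflates the per-tensor quantization scale, while an orthogonal rotation spreads that outlier across all channels and shrinks the step size. Note the caveat: MambaQuant's KLT-enhanced and Smooth-Fused rotations are data-driven; this sketch uses a plain Hadamard matrix only to show the general mechanism.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an orthonormal n x n Hadamard matrix
    (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def quantize_dequantize(x, bits=8):
    """Symmetric per-tensor quantization followed by dequantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=256)
w[0] = 50.0                      # one heavy-tailed outlier dominates the scale

H = hadamard(256)
plain_err = np.mean((quantize_dequantize(w) - w) ** 2)
# Rotate, quantize, rotate back: the orthogonal transform spreads the
# outlier's energy across all channels, so the quantization step shrinks.
rot_err = np.mean((H.T @ quantize_dequantize(H @ w) - w) ** 2)
# rot_err comes out far smaller than plain_err
```

Because `H` is orthogonal, `H.T @ H @ w == w` exactly, so the only change is where the quantization error lands; the abstract's point is that a fixed Hadamard transform still leaves channel variances inconsistent, which is what the KLT-based rotation is meant to fix.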
32. Cross section measurement of $e^{+}e^{-} \to f_{1}(1285)\pi^{+}\pi^{-}$ at center-of-mass energies between $3.808$ and $4.951$ GeV
- Author:
BESIII Collaboration, Ablikim, M., Achasov, M. N., Adlarson, P., Afedulidis, O., Ai, X. C., Aliberti, R., Amoroso, A., An, Q., Bai, Y., Bakina, O., Balossino, I., Ban, Y., Bao, H. -R., Batozskaya, V., Begzsuren, K., Berger, N., Berlowski, M., Bertani, M., Bettoni, D., Bianchi, F., Bianco, E., Bortone, A., Boyko, I., Briere, R. A., Brueggemann, A., Cai, H., Cai, X., Calcaterra, A., Cao, G. F., Cao, N., Cetin, S. A., Chang, J. F., Che, G. R., Chelkov, G., Chen, C., Chen, C. H., Chen, Chao, Chen, G., Chen, H. S., Chen, H. Y., Chen, M. L., Chen, S. J., Chen, S. L., Chen, S. M., Chen, T., Chen, X. R., Chen, X. T., Chen, Y. B., Chen, Y. Q., Chen, Z. J., Chen, Z. Y., Choi, S. K., Cibinetto, G., Cossio, F., Cui, J. J., Dai, H. L., Dai, J. P., Dbeyssi, A., de Boer, R. E., Dedovich, D., Deng, C. Q., Deng, Z. Y., Denig, A., Denysenko, I., Destefanis, M., De Mori, F., Ding, B., Ding, X. X., Ding, Y., Dong, J., Dong, L. Y., Dong, M. Y., Dong, X., Du, M. C., Du, S. X., Duan, Y. Y., Duan, Z. H., Egorov, P., Fan, Y. H., Fang, J., Fang, S. S., Fang, W. X., Fang, Y., Fang, Y. Q., Farinelli, R., Fava, L., Feldbauer, F., Felici, G., Feng, C. Q., Feng, J. H., Feng, Y. T., Fritsch, M., Fu, C. D., Fu, J. L., Fu, Y. W., Gao, H., Gao, X. B., Gao, Y. N., Gao, Yang, Garbolino, S., Garzia, I., Ge, L., Ge, P. T., Ge, Z. W., Geng, C., Gersabeck, E. M., Gilman, A., Goetzen, K., Gong, L., Gong, W. X., Gradl, W., Gramigna, S., Greco, M., Gu, M. H., Gu, Y. T., Guan, C. Y., Guo, A. Q., Guo, L. B., Guo, M. J., Guo, R. P., Guo, Y. P., Guskov, A., Gutierrez, J., Han, K. L., Han, T. T., Hanisch, F., Hao, X. Q., Harris, F. A., He, K. K., He, K. L., Heinsius, F. H., Heinz, C. H., Heng, Y. K., Herold, C., Holtmann, T., Hong, P. C., Hou, G. Y., Hou, X. T., Hou, Y. R., Hou, Z. L., Hu, B. Y., Hu, H. M., Hu, J. F., Hu, S. L., Hu, T., Hu, Y., Huang, G. S., Huang, K. X., Huang, L. Q., Huang, X. T., Huang, Y. P., Huang, Y. S., Hussain, T., Hölzken, F., Hüsken, N., der Wiesche, N. 
in, Jackson, J., Janchiv, S., Jeong, J. H., Ji, Q., Ji, Q. P., Ji, W., Ji, X. B., Ji, X. L., Ji, Y. Y., Jia, X. Q., Jia, Z. K., Jiang, D., Jiang, H. B., Jiang, P. C., Jiang, S. S., Jiang, T. J., Jiang, X. S., Jiang, Y., Jiao, J. B., Jiao, J. K., Jiao, Z., Jin, S., Jin, Y., Jing, M. Q., Jing, X. M., Johansson, T., Kabana, S., Kalantar-Nayestanaki, N., Kang, X. L., Kang, X. S., Kavatsyuk, M., Ke, B. C., Khachatryan, V., Khoukaz, A., Kiuchi, R., Kolcu, O. B., Kopf, B., Kuessner, M., Kui, X., Kumar, N., Kupsc, A., Kühn, W., Lane, J. J., Lavezzi, L., Lei, T. T., Lei, Z. H., Lellmann, M., Lenz, T., Li, C., Li, C. H., Li, Cheng, Li, D. M., Li, F., Li, G., Li, H. B., Li, H. J., Li, H. N., Li, Hui, Li, J. R., Li, J. S., Li, K., Li, L. J., Li, L. K., Li, Lei, Li, M. H., Li, P. R., Li, Q. M., Li, Q. X., Li, R., Li, S. X., Li, T., Li, W. D., Li, W. G., Li, X., Li, X. H., Li, X. L., Li, X. Y., Li, X. Z., Li, Y. G., Li, Z. J., Li, Z. Y., Liang, C., Liang, H., Liang, Y. F., Liang, Y. T., Liao, G. R., Liao, Y. P., Libby, J., Limphirat, A., Lin, C. C., Lin, D. X., Lin, T., Liu, B. J., Liu, B. X., Liu, C., Liu, C. X., Liu, D., Liu, F., Liu, F. H., Liu, Feng, Liu, G. M., Liu, H., Liu, H. B., Liu, H. H., Liu, H. M., Liu, Huihui, Liu, J. B., Liu, J. Y., Liu, K., Liu, K. Y., Liu, Ke, Liu, L., Liu, L. C., Liu, Lu, Liu, M. H., Liu, P. L., Liu, Q., Liu, S. B., Liu, T., Liu, W. K., Liu, W. M., Liu, X., Liu, Y., Liu, Y. B., Liu, Z. A., Liu, Z. D., Liu, Z. Q., Lou, X. C., Lu, F. X., Lu, H. J., Lu, J. G., Lu, X. L., Lu, Y., Lu, Y. P., Lu, Z. H., Luo, C. L., Luo, J. R., Luo, M. X., Luo, T., Luo, X. L., Lyu, X. R., Lyu, Y. F., Ma, F. C., Ma, H., Ma, H. L., Ma, J. L., Ma, L. L., Ma, L. R., Ma, M. M., Ma, Q. M., Ma, R. Q., Ma, T., Ma, X. T., Ma, X. Y., Ma, Y., Ma, Y. M., Maas, F. E., Maggiora, M., Malde, S., Mao, Y. J., Mao, Z. P., Marcello, S., Meng, Z. X., Messchendorp, J. G., Mezzadri, G., Miao, H., Min, T. J., Mitchell, R. E., Mo, X. H., Moses, B., Muchnoi, N. 
Yu., Muskalla, J., Nefedov, Y., Nerling, F., Nie, L. S., Nikolaev, I. B., Ning, Z., Nisar, S., Niu, Q. L., Niu, W. D., Niu, Y., Olsen, S. L., Ouyang, Q., Pacetti, S., Pan, X., Pan, Y., Pathak, A., Pei, Y. P., Pelizaeus, M., Peng, H. P., Peng, Y. Y., Peters, K., Ping, J. L., Ping, R. G., Plura, S., Prasad, V., Qi, F. Z., Qi, H., Qi, H. R., Qi, M., Qi, T. Y., Qian, S., Qian, W. B., Qiao, C. F., Qiao, X. K., Qin, J. J., Qin, L. Q., Qin, L. Y., Qin, X. P., Qin, X. S., Qin, Z. H., Qiu, J. F., Qu, Z. H., Redmer, C. F., Ren, K. J., Rivetti, A., Rolo, M., Rong, G., Rosner, Ch., Ruan, S. N., Salone, N., Sarantsev, A., Schelhaas, Y., Schoenning, K., Scodeggio, M., Shan, K. Y., Shan, W., Shan, X. Y., Shang, Z. J., Shangguan, J. F., Shao, L. G., Shao, M., Shen, C. P., Shen, H. F., Shen, W. H., Shen, X. Y., Shi, B. A., Shi, H., Shi, H. C., Shi, J. L., Shi, J. Y., Shi, Q. Q., Shi, S. Y., Shi, X., Song, J. J., Song, T. Z., Song, W. M., Song, Y. J., Song, Y. X., Sosio, S., Spataro, S., Stieler, F., Su, Y. J., Sun, G. B., Sun, G. X., Sun, H., Sun, H. K., Sun, J. F., Sun, K., Sun, L., Sun, S. S., Sun, T., Sun, W. Y., Sun, Y., Sun, Y. J., Sun, Y. Z., Sun, Z. Q., Sun, Z. T., Tang, C. J., Tang, G. Y., Tang, J., Tang, M., Tang, Y. A., Tao, L. Y., Tao, Q. T., Tat, M., Teng, J. X., Thoren, V., Tian, W. H., Tian, Y., Tian, Z. F., Uman, I., Wan, Y., Wang, S. J., Wang, B., Wang, B. L., Wang, Bo, Wang, D. Y., Wang, F., Wang, H. J., Wang, J. J., Wang, J. P., Wang, K., Wang, L. L., Wang, M., Wang, N. Y., Wang, S., Wang, T., Wang, T. J., Wang, W., Wang, W. P., Wang, X., Wang, X. F., Wang, X. J., Wang, X. L., Wang, X. N., Wang, Y., Wang, Y. D., Wang, Y. F., Wang, Y. L., Wang, Y. N., Wang, Y. Q., Wang, Yaqian, Wang, Yi, Wang, Z., Wang, Z. L., Wang, Z. Y., Wang, Ziyi, Wei, D. H., Weidner, F., Wen, S. P., Wen, Y. R., Wiedner, U., Wilkinson, G., Wolke, M., Wollenberg, L., Wu, C., Wu, J. F., Wu, L. H., Wu, L. J., Wu, X., Wu, X. H., Wu, Y., Wu, Y. H., Wu, Y. J., Wu, Z., Xia, L., Xian, X. M., Xiang, B. 
H., Xiang, T., Xiao, D., Xiao, G. Y., Xiao, S. Y., Xiao, Y. L., Xiao, Z. J., Xie, C., Xie, X. H., Xie, Y., Xie, Y. G., Xie, Y. H., Xie, Z. P., Xing, T. Y., Xu, C. F., Xu, C. J., Xu, G. F., Xu, H. Y., Xu, M., Xu, Q. J., Xu, Q. N., Xu, W., Xu, W. L., Xu, X. P., Xu, Y. C., Xu, Z. S., Yan, F., Yan, L., Yan, W. B., Yan, W. C., Yan, X. Q., Yang, H. J., Yang, H. L., Yang, H. X., Yang, T., Yang, Y., Yang, Y. F., Yang, Y. X., Yang, Z. W., Yao, Z. P., Ye, M., Ye, M. H., Yin, J. H., Yin, Junhao, You, Z. Y., Yu, B. X., Yu, C. X., Yu, G., Yu, J. S., Yu, T., Yu, X. D., Yu, Y. C., Yuan, C. Z., Yuan, J., Yuan, L., Yuan, S. C., Yuan, Y., Yuan, Z. Y., Yue, C. X., Zafar, A. A., Zeng, F. R., Zeng, S. H., Zeng, X., Zeng, Y., Zeng, Y. J., Zhai, X. Y., Zhai, Y. C., Zhan, Y. H., Zhang, A. Q., Zhang, B. L., Zhang, B. X., Zhang, D. H., Zhang, G. Y., Zhang, H., Zhang, H. C., Zhang, H. H., Zhang, H. Q., Zhang, H. R., Zhang, H. Y., Zhang, J., Zhang, J. J., Zhang, J. L., Zhang, J. Q., Zhang, J. S., Zhang, J. W., Zhang, J. X., Zhang, J. Y., Zhang, J. Z., Zhang, Jianyu, Zhang, L. M., Zhang, Lei, Zhang, P., Zhang, Q. Y., Zhang, R. Y., Zhang, S. H., Zhang, Shulei, Zhang, X. D., Zhang, X. M., Zhang, X. Y., Zhang, Y., Zhang, Y. T., Zhang, Y. H., Zhang, Y. M., Zhang, Yan, Zhang, Z. D., Zhang, Z. H., Zhang, Z. L., Zhang, Z. Y., Zhang, Z. Z., Zhao, G., Zhao, J. Y., Zhao, J. Z., Zhao, L., Zhao, Lei, Zhao, M. G., Zhao, N., Zhao, R. P., Zhao, S. J., Zhao, Y. B., Zhao, Y. X., Zhao, Z. G., Zhemchugov, A., Zheng, B., Zheng, B. M., Zheng, J. P., Zheng, W. J., Zheng, Y. H., Zhong, B., Zhong, X., Zhou, H., Zhou, J. Y., Zhou, L. P., Zhou, S., Zhou, X., Zhou, X. K., Zhou, X. R., Zhou, X. Y., Zhou, Y. Z., Zhu, A. N., Zhu, J., Zhu, K., Zhu, K. J., Zhu, K. S., Zhu, L., Zhu, L. X., Zhu, S. H., Zhu, T. J., Zhu, W. D., Zhu, Y. C., Zhu, Z. A., Zou, J. H., and Zu, J.
- Subjects
High Energy Physics - Experiment - Abstract
Using data samples collected by the \mbox{BESIII} detector located at the Beijing Electron Positron Collider, the cross sections of the process $e^+e^-\to f_{1}(1285)\pi^+\pi^-$ are measured at forty-five center-of-mass energies from $3.808$ to $4.951~\mathrm{GeV}$. An investigation of the cross-section line shape is performed, and no significant structure is observed.
- Published
- 2025
33. Wafer-scale Integration of Single-Crystalline MoS$_2$ for Flexible Electronics Enabled by Oxide Dry-transfer
- Author
-
Xu, Xiang, Chen, Yitong, Shen, Jichuang, Huang, Qi, Jiang, Tong, Chen, Han, Zhu, Huaze, Ma, Yaqing, Wang, Hao, Li, Wenhao, Ji, Chen, Li, Dingwei, Zhang, Siyu, Wang, Yan, Zhu, Bowen, and Kong, Wei
- Subjects
Physics - Applied Physics ,Condensed Matter - Materials Science - Abstract
Atomically thin, single-crystalline transition metal dichalcogenides (TMDCs) grown via chemical vapor deposition (CVD) on sapphire substrates exhibit exceptional mechanical and electrical properties, positioning them as excellent channel materials for flexible electronics. However, conventional wet-transfer processes for integrating these materials onto flexible substrates often introduce surface contamination, significantly degrading device performance. Here, we present a wafer-scale dry-transfer technique using a high-dielectric oxide as the transfer medium, enabling the integration of 4-inch single-crystalline MoS$_2$ onto flexible substrates. This method eliminates contact with polymers or solvents, thus preserving the intrinsic electronic properties of MoS$_2$. As a result, the fabricated flexible field-effect transistor (FET) arrays exhibit remarkable performance, with a mobility of 117 cm$^2$/Vs, a subthreshold swing of 68.8 mV dec$^{-1}$, and an ultra-high current on/off ratio of $10^{12}$, values comparable to those achieved on rigid substrates. Leveraging these outstanding electrical characteristics, we demonstrated MoS$_2$-based flexible inverters operating in the subthreshold regime, achieving both a high gain of 218 and ultra-low power consumption of 1.4 pW/$\mu$m. Additionally, we integrated a flexible tactile sensing system driven by active-matrix MoS$_2$ FET arrays onto a robotic gripper, enabling real-time object identification. These findings demonstrate the simultaneous achievement of high electrical performance and flexibility, highlighting the immense potential of single-crystalline TMDC-based flexible electronics for real-world applications.
- Published
- 2025
34. The Breeze 2 Herd of Models: Traditional Chinese LLMs Based on Llama with Vision-Aware and Function-Calling Capabilities
- Author
-
Hsu, Chan-Jan, Liu, Chia-Sheng, Chen, Meng-Hsi, Chen, Muxi, Hsu, Po-Chun, Chen, Yi-Chang, and Shiu, Da-Shan
- Subjects
Computer Science - Computation and Language - Abstract
Breeze 2 is a suite of advanced multi-modal language models, available in 3B and 8B parameter configurations, specifically designed to enhance Traditional Chinese language representation. Building upon Llama 3, Breeze 2 continues pretraining on an extensive corpus to enhance its representation of the linguistic and cultural heritage of Traditional Chinese. It incorporates vision-aware capabilities through a visual encoder and a bridge module, and supports function-calling via prompt templates and post-training on function-calling data. The effectiveness of Breeze 2 is benchmarked across various tasks, including Taiwan general knowledge, instruction-following, long context, function calling, and vision understanding. Furthermore, we showcase the capabilities of its 3B model in a mobile application. We are publicly releasing all Breeze 2 models under the Llama 3 Community License.
- Published
- 2025
35. On the Gauge Invariance of Secondary Gravitational Waves
- Author
-
Yuan, Chen, Lu, Yizhou, Chen, Zu-Cheng, and Liu, Lang
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics ,General Relativity and Quantum Cosmology - Abstract
Second-order tensor perturbations induced by primordial fluctuations play a crucial role in probing small-scale physics, but gauge dependence of their energy density has remained a fundamental challenge in cosmological perturbation theory. We address this issue by introducing a boundary condition-based filtering method that extracts physical radiation through the Sommerfeld criterion. We demonstrate that after filtering non-physical modes, the energy density of secondary gravitational waves becomes gauge-invariant and exhibits physically consistent behavior in the sub-horizon limit. This approach provides a unified framework for both adiabatic and isocurvature perturbations, enhancing theoretical predictions and observational signatures of early universe physics., Comment: 8 pages, comments are welcome!
- Published
- 2025
36. Characterization and Optimization of Tunable Couplers via Adiabatic Control in Superconducting Circuits
- Author
-
Zhang, Xuan, Zhang, Xu, Chen, Changling, Tang, Kai, Yi, Kangyuan, Luo, Kai, Xie, Zheshu, Chen, Yuanzhen, and Yan, Tongxing
- Subjects
Quantum Physics - Abstract
In the pursuit of scalable superconducting quantum computing, tunable couplers have emerged as a pivotal component, offering the flexibility required for high-performance complex quantum operations. In most current architectures of superconducting quantum chips, such couplers are not equipped with dedicated readout circuits, to reduce complexity in both design and operation. However, this strategy poses challenges in precise characterization, calibration, and control of the couplers. In this work, we develop a hardware-efficient and robust technique based on adiabatic control to address this issue. The critical ingredient of this technique is an adiabatic swap (aSWAP) operation between a tunable coupler and nearby qubits. Using this technique, we have characterized and calibrated tunable couplers in our chips and achieved straightforward and precise control over these couplers. For example, we have demonstrated the calibration and correction of the flux distortion of couplers. In addition, we have also extended this technique to tune the dispersive shift between a frequency-fixed qubit and its readout resonator over a wide range.
- Published
- 2025
37. Sigma: Differential Rescaling of Query, Key and Value for Efficient Language Models
- Author
-
Lin, Zhenghao, Tang, Zihao, Liu, Xiao, Gong, Yeyun, Cheng, Yi, Chen, Qi, Li, Hang, Xin, Ying, Yang, Ziyue, Yang, Kailai, Yan, Yu, Liang, Xiao, Lu, Shuai, Huang, Yiming, Luo, Zheheng, Qu, Lei, Feng, Xuan, Wang, Yaoxiang, Xia, Yuqing, Chen, Feiyang, Jiang, Yuting, Hu, Yasen, Ni, Hao, Li, Binyang, Zhao, Guoshuai, Chiang, Jui-Hao, Guo, Zhongxin, Lin, Chen, Kuang, Kun, Li, Wenjie, Shen, Yelong, Jiao, Jian, Cheng, Peng, and Yang, Mao
- Subjects
Computer Science - Computation and Language - Abstract
We introduce Sigma, an efficient large language model specialized for the system domain, empowered by a novel architecture including DiffQKV attention, and pre-trained on our meticulously collected system domain data. DiffQKV attention significantly enhances the inference efficiency of Sigma by optimizing the Query (Q), Key (K), and Value (V) components in the attention mechanism differentially, based on their varying impacts on the model performance and efficiency indicators. Specifically, we (1) conduct extensive experiments that demonstrate the model's varying sensitivity to the compression of K and V components, leading to the development of differentially compressed KV, and (2) propose augmented Q to expand the Q head dimension, which enhances the model's representation capacity with minimal impact on the inference speed. Rigorous theoretical and empirical analyses reveal that DiffQKV attention significantly enhances efficiency, achieving up to a 33.36% improvement in inference speed over the conventional grouped-query attention (GQA) in long-context scenarios. We pre-train Sigma on 6T tokens from various sources, including 19.5B tokens of system domain data that we carefully collect and 1T tokens of synthesized and rewritten data. In general domains, Sigma achieves performance comparable to other state-of-the-art models. In the system domain, we introduce the first comprehensive benchmark, AIMicius, where Sigma demonstrates remarkable performance across all tasks, significantly outperforming GPT-4 with an absolute improvement of up to 52.5%.
- Published
- 2025
38. Bridging The Multi-Modality Gaps of Audio, Visual and Linguistic for Speech Enhancement
- Author
-
Lin, Meng-Ping, Hou, Jen-Cheng, Chen, Chia-Wei, Chien, Shao-Yi, Chen, Jun-Cheng, Lu, Xugang, and Tsao, Yu
- Subjects
Computer Science - Sound ,Computer Science - Machine Learning ,Computer Science - Multimedia ,Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
Speech Enhancement (SE) aims to improve the quality of noisy speech. It has been shown that additional visual cues can further improve performance. Given that speech communication involves audio, visual, and linguistic modalities, it is natural to expect another performance boost from incorporating linguistic information. However, bridging the modality gaps to efficiently incorporate linguistic information, along with audio and visual modalities, during knowledge transfer is a challenging task. In this paper, we propose a novel multi-modality learning framework for SE. In the model framework, a state-of-the-art diffusion model backbone is utilized for Audio-Visual Speech Enhancement (AVSE) modeling, where both audio and visual information are directly captured by microphones and video cameras. Based on this AVSE model, the linguistic modality employs a pre-trained language model (PLM) to transfer linguistic knowledge to the visual-acoustic modality through a process termed Cross-Modal Knowledge Transfer (CMKT) during AVSE model training. Once the model is trained, the linguistic knowledge is encoded into the feature processing of the AVSE model by the CMKT, and the PLM is not involved during the inference stage. We carry out SE experiments to evaluate the proposed model framework. Experimental results demonstrate that our proposed AVSE system significantly enhances speech quality and reduces generative artifacts, such as phonetic confusion, compared to the state-of-the-art. Moreover, our visualization results demonstrate that our Cross-Modal Knowledge Transfer method further improves the generated speech quality of our AVSE system. These findings not only suggest that diffusion-model-based techniques hold promise for advancing the state-of-the-art in AVSE but also justify the effectiveness of incorporating linguistic information to improve the performance of diffusion-based AVSE systems.
- Published
- 2025
39. Accelerating Discovery of Solid-State Thin-Film Metal Dealloying for 3D Nanoarchitecture Materials Design through Laser Thermal Gradient Treatment
- Author
-
Chung, Cheng-Chu, Li, Ruipeng, Veith, Gabriel M., Zhang, Honghu, Camino, Fernando, Lu, Ming, Tiwale, Nikhil, Zhang, Sheng, Yager, Kevin, and Chen-Wiegart, Yu-chen Karen
- Subjects
Condensed Matter - Materials Science - Abstract
Thin-film solid-state metal dealloying (thin-film SSMD) is a promising method for fabricating nanostructures with controlled morphology and efficiency, offering advantages over conventional bulk materials processing methods for integration into practical applications. Although machine learning (ML) has facilitated the design of dealloying systems, the key thermal treatment parameters for nanostructure formation remain largely unknown and must be determined by experimental trial and error. To overcome this challenge, a workflow is needed that enables high-throughput characterization of thermal treatment parameters while probing the local nanostructures of thin-film samples. In this work, a laser-based thermal treatment is demonstrated to create temperature gradients on single thin-film samples of Nb-Al/Sc and Nb-Al/Cu. This continuous thermal space enables observation of dealloying transitions and the resulting nanostructures of interest. Through synchrotron X-ray multimodal and high-throughput characterization, critical transitions and nanostructures can be rapidly captured and subsequently verified using electron microscopy. The key temperatures driving chemical reactions and morphological evolutions are clearly identified within this framework. While the oxidation process may contribute to nanostructure formation during thin-film treatment, the dealloying process at the dealloying front involves interactions solely between the dealloying elements, highlighting the availability and viability of the selected systems. This approach enables efficient exploration of the dealloying process and validation of ML predictions, thereby accelerating the discovery of thin-film SSMD systems with targeted nanostructures., Comment: The main content contains 6 figures within 25 pages. The supporting information includes 5 figures within 5 pages
- Published
- 2025
40. UniRestore: Unified Perceptual and Task-Oriented Image Restoration Model Using Diffusion Prior
- Author
-
Chen, I-Hsiang, Chen, Wei-Ting, Liu, Yu-Wei, Chiang, Yuan-Chun, Kuo, Sy-Yen, and Yang, Ming-Hsuan
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Machine Learning - Abstract
Image restoration aims to recover content from inputs degraded by various factors, such as adverse weather, blur, and noise. Perceptual Image Restoration (PIR) methods improve visual quality but often do not support downstream tasks effectively. On the other hand, Task-oriented Image Restoration (TIR) methods focus on enhancing image utility for high-level vision tasks, sometimes compromising visual quality. This paper introduces UniRestore, a unified image restoration model that bridges the gap between PIR and TIR by using a diffusion prior. The diffusion prior is designed to generate images that align with human visual quality preferences, but these images are often unsuitable for TIR scenarios. To address this limitation, UniRestore utilizes encoder features from an autoencoder to adapt the diffusion prior to specific tasks. We propose a Complementary Feature Restoration Module (CFRM) to reconstruct degraded encoder features and a Task Feature Adapter (TFA) module to facilitate adaptive feature fusion in the decoder. This design allows UniRestore to optimize images for both human perception and downstream task requirements, addressing discrepancies between visual quality and functional needs. Integrating these modules also enhances UniRestore's adaptability and efficiency across diverse tasks. Extensive experiments demonstrate the superior performance of UniRestore in both PIR and TIR scenarios., Comment: 11 pages, 6 figures
- Published
- 2025
41. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
- Author
-
DeepSeek-AI, Guo, Daya, Yang, Dejian, Zhang, Haowei, Song, Junxiao, Zhang, Ruoyu, Xu, Runxin, Zhu, Qihao, Ma, Shirong, Wang, Peiyi, Bi, Xiao, Zhang, Xiaokang, Yu, Xingkai, Wu, Yu, Wu, Z. F., Gou, Zhibin, Shao, Zhihong, Li, Zhuoshu, Gao, Ziyi, Liu, Aixin, Xue, Bing, Wang, Bingxuan, Wu, Bochao, Feng, Bei, Lu, Chengda, Zhao, Chenggang, Deng, Chengqi, Zhang, Chenyu, Ruan, Chong, Dai, Damai, Chen, Deli, Ji, Dongjie, Li, Erhang, Lin, Fangyun, Dai, Fucong, Luo, Fuli, Hao, Guangbo, Chen, Guanting, Li, Guowei, Zhang, H., Bao, Han, Xu, Hanwei, Wang, Haocheng, Ding, Honghui, Xin, Huajian, Gao, Huazuo, Qu, Hui, Li, Hui, Guo, Jianzhong, Li, Jiashi, Wang, Jiawei, Chen, Jingchang, Yuan, Jingyang, Qiu, Junjie, Li, Junlong, Cai, J. L., Ni, Jiaqi, Liang, Jian, Chen, Jin, Dong, Kai, Hu, Kai, Gao, Kaige, Guan, Kang, Huang, Kexin, Yu, Kuai, Wang, Lean, Zhang, Lecong, Zhao, Liang, Wang, Litong, Zhang, Liyue, Xu, Lei, Xia, Leyi, Zhang, Mingchuan, Zhang, Minghua, Tang, Minghui, Li, Meng, Wang, Miaojun, Li, Mingming, Tian, Ning, Huang, Panpan, Zhang, Peng, Wang, Qiancheng, Chen, Qinyu, Du, Qiushi, Ge, Ruiqi, Zhang, Ruisong, Pan, Ruizhe, Wang, Runji, Chen, R. J., Jin, R. L., Chen, Ruyi, Lu, Shanghao, Zhou, Shangyan, Chen, Shanhuang, Ye, Shengfeng, Wang, Shiyu, Yu, Shuiping, Zhou, Shunfeng, Pan, Shuting, Li, S. S., Zhou, Shuang, Wu, Shaoqing, Yun, Tao, Pei, Tian, Sun, Tianyu, Wang, T., Zeng, Wangding, Zhao, Wanjia, Liu, Wen, Liang, Wenfeng, Gao, Wenjun, Yu, Wenqin, Zhang, Wentao, Xiao, W. L., An, Wei, Liu, Xiaodong, Wang, Xiaohan, Chen, Xiaokang, Nie, Xiaotao, Cheng, Xin, Liu, Xin, Xie, Xin, Liu, Xingchao, Yang, Xinyu, Li, Xinyuan, Su, Xuecheng, Lin, Xuheng, Li, X. Q., Jin, Xiangyue, Shen, Xiaojin, Chen, Xiaosha, Sun, Xiaowen, Wang, Xiaoxiang, Song, Xinnan, Zhou, Xinyi, Wang, Xianzu, Shan, Xinxia, Li, Y. K., Wang, Y. Q., Wei, Y. 
X., Zhang, Yang, Xu, Yanhong, Li, Yao, Zhao, Yao, Sun, Yaofeng, Wang, Yaohui, Yu, Yi, Zhang, Yichao, Shi, Yifan, Xiong, Yiliang, He, Ying, Piao, Yishi, Wang, Yisong, Tan, Yixuan, Ma, Yiyang, Liu, Yiyuan, Guo, Yongqiang, Ou, Yuan, Wang, Yuduan, Gong, Yue, Zou, Yuheng, He, Yujia, Xiong, Yunfan, Luo, Yuxiang, You, Yuxiang, Liu, Yuxuan, Zhou, Yuyang, Zhu, Y. X., Huang, Yanping, Li, Yaohui, Zheng, Yi, Zhu, Yuchen, Ma, Yunxian, Tang, Ying, Zha, Yukun, Yan, Yuting, Ren, Z. Z., Ren, Zehui, Sha, Zhangli, Fu, Zhe, Xu, Zhean, Xie, Zhenda, Zhang, Zhengyan, Hao, Zhewen, Ma, Zhicheng, Yan, Zhigang, Wu, Zhiyu, Gu, Zihui, Zhu, Zijia, Liu, Zijun, Li, Zilin, Xie, Ziwei, Song, Ziyang, Pan, Zizheng, Huang, Zhen, Xu, Zhipeng, Zhang, Zhongyu, and Zhang, Zhen
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.
- Published
- 2025
42. Observation of the $\Lambda_b^0 \to J/\psi \Xi^- K^+$ and $\Xi_b^0 \to J/\psi \Xi^- \pi^+$ decays
- Author
-
LHCb collaboration, Aaij, R., Abdelmotteleb, A. S. W., Beteta, C. Abellan, Abudinén, F., Ackernley, T., Adefisoye, A. A., Adeva, B., Adinolfi, M., Adlarson, P., Agapopoulou, C., Aidala, C. A., Ajaltouni, Z., Akar, S., Akiba, K., Albicocco, P., Albrecht, J., Alessio, F., Alexander, M., Aliouche, Z., Cartelle, P. Alvarez, Amalric, R., Amato, S., Amey, J. L., Amhis, Y., An, L., Anderlini, L., Andersson, M., Andreianov, A., Andreola, P., Andreotti, M., Andreou, D., Anelli, A., Ao, D., Archilli, F., Argenton, M., Cuendis, S. Arguedas, Artamonov, A., Artuso, M., Aslanides, E., Da Silva, R. Ataíde, Atzeni, M., Audurier, B., Bacher, D., Perea, I. Bachiller, Bachmann, S., Bachmayer, M., Back, J. J., Rodriguez, P. Baladron, Balagura, V., Balboni, A., Baldini, W., Balzani, L., Bao, H., Leite, J. Baptista de Souza, Pretel, C. Barbero, Barbetti, M., Barbosa, I. R., Barlow, R. J., Barnyakov, M., Barsuk, S., Barter, W., Bartz, J., Basels, J. M., Bashir, S., Bassi, G., Batsukh, B., Battista, P. B., Bay, A., Beck, A., Becker, M., Bedeschi, F., Bediaga, I. B., Behling, N. A., Belin, S., Belous, K., Belov, I., Belyaev, I., Benane, G., Bencivenni, G., Ben-Haim, E., Berezhnoy, A., Bernet, R., Andres, S. Bernet, Bertolin, A., Betancourt, C., Betti, F., Bex, J., Bezshyiko, Ia., Bhom, J., Bieker, M. S., Biesuz, N. V., Billoir, P., Biolchini, A., Birch, M., Bishop, F. C. R., Bitadze, A., Bizzeti, A., Blake, T., Blanc, F., Blank, J. E., Blusk, S., Bocharnikov, V., Boelhauve, J. A., Garcia, O. Boente, Boettcher, T., Bohare, A., Boldyrev, A., Bolognani, C. S., Bolzonella, R., Bonacci, R. B., Bondar, N., Bordelius, A., Borgato, F., Borghi, S., Borsato, M., Borsuk, J. T., Bottalico, E., Bouchiba, S. A., Bovill, M., Bowcock, T. J. V., Boyer, A., Bozzi, C., Brandenburg, J. D., Rodriguez, A. Brea, Breer, N., Brodzicka, J., Gonzalo, A. Brossa, Brown, J., Brundu, D., Buchanan, E., Buonincontri, L., Marcos, M. Burgos, Burke, A. T., Burr, C., Butter, J. 
S., Buytaert, J., Byczynski, W., Cadeddu, S., Cai, H., Caillet, A., Calabrese, R., Ramirez, S. Calderon, Calefice, L., Cali, S., Calvi, M., Gomez, M. Calvo, Magalhaes, P. Camargo, Bouzas, J. I. Cambon, Campana, P., Perez, D. H. Campora, Quezada, A. F. Campoverde, Capelli, S., Capriotti, L., Caravaca-Mora, R., Carbone, A., Salgado, L. Carcedo, Cardinale, R., Cardini, A., Carniti, P., Carus, L., Vidal, A. Casais, Caspary, R., Casse, G., Cattaneo, M., Cavallero, G., Cavallini, V., Celani, S., Cesare, S., Chadwick, A. J., Chahrour, I., Charles, M., Charpentier, Ph., Chatzianagnostou, E., Chefdeville, M., Chen, C., Chen, S., Chen, Z., Chernov, A., Chernyshenko, S., Chiotopoulos, X., Chobanova, V., Chrzaszcz, M., Chubykin, A., Chulikov, V., Ciambrone, P., Vidal, X. Cid, Ciezarek, G., Cifra, P., Clarke, P. E. L., Clemencic, M., Cliff, H. V., Closier, J., Toapaxi, C. Cocha, Coco, V., Cogan, J., Cogneras, E., Cojocariu, L., Collaviti, S., Collins, P., Colombo, T., Colonna, M., Comerma-Montells, A., Congedo, L., Contu, A., Cooke, N., Corredoira, I., Correia, A., Corti, G., Meldrum, J. Cottee, Couturier, B., Craik, D. C., Torres, M. Cruz, Rivera, E. Curras, Currie, R., Da Silva, C. L., Dadabaev, S., Dai, L., Dai, X., Dall'Occo, E., Dalseno, J., D'Ambrosio, C., Daniel, J., Danilina, A., d'Argent, P., Darze, G., Davidson, A., Davies, J. E., Francisco, O. De Aguiar, De Angelis, C., De Benedetti, F., de Boer, J., De Bruyn, K., De Capua, S., De Cian, M., Da Graca, U. De Freitas Carneiro, De Lucia, E., De Miranda, J. M., De Paula, L., De Serio, M., De Simone, P., De Vellis, F., de Vries, J. A., Debernardis, F., Decamp, D., Dedu, V., Dekkers, S., Del Buono, L., Delaney, B., Dembinski, H. -P., Deng, J., Denysenko, V., Deschamps, O., Dettori, F., Dey, B., Di Nezza, P., Diachkov, I., Didenko, S., Ding, S., Dittmann, L., Dobishuk, V., Docheva, A. D., Dong, C., Donohoe, A. M., Dordei, F., Reis, A. C. dos, Dowling, A. D., Duan, W., Duda, P., Dudek, M. 
W., Dufour, L., Duk, V., Durante, P., Duras, M. M., Durham, J. M., Durmus, O. D., Dziurda, A., Dzyuba, A., Easo, S., Eckstein, E., Egede, U., Egorychev, A., Egorychev, V., Eisenhardt, S., Ejopu, E., Eklund, L., Elashri, M., Ellbracht, J., Ely, S., Ene, A., Eschle, J., Esen, S., Evans, T., Fabiano, F., Falcao, L. N., Fan, Y., Fang, B., Fantini, L., Faria, M., Farmer, K., Fazzini, D., Felkowski, L., Feng, M., Feo, M., Casani, A. Fernandez, Gomez, M. Fernandez, Fernez, A. D., Ferrari, F., Rodrigues, F. Ferreira, Ferrillo, M., Ferro-Luzzi, M., Filippov, S., Fini, R. A., Fiorini, M., Firlej, M., Fischer, K. L., Fitzgerald, D. S., Fitzpatrick, C., Fiutowski, T., Fleuret, F., Fontana, M., Foreman, L. F., Forty, R., Foulds-Holt, D., Lima, V. Franco, Sevilla, M. Franco, Frank, M., Franzoso, E., Frau, G., Frei, C., Friday, D. A., Fu, J., Führing, Q., Fujii, Y., Fulghesu, T., Gabriel, E., Galati, G., Galati, M. D., Torreira, A. Gallas, Galli, D., Gambetta, S., Gandelman, M., Gandini, P., Ganie, B., Gao, H., Gao, R., Gao, T. Q., Gao, Y., Martin, L. M. Garcia, Moreno, P. Garcia, Pardiñas, J. García, Gardner, P., Garg, K. G., Garrido, L., Gaspar, C., Gerken, L. L., Gersabeck, E., Gersabeck, M., Gershon, T., Ghizzo, S., Ghorbanimoghaddam, Z., Giambastiani, L., Giasemis, F. I., Gibson, V., Giemza, H. K., Gilman, A. L., Giovannetti, M., Gioventù, A., Girardey, L., Giugliano, C., Giza, M. A., Gkougkousis, E. L., Glaser, F. C., Gligorov, V. V., Göbel, C., Golinka-Bezshyyko, L., Golobardes, E., Golubkov, D., Golutvin, A., Fernandez, S. Gomez, Gomulka, W., Abrantes, F. Goncalves, Goncerz, M., Gong, G., Gooding, J. A., Gorelov, I. V., Gotti, C., Govorkova, E., Grabowski, J. P., Cardoso, L. A. Granado, Graugés, E., Graverini, E., Grazette, L., Graziani, G., Grecu, A. T., Greeven, L. M., Grieser, N. A., Grillo, L., Gromov, S., Gu, C., Guarise, M., Guerry, L., Guliaeva, V., Günther, P. A., Guseinov, A. 
-K., Gushchin, E., Guz, Y., Gys, T., Habermann, K., Hadavizadeh, T., Hadjivasiliou, C., Haefeli, G., Haen, C., Hallett, G., Halvorsen, M. M., Hamilton, P. M., Hammerich, J., Han, Q., Han, X., Hansmann-Menzemer, S., Hao, L., Harnew, N., Harris, T. H., Hartmann, M., Hashmi, S., He, J., Hemmer, F., Henderson, C., Henderson, R. D. L., Hennequin, A. M., Hennessy, K., Henry, L., Herd, J., Gascon, P. Herrero, Heuel, J., Hicheur, A., Mendizabal, G. Hijano, Horswill, J., Hou, R., Hou, Y., Howarth, N., Hu, J., Hu, W., Hu, X., Huang, W., Hulsbergen, W., Hunter, R. J., Hushchyn, M., Hutchcroft, D., Idzik, M., Ilin, D., Ilten, P., Inglessi, A., Iniukhin, A., Ishteev, A., Ivshin, K., Jacobsson, R., Jage, H., Elles, S. J. Jaimes, Jakobsen, S., Jans, E., Jashal, B. K., Jawahery, A., Jevtic, V., Jiang, E., Jiang, X., Jiang, Y., Jiang, Y. J., John, M., Rajan, A. John Rubesh, Johnson, D., Jones, C. R., Jones, T. P., Joshi, S., Jost, B., Castella, J. Juan, Jurik, N., Juszczak, I., Kaminaris, D., Kandybei, S., Kane, M., Kang, Y., Kar, C., Karacson, M., Karpenkov, D., Kauniskangas, A., Kautz, J. W., Kazanecki, M. K., Keizer, F., Kenzie, M., Ketel, T., Khanji, B., Kharisova, A., Kholodenko, S., Khreich, G., Kirn, T., Kirsebom, V. S., Kitouni, O., Klaver, S., Kleijne, N., Klimaszewski, K., Kmiec, M. R., Koliiev, S., Kolk, L., Konoplyannikov, A., Kopciewicz, P., Koppenburg, P., Korolev, M., Kostiuk, I., Kot, O., Kotriakhova, S., Kozachuk, A., Kravchenko, P., Kravchuk, L., Kreps, M., Krokovny, P., Krupa, W., Krzemien, W., Kshyvanskyi, O., Kubis, S., Kucharczyk, M., Kudryavtsev, V., Kulikova, E., Kupsc, A., Kutsenko, B. K., Lacarrere, D., Gonzalez, P. Laguarta, Lai, A., Lampis, A., Lancierini, D., Gomez, C. Landesa, Lane, J. J., Lane, R., Lanfranchi, G., Langenbruch, C., Langer, J., Lantwin, O., Latham, T., Lazzari, F., Lazzeroni, C., Gac, R. Le, Lee, H., Lefèvre, R., Leflat, A., Legotin, S., Lehuraux, M., Cid, E. Lemos, Leroy, O., Lesiak, T., Lesser, E. 
D., Leverington, B., Li, A., Li, C., Li, H., Li, K., Li, L., Li, M., Li, P., Li, P. -R., Li, Q., Li, S., Li, T., Li, Y., Lian, Z., Liang, X., Libralon, S., Lin, C., Lin, T., Lindner, R., Linton, H., Lisovskyi, V., Litvinov, R., Liu, F. L., Liu, G., Liu, K., Liu, S., Liu, W., Liu, Y., Liu, Y. L., Ordonez, G. Loachamin, Salvia, A. Lobo, Loi, A., Long, T., Lopes, J. H., Huertas, A. Lopez, Soliño, S. López, Lu, Q., Lucarelli, C., Lucchesi, D., Martinez, M. Lucio, Lukashenko, V., Luo, Y., Lupato, A., Luppi, E., Lynch, K., Lyu, X. -R., Ma, G. M., Maccolini, S., Machefert, F., Maciuc, F., Mack, B., Mackay, I., Mackey, L. M., Mohan, L. R. Madhan, Madurai, M. J., Maevskiy, A., Magdalinski, D., Maisuzenko, D., Malczewski, J. J., Malde, S., Malentacca, L., Malinin, A., Maltsev, T., Manca, G., Mancinelli, G., Mancuso, C., Escalero, R. Manera, Manganella, F. M., Manuzzi, D., Marangotto, D., Marchand, J. F., Marchevski, R., Marconi, U., Mariani, E., Mariani, S., Benito, C. Marin, Marks, J., Marshall, A. M., Martel, L., Martelli, G., Martellotti, G., Martinazzoli, L., Martinelli, M., Gomez, D. Martinez, Santos, D. Martinez, Vidal, F. Martinez, Granollers, A. Martorell i, Massafferri, A., Matev, R., Mathad, A., Matiunin, V., Matteuzzi, C., Mattioli, K. R., Mauri, A., Maurice, E., Mauricio, J., Mayencourt, P., de Cos, J. Mazorra, Mazurek, M., McCann, M., McGrath, T. H., McHugh, N. T., McNab, A., McNulty, R., Meadows, B., Meier, G., Melnychuk, D., Meng, F. M., Merk, M., Merli, A., Garcia, L. Meyer, Miao, D., Miao, H., Mikhasenko, M., Milanes, D. A., Minotti, A., Minucci, E., Miralles, T., Mitreska, B., Mitzel, D. S., Modak, A., Moeser, L., Mohammed, R. A., Moise, R. D., Mokhnenko, S., Cardenas, E. F. Molina, Mombächer, T., Monk, M., Monteil, S., Gomez, A. Morcillo, Morello, G., Morello, M. J., Morgenthaler, M. P., Moron, J., Morren, W., Morris, A. B., Morris, A. G., Mountain, R., Mu, H., Mu, Z. 
M., Muhammad, E., Muheim, F., Mulder, M., Müller, K., Muñoz-Rojas, F., Murta, R., Naik, P., Nakada, T., Nandakumar, R., Nanut, T., Nasteva, I., Needham, M., Neri, N., Neubert, S., Neufeld, N., Neustroev, P., Nicolini, J., Nicotra, D., Niel, E. M., Nikitin, N., Niu, Q., Nogarolli, P., Nogga, P., Normand, C., Fernandez, J. Novoa, Nowak, G., Nunez, C., Nur, H. N., Oblakowska-Mucha, A., Obraztsov, V., Oeser, T., Okamura, S., Okhotnikov, A., Okhrimenko, O., Oldeman, R., Oliva, F., Olocco, M., Onderwater, C. J. G., O'Neil, R. H., Osthues, D., Goicochea, J. M. Otalora, Owen, P., Oyanguren, A., Ozcelik, O., Paciolla, F., Padee, A., Padeken, K. O., Pagare, B., Pais, P. R., Pajero, T., Palano, A., Palutan, M., Pan, X., Panshin, G., Paolucci, L., Papanestis, A., Pappagallo, M., Pappalardo, L. L., Pappenheimer, C., Parkes, C., Parmar, D., Passalacqua, B., Passaleva, G., Passaro, D., Pastore, A., Patel, M., Patoc, J., Patrignani, C., Paul, A., Pawley, C. J., Pellegrino, A., Peng, J., Altarelli, M. Pepe, Perazzini, S., Pereima, D., Da Costa, H. Pereira, Castro, A. Pereiro, Perret, P., Perrevoort, A., Perro, A., Peters, M. J., Petridis, K., Petrolini, A., Pfaller, J. P., Pham, H., Pica, L., Piccini, M., Piccolo, L., Pietrzyk, B., Pietrzyk, G., Pilato, R. N., Pinci, D., Pisani, F., Pizzichemi, M., Placinta, V., Casasus, M. Plo, Poeschl, T., Polci, F., Lener, M. Poli, Poluektov, A., Polukhina, N., Polyakov, I., Polycarpo, E., Ponce, S., Popov, D., Poslavskii, S., Prasanth, K., Prouve, C., Provenzano, D., Pugatch, V., Punzi, G., Qasim, S., Qian, Q. Q., Qian, W., Qin, N., Qu, S., Quagliani, R., Trejo, R. I. Rabadan, Rademacker, J. H., Rama, M., García, M. Ramírez, De Oliveira, V. Ramos, Pernas, M. Ramos, Rangel, M. S., Ratnikov, F., Raven, G., De Miguel, M. Rebollo, Redi, F., Reich, J., Reiss, F., Ren, Z., Resmi, P. K., Galvez, M. Ribalda, Ribatti, R., Ricart, G. 
R., Riccardi, D., Ricciardi, S., Richardson, K., Richardson-Slipper, M., Rinnert, K., Robbe, P., Robertson, G., Rodrigues, E., Alvarez, A. Rodriguez, Fernandez, E. Rodriguez, Lopez, J. A. Rodriguez, Rodriguez, E. Rodriguez, Roensch, J., Rogachev, A., Rogovskiy, A., Rolf, D. L., Roloff, P., Romanovskiy, V., Vidal, A. Romero, Romolini, G., Ronchetti, F., Rong, T., Rotondo, M., Roy, S. R., Rudolph, M. S., Diaz, M. Ruiz, Fernandez, R. A. Ruiz, Vidal, J. Ruiz, Ryzka, J., Saavedra-Arias, J. J., Silva, J. J. Saborido, Sadek, R., Sagidova, N., Sahoo, D., Sahoo, N., Saitta, B., Salomoni, M., Sanderswood, I., Santacesaria, R., Rios, C. Santamarina, Santimaria, M., Santoro, L., Santovetti, E., Saputi, A., Saranin, D., Sarnatskiy, A., Sarpis, G., Sarpis, M., Satriano, C., Satta, A., Saur, M., Savrina, D., Sazak, H., Sborzacchi, F., Smead, L. G. Scantlebury, Scarabotto, A., Schael, S., Scherl, S., Schiller, M., Schindler, H., Schmelling, M., Schmidt, B., Schmitt, S., Schmitz, H., Schneider, O., Schopper, A., Schulte, N., Schulte, S., Schune, M. H., Schwemmer, R., Schwering, G., Sciascia, B., Sciuccati, A., Segal, I., Sellam, S., Semennikov, A., Senger, T., Soares, M. Senghi, Sergi, A., Serra, N., Sestini, L., Seuthe, A., Shang, Y., Shangase, D. M., Shapkin, M., Sharma, R. S., Shchemerov, I., Shchutska, L., Shears, T., Shekhtman, L., Shen, Z., Sheng, S., Shevchenko, V., Shi, B., Shi, Q., Shimizu, Y., Shmanin, E., Shorkin, R., Shupperd, J. D., Coutinho, R. Silva, Simi, G., Simone, S., Skidmore, N., Skwarnicki, T., Slater, M. W., Smallwood, J. C., Smith, E., Smith, K., Smith, M., Snoch, A., Lavra, L. Soares, Sokoloff, M. D., Soler, F. J. P., Solomin, A., Solovev, A., Solovyev, I., Sommerfeld, N. S., Song, R., Song, Y., Song, Y. S., De Almeida, F. L. Souza, De Paula, B. Souza, Norella, E. Spadaro, Spedicato, E., Speer, J. G., Spiridenkov, E., Spradlin, P., Sriskaran, V., Stagni, F., Stahl, M., Stahl, S., Stanislaus, S., Stefaniak, M., Stein, E. 
N., Steinkamp, O., Stenyakin, O., Stevens, H., Strekalina, D., Su, Y., Suljik, F., Sun, J., Sun, L., Sundfeld, D., Sutcliffe, W., Swallow, P. N., Swientek, K., Swystun, F., Szabelski, A., Szumlak, T., Tan, Y., Tang, Y., Tat, M. D., Terentev, A., Terzuoli, F., Teubert, F., Thomas, E., Thompson, D. J. D., Tilquin, H., Tisserand, V., T'Jampens, S., Tobin, M., Tomassetti, L., Tonani, G., Tong, X., Tork, T., Machado, D. Torres, Toscano, L., Tou, D. Y., Trippl, C., Tuci, G., Tuning, N., Uecker, L. H., Ukleja, A., Unverzagt, D. J., Urbach, B., Usachov, A., Ustyuzhanin, A., Uwer, U., Vagnoni, V., Cadenas, V. Valcarce, Valenti, G., Canudas, N. Valls, van Eldik, J., Van Hecke, H., van Herwijnen, E., Van Hulse, C. B., Van Laak, R., van Veghel, M., Vasquez, G., Gomez, R. Vazquez, Regueiro, P. Vazquez, Sierra, C. Vázquez, Vecchi, S., Velthuis, J. J., Veltri, M., Venkateswaran, A., Verdoglia, M., Vesterinen, M., Benet, D. Vico, Villalba, P. Vidrier, Diaz, M. Vieites, Vilasis-Cardona, X., Figueras, E. Vilella, Villa, A., Vincent, P., Volle, F. C., Bruch, D. vom, Voropaev, N., Vos, K., Vrahas, C., Wagner, J., Walsh, J., Walton, E. J., Wan, G., Wang, C., Wang, G., Wang, H., Wang, J., Wang, M., Wang, N. W., Wang, R., Wang, X., Wang, X. W., Wang, Y., Wang, Y. W., Wang, Z., Ward, J. A., Waterlaat, M., Watson, N. K., Websdale, D., Wei, Y., Wendel, J., Westhenry, B. D. C., White, C., Whitehead, M., Whiter, E., Wiederhold, A. R., Wiedner, D., Wilkinson, G., Wilkinson, M. K., Williams, M., Williams, M. J., Williams, M. R. J., Williams, R., Williams, Z., Wilson, F. F., Winn, M., Wislicki, W., Witek, M., Witola, L., Wormser, G., Wotton, S. A., Wu, H., Wu, J., Wu, X., Wu, Y., Wu, Z., Wyllie, K., Xian, S., Xiang, Z., Xie, Y., Xing, T. X., Xu, A., Xu, L., Xu, M., Xu, Z., Yang, K., Yang, S., Yang, X., Yang, Y., Yang, Z., Yeroshenko, V., Yeung, H., Yin, H., Yin, X., Yu, C. 
Y., Yu, J., Yuan, X., Yuan, Y., Zaffaroni, E., Zavertyaev, M., Zdybal, M., Zenesini, F., Zeng, C., Zeng, M., Zhang, C., Zhang, D., Zhang, J., Zhang, L., Zhang, S., Zhang, Y., Zhang, Y. Z., Zhang, Z., Zhao, Y., Zhelezov, A., Zheng, S. Z., Zheng, X. Z., Zheng, Y., Zhou, T., Zhou, X., Zhou, Y., Zhovkovska, V., Zhu, L. Z., Zhu, X., Zhukov, V., Zhuo, J., Zou, Q., Zuliani, D., and Zunica, G.
- Subjects
High Energy Physics - Experiment - Abstract
The first observation of the $\Xi_b^0 \to J/\psi \Xi^- \pi^+$ decay and the most precise measurement of the branching fraction of the $\Lambda_b^0 \to J/\psi \Xi^- K^+$ decay are reported, using proton-proton collision data from the LHCb experiment collected in 2016--2018 at a centre-of-mass energy of 13~TeV, corresponding to an integrated luminosity of 5.4~fb$^{-1}$. Using the $\Lambda_b^0 \to J/\psi \Lambda$ and $\Xi_b^0 \to J/\psi \Xi^-$ decays as normalisation channels, the ratios of branching fractions are measured to be: \[ \frac{\mathcal{B}(\Lambda_b^0 \to J/\psi \Xi^- K^+)}{\mathcal{B}(\Lambda_b^0 \to J/\psi \Lambda)} = (1.17 \pm 0.14 \pm 0.08)\times 10^{-2} \, , \] \[ \frac{\mathcal{B}(\Xi_b^0 \to J/\psi \Xi^- \pi^+)}{\mathcal{B}(\Xi_b^0 \to J/\psi \Xi^-)} = (11.9 \pm 1.4 \pm 0.6)\times 10^{-2}\, , \] where the first uncertainty is statistical and the second systematic., Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://lbfence.cern.ch/alcm/public/analysis/full-details/3479/ (LHCb public pages)
- Published
- 2025
43. Cost Optimization for Serverless Edge Computing with Budget Constraints using Deep Reinforcement Learning
- Author
-
Chen, Chen, Guan, Peiyuan, Chen, Ziru, Taherkordi, Amir, Hou, Fen, and Cai, Lin X.
- Subjects
Computer Science - Networking and Internet Architecture - Abstract
Serverless computing adopts a pay-as-you-go billing model where applications are executed in stateless and short-lived containers triggered by events, resulting in reduced monetary costs and resource utilization. However, existing platforms do not provide an upper bound for the billing model, which makes the overall cost unpredictable and precludes many organizations from managing their budgets. Due to the diverse ranges of serverless functions and the heterogeneous capacity of edge devices, it is challenging to obtain near-optimal solutions for deployment cost in polynomial time. In this paper, we investigate the function scheduling problem with a budget constraint for serverless computing in wireless networks, where users and IoT devices send requests to edge nodes, reducing the latency perceived by users. We propose two online scheduling algorithms based on reinforcement learning, incorporating several important characteristics of serverless functions. Via extensive simulations, we demonstrate the superiority of the proposed algorithms by comparing them with an ILP solver (Midaco). Our results indicate that the proposed algorithms efficiently approximate the results of Midaco within a factor of 1.03, while our decision-making time is five orders of magnitude less than that of Midaco., Comment: This paper has been accepted by IEEE ICC 2025
- Published
- 2025
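The budget-constrained online scheduling described in the abstract of entry 43 can be illustrated with a toy sketch. This is not the authors' algorithm: the node prices, the reward shape, and the tabular epsilon-greedy Q-learning update below are invented stand-ins for their deep-RL approach, kept only to show how a hard budget cap constrains the action space of an online scheduler.

```python
import random
from collections import defaultdict

class BudgetScheduler:
    """Toy budget-capped scheduler; all parameters are illustrative."""

    def __init__(self, nodes, budget, eps=0.1, alpha=0.5, gamma=0.9):
        self.nodes = nodes          # node name -> per-invocation price (invented)
        self.budget = budget        # total spend allowed: the upper bound
        self.q = defaultdict(float) # per-node value estimate
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def schedule(self, latency_fn):
        # Only nodes the remaining budget can afford are feasible actions.
        feasible = [n for n, price in self.nodes.items() if price <= self.budget]
        if not feasible:
            return None  # budget exhausted: reject the request
        # Epsilon-greedy choice over the feasible nodes.
        if random.random() < self.eps:
            node = random.choice(feasible)
        else:
            node = max(feasible, key=lambda n: self.q[n])
        price = self.nodes[node]
        self.budget -= price
        # Reward trades off observed latency against monetary cost.
        reward = -latency_fn(node) - price
        best_next = max((self.q[n] for n in feasible), default=0.0)
        self.q[node] += self.alpha * (reward + self.gamma * best_next - self.q[node])
        return node
```

Because a node is only eligible while its price fits in the remaining budget, total spend can never exceed the cap, which is exactly the upper bound the abstract notes existing billing models lack.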
44. Quantum Emitters in Hexagonal Boron Nitride: Principles, Engineering and Applications
- Author
-
Mai, Thi Ngoc Anh, Hossain, Md Shakhawath, Nguyen, Nhat Minh, Chen, Yongliang, Chen, Chaohao, Xu, Xiaoxue, Trinh, Quang Thang, Dinh, Toan, and Tran, Toan Trong
- Subjects
Condensed Matter - Materials Science ,Physics - Optics ,Quantum Physics - Abstract
Solid-state quantum emitters, molecular-sized complexes releasing a single photon at a time, have garnered much attention owing to their use as a key building block in various quantum technologies. Among these, quantum emitters in hexagonal boron nitride (hBN) have emerged as front runners with superior attributes compared to other competing platforms. These attributes are attainable thanks to the robust, two-dimensional lattice of the material formed by the extremely strong B-N bonds. This review discusses the fundamental properties of quantum emitters in hBN and highlights recent progress in the field. The focus is on the fabrication and engineering of these quantum emitters facilitated by state-of-the-art equipment. Strategies to integrate the quantum emitters with dielectric and plasmonic cavities to enhance their optical properties are summarized. The latest developments in new classes of spin-active defects, their predicted structural configurations, and the proposed suitable quantum applications are examined. Despite the current challenges, quantum emitters in hBN have steadily become a promising platform for applications in quantum information science.
- Published
- 2025
45. MMVU: Measuring Expert-Level Multi-Discipline Video Understanding
- Author
-
Zhao, Yilun, Xie, Lujing, Zhang, Haowei, Gan, Guo, Long, Yitao, Hu, Zhiyuan, Hu, Tongyan, Chen, Weiyuan, Li, Chuhan, Song, Junyang, Xu, Zhijian, Wang, Chengye, Pan, Weifeng, Shangguan, Ziyao, Tang, Xiangru, Liang, Zhenwen, Liu, Yixin, Zhao, Chen, and Cohan, Arman
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language - Abstract
We introduce MMVU, a comprehensive expert-level, multi-discipline benchmark for evaluating foundation models in video understanding. MMVU includes 3,000 expert-annotated questions spanning 27 subjects across four core disciplines: Science, Healthcare, Humanities & Social Sciences, and Engineering. Compared to prior benchmarks, MMVU features three key advancements. First, it challenges models to apply domain-specific knowledge and perform expert-level reasoning to analyze specialized-domain videos, moving beyond the basic visual perception typically assessed in current video benchmarks. Second, each example is annotated by human experts from scratch, with strict data quality controls to ensure the high quality of the dataset. Finally, each example is enriched with expert-annotated reasoning rationales and relevant domain knowledge, facilitating in-depth analysis. We conduct an extensive evaluation of 32 frontier multimodal foundation models on MMVU. The latest System-2-capable models, o1 and Gemini 2.0 Flash Thinking, achieve the highest performance among the tested models. However, they still fall short of matching human expertise. Through in-depth error analyses and case studies, we offer actionable insights for future advancements in expert-level, knowledge-intensive video understanding for specialized domains.
- Published
- 2025
46. UI-TARS: Pioneering Automated GUI Interaction with Native Agents
- Author
-
Qin, Yujia, Ye, Yining, Fang, Junjie, Wang, Haoming, Liang, Shihao, Tian, Shizuo, Zhang, Junda, Li, Jiahao, Li, Yunxin, Huang, Shijue, Zhong, Wanjun, Li, Kuanye, Yang, Jiale, Miao, Yu, Lin, Woyu, Liu, Longxiang, Jiang, Xu, Ma, Qianli, Li, Jingyu, Xiao, Xiaojun, Cai, Kai, Li, Chuang, Zheng, Yaowei, Jin, Chaolin, Li, Chen, Zhou, Xiao, Wang, Minchao, Chen, Haoli, Li, Zhaojian, Yang, Haihua, Liu, Haifeng, Lin, Feng, Peng, Tao, Liu, Xin, and Shi, Guang
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Computation and Language ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Human-Computer Interaction - Abstract
This paper introduces UI-TARS, a native GUI agent model that perceives only screenshots as input and performs human-like interactions (e.g., keyboard and mouse operations). Unlike prevailing agent frameworks that depend on heavily wrapped commercial models (e.g., GPT-4o) with expert-crafted prompts and workflows, UI-TARS is an end-to-end model that outperforms these sophisticated frameworks. Experiments demonstrate its superior performance: UI-TARS achieves SOTA performance in 10+ GUI agent benchmarks evaluating perception, grounding, and GUI task execution. Notably, in the OSWorld benchmark, UI-TARS achieves scores of 24.6 with 50 steps and 22.7 with 15 steps, outperforming Claude (22.0 and 14.9, respectively). In AndroidWorld, UI-TARS achieves 46.6, surpassing GPT-4o (34.5). UI-TARS incorporates several key innovations: (1) Enhanced Perception, leveraging a large-scale dataset of GUI screenshots for context-aware understanding of UI elements and precise captioning; (2) Unified Action Modeling, which standardizes actions into a unified space across platforms and achieves precise grounding and interaction through large-scale action traces; (3) System-2 Reasoning, which incorporates deliberate reasoning into multi-step decision making, involving reasoning patterns such as task decomposition, reflective thinking, and milestone recognition; and (4) Iterative Training with Reflective Online Traces, which addresses the data bottleneck by automatically collecting, filtering, and reflectively refining new interaction traces on hundreds of virtual machines. Through iterative training and reflection tuning, UI-TARS continuously learns from its mistakes and adapts to unforeseen situations with minimal human intervention. We also analyze the evolution path of GUI agents to guide the further development of this domain.
- Published
- 2025
47. Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation
- Author
-
Zhao, Zibo, Lai, Zeqiang, Lin, Qingxiang, Zhao, Yunfei, Liu, Haolin, Yang, Shuhui, Feng, Yifei, Yang, Mingxin, Zhang, Sheng, Yang, Xianghui, Shi, Huiwen, Liu, Sicong, Wu, Junta, Lian, Yihang, Yang, Fan, Tang, Ruining, He, Zebin, Wang, Xinzhou, Liu, Jian, Zuo, Xuhui, Chen, Zhuo, Lei, Biwen, Weng, Haohan, Xu, Jing, Zhu, Yiling, Liu, Xinhai, Xu, Lixin, Hu, Changrong, Huang, Tianyu, Wang, Lifu, Zhang, Jihong, Chen, Meng, Dong, Liang, Jia, Yiwen, Cai, Yulin, Yu, Jiaao, Tang, Yixuan, Zhang, Hao, Ye, Zheng, He, Peng, Wu, Runzhou, Zhang, Chao, Tan, Yonghao, Xiao, Jie, Tao, Yangyu, Zhu, Jianchen, Xue, Jinbao, Liu, Kai, Zhao, Chongqing, Wu, Xinming, Hu, Zhichao, Qin, Lei, Peng, Jianbing, Li, Zhan, Chen, Minghui, Zhang, Xipeng, Niu, Lin, Wang, Paige, Wang, Yingkai, Kuang, Haozhao, Fan, Zhongyi, Zheng, Xu, Zhuang, Weihao, He, YingPing, Liu, Tian, Yang, Yong, Wang, Di, Liu, Yuhong, Jiang, Jie, Huang, Jingwei, and Guo, Chunchao
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets. This system includes two foundation components: a large-scale shape generation model, Hunyuan3D-DiT, and a large-scale texture synthesis model, Hunyuan3D-Paint. The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly aligns with a given condition image, laying a solid foundation for downstream applications. The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant texture maps for either generated or hand-crafted meshes. Furthermore, we build Hunyuan3D-Studio, a versatile, user-friendly production platform that simplifies the re-creation process of 3D assets. It allows both professional and amateur users to manipulate or even animate their meshes efficiently. We systematically evaluate our models, showing that Hunyuan3D 2.0 outperforms previous state-of-the-art models, including both open-source and closed-source models, in geometry details, condition alignment, texture quality, and more. Hunyuan3D 2.0 is publicly released to fill the gaps in the open-source 3D community for large-scale foundation generative models. The code and pre-trained weights of our models are available at: https://github.com/Tencent/Hunyuan3D-2, Comment: GitHub link: https://github.com/Tencent/Hunyuan3D-2
- Published
- 2025
48. AdaServe: SLO-Customized LLM Serving with Fine-Grained Speculative Decoding
- Author
-
Li, Zikun, Chen, Zhuofu, Delacourt, Remi, Oliaro, Gabriele, Wang, Zeyu, Chen, Qinghan, Lin, Shuhuai, Yang, April, Zhang, Zhihao, Chen, Zhuoming, Lai, Sean, Miao, Xupeng, and Jia, Zhihao
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Distributed, Parallel, and Cluster Computing ,Computer Science - Machine Learning - Abstract
This paper introduces AdaServe, the first LLM serving system to support SLO customization through fine-grained speculative decoding. AdaServe leverages the logits of a draft model to predict the speculative accuracy of tokens and employs a theoretically optimal algorithm to construct token trees for verification. To accommodate diverse SLO requirements without compromising throughput, AdaServe employs a speculation-and-selection scheme that first constructs candidate token trees for each request and then dynamically selects tokens to meet individual SLO constraints while optimizing throughput. Comprehensive evaluations demonstrate that AdaServe achieves up to 73% higher SLO attainment and 74% higher goodput compared to state-of-the-art systems. These results underscore AdaServe's potential to enhance the efficiency and adaptability of LLM deployments across varied application scenarios.
- Published
- 2025
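The fine-grained token-tree construction that entry 48 attributes to AdaServe can be sketched in miniature. The draft-token distribution and the fixed node budget below are hypothetical: the real system predicts per-token speculative accuracy from draft-model logits and selects trees against per-request SLO constraints, but greedy best-first growth by cumulative draft probability conveys the core idea of spending a verification budget where acceptance is most likely.

```python
import heapq

def build_token_tree(draft_probs, max_nodes):
    """Grow a speculation tree best-first by cumulative draft probability.

    draft_probs: token -> probability from a (hypothetical) draft model,
    treated here as context-independent for simplicity.
    max_nodes: stand-in for an SLO-derived per-request verification budget.
    """
    # Min-heap over negated cumulative probability, so the most
    # probable candidate path is always expanded first.
    frontier = [(-1.0, ("<root>",))]
    tree = []
    while frontier and len(tree) < max_nodes:
        neg_p, path = heapq.heappop(frontier)
        tree.append(path)  # admit this path into the tree for verification
        for tok, p in draft_probs.items():
            heapq.heappush(frontier, (neg_p * p, path + (tok,)))
    return tree
```

With `draft_probs = {"a": 0.6, "b": 0.4}` and a budget of 4 nodes, the tree admits the root, both one-token children, and then the two-token path "a a" (cumulative probability 0.36) ahead of any other extension.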
49. A Survey of Graph Retrieval-Augmented Generation for Customized Large Language Models
- Author
-
Zhang, Qinggang, Chen, Shengyuan, Bei, Yuanchen, Yuan, Zheng, Zhou, Huachi, Hong, Zijin, Dong, Junnan, Chen, Hao, Chang, Yi, and Huang, Xiao
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Information Retrieval - Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in a wide range of tasks, yet their application to specialized domains remains challenging due to the need for deep expertise. Retrieval-augmented generation (RAG) has emerged as a promising solution to customize LLMs for professional fields by seamlessly integrating external knowledge bases, enabling real-time access to domain-specific expertise during inference. Despite its potential, traditional RAG systems, based on flat text retrieval, face three critical challenges: (i) complex query understanding in professional contexts, (ii) difficulties in knowledge integration across distributed sources, and (iii) system efficiency bottlenecks at scale. This survey presents a systematic analysis of Graph-based Retrieval-Augmented Generation (GraphRAG), a new paradigm that revolutionizes domain-specific LLM applications. GraphRAG addresses traditional RAG limitations through three key innovations: (i) graph-structured knowledge representation that explicitly captures entity relationships and domain hierarchies, (ii) efficient graph-based retrieval techniques that enable context-preserving knowledge retrieval with multi-hop reasoning ability, and (iii) structure-aware knowledge integration algorithms that leverage retrieved knowledge for accurate and logically coherent generation by LLMs. In this survey, we systematically analyze the technical foundations of GraphRAG and examine current implementations across various professional domains, identifying key technical challenges and promising research directions. All the related resources of GraphRAG, including research papers, open-source data, and projects, are collected for the community at https://github.com/DEEP-PolyU/Awesome-GraphRAG.
- Published
- 2025
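The multi-hop, graph-based retrieval step that the survey in entry 49 credits to GraphRAG systems can be sketched as a bounded breadth-first expansion over a knowledge graph. The toy graph, relation names, and two-hop limit below are illustrative assumptions, not taken from the survey; they only show how entity relationships are followed to assemble context for generation.

```python
from collections import deque

def retrieve_subgraph(graph, seed_entities, max_hops=2):
    """Collect (head, relation, tail) triples within max_hops of the seeds.

    graph: entity -> list of (relation, neighbor) edges; a stand-in for
    a real graph-structured knowledge base.
    """
    seen = set(seed_entities)
    triples = []
    queue = deque((entity, 0) for entity in seed_entities)
    while queue:
        entity, hops = queue.popleft()
        if hops == max_hops:
            continue  # hop budget reached; do not expand further
        for relation, neighbor in graph.get(entity, []):
            triples.append((entity, relation, neighbor))
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return triples  # structured context handed to the LLM for generation
```

Starting from a single seed entity, two hops already surface chains of related facts that flat text retrieval would have to find in one lexically matching passage, which is the multi-hop advantage the survey highlights.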
50. Measurement of the multiplicity dependence of $\Upsilon$ production ratios in $pp$ collisions at $\sqrt{s}=13$ TeV
- Author
-
LHCb collaboration, Aaij, R., Abdelmotteleb, A. S. W., Beteta, C. Abellan, Abudinén, F., Ackernley, T., Adefisoye, A. A., Adeva, B., Adinolfi, M., Adlarson, P., Agapopoulou, C., Aidala, C. A., Ajaltouni, Z., Akar, S., Akiba, K., Albicocco, P., Albrecht, J., Alessio, F., Alexander, M., Aliouche, Z., Cartelle, P. Alvarez, Amalric, R., Amato, S., Amey, J. L., Amhis, Y., An, L., Anderlini, L., Andersson, M., Andreianov, A., Andreola, P., Andreotti, M., Andreou, D., Anelli, A., Ao, D., Archilli, F., Argenton, M., Cuendis, S. Arguedas, Artamonov, A., Artuso, M., Aslanides, E., Da Silva, R. Ataíde, Atzeni, M., Audurier, B., Bacher, D., Perea, I. Bachiller, Bachmann, S., Bachmayer, M., Back, J. J., Rodriguez, P. Baladron, Balagura, V., Balboni, A., Baldini, W., Balzani, L., Bao, H., Leite, J. Baptista de Souza, Pretel, C. Barbero, Barbetti, M., Barbosa, I. R., Barlow, R. J., Barnyakov, M., Barsuk, S., Barter, W., Bartolini, M., Bartz, J., Basels, J. M., Bashir, S., Bassi, G., Batsukh, B., Battista, P. B., Bay, A., Beck, A., Becker, M., Bedeschi, F., Bediaga, I. B., Behling, N. A., Belin, S., Belous, K., Belov, I., Belyaev, I., Benane, G., Bencivenni, G., Ben-Haim, E., Berezhnoy, A., Bernet, R., Andres, S. Bernet, Bertolin, A., Betancourt, C., Betti, F., Bex, J., Bezshyiko, Ia., Bhom, J., Bieker, M. S., Biesuz, N. V., Billoir, P., Biolchini, A., Birch, M., Bishop, F. C. R., Bitadze, A., Bizzeti, A., Blake, T., Blanc, F., Blank, J. E., Blusk, S., Bocharnikov, V., Boelhauve, J. A., Garcia, O. Boente, Boettcher, T., Bohare, A., Boldyrev, A., Bolognani, C. S., Bolzonella, R., Bonacci, R. B., Bondar, N., Bordelius, A., Borgato, F., Borghi, S., Borsato, M., Borsuk, J. T., Bouchiba, S. A., Bovill, M., Bowcock, T. J. V., Boyer, A., Bozzi, C., Rodriguez, A. Brea, Breer, N., Brodzicka, J., Gonzalo, A. Brossa, Brown, J., Brundu, D., Buchanan, E., Buonaura, A., Buonincontri, L., Burke, A. T., Burr, C., Butter, J. S., Buytaert, J., Byczynski, W., Cadeddu, S., Cai, H., Caillet, A. 
C., Calabrese, R., Ramirez, S. Calderon, Calefice, L., Cali, S., Calvi, M., Gomez, M. Calvo, Magalhaes, P. Camargo, Bouzas, J. I. Cambon, Campana, P., Perez, D. H. Campora, Quezada, A. F. Campoverde, Capelli, S., Capriotti, L., Caravaca-Mora, R., Carbone, A., Salgado, L. Carcedo, Cardinale, R., Cardini, A., Carniti, P., Carus, L., Vidal, A. Casais, Caspary, R., Casse, G., Cattaneo, M., Cavallero, G., Cavallini, V., Celani, S., Cervenkov, D., Cesare, S., Chadwick, A. J., Chahrour, I., Charles, M., Charpentier, Ph., Chatzianagnostou, E., Chefdeville, M., Chen, C., Chen, S., Chen, Z., Chernov, A., Chernyshenko, S., Chiotopoulos, X., Chobanova, V., Cholak, S., Chrzaszcz, M., Chubykin, A., Chulikov, V., Ciambrone, P., Vidal, X. Cid, Ciezarek, G., Cifra, P., Clarke, P. E. L., Clemencic, M., Cliff, H. V., Closier, J., Toapaxi, C. Cocha, Coco, V., Cogan, J., Cogneras, E., Cojocariu, L., Collaviti, S., Collins, P., Colombo, T., Colonna, M., Comerma-Montells, A., Congedo, L., Contu, A., Cooke, N., Corredoira, I., Correia, A., Corti, G., Meldrum, J. Cottee, Couturier, B., Craik, D. C., Torres, M. Cruz, Rivera, E. Curras, Currie, R., Da Silva, C. L., Dadabaev, S., Dai, L., Dai, X., Dall'Occo, E., Dalseno, J., D'Ambrosio, C., Daniel, J., Danilina, A., d'Argent, P., Darze, G., Davidson, A., Davies, J. E., Davis, A., Francisco, O. De Aguiar, De Angelis, C., De Benedetti, F., de Boer, J., De Bruyn, K., De Capua, S., De Cian, M., Da Graca, U. De Freitas Carneiro, De Lucia, E., De Miranda, J. M., De Paula, L., De Serio, M., De Simone, P., De Vellis, F., de Vries, J. A., Debernardis, F., Decamp, D., Dedu, V., Dekkers, S., Del Buono, L., Delaney, B., Dembinski, H. -P., Deng, J., Denysenko, V., Deschamps, O., Dettori, F., Dey, B., Di Nezza, P., Diachkov, I., Didenko, S., Ding, S., Dittmann, L., Dobishuk, V., Docheva, A. D., Dong, C., Donohoe, A. M., Dordei, F., Reis, A. C. dos, Dowling, A. D., Duan, W., Duda, P., Dudek, M. W., Dufour, L., Duk, V., Durante, P., Duras, M. M., Durham, J. 
M., Durmus, O. D., Dziurda, A., Dzyuba, A., Easo, S., Eckstein, E., Egede, U., Egorychev, A., Egorychev, V., Eisenhardt, S., Ejopu, E., Eklund, L., Elashri, M., Ellbracht, J., Ely, S., Ene, A., Eschle, J., Esen, S., Evans, T., Fabiano, F., Falcao, L. N., Fan, Y., Fang, B., Fantini, L., Faria, M., Farmer, K., Fazzini, D., Felkowski, L., Feng, M., Feo, M., Casani, A. Fernandez, Gomez, M. Fernandez, Fernez, A. D., Ferrari, F., Rodrigues, F. Ferreira, Ferrillo, M., Ferro-Luzzi, M., Filippov, S., Fini, R. A., Fiorini, M., Firlej, M., Fischer, K. L., Fitzgerald, D. S., Fitzpatrick, C., Fiutowski, T., Fleuret, F., Fontana, M., Foreman, L. F., Forty, R., Foulds-Holt, D., Lima, V. Franco, Sevilla, M. Franco, Frank, M., Franzoso, E., Frau, G., Frei, C., Friday, D. A., Fu, J., Führing, Q., Fujii, Y., Fulghesu, T., Gabriel, E., Galati, G., Galati, M. D., Torreira, A. Gallas, Galli, D., Gambetta, S., Gandelman, M., Gandini, P., Ganie, B., Gao, H., Gao, R., Gao, T. Q., Gao, Y., Martin, L. M. Garcia, Moreno, P. Garcia, Pardiñas, J. García, Gardner, P., Garg, K. G., Garrido, L., Gaspar, C., Geertsema, R. E., Gerken, L. L., Gersabeck, E., Gersabeck, M., Gershon, T., Ghizzo, S., Ghorbanimoghaddam, Z., Giambastiani, L., Giasemis, F. I., Gibson, V., Giemza, H. K., Gilman, A. L., Giovannetti, M., Gioventù, A., Girardey, L., Gironell, P. Gironella, Giugliano, C., Giza, M. A., Gkougkousis, E. L., Glaser, F. C., Gligorov, V. V., Göbel, C., Golobardes, E., Golubkov, D., Golutvin, A., Fernandez, S. Gomez, Gomulka, W., Abrantes, F. Goncalves, Goncerz, M., Gong, G., Gooding, J. A., Gorelov, I. V., Gotti, C., Grabowski, J. P., Cardoso, L. A. Granado, Graugés, E., Graverini, E., Grazette, L., Graziani, G., Grecu, A. T., Greeven, L. M., Grieser, N. A., Grillo, L., Gromov, S., Gu, C., Guarise, M., Guerry, L., Guittiere, M., Guliaeva, V., Günther, P. A., Guseinov, A. 
-K., Gushchin, E., Guz, Y., Gys, T., Habermann, K., Hadavizadeh, T., Hadjivasiliou, C., Haefeli, G., Haen, C., Hajheidari, M., Hallett, G., Halvorsen, M. M., Hamilton, P. M., Hammerich, J., Han, Q., Han, X., Hansmann-Menzemer, S., Hao, L., Harnew, N., Harris, T. H., Hartmann, M., Hashmi, S., He, J., Hemmer, F., Henderson, C., Henderson, R. D. L., Hennequin, A. M., Hennessy, K., Henry, L., Herd, J., Gascon, P. Herrero, Heuel, J., Hicheur, A., Mendizabal, G. Hijano, Horswill, J., Hou, R., Hou, Y., Howarth, N., Hu, J., Hu, W., Hu, X., Huang, W., Hulsbergen, W., Hunter, R. J., Hushchyn, M., Hutchcroft, D., Idzik, M., Ilin, D., Ilten, P., Inglessi, A., Iniukhin, A., Ishteev, A., Ivshin, K., Jacobsson, R., Jage, H., Elles, S. J. Jaimes, Jakobsen, S., Jans, E., Jashal, B. K., Jawahery, A., Jevtic, V., Jiang, E., Jiang, X., Jiang, Y., Jiang, Y. J., John, M., Rajan, A. John Rubesh, Johnson, D., Jones, C. R., Jones, T. P., Joshi, S., Jost, B., Castella, J. Juan, Jurik, N., Juszczak, I., Kaminaris, D., Kandybei, S., Kane, M., Kang, Y., Kar, C., Karacson, M., Karpenkov, D., Kauniskangas, A., Kautz, J. W., Kazanecki, M. K., Keizer, F., Kenzie, M., Ketel, T., Khanji, B., Kharisova, A., Kholodenko, S., Khreich, G., Kirn, T., Kirsebom, V. S., Kitouni, O., Klaver, S., Kleijne, N., Klimaszewski, K., Kmiec, M. R., Koliiev, S., Kolk, L., Konoplyannikov, A., Kopciewicz, P., Koppenburg, P., Korolev, M., Kostiuk, I., Kot, O., Kotriakhova, S., Kozachuk, A., Kravchenko, P., Kravchuk, L., Kreps, M., Krokovny, P., Krupa, W., Krzemien, W., Kshyvanskyi, O., Kubis, S., Kucharczyk, M., Kudryavtsev, V., Kulikova, E., Kupsc, A., Kutsenko, B. K., Lacarrere, D., Gonzalez, P. Laguarta, Lai, A., Lampis, A., Lancierini, D., Gomez, C. Landesa, Lane, J. J., Lane, R., Lanfranchi, G., Langenbruch, C., Langer, J., Lantwin, O., Latham, T., Lazzari, F., Lazzeroni, C., Gac, R. Le, Lee, H., Lefèvre, R., Leflat, A., Legotin, S., Lehuraux, M., Cid, E. Lemos, Leroy, O., Lesiak, T., Lesser, E. 
D., Leverington, B., Li, A., Li, C., Li, H., Li, K., Li, L., Li, M., Li, P., Li, P. -R., Li, Q., Li, S., Li, T., Li, Y., Lian, Z., Liang, X., Libralon, S., Lin, C., Lin, T., Lindner, R., Linton, H., Lisovskyi, V., Litvinov, R., Liu, F. L., Liu, G., Liu, K., Liu, S., Liu, W., Liu, Y., Liu, Y. L., Salvia, A. Lobo, Loi, A., Long, T., Lopes, J. H., Huertas, A. Lopez, Soliño, S. López, Lu, Q., Lucarelli, C., Lucchesi, D., Martinez, M. Lucio, Lukashenko, V., Luo, Y., Lupato, A., Luppi, E., Lynch, K., Lyu, X. -R., Ma, G. M., Maccolini, S., Machefert, F., Maciuc, F., Mack, B., Mackay, I., Mackey, L. M., Mohan, L. R. Madhan, Madurai, M. J., Maevskiy, A., Magdalinski, D., Maisuzenko, D., Majewski, M. W., Malczewski, J. J., Malde, S., Malentacca, L., Malinin, A., Maltsev, T., Manca, G., Mancinelli, G., Mancuso, C., Escalero, R. Manera, Manganella, F. M., Manuzzi, D., Marangotto, D., Marchand, J. F., Marchevski, R., Marconi, U., Mariani, E., Mariani, S., Benito, C. Marin, Marks, J., Marshall, A. M., Martel, L., Martelli, G., Martellotti, G., Martinazzoli, L., Martinelli, M., Gomez, D. Martinez, Santos, D. Martinez, Vidal, F. Martinez, Granollers, A. Martorell i, Massafferri, A., Matev, R., Mathad, A., Matiunin, V., Matteuzzi, C., Mattioli, K. R., Mauri, A., Maurice, E., Mauricio, J., Mayencourt, P., de Cos, J. Mazorra, Mazurek, M., McCann, M., Mcconnell, L., McGrath, T. H., McHugh, N. T., McNab, A., McNulty, R., Meadows, B., Meier, G., Melnychuk, D., Meng, F. M., Merk, M., Merli, A., Garcia, L. Meyer, Miao, D., Miao, H., Mikhasenko, M., Milanes, D. A., Minotti, A., Minucci, E., Miralles, T., Mitreska, B., Mitzel, D. S., Modak, A., Mohammed, R. A., Moise, R. D., Mokhnenko, S., Cardenas, E. F. Molina, Mombächer, T., Monk, M., Monteil, S., Gomez, A. Morcillo, Morello, G., Morello, M. J., Morgenthaler, M. P., Moron, J., Morren, W., Morris, A. B., Morris, A. G., Mountain, R., Mu, H., Mu, Z. 
M., Muhammad, E., Muheim, F., Mulder, M., Müller, K., Muñoz-Rojas, F., Murta, R., Naik, P., Nakada, T., Nandakumar, R., Nanut, T., Nasteva, I., Needham, M., Neri, N., Neubert, S., Neufeld, N., Neustroev, P., Nicolini, J., Nicotra, D., Niel, E. M., Nikitin, N., Niu, Q., Nogarolli, P., Nogga, P., Normand, C., Fernandez, J. Novoa, Nowak, G., Nunez, C., Nur, H. N., Oblakowska-Mucha, A., Obraztsov, V., Oeser, T., Okamura, S., Okhotnikov, A., Okhrimenko, O., Oldeman, R., Oliva, F., Olocco, M., Onderwater, C. J. G., O'Neil, R. H., Osthues, D., Goicochea, J. M. Otalora, Owen, P., Oyanguren, A., Ozcelik, O., Paciolla, F., Padee, A., Padeken, K. O., Pagare, B., Pais, P. R., Pajero, T., Palano, A., Palutan, M., Pan, X., Panshin, G., Paolucci, L., Papanestis, A., Pappagallo, M., Pappalardo, L. L., Pappenheimer, C., Parkes, C., Parmar, D., Passalacqua, B., Passaleva, G., Passaro, D., Pastore, A., Patel, M., Patoc, J., Patrignani, C., Paul, A., Pawley, C. J., Pellegrino, A., Peng, J., Altarelli, M. Pepe, Perazzini, S., Pereima, D., Da Costa, H. Pereira, Castro, A. Pereiro, Perret, P., Perrevoort, A., Perro, A., Peters, M. J., Petridis, K., Petrolini, A., Pfaller, J. P., Pham, H., Pica, L., Piccini, M., Piccolo, L., Pietrzyk, B., Pietrzyk, G., Pinci, D., Pisani, F., Pizzichemi, M., Placinta, V., Casasus, M. Plo, Poeschl, T., Polci, F., Lener, M. Poli, Poluektov, A., Polukhina, N., Polyakov, I., Polycarpo, E., Ponce, S., Popov, D., Poslavskii, S., Prasanth, K., Prouve, C., Provenzano, D., Pugatch, V., Punzi, G., Qasim, S., Qian, Q. Q., Qian, W., Qin, N., Qu, S., Quagliani, R., Trejo, R. I. Rabadan, Rademacker, J. H., Rama, M., García, M. Ramírez, De Oliveira, V. Ramos, Pernas, M. Ramos, Rangel, M. S., Ratnikov, F., Raven, G., De Miguel, M. Rebollo, Redi, F., Reich, J., Reiss, F., Ren, Z., Resmi, P. K., Ribatti, R., Ricart, G. R., Riccardi, D., Ricciardi, S., Richardson, K., Richardson-Slipper, M., Rinnert, K., Robbe, P., Robertson, G., Rodrigues, E., Alvarez, A. 
Rodriguez, Fernandez, E. Rodriguez, Lopez, J. A. Rodriguez, Rodriguez, E. Rodriguez, Roensch, J., Rogachev, A., Rogovskiy, A., Rolf, D. L., Roloff, P., Romanovskiy, V., Vidal, A. Romero, Romolini, G., Ronchetti, F., Rong, T., Rotondo, M., Roy, S. R., Rudolph, M. S., Diaz, M. Ruiz, Fernandez, R. A. Ruiz, Vidal, J. Ruiz, Ryzhikov, A., Ryzka, J., Saavedra-Arias, J. J., Silva, J. J. Saborido, Sadek, R., Sagidova, N., Sahoo, D., Sahoo, N., Saitta, B., Salomoni, M., Sanderswood, I., Santacesaria, R., Rios, C. Santamarina, Santimaria, M., Santoro, L., Santovetti, E., Saputi, A., Saranin, D., Sarnatskiy, A., Sarpis, G., Sarpis, M., Satriano, C., Satta, A., Saur, M., Savrina, D., Sazak, H., Sborzacchi, F., Smead, L. G. Scantlebury, Scarabotto, A., Schael, S., Scherl, S., Schiller, M., Schindler, H., Schmelling, M., Schmidt, B., Schmitt, S., Schmitz, H., Schneider, O., Schopper, A., Schulte, N., Schulte, S., Schune, M. H., Schwemmer, R., Schwering, G., Sciascia, B., Sciuccati, A., Segal, I., Sellam, S., Semennikov, A., Senger, T., Soares, M. Senghi, Sergi, A., Serra, N., Sestini, L., Seuthe, A., Shang, Y., Shangase, D. M., Shapkin, M., Sharma, R. S., Shchemerov, I., Shchutska, L., Shears, T., Shekhtman, L., Shen, Z., Sheng, S., Shevchenko, V., Shi, B., Shi, Q., Shimizu, Y., Shmanin, E., Shorkin, R., Shupperd, J. D., Coutinho, R. Silva, Simi, G., Simone, S., Skidmore, N., Skwarnicki, T., Slater, M. W., Smallwood, J. C., Smith, E., Smith, K., Smith, M., Snoch, A., Lavra, L. Soares, Sokoloff, M. D., Soler, F. J. P., Solomin, A., Solovev, A., Solovyev, I., Sommerfeld, N. S., Song, R., Song, Y., Song, Y. S., De Almeida, F. L. Souza, De Paula, B. Souza, Norella, E. Spadaro, Spedicato, E., Speer, J. G., Spiridenkov, E., Spradlin, P., Sriskaran, V., Stagni, F., Stahl, M., Stahl, S., Stanislaus, S., Stein, E. N., Steinkamp, O., Stenyakin, O., Stevens, H., Strekalina, D., Su, Y., Suljik, F., Sun, J., Sun, L., Sundfeld, D., Sutcliffe, W., Swallow, P. 
N., Swientek, K., Swystun, F., Szabelski, A., Szumlak, T., Tan, Y., Tang, Y., Tat, M. D., Terentev, A., Terzuoli, F., Teubert, F., Thomas, E., Thompson, D. J. D., Tilquin, H., Tisserand, V., T'Jampens, S., Tobin, M., Tomassetti, L., Tonani, G., Tong, X., Machado, D. Torres, Toscano, L., Tou, D. Y., Trippl, C., Tuci, G., Tuning, N., Uecker, L. H., Ukleja, A., Unverzagt, D. J., Urbach, B., Ursov, E., Usachov, A., Ustyuzhanin, A., Uwer, U., Vagnoni, V., Cadenas, V. Valcarce, Valenti, G., Canudas, N. Valls, Van Hecke, H., van Herwijnen, E., Van Hulse, C. B., Van Laak, R., van Veghel, M., Vasquez, G., Gomez, R. Vazquez, Regueiro, P. Vazquez, Sierra, C. Vázquez, Vecchi, S., Velthuis, J. J., Veltri, M., Venkateswaran, A., Verdoglia, M., Vesterinen, M., Benet, D. Vico, Villalba, P. Vidrier, Diaz, M. Vieites, Vilasis-Cardona, X., Figueras, E. Vilella, Villa, A., Vincent, P., Volle, F. C., Bruch, D. vom, Voropaev, N., Vos, K., Vrahas, C., Wagner, J., Walsh, J., Walton, E. J., Wan, G., Wang, C., Wang, G., Wang, H., Wang, J., Wang, M., Wang, N. W., Wang, R., Wang, X., Wang, X. W., Wang, Y., Wang, Y. W., Wang, Z., Ward, J. A., Waterlaat, M., Watson, N. K., Websdale, D., Wei, Y., Wendel, J., Westhenry, B. D. C., White, C., Whitehead, M., Whiter, E., Wiederhold, A. R., Wiedner, D., Wilkinson, G., Wilkinson, M. K., Williams, M., Williams, M. J., Williams, M. R. J., Williams, R., Williams, Z., Wilson, F. F., Winn, M., Wislicki, W., Witek, M., Witola, L., Wormser, G., Wotton, S. A., Wu, H., Wu, J., Wu, X., Wu, Y., Wu, Z., Wyllie, K., Xian, S., Xiang, Z., Xie, Y., Xu, A., Xu, J., Xu, L., Xu, M., Xu, Z., Yang, K., Yang, S., Yang, X., Yang, Y., Yang, Z., Yeroshenko, V., Yeung, H., Yin, H., Yin, X., Yu, C. Y., Yu, J., Yuan, X., Yuan, Y, Zaffaroni, E., Zavertyaev, M., Zdybal, M., Zenesini, F., Zeng, C., Zeng, M., Zhang, C., Zhang, D., Zhang, J., Zhang, L., Zhang, S., Zhang, Y., Zhang, Y. Z., Zhao, Y., Zharkova, A., Zhelezov, A., Zheng, S. Z., Zheng, X. 
Z., Zheng, Y., Zhou, T., Zhou, X., Zhou, Y., Zhovkovska, V., Zhu, L. Z., Zhu, X., Zhukov, V., Zhuo, J., Zou, Q., Zuliani, D., and Zunica, G.
- Subjects
High Energy Physics - Experiment ,Nuclear Experiment - Abstract
The $\Upsilon(2S)$ and $\Upsilon(3S)$ production cross-sections are measured relative to that of the $\Upsilon(1S)$ meson, as a function of charged-particle multiplicity in proton-proton collisions at a centre-of-mass energy of $13$ TeV. The measurement uses data collected by the LHCb experiment in 2018, corresponding to an integrated luminosity of 2 $\text{fb}^{-1}$. Both the $\Upsilon(2S)$-to-$\Upsilon(1S)$ and $\Upsilon(3S)$-to-$\Upsilon(1S)$ cross-section ratios are found to decrease significantly as a function of event multiplicity, with the $\Upsilon(3S)$-to-$\Upsilon(1S)$ ratio showing a steeper decline towards high multiplicity. This hierarchy is qualitatively consistent with the comover model predictions, indicating that final-state interactions play an important role in bottomonium production in high-multiplicity events., Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://lbfence.cern.ch/alcm/public/analysis/full-details/1782/ (LHCb public pages)
- Published
- 2025