10,764,740 results for "A., Chen"
Search Results
2. A nomogram for predicting nutritional risk before gastric cancer surgery
- Author
Li, Changhua, Liu, Jinlu, Wang, Congjun, Luo, Yihuan, Qin, Lanhui, Chen, Peiyin, and Chen, Junqiang
- Published
- 2024
3. The impact of tea consumption on the risk of depression: A Mendelian randomization and Bayesian weighting algorithm study
- Author
Zhuo, Guifeng, Chen, Wei, Zhang, Jinzhi, Su, Mingyang, Zhu, Xiaomin, Pu, Shanshan, Liao, Naibing, Huang, Deqing, Chen, Xiangyi, and Wu, Lin
- Published
- 2024
4. Reliability and validity of five balance assessments battery in individuals with schizophrenia
- Author
Lin, I-Chen, Chen, Fu-Chen, Chen, Chia-Hsiang, and Chen, Ming-De
- Published
- 2024
5. Breaking the Pre-Planning Barrier: Real-Time Adaptive Coordination of Mission and Charging UAVs Using Graph Reinforcement Learning
- Author
Hu, Yuhan, Sun, Yirong, Chen, Yanjun, and Chen, Xinghao
- Subjects
Computer Science - Multiagent Systems
- Abstract
Unmanned Aerial Vehicles (UAVs) are pivotal in applications such as search and rescue and environmental monitoring, excelling in intelligent perception tasks. However, their limited battery capacity hinders long-duration and long-distance missions. Charging UAVs (CUAVs) offer a potential solution by recharging mission UAVs (MUAVs), but existing methods rely on impractical pre-planned routes, failing to enable organic cooperation and limiting mission efficiency. We introduce a novel multi-agent deep reinforcement learning model named Heterogeneous Graph Attention Multi-agent Deep Deterministic Policy Gradient (HGAM), designed to dynamically coordinate MUAVs and CUAVs. This approach maximizes data collection, geographical fairness, and energy efficiency by allowing UAVs to adapt their routes in real time to current task demands and environmental conditions without pre-planning. Our model uses heterogeneous graph attention networks (GATs) to represent heterogeneous agents and facilitate efficient information exchange. It operates within an actor-critic framework. Simulation results show that our model significantly improves cooperation among heterogeneous UAVs, outperforming existing methods in several metrics, including data collection rate and charging efficiency. [A minimal graph-attention sketch follows this record.]
- Published
- 2025
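For a concrete picture of the heterogeneous graph attention idea named in the HGAM abstract above, the sketch below builds one illustrative attention layer in which each agent type (mission vs. charging UAV) gets its own projection before a shared attention step. It is a minimal PyTorch sketch under assumed shapes; the class name HeteroGATLayer, the dimensions, and the toy fully connected graph are illustrative choices, not the authors' implementation, and the actor-critic training loop is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroGATLayer(nn.Module):
    """Minimal heterogeneous graph attention layer (illustrative only).

    Each agent type (mission UAV vs. charging UAV) gets its own input
    projection; attention coefficients are computed over all neighbours
    so the two agent types can exchange information in one hop.
    """

    def __init__(self, in_dims: dict, out_dim: int):
        super().__init__()
        # One linear projection per agent type (e.g. "muav", "cuav").
        self.proj = nn.ModuleDict({t: nn.Linear(d, out_dim) for t, d in in_dims.items()})
        # Shared attention scorer over concatenated (source, target) features.
        self.attn = nn.Linear(2 * out_dim, 1)

    def forward(self, feats: list, types: list, adj: torch.Tensor) -> torch.Tensor:
        # Project every node with its type-specific weights -> (N, out_dim).
        h = torch.stack([self.proj[t](x) for t, x in zip(types, feats)])
        n = h.size(0)
        # Pairwise attention logits e_ij for all node pairs.
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))
        # Mask out non-edges, normalise, and aggregate neighbour messages.
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)
        return alpha @ h


# Toy usage: 2 mission UAVs and 1 charging UAV on a fully connected graph.
layer = HeteroGATLayer({"muav": 4, "cuav": 3}, out_dim=8)
feats = [torch.randn(4), torch.randn(4), torch.randn(3)]
out = layer(feats, ["muav", "muav", "cuav"], adj=torch.ones(3, 3))
print(out.shape)  # torch.Size([3, 8])
```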
6. DeepFlow: Serverless Large Language Model Serving at Scale
- Author
Hu, Junhao, Xu, Jiang, He, Yulong, Chen, Yuetao, Dan, Gengyuan, Liu, Zhixia, Zhang, Baoquan, Wan, Shining, Dong, Zhiyu, Xu, Hao, Ren, Zhihao, Liu, Jiang, Meng, Jie, He, Chao, Xie, Tao, Lin, Dayun, Zhang, Qin, Yu, Yue, Feng, Hao, Chen, Xusheng, and Shan, Yizhou
- Subjects
Computer Science - Distributed, Parallel, and Cluster Computing
- Abstract
This paper introduces DeepFlow, a scalable and serverless AI platform designed to efficiently serve large language models (LLMs) at scale in cloud environments. DeepFlow addresses key challenges such as resource allocation, serving efficiency, and cold-start latencies through four main design components. First, it uses a simple serverless abstraction called the request-job-task model, which helps manage AI workloads across post-training and model serving tasks. Second, it builds an in-house serving engine, FlowServe, using a microkernel-inspired design, NPU-centric execution, and SPMD-based parallelism to optimize LLM serving. The system also includes novel scheduling policies tailored for both PD-disaggregated and PD-colocated configurations. With optimizations like pre-warmed pods, DRAM pre-loading, and NPU-fork, DeepFlow can scale up to 64 instances in seconds. DeepFlow has been in production for over a year, operating on a large Ascend NPU cluster and providing industry-standard APIs for fine-tuning, agent serving, and model serving to our customers. [An illustrative sketch of the request-job-task abstraction follows this record.]
- Published
- 2025
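The DeepFlow abstract above names a request-job-task serverless abstraction but does not spell out its structure in this record, so the sketch below is only one plausible reading: a user request fans out into jobs, each of which decomposes into schedulable tasks. All field names (workload, kind, npu_pod) and the prefill/decode split are assumptions for illustration, not the paper's API.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import uuid


# Illustrative data model only: the record names a "request-job-task"
# abstraction, so the nesting below (1 request -> N jobs -> M tasks) and
# every field name are assumptions made for the sake of a concrete picture.

@dataclass
class Task:
    """Smallest schedulable unit, e.g. one prefill or decode step on one pod."""
    task_id: str
    kind: str                      # e.g. "prefill" or "decode"
    npu_pod: Optional[str] = None  # assigned at scheduling time


@dataclass
class Job:
    """A unit of work derived from a request, e.g. serving one model call."""
    job_id: str
    model: str
    tasks: List[Task] = field(default_factory=list)


@dataclass
class Request:
    """What a user submits: fine-tuning, agent serving, or model serving."""
    request_id: str
    workload: str                  # "serving" | "post-training" | ...
    jobs: List[Job] = field(default_factory=list)


def submit(prompt: str, model: str) -> Request:
    """Fan a user request out into jobs and tasks (toy decomposition)."""
    req = Request(request_id=str(uuid.uuid4()), workload="serving")
    job = Job(job_id=str(uuid.uuid4()), model=model)
    job.tasks = [Task(task_id=str(uuid.uuid4()), kind=k) for k in ("prefill", "decode")]
    req.jobs.append(job)
    return req


req = submit("hello", model="some-llm")
print(len(req.jobs), len(req.jobs[0].tasks))  # 1 2
```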
7. PAID: A Framework of Product-Centric Advertising Image Design
- Author
Chen, Hongyu, Zhou, Min, Jiang, Jing, Chen, Jiale, Lu, Yang, Xiao, Bo, Ge, Tiezheng, and Zheng, Bo
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
On e-commerce platforms, a full advertising image is composed of a background image and marketing taglines. Automatic ad image design reduces human costs and therefore plays a crucial role. For the convenience of users, a novel automatic framework named Product-Centric Advertising Image Design (PAID) is proposed in this work. PAID takes the product foreground image, required taglines, and target size as input and creates an ad image automatically. PAID consists of four sequential stages: prompt generation, layout generation, background image generation, and graphics rendering. Different expert models are trained to conduct these sub-tasks. A visual language model (VLM)-based prompt generation model is leveraged to produce a product-matching background prompt. The layout generation model jointly predicts text and image layout according to the background prompt, product, and taglines to achieve the best harmony. An SDXL-based layout-controlled inpainting model is trained to generate an aesthetic background image. Previous ad image design methods take a background image as input and then predict the layout of taglines, which limits the spatial layout due to fixed image content. Innovatively, our PAID adjusts the stages to produce an unrestricted layout. To complete the PAID framework, we created two high-quality datasets, PITA and PIL. Extensive experimental results show that PAID creates more visually pleasing advertising images than previous methods. [A toy sketch of the four-stage pipeline follows this record.]
- Published
- 2025
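Since the PAID abstract above describes a strictly sequential four-stage pipeline (prompt, layout, background, rendering), the sketch below wires stub functions in that order to show the data flow only; each stub stands in for a separately trained expert model, and the return values are placeholders, not the framework's actual interfaces.

```python
from typing import Dict, Any

# Hypothetical stage stubs wired in the order the abstract describes
# (prompt -> layout -> background -> rendering).  The real PAID stages
# are separate trained experts; these placeholders only show the data flow.

def generate_prompt(product_fg, taglines, size) -> str:
    # VLM-based stage: produce a background prompt that matches the product.
    return "studio table, soft daylight"                     # stub

def generate_layout(prompt, product_fg, taglines, size) -> Dict[str, Any]:
    # Jointly predict product/tagline boxes before any background exists.
    return {"product_box": (50, 120, 400, 480),
            "text_boxes": [(60, 20, 440, 90)]}               # stub

def generate_background(prompt, layout, size):
    # Layout-controlled inpainting stage, stubbed as a string here.
    return f"<background {size} for '{prompt}'>"

def render_graphics(background, layout, taglines):
    # Composite product, background, and taglines into the final ad image.
    return {"image": background, "layout": layout, "taglines": taglines}

def paid_pipeline(product_fg, taglines, size):
    prompt = generate_prompt(product_fg, taglines, size)
    layout = generate_layout(prompt, product_fg, taglines, size)
    background = generate_background(prompt, layout, size)
    return render_graphics(background, layout, taglines)

ad = paid_pipeline(product_fg="<foreground png>", taglines=["50% off"], size=(512, 512))
print(ad["layout"]["text_boxes"])
```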
8. Humanity's Last Exam
- Author
Phan, Long, Gatti, Alice, Han, Ziwen, Li, Nathaniel, Hu, Josephina, Zhang, Hugh, Shi, Sean, Choi, Michael, Agrawal, Anish, Chopra, Arnav, Khoja, Adam, Kim, Ryan, Hausenloy, Jason, Zhang, Oliver, Mazeika, Mantas, Anderson, Daron, Nguyen, Tung, Mahmood, Mobeen, Feng, Fiona, Feng, Steven Y., Zhao, Haoran, Yu, Michael, Gangal, Varun, Zou, Chelsea, Wang, Zihan, Wang, Jessica P., Kumar, Pawan, Pokutnyi, Oleksandr, Gerbicz, Robert, Popov, Serguei, Levin, John-Clark, Kazakov, Mstyslav, Schmitt, Johannes, Galgon, Geoff, Sanchez, Alvaro, Lee, Yongki, Yeadon, Will, Sauers, Scott, Roth, Marc, Agu, Chidozie, Riis, Søren, Giska, Fabian, Utpala, Saiteja, Giboney, Zachary, Goshu, Gashaw M., Xavier, Joan of Arc, Crowson, Sarah-Jane, Naiya, Mohinder Maheshbhai, Burns, Noah, Finke, Lennart, Cheng, Zerui, Park, Hyunwoo, Fournier-Facio, Francesco, Wydallis, John, Nandor, Mark, Singh, Ankit, Gehrunger, Tim, Cai, Jiaqi, McCarty, Ben, Duclosel, Darling, Nam, Jungbae, Zampese, Jennifer, Hoerr, Ryan G., Bacho, Aras, Loume, Gautier Abou, Galal, Abdallah, Cao, Hangrui, Garretson, Alexis C, Sileo, Damien, Ren, Qiuyu, Cojoc, Doru, Arkhipov, Pavel, Qazi, Usman, Li, Lianghui, Motwani, Sumeet, de Witt, Christian Schroeder, Taylor, Edwin, Veith, Johannes, Singer, Eric, Hartman, Taylor D., Rissone, Paolo, Jin, Jaehyeok, Shi, Jack Wei Lun, Willcocks, Chris G., Robinson, Joshua, Mikov, Aleksandar, Prabhu, Ameya, Tang, Longke, Alapont, Xavier, Uro, Justine Leon, Zhou, Kevin, Santos, Emily de Oliveira, Maksimov, Andrey Pupasov, Vendrow, Edward, Zenitani, Kengo, Guillod, Julien, Li, Yuqi, Vendrow, Joshua, Kuchkin, Vladyslav, Ze-An, Ng, Marion, Pierre, Efremov, Denis, Lynch, Jayson, Liang, Kaiqu, Gritsevskiy, Andrew, Martinez, Dakotah, Pageler, Ben, Crispino, Nick, Zvonkine, Dimitri, Fraga, Natanael Wildner, Soori, Saeed, Press, Ori, Tang, Henry, Salazar, Julian, Green, Sean R., Brüssel, Lina, Twayana, Moon, Dieuleveut, Aymeric, Rogers, T. 
Ryan, Zhang, Wenjin, Li, Bikun, Yang, Jinzhou, Rao, Arun, Loiseau, Gabriel, Kalinin, Mikhail, Lukas, Marco, Manolescu, Ciprian, Mishra, Subrata, Kamdoum, Ariel Ghislain Kemogne, Kreiman, Tobias, Hogg, Tad, Jin, Alvin, Bosio, Carlo, Sun, Gongbo, Coppola, Brian P, Tarver, Tim, Heidinger, Haline, Sayous, Rafael, Ivanov, Stefan, Cavanagh, Joseph M, Shen, Jiawei, Imperial, Joseph Marvin, Schwaller, Philippe, Senthilkuma, Shaipranesh, Bran, Andres M, Dehghan, Ali, Algaba, Andres, Verbeken, Brecht, Noever, David, P V, Ragavendran, Schut, Lisa, Sucholutsky, Ilia, Zheltonozhskii, Evgenii, Lim, Derek, Stanley, Richard, Sivarajan, Shankar, Yang, Tong, Maar, John, Wykowski, Julian, Oller, Martí, Sandlin, Jennifer, Sahu, Anmol, Hu, Yuzheng, Fish, Sara, Heydari, Nasser, Apronti, Archimedes, Rawal, Kaivalya, Vilchis, Tobias Garcia, Zu, Yuexuan, Lackner, Martin, Koppel, James, Nguyen, Jeremy, Antonenko, Daniil S., Chern, Steffi, Zhao, Bingchen, Arsene, Pierrot, Goldfarb, Alan, Ivanov, Sergey, Poświata, Rafał, Wang, Chenguang, Li, Daofeng, Crisostomi, Donato, Achilleos, Andrea, Myklebust, Benjamin, Sen, Archan, Perrella, David, Kaparov, Nurdin, Inlow, Mark H, Zang, Allen, Thornley, Elliott, Orel, Daniil, Poritski, Vladislav, Ben-David, Shalev, Berger, Zachary, Whitfill, Parker, Foster, Michael, Munro, Daniel, Ho, Linh, Hava, Dan Bar, Kuchkin, Aleksey, Lauff, Robert, Holmes, David, Sommerhage, Frank, Schneider, Keith, Kazibwe, Zakayo, Stambaugh, Nate, Singh, Mukhwinder, Magoulas, Ilias, Clarke, Don, Kim, Dae Hyun, Dias, Felipe Meneguitti, Elser, Veit, Agarwal, Kanu Priya, Vilchis, Victor Efren Guadarrama, Klose, Immo, Demian, Christoph, Anantheswaran, Ujjwala, Zweiger, Adam, Albani, Guglielmo, Li, Jeffery, Daans, Nicolas, Radionov, Maksim, Rozhoň, Václav, Ma, Ziqiao, Stump, Christian, Berkani, Mohammed, Platnick, Jacob, Nevirkovets, Volodymyr, Basler, Luke, Piccardo, Marco, Jeanplong, Ferenc, Cohen, Niv, Tkadlec, Josef, Rosu, Paul, Padlewski, Piotr, Barzowski, Stanislaw, Montgomery, Kyle, Menezes, Aline, Patel, Arkil, Wang, Zixuan, Tucker-Foltz, Jamie, Stade, Jack, Goertzen, Tom, Kazemi, Fereshteh, Milbauer, Jeremiah, Ambay, John Arnold, Shukla, Abhishek, Labrador, Yan Carlos Leyva, Givré, Alan, Wolff, Hew, Rossbach, Vivien, Aziz, Muhammad Fayez, Kaddar, Younesse, Chen, Yanxu, Zhang, Robin, Pan, Jiayi, Terpin, Antonio, Muennighoff, Niklas, Schoelkopf, Hailey, Zheng, Eric, Carmi, Avishy, Jones, Adam, Shah, Jainam, Brown, Ethan D. 
L., Zhu, Kelin, Bartolo, Max, Wheeler, Richard, Ho, Andrew, Barkan, Shaul, Wang, Jiaqi, Stehberger, Martin, Kretov, Egor, Sridhar, Kaustubh, EL-Wasif, Zienab, Zhang, Anji, Pyda, Daniel, Tam, Joanna, Cunningham, David M., Goryachev, Vladimir, Patramanis, Demosthenes, Krause, Michael, Redenti, Andrew, Bugas, Daniel, Aldous, David, Lai, Jesyin, Coleman, Shannon, Bahaloo, Mohsen, Xu, Jiangnan, Lee, Sangwon, Zhao, Sandy, Tang, Ning, Cohen, Michael K., Carroll, Micah, Paradise, Orr, Kirchner, Jan Hendrik, Steinerberger, Stefan, Ovchynnikov, Maksym, Matos, Jason O., Shenoy, Adithya, Junior, Benedito Alves de Oliveira, Wang, Michael, Nie, Yuzhou, Giordano, Paolo, Petersen, Philipp, Sztyber-Betley, Anna, Shukla, Priti, Crozier, Jonathan, Pinto, Antonella, Verma, Shreyas, Joshi, Prashant, Yong, Zheng-Xin, Tee, Allison, Andréoletti, Jérémy, Weller, Orion, Singhal, Raghav, Zhang, Gang, Ivanov, Alexander, Khoury, Seri, Mostaghimi, Hamid, Thaman, Kunvar, Chen, Qijia, Khánh, Tran Quoc, Loader, Jacob, Cavalleri, Stefano, Szlyk, Hannah, Brown, Zachary, Roberts, Jonathan, Alley, William, Sun, Kunyang, Stendall, Ryan, Lamparth, Max, Reuel, Anka, Wang, Ting, Xu, Hanmeng, Raparthi, Sreenivas Goud, Hernández-Cámara, Pablo, Martin, Freddie, Malishev, Dmitry, Preu, Thomas, Korbak, Tomek, Abramovitch, Marcus, Williamson, Dominic, Chen, Ziye, Bálint, Biró, Bari, M Saiful, Kassani, Peyman, Wang, Zihao, Ansarinejad, Behzad, Goswami, Laxman Prasad, Sun, Yewen, Elgnainy, Hossam, Tordera, Daniel, Balabanian, George, Anderson, Earth, Kvistad, Lynna, Moyano, Alejandro José, Maheshwari, Rajat, Sakor, Ahmad, Eron, Murat, McAlister, Isaac C., Gimenez, Javier, Enyekwe, Innocent, O., Andrew Favre D., Shah, Shailesh, Zhou, Xiaoxiang, Kamalov, Firuz, Clark, Ronald, Abdoli, Sherwin, Santens, Tim, Meer, Khalida, Wang, Harrison K, Ramakrishnan, Kalyan, Chen, Evan, Tomasiello, Alessandro, De Luca, G. 
Bruno, Looi, Shi-Zhuo, Le, Vinh-Kha, Kolt, Noam, Mündler, Niels, Semler, Avi, Rodman, Emma, Drori, Jacob, Fossum, Carl J, Jagota, Milind, Pradeep, Ronak, Fan, Honglu, Shah, Tej, Eicher, Jonathan, Chen, Michael, Thaman, Kushal, Merrill, William, Harris, Carter, Gross, Jason, Gusev, Ilya, Sharma, Asankhaya, Agnihotri, Shashank, Zhelnov, Pavel, Usawasutsakorn, Siranut, Mofayezi, Mohammadreza, Bogdanov, Sergei, Piperski, Alexander, Carauleanu, Marc, Zhang, David K., Ler, Dylan, Leventov, Roman, Soroko, Ignat, Jansen, Thorben, Lauer, Pascal, Duersch, Joshua, Taamazyan, Vage, Morak, Wiktor, Ma, Wenjie, Held, William, Huy, Tran Đuc, Xian, Ruicheng, Zebaze, Armel Randy, Mohamed, Mohanad, Leser, Julian Noah, Yuan, Michelle X, Yacar, Laila, Lengler, Johannes, Shahrtash, Hossein, Oliveira, Edson, Jackson, Joseph W., Gonzalez, Daniel Espinosa, Zou, Andy, Chidambaram, Muthu, Manik, Timothy, Haffenden, Hector, Stander, Dashiell, Dasouqi, Ali, Shen, Alexander, Duc, Emilien, Golshani, Bita, Stap, David, Uzhou, Mikalai, Zhidkovskaya, Alina Borisovna, Lewark, Lukas, Vincze, Mátyás, Wehr, Dustin, Tang, Colin, Hossain, Zaki, Phillips, Shaun, Muzhen, Jiang, Ekström, Fredrik, Hammon, Angela, Patel, Oam, Remy, Nicolas, Farhidi, Faraz, Medley, George, Mohammadzadeh, Forough, Peñaflor, Madellene, Kassahun, Haile, Friedrich, Alena, Sparrow, Claire, Sakal, Taom, Dhamane, Omkar, Mirabadi, Ali Khajegili, Hallman, Eric, Battaglia, Mike, Maghsoudimehrabani, Mohammad, Hoang, Hieu, Amit, Alon, Hulbert, Dave, Pereira, Roberto, Weber, Simon, Mensah, Stephen, Andre, Nathan, Peristyy, Anton, Harjadi, Chris, Gupta, Himanshu, Malina, Stephen, Albanie, Samuel, Cai, Will, Mehkary, Mustafa, Reidegeld, Frank, Dick, Anna-Katharina, Friday, Cary, Sidhu, Jasdeep, Kim, Wanyoung, Costa, Mariana, Gurdogan, Hubeyb, Weber, Brian, Kumar, Harsh, Jiang, Tong, Agarwal, Arunim, Ceconello, Chiara, Vaz, Warren S., Zhuang, Chao, Park, Haon, Tawfeek, Andrew R., Aggarwal, Daattavya, Kirchhof, Michael, Dai, Linjie, Kim, Evan, Ferret, Johan, Wang, Yuzhou, Yan, Minghao, Burdzy, Krzysztof, Zhang, Lixin, Franca, Antonio, Pham, Diana T., Loh, Kang Yong, Gul, Shreen, Chhablani, Gunjan, Du, Zhehang, Cosma, Adrian, White, Colin, Riblet, Robin, Saxena, Prajvi, Votava, Jacob, Vinnikov, Vladimir, Delaney, Ethan, Halasyamani, Shiv, Shahid, Syed M., Mourrat, Jean-Christophe, Vetoshkin, Lavr, Bacho, Renas, Ginis, Vincent, Maksapetyan, Aleksandr, de la Rosa, Florencia, Li, Xiuyu, Malod, Guillaume, Lang, Leon, Laurendeau, Julien, Adesanya, Fatimah, Portier, Julien, Hollom, Lawrence, Souza, Victor, Zhou, Yuchen Anna, Yalın, Yiğit, Obikoya, Gbenga Daniel, Arnaboldi, Luca, Rai, Bigi, Filippo, Bacho, Kaniuar, Clavier, Pierre, Recchia, Gabriel, Popescu, Mara, Shulga, Nikita, Tanwie, Ngefor Mildred, Lux, Thomas C. H., Rank, Ben, Ni, Colin, Yakimchyk, Alesia, Huanxu, Liu, Häggström, Olle, Verkama, Emil, Narayan, Himanshu, Gundlach, Hans, Brito-Santana, Leonor, Amaro, Brian, Vajipey, Vivek, Grover, Rynaa, Fan, Yiyang, Silva, Gabriel Poesia Reis e, Xin, Linwei, Kratish, Yosi, Łucki, Jakub, Li, Wen-Ding, Xu, Justin, Scaria, Kevin Joseph, Vargus, Freddie, Habibi, Farzad, Long, Lian, Rodolà, Emanuele, Robins, Jules, Cheng, Vincent, Grabb, Declan, Bosio, Ida, Fruhauff, Tony, Akov, Ido, Lo, Eve J. 
Y., Qi, Hao, Jiang, Xi, Segev, Ben, Fan, Jingxuan, Martinson, Sarah, Wang, Erik Y., Hausknecht, Kaylie, Brenner, Michael P., Mao, Mao, Jiang, Yibo, Zhang, Xinyu, Avagian, David, Scipio, Eshawn Jessica, Siddiqi, Muhammad Rehan, Ragoler, Alon, Tan, Justin, Patil, Deepakkumar, Plecnik, Rebeka, Kirtland, Aaron, Montecillo, Roselynn Grace, Durand, Stephane, Bodur, Omer Faruk, Adoul, Zahra, Zekry, Mohamed, Douville, Guillaume, Karakoc, Ali, Santos, Tania C. B., Shamseldeen, Samir, Karim, Loukmane, Liakhovitskaia, Anna, Resman, Nate, Farina, Nicholas, Gonzalez, Juan Carlos, Maayan, Gabe, Hoback, Sarah, Pena, Rodrigo De Oliveira, Sherman, Glen, Mariji, Hodjat, Pouriamanesh, Rasoul, Wu, Wentao, Demir, Gözdenur, Mendoza, Sandra, Alarab, Ismail, Cole, Joshua, Ferreira, Danyelle, Johnson, Bryan, Milliron, Hsiaoyun, Safdari, Mohammad, Dai, Liangti, Arthornthurasuk, Siriphan, Pronin, Alexey, Fan, Jing, Ramirez-Trinidad, Angel, Cartwright, Ashley, Pottmaier, Daphiny, Taheri, Omid, Outevsky, David, Stepanic, Stanley, Perry, Samuel, Askew, Luke, Rodríguez, Raúl Adrián Huerta, Dendane, Abdelkader, Ali, Sam, Lorena, Ricardo, Iyer, Krishnamurthy, Salauddin, Sk Md, Islam, Murat, Gonzalez, Juan, Ducey, Josh, Campbell, Russell, Somrak, Maja, Mavroudis, Vasilios, Vergo, Eric, Qin, Juehang, Borbás, Benjámin, Chu, Eric, Lindsey, Jack, Radhakrishnan, Anil, Jallon, Antoine, McInnis, I. M. J., Hoover, Alex, Möller, Sören, Bian, Song, Lai, John, Patwardhan, Tejal, Yue, Summer, Wang, Alexandr, and Hendrycks, Dan more...
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 3,000 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking based on a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai., Comment: 25 pages, 6 figures
- Published
- 2025
9. Global Semantic-Guided Sub-image Feature Weight Allocation in High-Resolution Large Vision-Language Models
- Author
Liang, Yuxuan, Li, Xu, Chen, Xiaolei, Chen, Haotian, Zheng, Yi, Lai, Chenghang, Li, Bin, and Xue, Xiangyang
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
- Abstract
As the demand for high-resolution image processing in Large Vision-Language Models (LVLMs) grows, sub-image partitioning has become a popular approach for mitigating visual information loss associated with fixed-resolution processing. However, existing partitioning methods uniformly process sub-images, resulting in suboptimal image understanding. In this work, we reveal that the sub-images with higher semantic relevance to the entire image encapsulate richer visual information for preserving the model's visual understanding ability. Therefore, we propose the Global Semantic-guided Weight Allocator (GSWA) module, which dynamically allocates weights to sub-images based on their relative information density, emulating human visual attention mechanisms. This approach enables the model to focus on more informative regions, overcoming the limitations of uniform treatment. We integrate GSWA into the InternVL2-2B framework to create SleighVL, a lightweight yet high-performing model. Extensive experiments demonstrate that SleighVL outperforms models with comparable parameters and remains competitive with larger models. Our work provides a promising direction for more efficient and contextually aware high-resolution image processing in LVLMs, advancing multimodal system development., Comment: 10 pages, 10 figures and tables [A toy sketch of semantic-guided sub-image weighting follows this record.]
- Published
- 2025
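To make the GSWA idea in the abstract above concrete, the sketch below weights each sub-image by a softmax over its cosine similarity to a global image embedding and rescales the sub-image features accordingly. The temperature, feature dimension, and the use of plain cosine similarity are assumptions; the paper's module is trained inside InternVL2-2B and may score information density differently.

```python
import torch
import torch.nn.functional as F

def semantic_subimage_weights(global_feat: torch.Tensor,
                              sub_feats: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Toy stand-in for the GSWA idea: weight each sub-image by how
    semantically close it is to the whole image.

    global_feat: (d,) pooled embedding of the full thumbnail image
    sub_feats:   (n, d) pooled embeddings of the n sub-image crops
    returns:     (n,) weights summing to 1
    """
    sims = F.cosine_similarity(sub_feats, global_feat.unsqueeze(0), dim=-1)
    return torch.softmax(sims / temperature, dim=0)


# Usage: rescale sub-image features before they reach the language model.
g = torch.randn(256)
subs = torch.randn(6, 256)
w = semantic_subimage_weights(g, subs)       # (6,)
weighted_subs = subs * w.unsqueeze(-1)       # weight applied per sub-image
print(w.sum())                               # tensor(1.0000)
```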
10. MambaQuant: Quantizing the Mamba Family with Variance Aligned Rotation Methods
- Author
Xu, Zukang, Yue, Yuxuan, Hu, Xing, Yuan, Zhihang, Jiang, Zixu, Chen, Zhixuan, Yu, Jiangyong, Xu, Chen, Zhou, Sifan, and Yang, Dawei
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract
Mamba is an efficient sequence model that rivals Transformers and demonstrates significant potential as a foundational architecture for various tasks. Quantization is commonly used in neural networks to reduce model size and computational latency. However, applying quantization to Mamba remains underexplored, and existing quantization methods, which have been effective for CNN and Transformer models, appear inadequate for Mamba models (e.g., Quarot suffers a 21% accuracy drop on Vim-T$^\dagger$ even under W8A8). We have pioneered the exploration of this issue and identified several key challenges. First, significant outliers are present in gate projections, output projections, and matrix multiplications. Second, Mamba's unique parallel scan further amplifies these outliers, leading to uneven and heavy-tailed data distributions. Third, even with the application of the Hadamard transform, the variance across channels in weights and activations still remains inconsistent. To these ends, we propose MambaQuant, a post-training quantization (PTQ) framework consisting of: 1) Karhunen-Loeve Transformation (KLT) enhanced rotation, rendering the rotation matrix adaptable to diverse channel distributions; 2) Smooth-Fused rotation, which equalizes channel variances and can merge additional parameters into model weights. Experiments show that MambaQuant can quantize both weights and activations into 8-bit with less than 1% accuracy loss for Mamba-based vision and language tasks. To the best of our knowledge, MambaQuant is the first comprehensive PTQ design for the Mamba family, paving the way for further advancements in its application. [A minimal sketch of a KLT-based rotation before int8 quantization follows this record.]
- Published
- 2025
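The abstract above credits MambaQuant's first component to a Karhunen-Loeve-transform (KLT) enhanced rotation. The sketch below shows the bare idea in NumPy: derive an orthogonal rotation from the eigenvectors of the calibration activations' channel covariance, rotate activations and weights by the same matrix so the layer output is unchanged, and only then apply naive symmetric int8 quantization. The smoothing, Hadamard combination, and per-layer details of the actual method are not reproduced here.

```python
import numpy as np

def klt_rotation(calib_acts: np.ndarray) -> np.ndarray:
    """Karhunen-Loeve transform: eigenbasis of the channel covariance.

    calib_acts: (num_tokens, channels) calibration activations
    returns:    (channels, channels) orthogonal rotation matrix
    """
    centered = calib_acts - calib_acts.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / centered.shape[0]
    _, eigvecs = np.linalg.eigh(cov)          # orthonormal columns
    return eigvecs

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization (the simplest possible PTQ)."""
    scale = np.abs(x).max() / 127.0 + 1e-12
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8), scale

# A linear layer y = x @ W.T stays numerically identical if we rotate
# activations and weights by the same orthogonal matrix:
#   (x @ R) @ (W @ R).T == x @ W.T
rng = np.random.default_rng(0)
x = rng.normal(size=(512, 64)) * np.linspace(0.1, 8.0, 64)   # uneven channels
W = rng.normal(size=(128, 64))

R = klt_rotation(x)
xq, sx = quantize_int8(x @ R)                 # rotated activations, then int8
Wq, sw = quantize_int8(W @ R)                 # matching weight rotation
y_deq = (xq.astype(np.float32) * sx) @ (Wq.astype(np.float32) * sw).T
err = np.abs(y_deq - x @ W.T).mean()
print(f"mean abs error after rotated int8 round-trip: {err:.4f}")
```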
11. Cross section measurement of $e^{+}e^{-} \to f_{1}(1285)\pi^{+}\pi^{-}$ at center-of-mass energies between $3.808$ and $4.951\rm GeV$
- Author
BESIII Collaboration, Ablikim, M., Achasov, M. N., Adlarson, P., Afedulidis, O., Ai, X. C., Aliberti, R., Amoroso, A., An, Q., Bai, Y., Bakina, O., Balossino, I., Ban, Y., Bao, H. -R., Batozskaya, V., Begzsuren, K., Berger, N., Berlowski, M., Bertani, M., Bettoni, D., Bianchi, F., Bianco, E., Bortone, A., Boyko, I., Briere, R. A., Brueggemann, A., Cai, H., Cai, X., Calcaterra, A., Cao, G. F., Cao, N., Cetin, S. A., Chang, J. F., Che, G. R., Chelkov, G., Chen, C., Chen, C. H., Chen, Chao, Chen, G., Chen, H. S., Chen, H. Y., Chen, M. L., Chen, S. J., Chen, S. L., Chen, S. M., Chen, T., Chen, X. R., Chen, X. T., Chen, Y. B., Chen, Y. Q., Chen, Z. J., Chen, Z. Y., Choi, S. K., Cibinetto, G., Cossio, F., Cui, J. J., Dai, H. L., Dai, J. P., Dbeyssi, A., de Boer, R. E., Dedovich, D., Deng, C. Q., Deng, Z. Y., Denig, A., Denysenko, I., Destefanis, M., De Mori, F., Ding, B., Ding, X. X., Ding, Y., Dong, J., Dong, L. Y., Dong, M. Y., Dong, X., Du, M. C., Du, S. X., Duan, Y. Y., Duan, Z. H., Egorov, P., Fan, Y. H., Fang, J., Fang, S. S., Fang, W. X., Fang, Y., Fang, Y. Q., Farinelli, R., Fava, L., Feldbauer, F., Felici, G., Feng, C. Q., Feng, J. H., Feng, Y. T., Fritsch, M., Fu, C. D., Fu, J. L., Fu, Y. W., Gao, H., Gao, X. B., Gao, Y. N., Gao, Yang, Garbolino, S., Garzia, I., Ge, L., Ge, P. T., Ge, Z. W., Geng, C., Gersabeck, E. M., Gilman, A., Goetzen, K., Gong, L., Gong, W. X., Gradl, W., Gramigna, S., Greco, M., Gu, M. H., Gu, Y. T., Guan, C. Y., Guo, A. Q., Guo, L. B., Guo, M. J., Guo, R. P., Guo, Y. P., Guskov, A., Gutierrez, J., Han, K. L., Han, T. T., Hanisch, F., Hao, X. Q., Harris, F. A., He, K. K., He, K. L., Heinsius, F. H., Heinz, C. H., Heng, Y. K., Herold, C., Holtmann, T., Hong, P. C., Hou, G. Y., Hou, X. T., Hou, Y. R., Hou, Z. L., Hu, B. Y., Hu, H. M., Hu, J. F., Hu, S. L., Hu, T., Hu, Y., Huang, G. S., Huang, K. X., Huang, L. Q., Huang, X. T., Huang, Y. P., Huang, Y. S., Hussain, T., Hölzken, F., Hüsken, N., der Wiesche, N. in, Jackson, J., Janchiv, S., Jeong, J. H., Ji, Q., Ji, Q. P., Ji, W., Ji, X. B., Ji, X. L., Ji, Y. Y., Jia, X. Q., Jia, Z. K., Jiang, D., Jiang, H. B., Jiang, P. C., Jiang, S. S., Jiang, T. J., Jiang, X. S., Jiang, Y., Jiao, J. B., Jiao, J. K., Jiao, Z., Jin, S., Jin, Y., Jing, M. Q., Jing, X. M., Johansson, T., Kabana, S., Kalantar-Nayestanaki, N., Kang, X. L., Kang, X. S., Kavatsyuk, M., Ke, B. C., Khachatryan, V., Khoukaz, A., Kiuchi, R., Kolcu, O. B., Kopf, B., Kuessner, M., Kui, X., Kumar, N., Kupsc, A., Kühn, W., Lane, J. J., Lavezzi, L., Lei, T. T., Lei, Z. H., Lellmann, M., Lenz, T., Li, C., Li, C. H., Li, Cheng, Li, D. M., Li, F., Li, G., Li, H. B., Li, H. J., Li, H. N., Li, Hui, Li, J. R., Li, J. S., Li, K., Li, L. J., Li, L. K., Li, Lei, Li, M. H., Li, P. R., Li, Q. M., Li, Q. X., Li, R., Li, S. X., Li, T., Li, W. D., Li, W. G., Li, X., Li, X. H., Li, X. L., Li, X. Y., Li, X. Z., Li, Y. G., Li, Z. J., Li, Z. Y., Liang, C., Liang, H., Liang, Y. F., Liang, Y. T., Liao, G. R., Liao, Y. P., Libby, J., Limphirat, A., Lin, C. C., Lin, D. X., Lin, T., Liu, B. J., Liu, B. X., Liu, C., Liu, C. X., Liu, D., Liu, F., Liu, F. H., Liu, Feng, Liu, G. M., Liu, H., Liu, H. B., Liu, H. H., Liu, H. M., Liu, Huihui, Liu, J. B., Liu, J. Y., Liu, K., Liu, K. Y., Liu, Ke, Liu, L., Liu, L. C., Liu, Lu, Liu, M. H., Liu, P. L., Liu, Q., Liu, S. B., Liu, T., Liu, W. K., Liu, W. M., Liu, X., Liu, Y., Liu, Y. B., Liu, Z. A., Liu, Z. D., Liu, Z. Q., Lou, X. C., Lu, F. X., Lu, H. J., Lu, J. G., Lu, X. L., Lu, Y., Lu, Y. P., Lu, Z. H., Luo, C. L., Luo, J. R., Luo, M. 
X., Luo, T., Luo, X. L., Lyu, X. R., Lyu, Y. F., Ma, F. C., Ma, H., Ma, H. L., Ma, J. L., Ma, L. L., Ma, L. R., Ma, M. M., Ma, Q. M., Ma, R. Q., Ma, T., Ma, X. T., Ma, X. Y., Ma, Y., Ma, Y. M., Maas, F. E., Maggiora, M., Malde, S., Mao, Y. J., Mao, Z. P., Marcello, S., Meng, Z. X., Messchendorp, J. G., Mezzadri, G., Miao, H., Min, T. J., Mitchell, R. E., Mo, X. H., Moses, B., Muchnoi, N. Yu., Muskalla, J., Nefedov, Y., Nerling, F., Nie, L. S., Nikolaev, I. B., Ning, Z., Nisar, S., Niu, Q. L., Niu, W. D., Niu, Y., Olsen, S. L., Ouyang, Q., Pacetti, S., Pan, X., Pan, Y., Pathak, A., Pei, Y. P., Pelizaeus, M., Peng, H. P., Peng, Y. Y., Peters, K., Ping, J. L., Ping, R. G., Plura, S., Prasad, V., Qi, F. Z., Qi, H., Qi, H. R., Qi, M., Qi, T. Y., Qian, S., Qian, W. B., Qiao, C. F., Qiao, X. K., Qin, J. J., Qin, L. Q., Qin, L. Y., Qin, X. P., Qin, X. S., Qin, Z. H., Qiu, J. F., Qu, Z. H., Redmer, C. F., Ren, K. J., Rivetti, A., Rolo, M., Rong, G., Rosner, Ch., Ruan, S. N., Salone, N., Sarantsev, A., Schelhaas, Y., Schoenning, K., Scodeggio, M., Shan, K. Y., Shan, W., Shan, X. Y., Shang, Z. J., Shangguan, J. F., Shao, L. G., Shao, M., Shen, C. P., Shen, H. F., Shen, W. H., Shen, X. Y., Shi, B. A., Shi, H., Shi, H. C., Shi, J. L., Shi, J. Y., Shi, Q. Q., Shi, S. Y., Shi, X., Song, J. J., Song, T. Z., Song, W. M., Song, Y. J., Song, Y. X., Sosio, S., Spataro, S., Stieler, F., Su, Y. J., Sun, G. B., Sun, G. X., Sun, H., Sun, H. K., Sun, J. F., Sun, K., Sun, L., Sun, S. S., Sun, T., Sun, W. Y., Sun, Y., Sun, Y. J., Sun, Y. Z., Sun, Z. Q., Sun, Z. T., Tang, C. J., Tang, G. Y., Tang, J., Tang, M., Tang, Y. A., Tao, L. Y., Tao, Q. T., Tat, M., Teng, J. X., Thoren, V., Tian, W. H., Tian, Y., Tian, Z. F., Uman, I., Wan, Y., Wang, S. J., Wang, B., Wang, B. L., Wang, Bo, Wang, D. Y., Wang, F., Wang, H. J., Wang, J. J., Wang, J. P., Wang, K., Wang, L. L., Wang, M., Wang, N. Y., Wang, S., Wang, T., Wang, T. J., Wang, W., Wang, W. P., Wang, X., Wang, X. F., Wang, X. J., Wang, X. L., Wang, X. N., Wang, Y., Wang, Y. D., Wang, Y. F., Wang, Y. L., Wang, Y. N., Wang, Y. Q., Wang, Yaqian, Wang, Yi, Wang, Z., Wang, Z. L., Wang, Z. Y., Wang, Ziyi, Wei, D. H., Weidner, F., Wen, S. P., Wen, Y. R., Wiedner, U., Wilkinson, G., Wolke, M., Wollenberg, L., Wu, C., Wu, J. F., Wu, L. H., Wu, L. J., Wu, X., Wu, X. H., Wu, Y., Wu, Y. H., Wu, Y. J., Wu, Z., Xia, L., Xian, X. M., Xiang, B. H., Xiang, T., Xiao, D., Xiao, G. Y., Xiao, S. Y., Xiao, Y. L., Xiao, Z. J., Xie, C., Xie, X. H., Xie, Y., Xie, Y. G., Xie, Y. H., Xie, Z. P., Xing, T. Y., Xu, C. F., Xu, C. J., Xu, G. F., Xu, H. Y., Xu, M., Xu, Q. J., Xu, Q. N., Xu, W., Xu, W. L., Xu, X. P., Xu, Y. C., Xu, Z. S., Yan, F., Yan, L., Yan, W. B., Yan, W. C., Yan, X. Q., Yang, H. J., Yang, H. L., Yang, H. X., Yang, T., Yang, Y., Yang, Y. F., Yang, Y. X., Yang, Z. W., Yao, Z. P., Ye, M., Ye, M. H., Yin, J. H., Yin, Junhao, You, Z. Y., Yu, B. X., Yu, C. X., Yu, G., Yu, J. S., Yu, T., Yu, X. D., Yu, Y. C., Yuan, C. Z., Yuan, J., Yuan, L., Yuan, S. C., Yuan, Y., Yuan, Z. Y., Yue, C. X., Zafar, A. A., Zeng, F. R., Zeng, S. H., Zeng, X., Zeng, Y., Zeng, Y. J., Zhai, X. Y., Zhai, Y. C., Zhan, Y. H., Zhang, A. Q., Zhang, B. L., Zhang, B. X., Zhang, D. H., Zhang, G. Y., Zhang, H., Zhang, H. C., Zhang, H. H., Zhang, H. Q., Zhang, H. R., Zhang, H. Y., Zhang, J., Zhang, J. J., Zhang, J. L., Zhang, J. Q., Zhang, J. S., Zhang, J. W., Zhang, J. X., Zhang, J. Y., Zhang, J. Z., Zhang, Jianyu, Zhang, L. M., Zhang, Lei, Zhang, P., Zhang, Q. Y., Zhang, R. Y., Zhang, S. H., Zhang, Shulei, Zhang, X. 
D., Zhang, X. M., Zhang, X. Y., Zhang, Y., Zhang, Y. T., Zhang, Y. H., Zhang, Y. M., Zhang, Yan, Zhang, Z. D., Zhang, Z. H., Zhang, Z. L., Zhang, Z. Y., Zhang, Z. Z., Zhao, G., Zhao, J. Y., Zhao, J. Z., Zhao, L., Zhao, Lei, Zhao, M. G., Zhao, N., Zhao, R. P., Zhao, S. J., Zhao, Y. B., Zhao, Y. X., Zhao, Z. G., Zhemchugov, A., Zheng, B., Zheng, B. M., Zheng, J. P., Zheng, W. J., Zheng, Y. H., Zhong, B., Zhong, X., Zhou, H., Zhou, J. Y., Zhou, L. P., Zhou, S., Zhou, X., Zhou, X. K., Zhou, X. R., Zhou, X. Y., Zhou, Y. Z., Zhu, A. N., Zhu, J., Zhu, K., Zhu, K. J., Zhu, K. S., Zhu, L., Zhu, L. X., Zhu, S. H., Zhu, T. J., Zhu, W. D., Zhu, Y. C., Zhu, Z. A., Zou, J. H., and Zu, J. more...
- Subjects
High Energy Physics - Experiment
- Abstract
Using data samples collected by the BESIII detector located at the Beijing Electron Positron Collider, the cross sections of the process $e^+e^-\to f_{1}(1285)\pi^+\pi^-$ are measured at forty-five center-of-mass energies from $3.808$ to $4.951 {\rm GeV}$. An investigation of the cross section line shape is performed, and no significant structure is observed.
- Published
- 2025
12. Wafer-scale Integration of Single-Crystalline MoS$_2$ for Flexible Electronics Enabled by Oxide Dry-transfer
- Author
Xu, Xiang, Chen, Yitong, Shen, Jichuang, Huang, Qi, Jiang, Tong, Chen, Han, Zhu, Huaze, Ma, Yaqing, Wang, Hao, Li, Wenhao, Ji, Chen, Li, Dingwei, Zhang, Siyu, Wang, Yan, Zhu, Bowen, and Kong, Wei
- Subjects
Physics - Applied Physics, Condensed Matter - Materials Science
- Abstract
Atomically thin, single-crystalline transition metal dichalcogenides (TMDCs) grown via chemical vapor deposition (CVD) on sapphire substrates exhibit exceptional mechanical and electrical properties, positioning them as excellent channel materials for flexible electronics. However, conventional wet-transfer processes for integrating these materials onto flexible substrates often introduce surface contamination, significantly degrading device performance. Here, we present a wafer-scale dry-transfer technique using a high-dielectric oxide as the transfer medium, enabling the integration of 4-inch single-crystalline MoS$_2$ onto flexible substrates. This method eliminates contact with polymers or solvents, thus preserving the intrinsic electronic properties of MoS$_2$. As a result, the fabricated flexible field-effect transistor (FET) arrays exhibit remarkable performance, with a mobility of 117 cm$^2$/Vs, a subthreshold swing of 68.8 mV dec$^{-1}$, and an ultra-high current on/off ratio of $10^{12}$, values comparable to those achieved on rigid substrates. Leveraging the outstanding electrical characteristics, we demonstrated MoS$_2$-based flexible inverters operating in the subthreshold regime, achieving both a high gain of 218 and ultra-low power consumption of 1.4 pW/$\mu$m. Additionally, we integrated a flexible tactile sensing system driven by active-matrix MoS$_2$ FET arrays onto a robotic gripper, enabling real-time object identification. These findings demonstrate the simultaneous achievement of high electrical performance and flexibility, highlighting the immense potential of single-crystalline TMDC-based flexible electronics for real-world applications.
- Published
- 2025
13. The Breeze 2 Herd of Models: Traditional Chinese LLMs Based on Llama with Vision-Aware and Function-Calling Capabilities
- Author
Hsu, Chan-Jan, Liu, Chia-Sheng, Chen, Meng-Hsi, Chen, Muxi, Hsu, Po-Chun, Chen, Yi-Chang, and Shiu, Da-Shan
- Subjects
Computer Science - Computation and Language
- Abstract
Breeze 2 is a suite of advanced multi-modal language models, available in 3B and 8B parameter configurations, specifically designed to enhance Traditional Chinese language representation. Building upon Llama 3, Breeze 2 continues pretraining on an extensive corpus to strengthen its coverage of the linguistic and cultural heritage of Traditional Chinese. It incorporates vision-aware capabilities through a visual encoder and a bridge module, and supports function-calling via prompt templates and post-training on function-calling data. The effectiveness of Breeze 2 is benchmarked across various tasks, including Taiwan general knowledge, instruction-following, long context, function calling, and vision understanding. Furthermore, we showcase the capabilities of its 3B model in a mobile application. We are publicly releasing all Breeze 2 models under the Llama 3 Community License.
- Published
- 2025
14. On the Gauge Invariance of Secondary Gravitational Waves
- Author
Yuan, Chen, Lu, Yizhou, Chen, Zu-Cheng, and Liu, Lang
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics, General Relativity and Quantum Cosmology
- Abstract
Second-order tensor perturbations induced by primordial fluctuations play a crucial role in probing small-scale physics, but gauge dependence of their energy density has remained a fundamental challenge in cosmological perturbation theory. We address this issue by introducing a boundary condition-based filtering method that extracts physical radiation through the Sommerfeld criterion. We demonstrate that after filtering non-physical modes, the energy density of secondary gravitational waves becomes gauge-invariant and exhibits physically consistent behavior in the sub-horizon limit. This approach provides a unified framework for both adiabatic and isocurvature perturbations, enhancing theoretical predictions and observational signatures of early universe physics., Comment: 8 pages, comments are welcome!
- Published
- 2025
15. Characterization and Optimization of Tunable Couplers via Adiabatic Control in Superconducting Circuits
- Author
Zhang, Xuan, Zhang, Xu, Chen, Changling, Tang, Kai, Yi, Kangyuan, Luo, Kai, Xie, Zheshu, Chen, Yuanzhen, and Yan, Tongxing
- Subjects
Quantum Physics
- Abstract
In the pursuit of scalable superconducting quantum computing, tunable couplers have emerged as a pivotal component, offering the flexibility required for high-performance complex quantum operations. In most current architectures of superconducting quantum chips, such couplers are not equipped with dedicated readout circuits, in order to reduce complexity in both design and operation. However, this strategy poses challenges in precise characterization, calibration, and control of the couplers. In this work, we develop a hardware-efficient and robust technique based on adiabatic control to address the above issue. The critical ingredient of this technique is the adiabatic swap (aSWAP) operation between a tunable coupler and nearby qubits. Using this technique, we have characterized and calibrated tunable couplers in our chips and achieved straightforward and precise control over these couplers. For example, we have demonstrated the calibration and correction of the flux distortion of couplers. In addition, we have also extended this technique to tune the dispersive shift between a frequency-fixed qubit and its readout resonator over a wide range.
- Published
- 2025
16. Sigma: Differential Rescaling of Query, Key and Value for Efficient Language Models
- Author
Lin, Zhenghao, Tang, Zihao, Liu, Xiao, Gong, Yeyun, Cheng, Yi, Chen, Qi, Li, Hang, Xin, Ying, Yang, Ziyue, Yang, Kailai, Yan, Yu, Liang, Xiao, Lu, Shuai, Huang, Yiming, Luo, Zheheng, Qu, Lei, Feng, Xuan, Wang, Yaoxiang, Xia, Yuqing, Chen, Feiyang, Jiang, Yuting, Hu, Yasen, Ni, Hao, Li, Binyang, Zhao, Guoshuai, Chiang, Jui-Hao, Guo, Zhongxin, Lin, Chen, Kuang, Kun, Li, Wenjie, Shen, Yelong, Jiao, Jian, Cheng, Peng, and Yang, Mao
- Subjects
Computer Science - Computation and Language
- Abstract
We introduce Sigma, an efficient large language model specialized for the system domain, empowered by a novel architecture including DiffQKV attention, and pre-trained on our meticulously collected system domain data. DiffQKV attention significantly enhances the inference efficiency of Sigma by optimizing the Query (Q), Key (K), and Value (V) components in the attention mechanism differentially, based on their varying impacts on the model performance and efficiency indicators. Specifically, we (1) conduct extensive experiments that demonstrate the model's varying sensitivity to the compression of K and V components, leading to the development of differentially compressed KV, and (2) propose augmented Q to expand the Q head dimension, which enhances the model's representation capacity with minimal impacts on the inference speed. Rigorous theoretical and empirical analyses reveal that DiffQKV attention significantly enhances efficiency, achieving up to a 33.36% improvement in inference speed over the conventional grouped-query attention (GQA) in long-context scenarios. We pre-train Sigma on 6T tokens from various sources, including 19.5B tokens of system domain data that we carefully collect and 1T tokens of synthesized and rewritten data. In general domains, Sigma achieves comparable performance to other state-of-the-art models. In the system domain, we introduce the first comprehensive benchmark AIMicius, where Sigma demonstrates remarkable performance across all tasks, significantly outperforming GPT-4 with an absolute improvement up to 52.5%. [A toy sketch of attention with differently compressed K and V follows this record.]
- Published
- 2025
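The DiffQKV description above (compress K more aggressively than V, on top of grouped-query-style sharing) can be illustrated with the toy attention module below, which simply gives K fewer heads than V and broadcasts both up to the query head count. The head counts and dimensions are made up for the example, and the paper's augmented-Q component is omitted; this is not the Sigma implementation.

```python
import torch
import torch.nn as nn

class DifferentialKVAttention(nn.Module):
    """Sketch of the 'compress K and V by different amounts' idea.

    Standard grouped-query attention shares one head count for K and V;
    here K gets fewer heads than V (the abstract reports low sensitivity
    to compressing K).  Head counts below are illustrative only, and the
    augmented-Q component is omitted for brevity.
    """

    def __init__(self, dim=256, n_q_heads=8, n_k_heads=1, n_v_heads=4):
        super().__init__()
        assert n_q_heads % n_k_heads == 0 and n_q_heads % n_v_heads == 0
        self.hd = dim // n_q_heads
        self.nq, self.nk, self.nv = n_q_heads, n_k_heads, n_v_heads
        self.q = nn.Linear(dim, n_q_heads * self.hd)
        self.k = nn.Linear(dim, n_k_heads * self.hd)   # heavily compressed K cache
        self.v = nn.Linear(dim, n_v_heads * self.hd)   # lightly compressed V cache
        self.o = nn.Linear(n_q_heads * self.hd, dim)

    def forward(self, x):                              # x: (batch, seq, dim)
        b, s, _ = x.shape
        q = self.q(x).view(b, s, self.nq, self.hd).transpose(1, 2)
        k = self.k(x).view(b, s, self.nk, self.hd).transpose(1, 2)
        v = self.v(x).view(b, s, self.nv, self.hd).transpose(1, 2)
        # Broadcast the few K/V heads up to the Q head count (GQA-style).
        k = k.repeat_interleave(self.nq // self.nk, dim=1)
        v = v.repeat_interleave(self.nq // self.nv, dim=1)
        att = torch.softmax(q @ k.transpose(-2, -1) / self.hd ** 0.5, dim=-1)
        out = (att @ v).transpose(1, 2).reshape(b, s, -1)
        return self.o(out)


x = torch.randn(2, 16, 256)
print(DifferentialKVAttention()(x).shape)   # torch.Size([2, 16, 256])
```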
17. Bridging The Multi-Modality Gaps of Audio, Visual and Linguistic for Speech Enhancement
- Author
Lin, Meng-Ping, Hou, Jen-Cheng, Chen, Chia-Wei, Chien, Shao-Yi, Chen, Jun-Cheng, Lu, Xugang, and Tsao, Yu
- Subjects
Computer Science - Sound, Computer Science - Machine Learning, Computer Science - Multimedia, Electrical Engineering and Systems Science - Audio and Speech Processing
- Abstract
Speech Enhancement (SE) aims to improve the quality of noisy speech. It has been shown that additional visual cues can further improve performance. Given that speech communication involves audio, visual, and linguistic modalities, it is natural to expect another performance boost by incorporating linguistic information. However, bridging the modality gaps to efficiently incorporate linguistic information, along with audio and visual modalities during knowledge transfer, is a challenging task. In this paper, we propose a novel multi-modality learning framework for SE. In the model framework, a state-of-the-art diffusion model backbone is utilized for Audio-Visual Speech Enhancement (AVSE) modeling, where both audio and visual information are directly captured by microphones and video cameras. Based on this AVSE, the linguistic modality employs a pre-trained language model (PLM) to transfer linguistic knowledge to the visual acoustic modality through a process termed Cross-Modal Knowledge Transfer (CMKT) during AVSE model training. After training, linguistic knowledge is assumed to be encoded in the feature processing of the AVSE model through CMKT, and the PLM is not involved during the inference stage. We carry out SE experiments to evaluate the proposed model framework. Experimental results demonstrate that our proposed AVSE system significantly enhances speech quality and reduces generative artifacts, such as phonetic confusion, compared to the state-of-the-art. Moreover, our visualization results demonstrate that our Cross-Modal Knowledge Transfer method further improves the generated speech quality of our AVSE system. These findings not only suggest that Diffusion Model-based techniques hold promise for advancing the state-of-the-art in AVSE but also justify the effectiveness of incorporating linguistic information to improve the performance of Diffusion-based AVSE systems. [A toy sketch of a CMKT-style training loss follows this record.]
- Published
- 2025
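As a rough illustration of the CMKT training setup described above, the sketch below combines an ordinary enhancement loss with a feature-alignment term that pulls projected audio-visual features toward frame-aligned PLM embeddings; the projection head and the PLM are only needed at training time. The specific losses, shapes, and the linear projection are assumptions, not the paper's objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cmkt_training_loss(enhanced: torch.Tensor,
                       clean: torch.Tensor,
                       av_feats: torch.Tensor,
                       plm_feats: torch.Tensor,
                       proj: nn.Module,
                       alpha: float = 0.1) -> torch.Tensor:
    """Toy training objective in the spirit of CMKT: the usual enhancement
    loss plus a term pulling audio-visual features toward text features.

    enhanced, clean : (batch, time) waveforms or spectrogram frames
    av_feats        : (batch, frames, d_av) intermediate AVSE features
    plm_feats       : (batch, frames, d_text) aligned PLM embeddings (teacher)
    proj            : small head mapping d_av -> d_text, dropped at inference
    """
    se_loss = F.l1_loss(enhanced, clean)
    align = 1.0 - F.cosine_similarity(proj(av_feats), plm_feats, dim=-1).mean()
    return se_loss + alpha * align


# Shapes only; real features would come from the AVSE backbone and a PLM.
proj = nn.Linear(128, 768)
loss = cmkt_training_loss(enhanced=torch.randn(4, 16000),
                          clean=torch.randn(4, 16000),
                          av_feats=torch.randn(4, 50, 128),
                          plm_feats=torch.randn(4, 50, 768),
                          proj=proj)
loss.backward()   # the PLM itself is frozen / absent at inference time
print(float(loss))
```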
18. Accelerating Discovery of Solid-State Thin-Film Metal Dealloying for 3D Nanoarchitecture Materials Design through Laser Thermal Gradient Treatment
- Author
Chung, Cheng-Chu, Li, Ruipeng, Veith, Gabriel M., Zhang, Honghu, Camino, Fernando, Lu, Ming, Tiwale, Nikhil, Zhang, Sheng, Yager, Kevin, and Chen-Wiegart, Yu-chen Karen
- Subjects
Condensed Matter - Materials Science
- Abstract
Thin-film solid-state metal dealloying (thin-film SSMD) is a promising method for fabricating nanostructures with controlled morphology and efficiency, offering advantages over conventional bulk materials processing methods for integration into practical applications. Although machine learning (ML) has facilitated the design of dealloying systems, the selection of key thermal treatment parameters for nanostructure formation remains largely unknown and dependent on experimental trial and error. To overcome this challenge, a workflow enabling high-throughput characterization of thermal treatment parameters while probing local nanostructures of thin-film samples is needed. In this work, a laser-based thermal treatment is demonstrated to create temperature gradients on single thin-film samples of Nb-Al/Sc and Nb-Al/Cu. This continuous thermal space enables observation of dealloying transitions and the resulting nanostructures of interest. Through synchrotron X-ray multimodal and high-throughput characterization, critical transitions and nanostructures can be rapidly captured and subsequently verified using electron microscopy. The key temperatures driving chemical reactions and morphological evolutions are clearly identified within this framework. While the oxidation process may contribute to nanostructure formation during thin-film treatment, the dealloying process at the dealloying front involves interactions solely between the dealloying elements, highlighting the availability and viability of the selected systems. This approach enables efficient exploration of the dealloying process and validation of ML predictions, thereby accelerating the discovery of thin-film SSMD systems with targeted nanostructures., Comment: The main content contains 6 figures within 25 pages. The supporting information includes 5 figures within 5 pages
- Published
- 2025
19. UniRestore: Unified Perceptual and Task-Oriented Image Restoration Model Using Diffusion Prior
- Author
Chen, I-Hsiang, Chen, Wei-Ting, Liu, Yu-Wei, Chiang, Yuan-Chun, Kuo, Sy-Yen, and Yang, Ming-Hsuan
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Machine Learning
- Abstract
Image restoration aims to recover content from inputs degraded by various factors, such as adverse weather, blur, and noise. Perceptual Image Restoration (PIR) methods improve visual quality but often do not support downstream tasks effectively. On the other hand, Task-oriented Image Restoration (TIR) methods focus on enhancing image utility for high-level vision tasks, sometimes compromising visual quality. This paper introduces UniRestore, a unified image restoration model that bridges the gap between PIR and TIR by using a diffusion prior. The diffusion prior is designed to generate images that align with human visual quality preferences, but these images are often unsuitable for TIR scenarios. To solve this limitation, UniRestore utilizes encoder features from an autoencoder to adapt the diffusion prior to specific tasks. We propose a Complementary Feature Restoration Module (CFRM) to reconstruct degraded encoder features and a Task Feature Adapter (TFA) module to facilitate adaptive feature fusion in the decoder. This design allows UniRestore to optimize images for both human perception and downstream task requirements, addressing discrepancies between visual quality and functional needs. Integrating these modules also enhances UniRestore's adaptability and efficiency across diverse tasks. Extensive experiments demonstrate the superior performance of UniRestore in both PIR and TIR scenarios., Comment: 11 pages, 6 figures
- Published
- 2025
20. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
- Author
DeepSeek-AI, Guo, Daya, Yang, Dejian, Zhang, Haowei, Song, Junxiao, Zhang, Ruoyu, Xu, Runxin, Zhu, Qihao, Ma, Shirong, Wang, Peiyi, Bi, Xiao, Zhang, Xiaokang, Yu, Xingkai, Wu, Yu, Wu, Z. F., Gou, Zhibin, Shao, Zhihong, Li, Zhuoshu, Gao, Ziyi, Liu, Aixin, Xue, Bing, Wang, Bingxuan, Wu, Bochao, Feng, Bei, Lu, Chengda, Zhao, Chenggang, Deng, Chengqi, Zhang, Chenyu, Ruan, Chong, Dai, Damai, Chen, Deli, Ji, Dongjie, Li, Erhang, Lin, Fangyun, Dai, Fucong, Luo, Fuli, Hao, Guangbo, Chen, Guanting, Li, Guowei, Zhang, H., Bao, Han, Xu, Hanwei, Wang, Haocheng, Ding, Honghui, Xin, Huajian, Gao, Huazuo, Qu, Hui, Li, Hui, Guo, Jianzhong, Li, Jiashi, Wang, Jiawei, Chen, Jingchang, Yuan, Jingyang, Qiu, Junjie, Li, Junlong, Cai, J. L., Ni, Jiaqi, Liang, Jian, Chen, Jin, Dong, Kai, Hu, Kai, Gao, Kaige, Guan, Kang, Huang, Kexin, Yu, Kuai, Wang, Lean, Zhang, Lecong, Zhao, Liang, Wang, Litong, Zhang, Liyue, Xu, Lei, Xia, Leyi, Zhang, Mingchuan, Zhang, Minghua, Tang, Minghui, Li, Meng, Wang, Miaojun, Li, Mingming, Tian, Ning, Huang, Panpan, Zhang, Peng, Wang, Qiancheng, Chen, Qinyu, Du, Qiushi, Ge, Ruiqi, Zhang, Ruisong, Pan, Ruizhe, Wang, Runji, Chen, R. J., Jin, R. L., Chen, Ruyi, Lu, Shanghao, Zhou, Shangyan, Chen, Shanhuang, Ye, Shengfeng, Wang, Shiyu, Yu, Shuiping, Zhou, Shunfeng, Pan, Shuting, Li, S. S., Zhou, Shuang, Wu, Shaoqing, Yun, Tao, Pei, Tian, Sun, Tianyu, Wang, T., Zeng, Wangding, Zhao, Wanjia, Liu, Wen, Liang, Wenfeng, Gao, Wenjun, Yu, Wenqin, Zhang, Wentao, Xiao, W. L., An, Wei, Liu, Xiaodong, Wang, Xiaohan, Chen, Xiaokang, Nie, Xiaotao, Cheng, Xin, Liu, Xin, Xie, Xin, Liu, Xingchao, Yang, Xinyu, Li, Xinyuan, Su, Xuecheng, Lin, Xuheng, Li, X. Q., Jin, Xiangyue, Shen, Xiaojin, Chen, Xiaosha, Sun, Xiaowen, Wang, Xiaoxiang, Song, Xinnan, Zhou, Xinyi, Wang, Xianzu, Shan, Xinxia, Li, Y. K., Wang, Y. Q., Wei, Y. X., Zhang, Yang, Xu, Yanhong, Li, Yao, Zhao, Yao, Sun, Yaofeng, Wang, Yaohui, Yu, Yi, Zhang, Yichao, Shi, Yifan, Xiong, Yiliang, He, Ying, Piao, Yishi, Wang, Yisong, Tan, Yixuan, Ma, Yiyang, Liu, Yiyuan, Guo, Yongqiang, Ou, Yuan, Wang, Yuduan, Gong, Yue, Zou, Yuheng, He, Yujia, Xiong, Yunfan, Luo, Yuxiang, You, Yuxiang, Liu, Yuxuan, Zhou, Yuyang, Zhu, Y. X., Huang, Yanping, Li, Yaohui, Zheng, Yi, Zhu, Yuchen, Ma, Yunxian, Tang, Ying, Zha, Yukun, Yan, Yuting, Ren, Z. Z., Ren, Zehui, Sha, Zhangli, Fu, Zhe, Xu, Zhean, Xie, Zhenda, Zhang, Zhengyan, Hao, Zhewen, Ma, Zhicheng, Yan, Zhigang, Wu, Zhiyu, Gu, Zihui, Zhu, Zijia, Liu, Zijun, Li, Zilin, Xie, Ziwei, Song, Ziyang, Pan, Zizheng, Huang, Zhen, Xu, Zhipeng, Zhang, Zhongyu, and Zhang, Zhen more...
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.
- Published
- 2025
21. Observation of the $\Lambda_b^0 \to J/\psi \Xi^- K^+$ and $\Xi_b^0 \to J/\psi \Xi^- \pi^+$ decays
- Author
LHCb collaboration, Aaij, R., Abdelmotteleb, A. S. W., Beteta, C. Abellan, Abudinén, F., Ackernley, T., Adefisoye, A. A., Adeva, B., Adinolfi, M., Adlarson, P., Agapopoulou, C., Aidala, C. A., Ajaltouni, Z., Akar, S., Akiba, K., Albicocco, P., Albrecht, J., Alessio, F., Alexander, M., Aliouche, Z., Cartelle, P. Alvarez, Amalric, R., Amato, S., Amey, J. L., Amhis, Y., An, L., Anderlini, L., Andersson, M., Andreianov, A., Andreola, P., Andreotti, M., Andreou, D., Anelli, A., Ao, D., Archilli, F., Argenton, M., Cuendis, S. Arguedas, Artamonov, A., Artuso, M., Aslanides, E., Da Silva, R. Ataíde, Atzeni, M., Audurier, B., Bacher, D., Perea, I. Bachiller, Bachmann, S., Bachmayer, M., Back, J. J., Rodriguez, P. Baladron, Balagura, V., Balboni, A., Baldini, W., Balzani, L., Bao, H., Leite, J. Baptista de Souza, Pretel, C. Barbero, Barbetti, M., Barbosa, I. R., Barlow, R. J., Barnyakov, M., Barsuk, S., Barter, W., Bartz, J., Basels, J. M., Bashir, S., Bassi, G., Batsukh, B., Battista, P. B., Bay, A., Beck, A., Becker, M., Bedeschi, F., Bediaga, I. B., Behling, N. A., Belin, S., Belous, K., Belov, I., Belyaev, I., Benane, G., Bencivenni, G., Ben-Haim, E., Berezhnoy, A., Bernet, R., Andres, S. Bernet, Bertolin, A., Betancourt, C., Betti, F., Bex, J., Bezshyiko, Ia., Bhom, J., Bieker, M. S., Biesuz, N. V., Billoir, P., Biolchini, A., Birch, M., Bishop, F. C. R., Bitadze, A., Bizzeti, A., Blake, T., Blanc, F., Blank, J. E., Blusk, S., Bocharnikov, V., Boelhauve, J. A., Garcia, O. Boente, Boettcher, T., Bohare, A., Boldyrev, A., Bolognani, C. S., Bolzonella, R., Bonacci, R. B., Bondar, N., Bordelius, A., Borgato, F., Borghi, S., Borsato, M., Borsuk, J. T., Bottalico, E., Bouchiba, S. A., Bovill, M., Bowcock, T. J. V., Boyer, A., Bozzi, C., Brandenburg, J. D., Rodriguez, A. Brea, Breer, N., Brodzicka, J., Gonzalo, A. Brossa, Brown, J., Brundu, D., Buchanan, E., Buonincontri, L., Marcos, M. Burgos, Burke, A. T., Burr, C., Butter, J. S., Buytaert, J., Byczynski, W., Cadeddu, S., Cai, H., Caillet, A., Calabrese, R., Ramirez, S. Calderon, Calefice, L., Cali, S., Calvi, M., Gomez, M. Calvo, Magalhaes, P. Camargo, Bouzas, J. I. Cambon, Campana, P., Perez, D. H. Campora, Quezada, A. F. Campoverde, Capelli, S., Capriotti, L., Caravaca-Mora, R., Carbone, A., Salgado, L. Carcedo, Cardinale, R., Cardini, A., Carniti, P., Carus, L., Vidal, A. Casais, Caspary, R., Casse, G., Cattaneo, M., Cavallero, G., Cavallini, V., Celani, S., Cesare, S., Chadwick, A. J., Chahrour, I., Charles, M., Charpentier, Ph., Chatzianagnostou, E., Chefdeville, M., Chen, C., Chen, S., Chen, Z., Chernov, A., Chernyshenko, S., Chiotopoulos, X., Chobanova, V., Chrzaszcz, M., Chubykin, A., Chulikov, V., Ciambrone, P., Vidal, X. Cid, Ciezarek, G., Cifra, P., Clarke, P. E. L., Clemencic, M., Cliff, H. V., Closier, J., Toapaxi, C. Cocha, Coco, V., Cogan, J., Cogneras, E., Cojocariu, L., Collaviti, S., Collins, P., Colombo, T., Colonna, M., Comerma-Montells, A., Congedo, L., Contu, A., Cooke, N., Corredoira, I., Correia, A., Corti, G., Meldrum, J. Cottee, Couturier, B., Craik, D. C., Torres, M. Cruz, Rivera, E. Curras, Currie, R., Da Silva, C. L., Dadabaev, S., Dai, L., Dai, X., Dall'Occo, E., Dalseno, J., D'Ambrosio, C., Daniel, J., Danilina, A., d'Argent, P., Darze, G., Davidson, A., Davies, J. E., Francisco, O. De Aguiar, De Angelis, C., De Benedetti, F., de Boer, J., De Bruyn, K., De Capua, S., De Cian, M., Da Graca, U. De Freitas Carneiro, De Lucia, E., De Miranda, J. M., De Paula, L., De Serio, M., De Simone, P., De Vellis, F., de Vries, J. 
A., Debernardis, F., Decamp, D., Dedu, V., Dekkers, S., Del Buono, L., Delaney, B., Dembinski, H. -P., Deng, J., Denysenko, V., Deschamps, O., Dettori, F., Dey, B., Di Nezza, P., Diachkov, I., Didenko, S., Ding, S., Dittmann, L., Dobishuk, V., Docheva, A. D., Dong, C., Donohoe, A. M., Dordei, F., Reis, A. C. dos, Dowling, A. D., Duan, W., Duda, P., Dudek, M. W., Dufour, L., Duk, V., Durante, P., Duras, M. M., Durham, J. M., Durmus, O. D., Dziurda, A., Dzyuba, A., Easo, S., Eckstein, E., Egede, U., Egorychev, A., Egorychev, V., Eisenhardt, S., Ejopu, E., Eklund, L., Elashri, M., Ellbracht, J., Ely, S., Ene, A., Eschle, J., Esen, S., Evans, T., Fabiano, F., Falcao, L. N., Fan, Y., Fang, B., Fantini, L., Faria, M., Farmer, K., Fazzini, D., Felkowski, L., Feng, M., Feo, M., Casani, A. Fernandez, Gomez, M. Fernandez, Fernez, A. D., Ferrari, F., Rodrigues, F. Ferreira, Ferrillo, M., Ferro-Luzzi, M., Filippov, S., Fini, R. A., Fiorini, M., Firlej, M., Fischer, K. L., Fitzgerald, D. S., Fitzpatrick, C., Fiutowski, T., Fleuret, F., Fontana, M., Foreman, L. F., Forty, R., Foulds-Holt, D., Lima, V. Franco, Sevilla, M. Franco, Frank, M., Franzoso, E., Frau, G., Frei, C., Friday, D. A., Fu, J., Führing, Q., Fujii, Y., Fulghesu, T., Gabriel, E., Galati, G., Galati, M. D., Torreira, A. Gallas, Galli, D., Gambetta, S., Gandelman, M., Gandini, P., Ganie, B., Gao, H., Gao, R., Gao, T. Q., Gao, Y., Martin, L. M. Garcia, Moreno, P. Garcia, Pardiñas, J. García, Gardner, P., Garg, K. G., Garrido, L., Gaspar, C., Gerken, L. L., Gersabeck, E., Gersabeck, M., Gershon, T., Ghizzo, S., Ghorbanimoghaddam, Z., Giambastiani, L., Giasemis, F. I., Gibson, V., Giemza, H. K., Gilman, A. L., Giovannetti, M., Gioventù, A., Girardey, L., Giugliano, C., Giza, M. A., Gkougkousis, E. L., Glaser, F. C., Gligorov, V. V., Göbel, C., Golinka-Bezshyyko, L., Golobardes, E., Golubkov, D., Golutvin, A., Fernandez, S. Gomez, Gomulka, W., Abrantes, F. Goncalves, Goncerz, M., Gong, G., Gooding, J. A., Gorelov, I. V., Gotti, C., Govorkova, E., Grabowski, J. P., Cardoso, L. A. Granado, Graugés, E., Graverini, E., Grazette, L., Graziani, G., Grecu, A. T., Greeven, L. M., Grieser, N. A., Grillo, L., Gromov, S., Gu, C., Guarise, M., Guerry, L., Guliaeva, V., Günther, P. A., Guseinov, A. -K., Gushchin, E., Guz, Y., Gys, T., Habermann, K., Hadavizadeh, T., Hadjivasiliou, C., Haefeli, G., Haen, C., Hallett, G., Halvorsen, M. M., Hamilton, P. M., Hammerich, J., Han, Q., Han, X., Hansmann-Menzemer, S., Hao, L., Harnew, N., Harris, T. H., Hartmann, M., Hashmi, S., He, J., Hemmer, F., Henderson, C., Henderson, R. D. L., Hennequin, A. M., Hennessy, K., Henry, L., Herd, J., Gascon, P. Herrero, Heuel, J., Hicheur, A., Mendizabal, G. Hijano, Horswill, J., Hou, R., Hou, Y., Howarth, N., Hu, J., Hu, W., Hu, X., Huang, W., Hulsbergen, W., Hunter, R. J., Hushchyn, M., Hutchcroft, D., Idzik, M., Ilin, D., Ilten, P., Inglessi, A., Iniukhin, A., Ishteev, A., Ivshin, K., Jacobsson, R., Jage, H., Elles, S. J. Jaimes, Jakobsen, S., Jans, E., Jashal, B. K., Jawahery, A., Jevtic, V., Jiang, E., Jiang, X., Jiang, Y., Jiang, Y. J., John, M., Rajan, A. John Rubesh, Johnson, D., Jones, C. R., Jones, T. P., Joshi, S., Jost, B., Castella, J. Juan, Jurik, N., Juszczak, I., Kaminaris, D., Kandybei, S., Kane, M., Kang, Y., Kar, C., Karacson, M., Karpenkov, D., Kauniskangas, A., Kautz, J. W., Kazanecki, M. K., Keizer, F., Kenzie, M., Ketel, T., Khanji, B., Kharisova, A., Kholodenko, S., Khreich, G., Kirn, T., Kirsebom, V. 
S., Kitouni, O., Klaver, S., Kleijne, N., Klimaszewski, K., Kmiec, M. R., Koliiev, S., Kolk, L., Konoplyannikov, A., Kopciewicz, P., Koppenburg, P., Korolev, M., Kostiuk, I., Kot, O., Kotriakhova, S., Kozachuk, A., Kravchenko, P., Kravchuk, L., Kreps, M., Krokovny, P., Krupa, W., Krzemien, W., Kshyvanskyi, O., Kubis, S., Kucharczyk, M., Kudryavtsev, V., Kulikova, E., Kupsc, A., Kutsenko, B. K., Lacarrere, D., Gonzalez, P. Laguarta, Lai, A., Lampis, A., Lancierini, D., Gomez, C. Landesa, Lane, J. J., Lane, R., Lanfranchi, G., Langenbruch, C., Langer, J., Lantwin, O., Latham, T., Lazzari, F., Lazzeroni, C., Gac, R. Le, Lee, H., Lefèvre, R., Leflat, A., Legotin, S., Lehuraux, M., Cid, E. Lemos, Leroy, O., Lesiak, T., Lesser, E. D., Leverington, B., Li, A., Li, C., Li, H., Li, K., Li, L., Li, M., Li, P., Li, P. -R., Li, Q., Li, S., Li, T., Li, Y., Lian, Z., Liang, X., Libralon, S., Lin, C., Lin, T., Lindner, R., Linton, H., Lisovskyi, V., Litvinov, R., Liu, F. L., Liu, G., Liu, K., Liu, S., Liu, W., Liu, Y., Liu, Y. L., Ordonez, G. Loachamin, Salvia, A. Lobo, Loi, A., Long, T., Lopes, J. H., Huertas, A. Lopez, Soliño, S. López, Lu, Q., Lucarelli, C., Lucchesi, D., Martinez, M. Lucio, Lukashenko, V., Luo, Y., Lupato, A., Luppi, E., Lynch, K., Lyu, X. -R., Ma, G. M., Maccolini, S., Machefert, F., Maciuc, F., Mack, B., Mackay, I., Mackey, L. M., Mohan, L. R. Madhan, Madurai, M. J., Maevskiy, A., Magdalinski, D., Maisuzenko, D., Malczewski, J. J., Malde, S., Malentacca, L., Malinin, A., Maltsev, T., Manca, G., Mancinelli, G., Mancuso, C., Escalero, R. Manera, Manganella, F. M., Manuzzi, D., Marangotto, D., Marchand, J. F., Marchevski, R., Marconi, U., Mariani, E., Mariani, S., Benito, C. Marin, Marks, J., Marshall, A. M., Martel, L., Martelli, G., Martellotti, G., Martinazzoli, L., Martinelli, M., Gomez, D. Martinez, Santos, D. Martinez, Vidal, F. Martinez, Granollers, A. Martorell i, Massafferri, A., Matev, R., Mathad, A., Matiunin, V., Matteuzzi, C., Mattioli, K. R., Mauri, A., Maurice, E., Mauricio, J., Mayencourt, P., de Cos, J. Mazorra, Mazurek, M., McCann, M., McGrath, T. H., McHugh, N. T., McNab, A., McNulty, R., Meadows, B., Meier, G., Melnychuk, D., Meng, F. M., Merk, M., Merli, A., Garcia, L. Meyer, Miao, D., Miao, H., Mikhasenko, M., Milanes, D. A., Minotti, A., Minucci, E., Miralles, T., Mitreska, B., Mitzel, D. S., Modak, A., Moeser, L., Mohammed, R. A., Moise, R. D., Mokhnenko, S., Cardenas, E. F. Molina, Mombächer, T., Monk, M., Monteil, S., Gomez, A. Morcillo, Morello, G., Morello, M. J., Morgenthaler, M. P., Moron, J., Morren, W., Morris, A. B., Morris, A. G., Mountain, R., Mu, H., Mu, Z. M., Muhammad, E., Muheim, F., Mulder, M., Müller, K., Muñoz-Rojas, F., Murta, R., Naik, P., Nakada, T., Nandakumar, R., Nanut, T., Nasteva, I., Needham, M., Neri, N., Neubert, S., Neufeld, N., Neustroev, P., Nicolini, J., Nicotra, D., Niel, E. M., Nikitin, N., Niu, Q., Nogarolli, P., Nogga, P., Normand, C., Fernandez, J. Novoa, Nowak, G., Nunez, C., Nur, H. N., Oblakowska-Mucha, A., Obraztsov, V., Oeser, T., Okamura, S., Okhotnikov, A., Okhrimenko, O., Oldeman, R., Oliva, F., Olocco, M., Onderwater, C. J. G., O'Neil, R. H., Osthues, D., Goicochea, J. M. Otalora, Owen, P., Oyanguren, A., Ozcelik, O., Paciolla, F., Padee, A., Padeken, K. O., Pagare, B., Pais, P. R., Pajero, T., Palano, A., Palutan, M., Pan, X., Panshin, G., Paolucci, L., Papanestis, A., Pappagallo, M., Pappalardo, L. 
L., Pappenheimer, C., Parkes, C., Parmar, D., Passalacqua, B., Passaleva, G., Passaro, D., Pastore, A., Patel, M., Patoc, J., Patrignani, C., Paul, A., Pawley, C. J., Pellegrino, A., Peng, J., Altarelli, M. Pepe, Perazzini, S., Pereima, D., Da Costa, H. Pereira, Castro, A. Pereiro, Perret, P., Perrevoort, A., Perro, A., Peters, M. J., Petridis, K., Petrolini, A., Pfaller, J. P., Pham, H., Pica, L., Piccini, M., Piccolo, L., Pietrzyk, B., Pietrzyk, G., Pilato, R. N., Pinci, D., Pisani, F., Pizzichemi, M., Placinta, V., Casasus, M. Plo, Poeschl, T., Polci, F., Lener, M. Poli, Poluektov, A., Polukhina, N., Polyakov, I., Polycarpo, E., Ponce, S., Popov, D., Poslavskii, S., Prasanth, K., Prouve, C., Provenzano, D., Pugatch, V., Punzi, G., Qasim, S., Qian, Q. Q., Qian, W., Qin, N., Qu, S., Quagliani, R., Trejo, R. I. Rabadan, Rademacker, J. H., Rama, M., García, M. Ramírez, De Oliveira, V. Ramos, Pernas, M. Ramos, Rangel, M. S., Ratnikov, F., Raven, G., De Miguel, M. Rebollo, Redi, F., Reich, J., Reiss, F., Ren, Z., Resmi, P. K., Galvez, M. Ribalda, Ribatti, R., Ricart, G. R., Riccardi, D., Ricciardi, S., Richardson, K., Richardson-Slipper, M., Rinnert, K., Robbe, P., Robertson, G., Rodrigues, E., Alvarez, A. Rodriguez, Fernandez, E. Rodriguez, Lopez, J. A. Rodriguez, Rodriguez, E. Rodriguez, Roensch, J., Rogachev, A., Rogovskiy, A., Rolf, D. L., Roloff, P., Romanovskiy, V., Vidal, A. Romero, Romolini, G., Ronchetti, F., Rong, T., Rotondo, M., Roy, S. R., Rudolph, M. S., Diaz, M. Ruiz, Fernandez, R. A. Ruiz, Vidal, J. Ruiz, Ryzka, J., Saavedra-Arias, J. J., Silva, J. J. Saborido, Sadek, R., Sagidova, N., Sahoo, D., Sahoo, N., Saitta, B., Salomoni, M., Sanderswood, I., Santacesaria, R., Rios, C. Santamarina, Santimaria, M., Santoro, L., Santovetti, E., Saputi, A., Saranin, D., Sarnatskiy, A., Sarpis, G., Sarpis, M., Satriano, C., Satta, A., Saur, M., Savrina, D., Sazak, H., Sborzacchi, F., Smead, L. G. Scantlebury, Scarabotto, A., Schael, S., Scherl, S., Schiller, M., Schindler, H., Schmelling, M., Schmidt, B., Schmitt, S., Schmitz, H., Schneider, O., Schopper, A., Schulte, N., Schulte, S., Schune, M. H., Schwemmer, R., Schwering, G., Sciascia, B., Sciuccati, A., Segal, I., Sellam, S., Semennikov, A., Senger, T., Soares, M. Senghi, Sergi, A., Serra, N., Sestini, L., Seuthe, A., Shang, Y., Shangase, D. M., Shapkin, M., Sharma, R. S., Shchemerov, I., Shchutska, L., Shears, T., Shekhtman, L., Shen, Z., Sheng, S., Shevchenko, V., Shi, B., Shi, Q., Shimizu, Y., Shmanin, E., Shorkin, R., Shupperd, J. D., Coutinho, R. Silva, Simi, G., Simone, S., Skidmore, N., Skwarnicki, T., Slater, M. W., Smallwood, J. C., Smith, E., Smith, K., Smith, M., Snoch, A., Lavra, L. Soares, Sokoloff, M. D., Soler, F. J. P., Solomin, A., Solovev, A., Solovyev, I., Sommerfeld, N. S., Song, R., Song, Y., Song, Y. S., De Almeida, F. L. Souza, De Paula, B. Souza, Norella, E. Spadaro, Spedicato, E., Speer, J. G., Spiridenkov, E., Spradlin, P., Sriskaran, V., Stagni, F., Stahl, M., Stahl, S., Stanislaus, S., Stefaniak, M., Stein, E. N., Steinkamp, O., Stenyakin, O., Stevens, H., Strekalina, D., Su, Y., Suljik, F., Sun, J., Sun, L., Sundfeld, D., Sutcliffe, W., Swallow, P. N., Swientek, K., Swystun, F., Szabelski, A., Szumlak, T., Tan, Y., Tang, Y., Tat, M. D., Terentev, A., Terzuoli, F., Teubert, F., Thomas, E., Thompson, D. J. D., Tilquin, H., Tisserand, V., T'Jampens, S., Tobin, M., Tomassetti, L., Tonani, G., Tong, X., Tork, T., Machado, D. Torres, Toscano, L., Tou, D. Y., Trippl, C., Tuci, G., Tuning, N., Uecker, L. 
H., Ukleja, A., Unverzagt, D. J., Urbach, B., Usachov, A., Ustyuzhanin, A., Uwer, U., Vagnoni, V., Cadenas, V. Valcarce, Valenti, G., Canudas, N. Valls, van Eldik, J., Van Hecke, H., van Herwijnen, E., Van Hulse, C. B., Van Laak, R., van Veghel, M., Vasquez, G., Gomez, R. Vazquez, Regueiro, P. Vazquez, Sierra, C. Vázquez, Vecchi, S., Velthuis, J. J., Veltri, M., Venkateswaran, A., Verdoglia, M., Vesterinen, M., Benet, D. Vico, Villalba, P. Vidrier, Diaz, M. Vieites, Vilasis-Cardona, X., Figueras, E. Vilella, Villa, A., Vincent, P., Volle, F. C., Bruch, D. vom, Voropaev, N., Vos, K., Vrahas, C., Wagner, J., Walsh, J., Walton, E. J., Wan, G., Wang, C., Wang, G., Wang, H., Wang, J., Wang, M., Wang, N. W., Wang, R., Wang, X., Wang, X. W., Wang, Y., Wang, Y. W., Wang, Z., Ward, J. A., Waterlaat, M., Watson, N. K., Websdale, D., Wei, Y., Wendel, J., Westhenry, B. D. C., White, C., Whitehead, M., Whiter, E., Wiederhold, A. R., Wiedner, D., Wilkinson, G., Wilkinson, M. K., Williams, M., Williams, M. J., Williams, M. R. J., Williams, R., Williams, Z., Wilson, F. F., Winn, M., Wislicki, W., Witek, M., Witola, L., Wormser, G., Wotton, S. A., Wu, H., Wu, J., Wu, X., Wu, Y., Wu, Z., Wyllie, K., Xian, S., Xiang, Z., Xie, Y., Xing, T. X., Xu, A., Xu, L., Xu, M., Xu, Z., Yang, K., Yang, S., Yang, X., Yang, Y., Yang, Z., Yeroshenko, V., Yeung, H., Yin, H., Yin, X., Yu, C. Y., Yu, J., Yuan, X., Yuan, Y, Zaffaroni, E., Zavertyaev, M., Zdybal, M., Zenesini, F., Zeng, C., Zeng, M., Zhang, C., Zhang, D., Zhang, J., Zhang, L., Zhang, S., Zhang, Y., Zhang, Y. Z., Zhang, Z., Zhao, Y., Zhelezov, A., Zheng, S. Z., Zheng, X. Z., Zheng, Y., Zhou, T., Zhou, X., Zhou, Y., Zhovkovska, V., Zhu, L. Z., Zhu, X., Zhukov, V., Zhuo, J., Zou, Q., Zuliani, D., and Zunica, G. more...
- Subjects
High Energy Physics - Experiment - Abstract
The first observation of the $\Xi_b^0 \to J/\psi \Xi^- \pi^+$ decay and the most precise measurement of the branching fraction of the $\Lambda_b^0 \to J/\psi \Xi^- K^+$ decay are reported, using proton-proton collision data from the LHCb experiment collected in 2016--2018 at a centre-of-mass energy of 13~TeV, corresponding to an integrated luminosity of 5.4~fb$^{-1}$. Using the $\Lambda_b^0 \to J/\psi \Lambda$ and $\Xi_b^0 \to J/\psi \Xi^-$ decays as normalisation channels, the ratios of branching fractions are measured to be: \[ \frac{\mathcal{B}(\Lambda_b^0 \to J/\psi \Xi^- K^+)}{\mathcal{B}(\Lambda_b^0 \to J/\psi \Lambda)} = (1.17 \pm 0.14 \pm 0.08)\times 10^{-2} \, , \] \[ \frac{\mathcal{B}(\Xi_b^0 \to J/\psi \Xi^- \pi^+)}{\mathcal{B}(\Xi_b^0 \to J/\psi \Xi^-)} = (11.9 \pm 1.4 \pm 0.6)\times 10^{-2}\, , \] where the first uncertainty is statistical and the second systematic., Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://lbfence.cern.ch/alcm/public/analysis/full-details/3479/ (LHCb public pages) more...
- Published
- 2025
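A note on how ratios of this kind are commonly constructed (a schematic relation, not necessarily the collaboration's exact procedure): the branching-fraction ratio follows from the ratio of fitted signal yields, corrected by the ratio of total selection efficiencies, \[ \frac{\mathcal{B}(\Xi_b^0 \to J/\psi \Xi^- \pi^+)}{\mathcal{B}(\Xi_b^0 \to J/\psi \Xi^-)} = \frac{N_{J/\psi \Xi^- \pi^+}}{N_{J/\psi \Xi^-}} \times \frac{\varepsilon_{J/\psi \Xi^-}}{\varepsilon_{J/\psi \Xi^- \pi^+}} \, , \] where $N$ denotes a fitted signal yield and $\varepsilon$ the corresponding total efficiency; the statistical uncertainty then enters mainly through the yields, and the systematic uncertainty through the efficiency ratio and the treatment of the normalisation channel.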
22. Cost Optimization for Serverless Edge Computing with Budget Constraints using Deep Reinforcement Learning
- Author
-
Chen, Chen, Guan, Peiyuan, Chen, Ziru, Taherkordi, Amir, Hou, Fen, and Cai, Lin X.
- Subjects
Computer Science - Networking and Internet Architecture - Abstract
Serverless computing adopts a pay-as-you-go billing model where applications are executed in stateless and short-lived containers triggered by events, resulting in a reduction of monetary costs and resource utilization. However, existing platforms do not provide an upper bound for the billing model, which makes the overall cost unpredictable and precludes many organizations from managing their budgets. Due to the diverse ranges of serverless functions and the heterogeneous capacity of edge devices, it is challenging to obtain near-optimal solutions for deployment cost in polynomial time. In this paper, we investigate the function scheduling problem with a budget constraint for serverless computing in wireless networks, where users and IoT devices send requests to edge nodes to improve the latency they perceive. We propose two online scheduling algorithms based on reinforcement learning, incorporating several important characteristics of serverless functions. Via extensive simulations, we justify the superiority of the proposed algorithms by comparing them with an ILP solver (Midaco). Our results indicate that the proposed algorithms efficiently approximate the results of Midaco within a factor of 1.03 while requiring a decision-making time five orders of magnitude shorter than that of Midaco., Comment: This paper has been accepted by IEEE ICC 2025 more...
- Published
- 2025
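To make the scheduling setting in entry 22 concrete, the following is a minimal sketch of online, budget-constrained request placement with a tabular RL policy; the node names, costs, latencies, and reward shaping are all toy assumptions, and the paper's two algorithms are more sophisticated than this.

import random
from collections import defaultdict

# Hypothetical edge nodes with different latency/cost profiles (illustrative values).
NODES = {
    "edge-0": {"latency": 10.0, "cost": 0.5},
    "edge-1": {"latency": 25.0, "cost": 0.2},
    "edge-2": {"latency": 40.0, "cost": 0.1},
}
BUDGET = 20.0               # monetary budget available for one episode
EPS, ALPHA, GAMMA = 0.1, 0.2, 0.9

Q = defaultdict(float)      # Q[(remaining_budget_bucket, node)] -> value estimate

def bucket(b):
    # Coarse discretisation of the remaining budget so the table stays small.
    return int(b // 5)

def choose(state):
    # Epsilon-greedy action selection over the candidate edge nodes.
    if random.random() < EPS:
        return random.choice(list(NODES))
    return max(NODES, key=lambda n: Q[(state, n)])

def run_episode(n_requests=50):
    budget, total_latency = BUDGET, 0.0
    for _ in range(n_requests):
        state = bucket(budget)
        node = choose(state)
        spec = NODES[node]
        if spec["cost"] > budget:
            # Placement would exceed the budget: the request is dropped at a latency penalty.
            reward, latency = -1.0, 100.0
        else:
            budget -= spec["cost"]
            latency = spec["latency"]
            reward = -latency / 100.0      # lower latency -> higher reward
        total_latency += latency
        next_state = bucket(budget)
        best_next = max(Q[(next_state, n)] for n in NODES)
        # One-step Q-learning update.
        Q[(state, node)] += ALPHA * (reward + GAMMA * best_next - Q[(state, node)])
    return total_latency / n_requests

if __name__ == "__main__":
    for _ in range(200):
        avg = run_episode()
    print(f"average latency after training: {avg:.1f} ms")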
23. Quantum Emitters in Hexagonal Boron Nitride: Principles, Engineering and Applications
- Author
-
Mai, Thi Ngoc Anh, Hossain, Md Shakhawath, Nguyen, Nhat Minh, Chen, Yongliang, Chen, Chaohao, Xu, Xiaoxue, Trinh, Quang Thang, Dinh, Toan, and Tran, Toan Trong
- Subjects
Condensed Matter - Materials Science ,Physics - Optics ,Quantum Physics - Abstract
Solid-state quantum emitters, molecular-sized complexes releasing a single photon at a time, have garnered much attention owing to their use as a key building block in various quantum technologies. Among these, quantum emitters in hexagonal boron nitride (hBN) have emerged as front runners with superior attributes compared to other competing platforms. These attributes are attainable thanks to the robust, two-dimensional lattice of the material formed by the extremely strong B-N bonds. This review discusses the fundamental properties of quantum emitters in hBN and highlights recent progress in the field. The focus is on the fabrication and engineering of these quantum emitters facilitated by state-of-the-art equipment. Strategies to integrate the quantum emitters with dielectric and plasmonic cavities to enhance their optical properties are summarized. The latest developments in new classes of spin-active defects, their predicted structural configurations, and the proposed suitable quantum applications are examined. Despite the current challenges, quantum emitters in hBN have steadily become a promising platform for applications in quantum information science. more...
- Published
- 2025
24. MMVU: Measuring Expert-Level Multi-Discipline Video Understanding
- Author
-
Zhao, Yilun, Xie, Lujing, Zhang, Haowei, Gan, Guo, Long, Yitao, Hu, Zhiyuan, Hu, Tongyan, Chen, Weiyuan, Li, Chuhan, Song, Junyang, Xu, Zhijian, Wang, Chengye, Pan, Weifeng, Shangguan, Ziyao, Tang, Xiangru, Liang, Zhenwen, Liu, Yixin, Zhao, Chen, and Cohan, Arman more...
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language - Abstract
We introduce MMVU, a comprehensive expert-level, multi-discipline benchmark for evaluating foundation models in video understanding. MMVU includes 3,000 expert-annotated questions spanning 27 subjects across four core disciplines: Science, Healthcare, Humanities & Social Sciences, and Engineering. Compared to prior benchmarks, MMVU features three key advancements. First, it challenges models to apply domain-specific knowledge and perform expert-level reasoning to analyze specialized-domain videos, moving beyond the basic visual perception typically assessed in current video benchmarks. Second, each example is annotated by human experts from scratch. We implement strict data quality controls to ensure the high quality of the dataset. Finally, each example is enriched with expert-annotated reasoning rationales and relevant domain knowledge, facilitating in-depth analysis. We conduct an extensive evaluation of 32 frontier multimodal foundation models on MMVU. The latest System-2-capable models, o1 and Gemini 2.0 Flash Thinking, achieve the highest performance among the tested models. However, they still fall short of matching human expertise. Through in-depth error analyses and case studies, we offer actionable insights for future advancements in expert-level, knowledge-intensive video understanding for specialized domains. more...
- Published
- 2025
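As an illustration of how an expert-annotated, multiple-choice benchmark such as MMVU is typically scored, the sketch below computes per-discipline accuracy; the item fields and the model_answer stub are hypothetical and do not reflect the released MMVU schema or evaluation harness.

from collections import defaultdict

# Hypothetical item format; the real dataset schema may differ.
items = [
    {"discipline": "Science", "question": "...", "choices": ["A", "B", "C", "D"], "answer": "B"},
    {"discipline": "Engineering", "question": "...", "choices": ["A", "B", "C", "D"], "answer": "D"},
]

def model_answer(question, choices, video_path=None):
    """Placeholder for a multimodal model call; returns one of the choice labels."""
    return "A"

def evaluate(items):
    correct, total = defaultdict(int), defaultdict(int)
    for it in items:
        pred = model_answer(it["question"], it["choices"])
        total[it["discipline"]] += 1
        correct[it["discipline"]] += int(pred == it["answer"])
    # Per-discipline accuracy, as benchmarks of this kind usually report.
    return {d: correct[d] / total[d] for d in total}

print(evaluate(items))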
25. UI-TARS: Pioneering Automated GUI Interaction with Native Agents
- Author
-
Qin, Yujia, Ye, Yining, Fang, Junjie, Wang, Haoming, Liang, Shihao, Tian, Shizuo, Zhang, Junda, Li, Jiahao, Li, Yunxin, Huang, Shijue, Zhong, Wanjun, Li, Kuanye, Yang, Jiale, Miao, Yu, Lin, Woyu, Liu, Longxiang, Jiang, Xu, Ma, Qianli, Li, Jingyu, Xiao, Xiaojun, Cai, Kai, Li, Chuang, Zheng, Yaowei, Jin, Chaolin, Li, Chen, Zhou, Xiao, Wang, Minchao, Chen, Haoli, Li, Zhaojian, Yang, Haihua, Liu, Haifeng, Lin, Feng, Peng, Tao, Liu, Xin, and Shi, Guang more...
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Computation and Language ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Human-Computer Interaction - Abstract
This paper introduces UI-TARS, a native GUI agent model that solely perceives the screenshots as input and performs human-like interactions (e.g., keyboard and mouse operations). Unlike prevailing agent frameworks that depend on heavily wrapped commercial models (e.g., GPT-4o) with expert-crafted prompts and workflows, UI-TARS is an end-to-end model that outperforms these sophisticated frameworks. Experiments demonstrate its superior performance: UI-TARS achieves SOTA performance in 10+ GUI agent benchmarks evaluating perception, grounding, and GUI task execution. Notably, in the OSWorld benchmark, UI-TARS achieves scores of 24.6 with 50 steps and 22.7 with 15 steps, outperforming Claude (22.0 and 14.9 respectively). In AndroidWorld, UI-TARS achieves 46.6, surpassing GPT-4o (34.5). UI-TARS incorporates several key innovations: (1) Enhanced Perception: leveraging a large-scale dataset of GUI screenshots for context-aware understanding of UI elements and precise captioning; (2) Unified Action Modeling, which standardizes actions into a unified space across platforms and achieves precise grounding and interaction through large-scale action traces; (3) System-2 Reasoning, which incorporates deliberate reasoning into multi-step decision making, involving multiple reasoning patterns such as task decomposition, reflection thinking, milestone recognition, etc. (4) Iterative Training with Reflective Online Traces, which addresses the data bottleneck by automatically collecting, filtering, and reflectively refining new interaction traces on hundreds of virtual machines. Through iterative training and reflection tuning, UI-TARS continuously learns from its mistakes and adapts to unforeseen situations with minimal human intervention. We also analyze the evolution path of GUI agents to guide the further development of this domain. more...
- Published
- 2025
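The core loop of a screenshot-in, action-out GUI agent like the one described in entry 25 can be pictured as follows; every function here is a placeholder (screen capture, model call, executor), not UI-TARS's actual interface.

import json

def capture_screenshot():
    """Placeholder: return raw screenshot bytes from the target device."""
    return b""

def model_step(screenshot, history):
    """Placeholder for the agent model: maps the current screenshot (plus history)
    to a single keyboard/mouse-level action, as a native GUI agent would."""
    return {"type": "click", "x": 120, "y": 240} if not history else {"type": "finish"}

def execute(action):
    """Placeholder executor that would drive the real mouse and keyboard."""
    print("executing", json.dumps(action))

def run(max_steps=15):
    history = []
    for _ in range(max_steps):
        action = model_step(capture_screenshot(), history)
        if action["type"] == "finish":
            break
        execute(action)
        history.append(action)

run()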
26. Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation
- Author
-
Zhao, Zibo, Lai, Zeqiang, Lin, Qingxiang, Zhao, Yunfei, Liu, Haolin, Yang, Shuhui, Feng, Yifei, Yang, Mingxin, Zhang, Sheng, Yang, Xianghui, Shi, Huiwen, Liu, Sicong, Wu, Junta, Lian, Yihang, Yang, Fan, Tang, Ruining, He, Zebin, Wang, Xinzhou, Liu, Jian, Zuo, Xuhui, Chen, Zhuo, Lei, Biwen, Weng, Haohan, Xu, Jing, Zhu, Yiling, Liu, Xinhai, Xu, Lixin, Hu, Changrong, Huang, Tianyu, Wang, Lifu, Zhang, Jihong, Chen, Meng, Dong, Liang, Jia, Yiwen, Cai, Yulin, Yu, Jiaao, Tang, Yixuan, Zhang, Hao, Ye, Zheng, He, Peng, Wu, Runzhou, Zhang, Chao, Tan, Yonghao, Xiao, Jie, Tao, Yangyu, Zhu, Jianchen, Xue, Jinbao, Liu, Kai, Zhao, Chongqing, Wu, Xinming, Hu, Zhichao, Qin, Lei, Peng, Jianbing, Li, Zhan, Chen, Minghui, Zhang, Xipeng, Niu, Lin, Wang, Paige, Wang, Yingkai, Kuang, Haozhao, Fan, Zhongyi, Zheng, Xu, Zhuang, Weihao, He, YingPing, Liu, Tian, Yang, Yong, Wang, Di, Liu, Yuhong, Jiang, Jie, Huang, Jingwei, and Guo, Chunchao more...
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets. This system includes two foundation components: a large-scale shape generation model -- Hunyuan3D-DiT, and a large-scale texture synthesis model -- Hunyuan3D-Paint. The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly aligns with a given condition image, laying a solid foundation for downstream applications. The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant texture maps for either generated or hand-crafted meshes. Furthermore, we build Hunyuan3D-Studio -- a versatile, user-friendly production platform that simplifies the re-creation process of 3D assets. It allows both professional and amateur users to manipulate or even animate their meshes efficiently. We systematically evaluate our models, showing that Hunyuan3D 2.0 outperforms previous state-of-the-art models, including both open-source and closed-source models, in geometry details, condition alignment, texture quality, and more. Hunyuan3D 2.0 is publicly released in order to fill the gaps in the open-source 3D community for large-scale foundation generative models. The code and pre-trained weights of our models are available at: https://github.com/Tencent/Hunyuan3D-2, Comment: GitHub link: https://github.com/Tencent/Hunyuan3D-2 more...
- Published
- 2025
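The shape generator in entry 26 is described as a flow-based diffusion transformer; the snippet below shows only the generic sampling loop of such a flow-matching model with a toy velocity field, not Hunyuan3D-DiT's real architecture or API (see the released repository for that).

import numpy as np

def velocity(x, t, cond):
    """Placeholder velocity network v_theta(x, t, cond); a real model would be a
    diffusion transformer conditioned on features of the input image."""
    return cond - x  # toy field that drifts the sample towards the condition

def sample(cond, steps=50, dim=64, seed=0):
    """Euler integration of dx/dt = v(x, t, cond) from t=0 (noise) to t=1 (data),
    i.e. the generic sampling loop of a flow-based generative model."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)       # start from Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity(x, t, cond)
    return x

latent = sample(cond=np.ones(64))
print(latent[:4])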
27. AdaServe: SLO-Customized LLM Serving with Fine-Grained Speculative Decoding
- Author
-
Li, Zikun, Chen, Zhuofu, Delacourt, Remi, Oliaro, Gabriele, Wang, Zeyu, Chen, Qinghan, Lin, Shuhuai, Yang, April, Zhang, Zhihao, Chen, Zhuoming, Lai, Sean, Miao, Xupeng, and Jia, Zhihao
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Distributed, Parallel, and Cluster Computing ,Computer Science - Machine Learning - Abstract
This paper introduces AdaServe, the first LLM serving system to support SLO customization through fine-grained speculative decoding. AdaServe leverages the logits of a draft model to predict the speculative accuracy of tokens and employs a theoretically optimal algorithm to construct token trees for verification. To accommodate diverse SLO requirements without compromising throughput, AdaServe employs a speculation-and-selection scheme that first constructs candidate token trees for each request and then dynamically selects tokens to meet individual SLO constraints while optimizing throughput. Comprehensive evaluations demonstrate that AdaServe achieves up to 73% higher SLO attainment and 74% higher goodput compared to state-of-the-art systems. These results underscore AdaServe's potential to enhance the efficiency and adaptability of LLM deployments across varied application scenarios. more...
- Published
- 2025
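For readers unfamiliar with speculative decoding, here is a minimal chain-style draft-and-verify sketch with toy deterministic models; AdaServe generalises this idea to token trees and adds SLO-aware token selection, which is not reproduced here.

def draft_next(prefix):
    """Toy draft model: cheap, and sometimes wrong."""
    return (len(prefix) * 7) % 10

def target_next(prefix):
    """Toy target model: the output we must match exactly."""
    return (len(prefix) * 7 + (len(prefix) % 3 == 0)) % 10

def speculative_generate(prefix, n_tokens, k=4):
    out = list(prefix)
    while len(out) - len(prefix) < n_tokens:
        # 1) Draft k candidate tokens autoregressively with the cheap model.
        draft, ctx = [], list(out)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Verify: accept the longest draft prefix the target model agrees with.
        accepted, ctx = [], list(out)
        for t in draft:
            if target_next(ctx) == t:
                accepted.append(t)
                ctx.append(t)
            else:
                break
        out.extend(accepted)
        # 3) Always take one guaranteed token from the target so progress is made.
        out.append(target_next(out))
    return out[len(prefix):len(prefix) + n_tokens]

print(speculative_generate([1, 2, 3], n_tokens=12))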
28. A Survey of Graph Retrieval-Augmented Generation for Customized Large Language Models
- Author
-
Zhang, Qinggang, Chen, Shengyuan, Bei, Yuanchen, Yuan, Zheng, Zhou, Huachi, Hong, Zijin, Dong, Junnan, Chen, Hao, Chang, Yi, and Huang, Xiao
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Information Retrieval - Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in a wide range of tasks, yet their application to specialized domains remains challenging due to the need for deep expertise. Retrieval-augmented generation (RAG) has emerged as a promising solution to customize LLMs for professional fields by seamlessly integrating external knowledge bases, enabling real-time access to domain-specific expertise during inference. Despite its potential, traditional RAG systems, based on flat text retrieval, face three critical challenges: (i) complex query understanding in professional contexts, (ii) difficulties in knowledge integration across distributed sources, and (iii) system efficiency bottlenecks at scale. This survey presents a systematic analysis of Graph-based Retrieval-Augmented Generation (GraphRAG), a new paradigm that revolutionizes domain-specific LLM applications. GraphRAG addresses traditional RAG limitations through three key innovations: (i) graph-structured knowledge representation that explicitly captures entity relationships and domain hierarchies, (ii) efficient graph-based retrieval techniques that enable context-preserving knowledge retrieval with multi-hop reasoning ability, and (iii) structure-aware knowledge integration algorithms that leverage retrieved knowledge for accurate and logically coherent generation of LLMs. In this survey, we systematically analyze the technical foundations of GraphRAG and examine current implementations across various professional domains, identifying key technical challenges and promising research directions. All the related resources of GraphRAG, including research papers, open-source data, and projects, are collected for the community at https://github.com/DEEP-PolyU/Awesome-GraphRAG. more...
- Published
- 2025
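A minimal sketch of the graph-based retrieval step that GraphRAG systems build on, using a toy knowledge graph in networkx; the entities, relations, and prompt format are illustrative assumptions rather than any particular system's implementation.

import networkx as nx

# Toy domain knowledge graph; a real GraphRAG pipeline would build this from documents.
G = nx.DiGraph()
G.add_edge("aspirin", "COX-1", relation="inhibits")
G.add_edge("COX-1", "prostaglandins", relation="produces")
G.add_edge("prostaglandins", "inflammation", relation="mediates")

def retrieve_subgraph(query_entities, hops=2):
    """Collect every edge within `hops` of any query entity (multi-hop retrieval)."""
    keep, und = set(), G.to_undirected()
    for e in query_entities:
        if e in und:
            keep |= set(nx.single_source_shortest_path_length(und, e, cutoff=hops))
    return [(u, d["relation"], v) for u, v, d in G.edges(data=True) if u in keep and v in keep]

def build_prompt(question, triples):
    # Serialize the retrieved triples as context for the LLM.
    facts = "\n".join(f"{u} {r} {v}" for u, r, v in triples)
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("How does aspirin reduce inflammation?",
                   retrieve_subgraph(["aspirin"], hops=3)))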
29. Measurement of the multiplicity dependence of $\mit\Upsilon$ production ratios in $pp$ collisions at $\sqrt{s}=13$ TeV
- Author
-
LHCb collaboration, Aaij, R., Abdelmotteleb, A. S. W., Beteta, C. Abellan, Abudinén, F., Ackernley, T., Adefisoye, A. A., Adeva, B., Adinolfi, M., Adlarson, P., Agapopoulou, C., Aidala, C. A., Ajaltouni, Z., Akar, S., Akiba, K., Albicocco, P., Albrecht, J., Alessio, F., Alexander, M., Aliouche, Z., Cartelle, P. Alvarez, Amalric, R., Amato, S., Amey, J. L., Amhis, Y., An, L., Anderlini, L., Andersson, M., Andreianov, A., Andreola, P., Andreotti, M., Andreou, D., Anelli, A., Ao, D., Archilli, F., Argenton, M., Cuendis, S. Arguedas, Artamonov, A., Artuso, M., Aslanides, E., Da Silva, R. Ataíde, Atzeni, M., Audurier, B., Bacher, D., Perea, I. Bachiller, Bachmann, S., Bachmayer, M., Back, J. J., Rodriguez, P. Baladron, Balagura, V., Balboni, A., Baldini, W., Balzani, L., Bao, H., Leite, J. Baptista de Souza, Pretel, C. Barbero, Barbetti, M., Barbosa, I. R., Barlow, R. J., Barnyakov, M., Barsuk, S., Barter, W., Bartolini, M., Bartz, J., Basels, J. M., Bashir, S., Bassi, G., Batsukh, B., Battista, P. B., Bay, A., Beck, A., Becker, M., Bedeschi, F., Bediaga, I. B., Behling, N. A., Belin, S., Belous, K., Belov, I., Belyaev, I., Benane, G., Bencivenni, G., Ben-Haim, E., Berezhnoy, A., Bernet, R., Andres, S. Bernet, Bertolin, A., Betancourt, C., Betti, F., Bex, J., Bezshyiko, Ia., Bhom, J., Bieker, M. S., Biesuz, N. V., Billoir, P., Biolchini, A., Birch, M., Bishop, F. C. R., Bitadze, A., Bizzeti, A., Blake, T., Blanc, F., Blank, J. E., Blusk, S., Bocharnikov, V., Boelhauve, J. A., Garcia, O. Boente, Boettcher, T., Bohare, A., Boldyrev, A., Bolognani, C. S., Bolzonella, R., Bonacci, R. B., Bondar, N., Bordelius, A., Borgato, F., Borghi, S., Borsato, M., Borsuk, J. T., Bouchiba, S. A., Bovill, M., Bowcock, T. J. V., Boyer, A., Bozzi, C., Rodriguez, A. Brea, Breer, N., Brodzicka, J., Gonzalo, A. Brossa, Brown, J., Brundu, D., Buchanan, E., Buonaura, A., Buonincontri, L., Burke, A. T., Burr, C., Butter, J. S., Buytaert, J., Byczynski, W., Cadeddu, S., Cai, H., Caillet, A. C., Calabrese, R., Ramirez, S. Calderon, Calefice, L., Cali, S., Calvi, M., Gomez, M. Calvo, Magalhaes, P. Camargo, Bouzas, J. I. Cambon, Campana, P., Perez, D. H. Campora, Quezada, A. F. Campoverde, Capelli, S., Capriotti, L., Caravaca-Mora, R., Carbone, A., Salgado, L. Carcedo, Cardinale, R., Cardini, A., Carniti, P., Carus, L., Vidal, A. Casais, Caspary, R., Casse, G., Cattaneo, M., Cavallero, G., Cavallini, V., Celani, S., Cervenkov, D., Cesare, S., Chadwick, A. J., Chahrour, I., Charles, M., Charpentier, Ph., Chatzianagnostou, E., Chefdeville, M., Chen, C., Chen, S., Chen, Z., Chernov, A., Chernyshenko, S., Chiotopoulos, X., Chobanova, V., Cholak, S., Chrzaszcz, M., Chubykin, A., Chulikov, V., Ciambrone, P., Vidal, X. Cid, Ciezarek, G., Cifra, P., Clarke, P. E. L., Clemencic, M., Cliff, H. V., Closier, J., Toapaxi, C. Cocha, Coco, V., Cogan, J., Cogneras, E., Cojocariu, L., Collaviti, S., Collins, P., Colombo, T., Colonna, M., Comerma-Montells, A., Congedo, L., Contu, A., Cooke, N., Corredoira, I., Correia, A., Corti, G., Meldrum, J. Cottee, Couturier, B., Craik, D. C., Torres, M. Cruz, Rivera, E. Curras, Currie, R., Da Silva, C. L., Dadabaev, S., Dai, L., Dai, X., Dall'Occo, E., Dalseno, J., D'Ambrosio, C., Daniel, J., Danilina, A., d'Argent, P., Darze, G., Davidson, A., Davies, J. E., Davis, A., Francisco, O. De Aguiar, De Angelis, C., De Benedetti, F., de Boer, J., De Bruyn, K., De Capua, S., De Cian, M., Da Graca, U. De Freitas Carneiro, De Lucia, E., De Miranda, J. 
M., De Paula, L., De Serio, M., De Simone, P., De Vellis, F., de Vries, J. A., Debernardis, F., Decamp, D., Dedu, V., Dekkers, S., Del Buono, L., Delaney, B., Dembinski, H. -P., Deng, J., Denysenko, V., Deschamps, O., Dettori, F., Dey, B., Di Nezza, P., Diachkov, I., Didenko, S., Ding, S., Dittmann, L., Dobishuk, V., Docheva, A. D., Dong, C., Donohoe, A. M., Dordei, F., Reis, A. C. dos, Dowling, A. D., Duan, W., Duda, P., Dudek, M. W., Dufour, L., Duk, V., Durante, P., Duras, M. M., Durham, J. M., Durmus, O. D., Dziurda, A., Dzyuba, A., Easo, S., Eckstein, E., Egede, U., Egorychev, A., Egorychev, V., Eisenhardt, S., Ejopu, E., Eklund, L., Elashri, M., Ellbracht, J., Ely, S., Ene, A., Eschle, J., Esen, S., Evans, T., Fabiano, F., Falcao, L. N., Fan, Y., Fang, B., Fantini, L., Faria, M., Farmer, K., Fazzini, D., Felkowski, L., Feng, M., Feo, M., Casani, A. Fernandez, Gomez, M. Fernandez, Fernez, A. D., Ferrari, F., Rodrigues, F. Ferreira, Ferrillo, M., Ferro-Luzzi, M., Filippov, S., Fini, R. A., Fiorini, M., Firlej, M., Fischer, K. L., Fitzgerald, D. S., Fitzpatrick, C., Fiutowski, T., Fleuret, F., Fontana, M., Foreman, L. F., Forty, R., Foulds-Holt, D., Lima, V. Franco, Sevilla, M. Franco, Frank, M., Franzoso, E., Frau, G., Frei, C., Friday, D. A., Fu, J., Führing, Q., Fujii, Y., Fulghesu, T., Gabriel, E., Galati, G., Galati, M. D., Torreira, A. Gallas, Galli, D., Gambetta, S., Gandelman, M., Gandini, P., Ganie, B., Gao, H., Gao, R., Gao, T. Q., Gao, Y., Martin, L. M. Garcia, Moreno, P. Garcia, Pardiñas, J. García, Gardner, P., Garg, K. G., Garrido, L., Gaspar, C., Geertsema, R. E., Gerken, L. L., Gersabeck, E., Gersabeck, M., Gershon, T., Ghizzo, S., Ghorbanimoghaddam, Z., Giambastiani, L., Giasemis, F. I., Gibson, V., Giemza, H. K., Gilman, A. L., Giovannetti, M., Gioventù, A., Girardey, L., Gironell, P. Gironella, Giugliano, C., Giza, M. A., Gkougkousis, E. L., Glaser, F. C., Gligorov, V. V., Göbel, C., Golobardes, E., Golubkov, D., Golutvin, A., Fernandez, S. Gomez, Gomulka, W., Abrantes, F. Goncalves, Goncerz, M., Gong, G., Gooding, J. A., Gorelov, I. V., Gotti, C., Grabowski, J. P., Cardoso, L. A. Granado, Graugés, E., Graverini, E., Grazette, L., Graziani, G., Grecu, A. T., Greeven, L. M., Grieser, N. A., Grillo, L., Gromov, S., Gu, C., Guarise, M., Guerry, L., Guittiere, M., Guliaeva, V., Günther, P. A., Guseinov, A. -K., Gushchin, E., Guz, Y., Gys, T., Habermann, K., Hadavizadeh, T., Hadjivasiliou, C., Haefeli, G., Haen, C., Hajheidari, M., Hallett, G., Halvorsen, M. M., Hamilton, P. M., Hammerich, J., Han, Q., Han, X., Hansmann-Menzemer, S., Hao, L., Harnew, N., Harris, T. H., Hartmann, M., Hashmi, S., He, J., Hemmer, F., Henderson, C., Henderson, R. D. L., Hennequin, A. M., Hennessy, K., Henry, L., Herd, J., Gascon, P. Herrero, Heuel, J., Hicheur, A., Mendizabal, G. Hijano, Horswill, J., Hou, R., Hou, Y., Howarth, N., Hu, J., Hu, W., Hu, X., Huang, W., Hulsbergen, W., Hunter, R. J., Hushchyn, M., Hutchcroft, D., Idzik, M., Ilin, D., Ilten, P., Inglessi, A., Iniukhin, A., Ishteev, A., Ivshin, K., Jacobsson, R., Jage, H., Elles, S. J. Jaimes, Jakobsen, S., Jans, E., Jashal, B. K., Jawahery, A., Jevtic, V., Jiang, E., Jiang, X., Jiang, Y., Jiang, Y. J., John, M., Rajan, A. John Rubesh, Johnson, D., Jones, C. R., Jones, T. P., Joshi, S., Jost, B., Castella, J. Juan, Jurik, N., Juszczak, I., Kaminaris, D., Kandybei, S., Kane, M., Kang, Y., Kar, C., Karacson, M., Karpenkov, D., Kauniskangas, A., Kautz, J. W., Kazanecki, M. 
K., Keizer, F., Kenzie, M., Ketel, T., Khanji, B., Kharisova, A., Kholodenko, S., Khreich, G., Kirn, T., Kirsebom, V. S., Kitouni, O., Klaver, S., Kleijne, N., Klimaszewski, K., Kmiec, M. R., Koliiev, S., Kolk, L., Konoplyannikov, A., Kopciewicz, P., Koppenburg, P., Korolev, M., Kostiuk, I., Kot, O., Kotriakhova, S., Kozachuk, A., Kravchenko, P., Kravchuk, L., Kreps, M., Krokovny, P., Krupa, W., Krzemien, W., Kshyvanskyi, O., Kubis, S., Kucharczyk, M., Kudryavtsev, V., Kulikova, E., Kupsc, A., Kutsenko, B. K., Lacarrere, D., Gonzalez, P. Laguarta, Lai, A., Lampis, A., Lancierini, D., Gomez, C. Landesa, Lane, J. J., Lane, R., Lanfranchi, G., Langenbruch, C., Langer, J., Lantwin, O., Latham, T., Lazzari, F., Lazzeroni, C., Gac, R. Le, Lee, H., Lefèvre, R., Leflat, A., Legotin, S., Lehuraux, M., Cid, E. Lemos, Leroy, O., Lesiak, T., Lesser, E. D., Leverington, B., Li, A., Li, C., Li, H., Li, K., Li, L., Li, M., Li, P., Li, P. -R., Li, Q., Li, S., Li, T., Li, Y., Lian, Z., Liang, X., Libralon, S., Lin, C., Lin, T., Lindner, R., Linton, H., Lisovskyi, V., Litvinov, R., Liu, F. L., Liu, G., Liu, K., Liu, S., Liu, W., Liu, Y., Liu, Y. L., Salvia, A. Lobo, Loi, A., Long, T., Lopes, J. H., Huertas, A. Lopez, Soliño, S. López, Lu, Q., Lucarelli, C., Lucchesi, D., Martinez, M. Lucio, Lukashenko, V., Luo, Y., Lupato, A., Luppi, E., Lynch, K., Lyu, X. -R., Ma, G. M., Maccolini, S., Machefert, F., Maciuc, F., Mack, B., Mackay, I., Mackey, L. M., Mohan, L. R. Madhan, Madurai, M. J., Maevskiy, A., Magdalinski, D., Maisuzenko, D., Majewski, M. W., Malczewski, J. J., Malde, S., Malentacca, L., Malinin, A., Maltsev, T., Manca, G., Mancinelli, G., Mancuso, C., Escalero, R. Manera, Manganella, F. M., Manuzzi, D., Marangotto, D., Marchand, J. F., Marchevski, R., Marconi, U., Mariani, E., Mariani, S., Benito, C. Marin, Marks, J., Marshall, A. M., Martel, L., Martelli, G., Martellotti, G., Martinazzoli, L., Martinelli, M., Gomez, D. Martinez, Santos, D. Martinez, Vidal, F. Martinez, Granollers, A. Martorell i, Massafferri, A., Matev, R., Mathad, A., Matiunin, V., Matteuzzi, C., Mattioli, K. R., Mauri, A., Maurice, E., Mauricio, J., Mayencourt, P., de Cos, J. Mazorra, Mazurek, M., McCann, M., Mcconnell, L., McGrath, T. H., McHugh, N. T., McNab, A., McNulty, R., Meadows, B., Meier, G., Melnychuk, D., Meng, F. M., Merk, M., Merli, A., Garcia, L. Meyer, Miao, D., Miao, H., Mikhasenko, M., Milanes, D. A., Minotti, A., Minucci, E., Miralles, T., Mitreska, B., Mitzel, D. S., Modak, A., Mohammed, R. A., Moise, R. D., Mokhnenko, S., Cardenas, E. F. Molina, Mombächer, T., Monk, M., Monteil, S., Gomez, A. Morcillo, Morello, G., Morello, M. J., Morgenthaler, M. P., Moron, J., Morren, W., Morris, A. B., Morris, A. G., Mountain, R., Mu, H., Mu, Z. M., Muhammad, E., Muheim, F., Mulder, M., Müller, K., Muñoz-Rojas, F., Murta, R., Naik, P., Nakada, T., Nandakumar, R., Nanut, T., Nasteva, I., Needham, M., Neri, N., Neubert, S., Neufeld, N., Neustroev, P., Nicolini, J., Nicotra, D., Niel, E. M., Nikitin, N., Niu, Q., Nogarolli, P., Nogga, P., Normand, C., Fernandez, J. Novoa, Nowak, G., Nunez, C., Nur, H. N., Oblakowska-Mucha, A., Obraztsov, V., Oeser, T., Okamura, S., Okhotnikov, A., Okhrimenko, O., Oldeman, R., Oliva, F., Olocco, M., Onderwater, C. J. G., O'Neil, R. H., Osthues, D., Goicochea, J. M. Otalora, Owen, P., Oyanguren, A., Ozcelik, O., Paciolla, F., Padee, A., Padeken, K. O., Pagare, B., Pais, P. R., Pajero, T., Palano, A., Palutan, M., Pan, X., Panshin, G., Paolucci, L., Papanestis, A., Pappagallo, M., Pappalardo, L. 
L., Pappenheimer, C., Parkes, C., Parmar, D., Passalacqua, B., Passaleva, G., Passaro, D., Pastore, A., Patel, M., Patoc, J., Patrignani, C., Paul, A., Pawley, C. J., Pellegrino, A., Peng, J., Altarelli, M. Pepe, Perazzini, S., Pereima, D., Da Costa, H. Pereira, Castro, A. Pereiro, Perret, P., Perrevoort, A., Perro, A., Peters, M. J., Petridis, K., Petrolini, A., Pfaller, J. P., Pham, H., Pica, L., Piccini, M., Piccolo, L., Pietrzyk, B., Pietrzyk, G., Pinci, D., Pisani, F., Pizzichemi, M., Placinta, V., Casasus, M. Plo, Poeschl, T., Polci, F., Lener, M. Poli, Poluektov, A., Polukhina, N., Polyakov, I., Polycarpo, E., Ponce, S., Popov, D., Poslavskii, S., Prasanth, K., Prouve, C., Provenzano, D., Pugatch, V., Punzi, G., Qasim, S., Qian, Q. Q., Qian, W., Qin, N., Qu, S., Quagliani, R., Trejo, R. I. Rabadan, Rademacker, J. H., Rama, M., García, M. Ramírez, De Oliveira, V. Ramos, Pernas, M. Ramos, Rangel, M. S., Ratnikov, F., Raven, G., De Miguel, M. Rebollo, Redi, F., Reich, J., Reiss, F., Ren, Z., Resmi, P. K., Ribatti, R., Ricart, G. R., Riccardi, D., Ricciardi, S., Richardson, K., Richardson-Slipper, M., Rinnert, K., Robbe, P., Robertson, G., Rodrigues, E., Alvarez, A. Rodriguez, Fernandez, E. Rodriguez, Lopez, J. A. Rodriguez, Rodriguez, E. Rodriguez, Roensch, J., Rogachev, A., Rogovskiy, A., Rolf, D. L., Roloff, P., Romanovskiy, V., Vidal, A. Romero, Romolini, G., Ronchetti, F., Rong, T., Rotondo, M., Roy, S. R., Rudolph, M. S., Diaz, M. Ruiz, Fernandez, R. A. Ruiz, Vidal, J. Ruiz, Ryzhikov, A., Ryzka, J., Saavedra-Arias, J. J., Silva, J. J. Saborido, Sadek, R., Sagidova, N., Sahoo, D., Sahoo, N., Saitta, B., Salomoni, M., Sanderswood, I., Santacesaria, R., Rios, C. Santamarina, Santimaria, M., Santoro, L., Santovetti, E., Saputi, A., Saranin, D., Sarnatskiy, A., Sarpis, G., Sarpis, M., Satriano, C., Satta, A., Saur, M., Savrina, D., Sazak, H., Sborzacchi, F., Smead, L. G. Scantlebury, Scarabotto, A., Schael, S., Scherl, S., Schiller, M., Schindler, H., Schmelling, M., Schmidt, B., Schmitt, S., Schmitz, H., Schneider, O., Schopper, A., Schulte, N., Schulte, S., Schune, M. H., Schwemmer, R., Schwering, G., Sciascia, B., Sciuccati, A., Segal, I., Sellam, S., Semennikov, A., Senger, T., Soares, M. Senghi, Sergi, A., Serra, N., Sestini, L., Seuthe, A., Shang, Y., Shangase, D. M., Shapkin, M., Sharma, R. S., Shchemerov, I., Shchutska, L., Shears, T., Shekhtman, L., Shen, Z., Sheng, S., Shevchenko, V., Shi, B., Shi, Q., Shimizu, Y., Shmanin, E., Shorkin, R., Shupperd, J. D., Coutinho, R. Silva, Simi, G., Simone, S., Skidmore, N., Skwarnicki, T., Slater, M. W., Smallwood, J. C., Smith, E., Smith, K., Smith, M., Snoch, A., Lavra, L. Soares, Sokoloff, M. D., Soler, F. J. P., Solomin, A., Solovev, A., Solovyev, I., Sommerfeld, N. S., Song, R., Song, Y., Song, Y. S., De Almeida, F. L. Souza, De Paula, B. Souza, Norella, E. Spadaro, Spedicato, E., Speer, J. G., Spiridenkov, E., Spradlin, P., Sriskaran, V., Stagni, F., Stahl, M., Stahl, S., Stanislaus, S., Stein, E. N., Steinkamp, O., Stenyakin, O., Stevens, H., Strekalina, D., Su, Y., Suljik, F., Sun, J., Sun, L., Sundfeld, D., Sutcliffe, W., Swallow, P. N., Swientek, K., Swystun, F., Szabelski, A., Szumlak, T., Tan, Y., Tang, Y., Tat, M. D., Terentev, A., Terzuoli, F., Teubert, F., Thomas, E., Thompson, D. J. D., Tilquin, H., Tisserand, V., T'Jampens, S., Tobin, M., Tomassetti, L., Tonani, G., Tong, X., Machado, D. Torres, Toscano, L., Tou, D. Y., Trippl, C., Tuci, G., Tuning, N., Uecker, L. H., Ukleja, A., Unverzagt, D. 
J., Urbach, B., Ursov, E., Usachov, A., Ustyuzhanin, A., Uwer, U., Vagnoni, V., Cadenas, V. Valcarce, Valenti, G., Canudas, N. Valls, Van Hecke, H., van Herwijnen, E., Van Hulse, C. B., Van Laak, R., van Veghel, M., Vasquez, G., Gomez, R. Vazquez, Regueiro, P. Vazquez, Sierra, C. Vázquez, Vecchi, S., Velthuis, J. J., Veltri, M., Venkateswaran, A., Verdoglia, M., Vesterinen, M., Benet, D. Vico, Villalba, P. Vidrier, Diaz, M. Vieites, Vilasis-Cardona, X., Figueras, E. Vilella, Villa, A., Vincent, P., Volle, F. C., Bruch, D. vom, Voropaev, N., Vos, K., Vrahas, C., Wagner, J., Walsh, J., Walton, E. J., Wan, G., Wang, C., Wang, G., Wang, H., Wang, J., Wang, M., Wang, N. W., Wang, R., Wang, X., Wang, X. W., Wang, Y., Wang, Y. W., Wang, Z., Ward, J. A., Waterlaat, M., Watson, N. K., Websdale, D., Wei, Y., Wendel, J., Westhenry, B. D. C., White, C., Whitehead, M., Whiter, E., Wiederhold, A. R., Wiedner, D., Wilkinson, G., Wilkinson, M. K., Williams, M., Williams, M. J., Williams, M. R. J., Williams, R., Williams, Z., Wilson, F. F., Winn, M., Wislicki, W., Witek, M., Witola, L., Wormser, G., Wotton, S. A., Wu, H., Wu, J., Wu, X., Wu, Y., Wu, Z., Wyllie, K., Xian, S., Xiang, Z., Xie, Y., Xu, A., Xu, J., Xu, L., Xu, M., Xu, Z., Yang, K., Yang, S., Yang, X., Yang, Y., Yang, Z., Yeroshenko, V., Yeung, H., Yin, H., Yin, X., Yu, C. Y., Yu, J., Yuan, X., Yuan, Y, Zaffaroni, E., Zavertyaev, M., Zdybal, M., Zenesini, F., Zeng, C., Zeng, M., Zhang, C., Zhang, D., Zhang, J., Zhang, L., Zhang, S., Zhang, Y., Zhang, Y. Z., Zhao, Y., Zharkova, A., Zhelezov, A., Zheng, S. Z., Zheng, X. Z., Zheng, Y., Zhou, T., Zhou, X., Zhou, Y., Zhovkovska, V., Zhu, L. Z., Zhu, X., Zhukov, V., Zhuo, J., Zou, Q., Zuliani, D., and Zunica, G. more...
- Subjects
High Energy Physics - Experiment ,Nuclear Experiment - Abstract
The $\mit{\Upsilon}(\mathrm{2}S)$ and $\mit{\Upsilon}(\mathrm{3}S)$ production cross-sections are measured relative to that of the $\mit{\Upsilon}(\mathrm{1}S)$ meson, as a function of charged-particle multiplicity in proton-proton collisions at a centre-of-mass energy of $13$ TeV. The measurement uses data collected by the LHCb experiment in 2018 corresponding to an integrated luminosity of 2 $\text{fb}^{-1}$. Both the $\mit{\Upsilon}(\mathrm{2}S)$-to-$\mit{\Upsilon}(\mathrm{1}S)$ and $\mit{\Upsilon}(\mathrm{3}S)$-to-$\mit{\Upsilon}(\mathrm{1}S)$ cross-section ratios are found to decrease significantly as a function of event multiplicity, with the $\mit{\Upsilon}(\mathrm{3}S)$-to-$\mit{\Upsilon}(\mathrm{1}S)$ ratio showing a steeper decline towards high multiplicity. This hierarchy is qualitatively consistent with the comover model predictions, indicating that final-state interactions play an important role in bottomonia production in high-multiplicity events., Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://lbfence.cern.ch/alcm/public/analysis/full-details/1782/ (LHCb public pages) more...
- Published
- 2025
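The measured quantity in entry 29 can be written schematically as the multiplicity-dependent cross-section ratio \[ R_{n\mathrm{S}/1\mathrm{S}}(N_{\mathrm{ch}}) = \frac{\sigma_{\Upsilon(n\mathrm{S})}(N_{\mathrm{ch}})}{\sigma_{\Upsilon(1\mathrm{S})}(N_{\mathrm{ch}})} \, , \qquad n = 2, 3 \, , \] extracted in practice from efficiency-corrected signal yields in bins of charged-particle multiplicity $N_{\mathrm{ch}}$. In the comover picture, the more weakly bound $\Upsilon(3S)$ state is dissociated more easily by surrounding particles, which is consistent with the steeper decline of the 3S-to-1S ratio reported in the abstract.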
30. Kimi k1.5: Scaling Reinforcement Learning with LLMs
- Author
-
Kimi Team, Du, Angang, Gao, Bofei, Xing, Bowei, Jiang, Changjiu, Chen, Cheng, Li, Cheng, Xiao, Chenjun, Du, Chenzhuang, Liao, Chonghua, Tang, Chuning, Wang, Congcong, Zhang, Dehao, Yuan, Enming, Lu, Enzhe, Tang, Fengxiang, Sung, Flood, Wei, Guangda, Lai, Guokun, Guo, Haiqing, Zhu, Han, Ding, Hao, Hu, Hao, Yang, Hao, Zhang, Hao, Yao, Haotian, Zhao, Haotian, Lu, Haoyu, Li, Haoze, Yu, Haozhen, Gao, Hongcheng, Zheng, Huabin, Yuan, Huan, Chen, Jia, Guo, Jianhang, Su, Jianlin, Wang, Jianzhou, Zhao, Jie, Zhang, Jin, Liu, Jingyuan, Yan, Junjie, Wu, Junyan, Shi, Lidong, Ye, Ling, Yu, Longhui, Dong, Mengnan, Zhang, Neo, Ma, Ningchen, Pan, Qiwei, Gong, Qucheng, Liu, Shaowei, Ma, Shengling, Wei, Shupeng, Cao, Sihan, Huang, Siying, Jiang, Tao, Gao, Weihao, Xiong, Weimin, He, Weiran, Huang, Weixiao, Wu, Wenhao, He, Wenyang, Wei, Xianghui, Jia, Xianqing, Wu, Xingzhe, Xu, Xinran, Zu, Xinxing, Zhou, Xinyu, Pan, Xuehai, Charles, Y., Li, Yang, Hu, Yangyang, Liu, Yangyang, Chen, Yanru, Wang, Yejie, Liu, Yibo, Qin, Yidao, Liu, Yifeng, Yang, Ying, Bao, Yiping, Du, Yulun, Wu, Yuxin, Wang, Yuzhi, Zhou, Zaida, Wang, Zhaoji, Li, Zhaowei, Zhu, Zhen, Zhang, Zheng, Wang, Zhexu, Yang, Zhilin, Huang, Zhiqi, Huang, Zihao, Xu, Ziyao, and Yang, Zonghan more...
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Language model pretraining with next token prediction has proved effective for scaling compute but is limited by the amount of available training data. Scaling reinforcement learning (RL) unlocks a new axis for the continued improvement of artificial intelligence, with the promise that large language models (LLMs) can scale their training data by learning to explore with rewards. However, prior published work has not produced competitive results. In light of this, we report on the training practice of Kimi k1.5, our latest multi-modal LLM trained with RL, including its RL training techniques, multi-modal data recipes, and infrastructure optimization. Long context scaling and improved policy optimization methods are key ingredients of our approach, which establishes a simplistic, effective RL framework without relying on more complex techniques such as Monte Carlo tree search, value functions, and process reward models. Notably, our system achieves state-of-the-art reasoning performance across multiple benchmarks and modalities -- e.g., 77.5 on AIME, 96.2 on MATH 500, 94th percentile on Codeforces, 74.9 on MathVista -- matching OpenAI's o1. Moreover, we present effective long2short methods that use long-CoT techniques to improve short-CoT models, yielding state-of-the-art short-CoT reasoning results -- e.g., 60.8 on AIME, 94.6 on MATH500, 47.3 on LiveCodeBench -- outperforming existing short-CoT models such as GPT-4o and Claude Sonnet 3.5 by a large margin (up to +550%)., Comment: 25 pages more...
- Published
- 2025
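The "RL with LLMs" recipe in entry 30 ultimately rests on policy-gradient updates driven by rewards such as verifier-checked correctness; the toy snippet below shows only that basic ingredient (random logits, a scalar reward, a baseline), not Kimi k1.5's actual objective or infrastructure.

import torch
import torch.nn.functional as F

# Toy stand-in for per-token logits produced by a policy LLM for one sampled response.
logits = torch.randn(6, 32, requires_grad=True)   # (response_length, vocab_size)
sampled = torch.randint(0, 32, (6,))              # tokens that were actually sampled
reward = 1.0                                      # e.g. 1 if a verifier marks the answer correct
baseline = 0.3                                    # running mean reward, reduces variance

logprobs = F.log_softmax(logits, dim=-1)
logp_sampled = logprobs[torch.arange(6), sampled].sum()
loss = -(reward - baseline) * logp_sampled        # REINFORCE-style policy-gradient loss
loss.backward()
print(loss.item())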
31. Ultralow-dimensionality reduction for identifying critical transitions by spatial-temporal PCA
- Author
-
Chen, Pei, Suo, Yaofang, Liu, Rui, and Chen, Luonan
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning - Abstract
Discovering dominant patterns and exploring dynamic behaviors, especially critical state transitions and tipping points, in high-dimensional time-series data are challenging tasks in the study of real-world complex systems, which demand interpretable data representations to facilitate comprehension of both spatial and temporal information within the original data space. Here, we propose a general and analytical ultralow-dimensionality reduction method for dynamical systems named spatial-temporal principal component analysis (stPCA) to fully represent the dynamics of a high-dimensional time-series by only a single latent variable without distortion, which transforms high-dimensional spatial information into one-dimensional temporal information based on nonlinear delay-embedding theory. The dynamics of this single variable is analytically solved and theoretically preserves the temporal property of the original high-dimensional time-series, thereby accurately and reliably identifying the tipping point before an upcoming critical transition. Its applications to real-world datasets such as individual-specific heterogeneous ICU records demonstrated the effectiveness of stPCA, which quantitatively and robustly provides the early-warning signals of the critical/tipping state on each patient. more...
- Published
- 2025
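To give a feel for the delay-embedding idea behind stPCA, the sketch below applies plain linear PCA to a delay-embedded (Hankel-style) matrix and keeps the leading component as a single latent time series; this is only an illustrative surrogate on synthetic data, not the authors' analytical stPCA construction or their early-warning statistic.

import numpy as np

def delay_embed(x, window):
    """Stack sliding windows of a 1-D series into rows (a Hankel-style matrix)."""
    return np.stack([x[i:i + window] for i in range(len(x) - window + 1)])

def leading_temporal_component(X, window=20):
    """X: (time, variables). Delay-embed each variable, concatenate the columns,
    and keep the projection onto the first principal component as one latent series."""
    H = np.hstack([delay_embed(X[:, j], window) for j in range(X.shape[1])])
    H = H - H.mean(axis=0)
    _, _, vt = np.linalg.svd(H, full_matrices=False)
    return H @ vt[0]

# Toy high-dimensional series whose fluctuations grow near the end of the record.
t = np.linspace(0, 10, 500)
X = np.sin(t)[:, None] + 0.1 * np.random.randn(500, 30) * (1 + 3 * (t > 8))[:, None]
z = leading_temporal_component(X)
print("variance early vs late:", z[:100].var(), z[-100:].var())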
32. Structural and mechanical properties of W-Cu compounds characterized by a neural-network-based potential
- Author
-
Liu, Jianchuan, Chen, Tao, Mao, Sheng, and Chen, Mohan
- Subjects
Condensed Matter - Materials Science ,Computer Science - Machine Learning - Abstract
Tungsten-copper (W-Cu) compounds are widely utilized in various industrial fields due to their exceptional mechanical properties. In this study, we have developed a neural-network-based deep potential (DP) model that covers a wide range of temperatures, ranging from 0 to 3,000 K, and pressures, varying from 0 to 10 GPa. This study presents a model trained using density functional theory data for full concentration CuxW100-x compounds. Through this model, we systematically investigate the structural and mechanical properties of W-Cu alloys and have the following findings. First, the bulk modulus (B) and Young's modulus (E) of W-Cu alloys exhibit a linear decline as the Cu content increases, indicating a softening trend in the CuxW100-x compounds as the Cu concentration rises. Second, a higher Cu content results in higher critical strain and lower critical stress for these compounds. A brittle-to-ductile transition in the deformation mode is predicted at around 37.5 at.% Cu content. Third, tensile loading tests in the W-Cu gradient structure reveal that the Cu-poor region serves as a barrier, hindering shear band propagation while promoting new shear band formation in the Cu-rich region. The above results from the DP model are anticipated to aid in exploring the physical mechanisms underlying the complex phenomena of W-Cu systems and contribute to the advancement of methodologies for materials simulation. more...
- Published
- 2025
33. Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training
- Author
-
Yuan, Siyu, Chen, Zehui, Xi, Zhiheng, Ye, Junjie, Du, Zhengyin, and Chen, Jiecao
- Subjects
Computer Science - Artificial Intelligence - Abstract
Large Language Model (LLM) agents are increasingly pivotal for addressing complex tasks in interactive environments. Existing work mainly focuses on enhancing performance through behavior cloning from stronger experts, yet such approaches often falter in real-world applications, mainly due to the inability to recover from errors. However, step-level critique data is difficult and expensive to collect. Automating and dynamically constructing self-critique datasets is thus crucial to empowering models with intelligent agent capabilities. In this work, we propose an iterative self-training framework, Agent-R, that enables language Agent to Reflect on the fly. Unlike traditional methods that reward or penalize actions based on correctness, Agent-R leverages MCTS to construct training data that recover correct trajectories from erroneous ones. A key challenge of agent reflection lies in the necessity for timely revision rather than waiting until the end of a rollout. To address this, we introduce a model-guided critique construction mechanism: the actor model identifies the first error step (within its current capability) in a failed trajectory. Starting from that step, we splice the failed prefix with the adjacent correct path, which shares the same parent node in the tree. This strategy enables the model to learn reflection based on its current policy, thereby yielding better learning efficiency. To further explore the scalability of this self-improvement paradigm, we investigate iterative refinement of both error correction capabilities and dataset construction. Our findings demonstrate that Agent-R continuously improves the model's ability to recover from errors and enables timely error correction. Experiments on three interactive environments show that Agent-R effectively equips agents to correct erroneous actions while avoiding loops, achieving superior performance compared to baseline methods (+5.59%). more...
- Published
- 2025
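The trajectory-splicing step described in entry 33 can be sketched as follows; the trajectory format, the critic, and the reflection marker are toy assumptions, and the real framework uses MCTS and the actor model itself to locate the first error.

def first_error_step(bad_traj, critic):
    """Return the index of the first action the critic flags as wrong."""
    for i, step in enumerate(bad_traj):
        if not critic(step):
            return i
    return len(bad_traj)

def splice_revision_example(bad_traj, good_traj, critic, reflection="Wait, that was wrong."):
    """Keep the failed prefix up to the first error, insert a reflection marker,
    then continue with the correct continuation that shares the same parent state."""
    k = first_error_step(bad_traj, critic)
    return bad_traj[:k] + [{"thought": reflection}] + good_traj[k:]

# Toy trajectories: the second action in the failed one is where things go off the rails.
good = [{"action": "open(door)"}, {"action": "take(key)"}, {"action": "unlock(chest)"}]
bad = [{"action": "open(door)"}, {"action": "take(rock)"}, {"action": "unlock(chest)"}]
critic = lambda step: step.get("action") != "take(rock)"

print(splice_revision_example(bad, good, critic))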
34. Multilineage-differentiating stress-enduring cells alleviate neuropathic pain in mice by secreting TGF-b and IL-10
- Author
-
Zhao, Yayu, Fei, Ying, Cai, Yunyun, Wei, Zhongya, Chen, Ying, Ji, Yuhua, Chen, Xue, Zhang, Dongmei, and Chen, Gang
- Subjects
Quantitative Biology - Tissues and Organs ,Quantitative Biology - Quantitative Methods - Abstract
Neuropathic pain is a chronic condition characterized by damage to and dysfunction of the peripheral or central nervous system. There are currently no effective treatment options available for neuropathic pain, and existing drugs often provide only temporary relief with potential side effects. Multilineage-differentiating stress-enduring (Muse) cells are characterized by high expansion potential, a stable phenotype and strong immunosuppression. These properties make them attractive candidates for therapeutics for neuropathic pain management. In this study, we conducted a series of experiments to evaluate the effect of Muse cells on neuropathic pain. Muse cells from different species demonstrated analgesic potential by reversing CCI-induced neuropathic pain. Protein profiling revealed a high degree of similarity between Muse cells and BMSCs. The intrathecal injection of Muse cells effectively reduced neuropathic pain in various mouse models, resulting in better analgesic effects than the administration of equivalent low doses of BMSCs. Immunohistochemical analysis and qPCR revealed the ability of Muse cells to inhibit spinal cord neuroinflammation caused by SNI. In addition, Transwell and ELISA revealed that Muse cells migrated through the injured dorsal root ganglion (DRG) via the CCR7-CCL21 chemotactic axis. In addition, the secretion of TGF-b and IL-10 by Muse cells was identified as the mechanism underlying the analgesic effect of Muse cells. The capacity of Muse cells to mitigate neuroinflammation and produce analgesic effects via the modulation of TGF-b and IL-10 underscores their potential as promising therapeutic approaches for the treatment of neuropathic pain. more...
- Published
- 2025
35. The Dual-use Dilemma in LLMs: Do Empowering Ethical Capacities Make a Degraded Utility?
- Author
-
Zhang, Yiyi, Chen, Xingyu, Chen, Kexin, Du, Yuyang, Dang, Xilin, and Heng, Pheng-Ann
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
Recent years have witnessed extensive efforts to enhance Large Language Models (LLMs) across various domains, alongside growing attention to their ethical implications. However, a critical challenge remains largely overlooked: LLMs must balance between rejecting harmful requests for safety and accommodating legitimate ones for utility. This paper presents a Direct Preference Optimization (DPO) based alignment framework that achieves better overall performance by addressing this ethical-utility trade-off, using chemical domain applications as a proof-of-concept. Our alignment pipeline starts with a GPT-assisted three-phase data generation scheme, in which we create LibraChemQA, a chemical question-answering dataset comprising 31.6k triplet instances. By incorporating an innovative balanced seed in the data generation process, our framework systematically considers both legitimate and illegitimate requests. The framework also introduces a rephrasing mechanism for efficient data augmentation that enhances the model's chemical comprehension. We further develop a novel hybrid evaluation scheme with LLM judges for precise assessment of both safety and utility. Experimental results demonstrate our model's substantial improvements in overall performance where both safety and utility are considered - our resulting model, LibraChem, outperforms leading LLMs including Claude-3, GPT-4o, and LLaMA-3 by margins of 13.44%, 7.16%, and 7.10% respectively on our released benchmark. more...
- Published
- 2025
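Entry 35 builds its alignment pipeline on Direct Preference Optimization; the standard DPO loss on chosen/rejected sequence log-probabilities looks like this (a generic sketch; the paper's balanced data generation and LLM-judge evaluation sit on top of it).

import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss on summed sequence log-probabilities:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()

# Toy tensors standing in for sequence log-probs from the policy and the frozen reference model.
policy_w, policy_l = torch.tensor([-12.3]), torch.tensor([-15.9])
ref_w, ref_l = torch.tensor([-13.0]), torch.tensor([-14.8])
print(dpo_loss(policy_w, policy_l, ref_w, ref_l).item())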
36. Examining Turbulence in Galactic Molecular Clouds -- I: A Statistical Analysis of Velocity Structures
- Author
-
Ma, Yuehui, Zhang, Miaomiao, Wang, Hongchi, Fang, Min, Yue, Zhenyi, Chen, Xuepeng, Yang, Ji, Du, Fujun, Su, Yang, He, Suziye, Feng, Haoran, Sun, Yan, Li, Chong, Yan, Qing-Zeng, Chen, Zhiwei, Zhang, Shaobo, and Zhou, Xin more...
- Subjects
Astrophysics - Astrophysics of Galaxies - Abstract
We present a systematic analysis of the velocity structure functions (VSFs) of 167 molecular clouds with angular sizes greater than $\sim$176 arcmin$^2$ in three sectors of the Galactic mid-plane. We calculated the 1st- to 3rd-order VSFs and found that 60\% of the VSFs exhibit power-law distributions. The relative power-law exponents are consistent with predictions from intermittent turbulence models. Column density weighting reduces the proportion of power-law VSFs and steepens the VSF slopes, implying a reduction of turbulent energy in high-density regions. All clouds show small-scale intermittency, with slightly stronger intermittency in those molecular clouds showing non-power-law VSFs. Negative VSF exponents that may indicate gravitational collapse are not observed in our sample. The scaling exponents of the observed VSFs do not correlate with the virial parameters of the molecular clouds. These two observations suggest that gravity-dominated scales in molecular clouds still need further investigation. Consistent VSF scaling exponents for the molecular clouds with significant power-law VSFs suggest large-scale external driving of turbulence in these molecular clouds. However, the driving mechanisms are likely not universal, as the power-law scaling coefficients in our results show relatively large scatter. The fact that nearly 40\% of the VSFs deviate to some extent from power-law distributions suggests that the influence of local environments on the internal turbulence of molecular clouds may not be negligible. more...
- Published
- 2025
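For reference, the p-th order velocity structure function analysed in entry 36 is conventionally defined as \[ S_p(\ell) = \big\langle \, | v(\mathbf{x} + \boldsymbol{\ell}) - v(\mathbf{x}) |^{\,p} \, \big\rangle \;\propto\; \ell^{\,\zeta_p}, \qquad p = 1, 2, 3 \, , \] where the average runs over positions $\mathbf{x}$ (optionally weighted by column density) and over lags of magnitude $\ell$; a power-law range defines the scaling exponents $\zeta_p$, and it is their relative values that the abstract compares with intermittent-turbulence models.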
37. Search for charge-parity violation in semileptonically tagged $D^{0} \to K^{+} \pi^{-}$ decays
- Author
-
LHCb collaboration, Aaij, R., Abdelmotteleb, A. S. W., Beteta, C. Abellan, Abudinén, F., Ackernley, T., Adefisoye, A. A., Adeva, B., Adinolfi, M., Adlarson, P., Agapopoulou, C., Aidala, C. A., Ajaltouni, Z., Akar, S., Akiba, K., Albicocco, P., Albrecht, J., Alessio, F., Alexander, M., Aliouche, Z., Cartelle, P. Alvarez, Amalric, R., Amato, S., Amey, J. L., Amhis, Y., An, L., Anderlini, L., Andersson, M., Andreianov, A., Andreola, P., Andreotti, M., Andreou, D., Anelli, A., Ao, D., Archilli, F., Argenton, M., Cuendis, S. Arguedas, Artamonov, A., Artuso, M., Aslanides, E., Da Silva, R. Ataíde, Atzeni, M., Audurier, B., Bacher, D., Perea, I. Bachiller, Bachmann, S., Bachmayer, M., Back, J. J., Rodriguez, P. Baladron, Balagura, V., Balboni, A., Baldini, W., Balzani, L., Bao, H., Leite, J. Baptista de Souza, Pretel, C. Barbero, Barbetti, M., Barbosa, I. R., Barlow, R. J., Barnyakov, M., Barsuk, S., Barter, W., Bartz, J., Basels, J. M., Bashir, S., Bassi, G., Batsukh, B., Battista, P. B., Bay, A., Beck, A., Becker, M., Bedeschi, F., Bediaga, I. B., Behling, N. A., Belin, S., Belous, K., Belov, I., Belyaev, I., Benane, G., Bencivenni, G., Ben-Haim, E., Berezhnoy, A., Bernet, R., Andres, S. Bernet, Bertolin, A., Betancourt, C., Betti, F., Bex, J., Bezshyiko, Ia., Bhom, J., Bieker, M. S., Biesuz, N. V., Billoir, P., Biolchini, A., Birch, M., Bishop, F. C. R., Bitadze, A., Bizzeti, A., Blake, T., Blanc, F., Blank, J. E., Blusk, S., Bocharnikov, V., Boelhauve, J. A., Garcia, O. Boente, Boettcher, T., Bohare, A., Boldyrev, A., Bolognani, C. S., Bolzonella, R., Bonacci, R. B., Bondar, N., Bordelius, A., Borgato, F., Borghi, S., Borsato, M., Borsuk, J. T., Bottalico, E., Bouchiba, S. A., Bovill, M., Bowcock, T. J. V., Boyer, A., Bozzi, C., Brandenburg, J. D., Rodriguez, A. Brea, Breer, N., Brodzicka, J., Gonzalo, A. Brossa, Brown, J., Brundu, D., Buchanan, E., Buonincontri, L., Marcos, M. Burgos, Burke, A. T., Burr, C., Butter, J. S., Buytaert, J., Byczynski, W., Cadeddu, S., Cai, H., Caillet, A., Calabrese, R., Ramirez, S. Calderon, Calefice, L., Cali, S., Calvi, M., Gomez, M. Calvo, Magalhaes, P. Camargo, Bouzas, J. I. Cambon, Campana, P., Perez, D. H. Campora, Quezada, A. F. Campoverde, Capelli, S., Capriotti, L., Caravaca-Mora, R., Carbone, A., Salgado, L. Carcedo, Cardinale, R., Cardini, A., Carniti, P., Carus, L., Vidal, A. Casais, Caspary, R., Casse, G., Cattaneo, M., Cavallero, G., Cavallini, V., Celani, S., Cesare, S., Chadwick, A. J., Chahrour, I., Charles, M., Charpentier, Ph., Chatzianagnostou, E., Chefdeville, M., Chen, C., Chen, S., Chen, Z., Chernov, A., Chernyshenko, S., Chiotopoulos, X., Chobanova, V., Chrzaszcz, M., Chubykin, A., Chulikov, V., Ciambrone, P., Vidal, X. Cid, Ciezarek, G., Cifra, P., Clarke, P. E. L., Clemencic, M., Cliff, H. V., Closier, J., Toapaxi, C. Cocha, Coco, V., Cogan, J., Cogneras, E., Cojocariu, L., Collaviti, S., Collins, P., Colombo, T., Colonna, M., Comerma-Montells, A., Congedo, L., Contu, A., Cooke, N., Corredoira, I., Correia, A., Corti, G., Meldrum, J. Cottee, Couturier, B., Craik, D. C., Torres, M. Cruz, Rivera, E. Curras, Currie, R., Da Silva, C. L., Dadabaev, S., Dai, L., Dai, X., Dall'Occo, E., Dalseno, J., D'Ambrosio, C., Daniel, J., Danilina, A., d'Argent, P., Darze, G., Davidson, A., Davies, J. E., Davis, A., Francisco, O. De Aguiar, De Angelis, C., De Benedetti, F., de Boer, J., De Bruyn, K., De Capua, S., De Cian, M., Da Graca, U. De Freitas Carneiro, De Lucia, E., De Miranda, J. 
M., De Paula, L., De Serio, M., De Simone, P., De Vellis, F., de Vries, J. A., Debernardis, F., Decamp, D., Dedu, V., Dekkers, S., Del Buono, L., Delaney, B., Dembinski, H. -P., Deng, J., Denysenko, V., Deschamps, O., Dettori, F., Dey, B., Di Nezza, P., Diachkov, I., Didenko, S., Ding, S., Dittmann, L., Dobishuk, V., Docheva, A. D., Dong, C., Donohoe, A. M., Dordei, F., Reis, A. C. dos, Dowling, A. D., Duan, W., Duda, P., Dudek, M. W., Dufour, L., Duk, V., Durante, P., Duras, M. M., Durham, J. M., Durmus, O. D., Dziurda, A., Dzyuba, A., Easo, S., Eckstein, E., Egede, U., Egorychev, A., Egorychev, V., Eisenhardt, S., Ejopu, E., Eklund, L., Elashri, M., Ellbracht, J., Ely, S., Ene, A., Eschle, J., Esen, S., Evans, T., Fabiano, F., Falcao, L. N., Fan, Y., Fang, B., Fantini, L., Faria, M., Farmer, K., Fazzini, D., Felkowski, L., Feng, M., Feo, M., Casani, A. Fernandez, Gomez, M. Fernandez, Fernez, A. D., Ferrari, F., Rodrigues, F. Ferreira, Ferrillo, M., Ferro-Luzzi, M., Filippov, S., Fini, R. A., Fiorini, M., Firlej, M., Fischer, K. L., Fitzgerald, D. S., Fitzpatrick, C., Fiutowski, T., Fleuret, F., Fontana, M., Foreman, L. F., Forty, R., Foulds-Holt, D., Lima, V. Franco, Sevilla, M. Franco, Frank, M., Franzoso, E., Frau, G., Frei, C., Friday, D. A., Fu, J., Führing, Q., Fujii, Y., Fulghesu, T., Gabriel, E., Galati, G., Galati, M. D., Torreira, A. Gallas, Galli, D., Gambetta, S., Gandelman, M., Gandini, P., Ganie, B., Gao, H., Gao, R., Gao, T. Q., Gao, Y., Martin, L. M. Garcia, Moreno, P. Garcia, Pardiñas, J. García, Gardner, P., Garg, K. G., Garrido, L., Gaspar, C., Gerken, L. L., Gersabeck, E., Gersabeck, M., Gershon, T., Ghizzo, S., Ghorbanimoghaddam, Z., Giambastiani, L., Giasemis, F. I., Gibson, V., Giemza, H. K., Gilman, A. L., Giovannetti, M., Gioventù, A., Girardey, L., Giugliano, C., Giza, M. A., Gkougkousis, E. L., Glaser, F. C., Gligorov, V. V., Göbel, C., Golobardes, E., Golubkov, D., Golutvin, A., Fernandez, S. Gomez, Gomulka, W., Abrantes, F. Goncalves, Goncerz, M., Gong, G., Gooding, J. A., Gorelov, I. V., Gotti, C., Govorkova, E., Grabowski, J. P., Cardoso, L. A. Granado, Graugés, E., Graverini, E., Grazette, L., Graziani, G., Grecu, A. T., Greeven, L. M., Grieser, N. A., Grillo, L., Gromov, S., Gu, C., Guarise, M., Guerry, L., Guliaeva, V., Günther, P. A., Guseinov, A. -K., Gushchin, E., Guz, Y., Gys, T., Habermann, K., Hadavizadeh, T., Hadjivasiliou, C., Haefeli, G., Haen, C., Hallett, G., Halvorsen, M. M., Hamilton, P. M., Hammerich, J., Han, Q., Han, X., Hansmann-Menzemer, S., Hao, L., Harnew, N., Harris, T. H., Hartmann, M., Hashmi, S., He, J., Hemmer, F., Henderson, C., Henderson, R. D. L., Hennequin, A. M., Hennessy, K., Henry, L., Herd, J., Gascon, P. Herrero, Heuel, J., Hicheur, A., Mendizabal, G. Hijano, Horswill, J., Hou, R., Hou, Y., Howarth, N., Hu, J., Hu, W., Hu, X., Huang, W., Hulsbergen, W., Hunter, R. J., Hushchyn, M., Hutchcroft, D., Idzik, M., Ilin, D., Ilten, P., Inglessi, A., Iniukhin, A., Ishteev, A., Ivshin, K., Jacobsson, R., Jage, H., Elles, S. J. Jaimes, Jakobsen, S., Jans, E., Jashal, B. K., Jawahery, A., Jevtic, V., Jiang, E., Jiang, X., Jiang, Y., Jiang, Y. J., John, M., Rajan, A. John Rubesh, Johnson, D., Jones, C. R., Jones, T. P., Joshi, S., Jost, B., Castella, J. Juan, Jurik, N., Juszczak, I., Kaminaris, D., Kandybei, S., Kane, M., Kang, Y., Kar, C., Karacson, M., Karpenkov, D., Kauniskangas, A., Kautz, J. W., Kazanecki, M. K., Keizer, F., Kenzie, M., Ketel, T., Khanji, B., Kharisova, A., Kholodenko, S., Khreich, G., Kirn, T., Kirsebom, V. 
S., Kitouni, O., Klaver, S., Kleijne, N., Klimaszewski, K., Kmiec, M. R., Koliiev, S., Kolk, L., Konoplyannikov, A., Kopciewicz, P., Koppenburg, P., Korolev, M., Kostiuk, I., Kot, O., Kotriakhova, S., Kozachuk, A., Kravchenko, P., Kravchuk, L., Kreps, M., Krokovny, P., Krupa, W., Krzemien, W., Kshyvanskyi, O., Kubis, S., Kucharczyk, M., Kudryavtsev, V., Kulikova, E., Kupsc, A., Kutsenko, B. K., Lacarrere, D., Gonzalez, P. Laguarta, Lai, A., Lampis, A., Lancierini, D., Gomez, C. Landesa, Lane, J. J., Lane, R., Lanfranchi, G., Langenbruch, C., Langer, J., Lantwin, O., Latham, T., Lazzari, F., Lazzeroni, C., Gac, R. Le, Lee, H., Lefèvre, R., Leflat, A., Legotin, S., Lehuraux, M., Cid, E. Lemos, Leroy, O., Lesiak, T., Lesser, E. D., Leverington, B., Li, A., Li, C., Li, H., Li, K., Li, L., Li, M., Li, P., Li, P. -R., Li, Q., Li, S., Li, T., Li, Y., Lian, Z., Liang, X., Libralon, S., Lin, C., Lin, T., Lindner, R., Linton, H., Lisovskyi, V., Litvinov, R., Liu, F. L., Liu, G., Liu, K., Liu, S., Liu, W., Liu, Y., Liu, Y. L., Ordonez, G. Loachamin, Salvia, A. Lobo, Loi, A., Long, T., Lopes, J. H., Huertas, A. Lopez, Soliño, S. López, Lu, Q., Lucarelli, C., Lucchesi, D., Martinez, M. Lucio, Lukashenko, V., Luo, Y., Lupato, A., Luppi, E., Lynch, K., Lyu, X. -R., Ma, G. M., Maccolini, S., Machefert, F., Maciuc, F., Mack, B., Mackay, I., Mackey, L. M., Mohan, L. R. Madhan, Madurai, M. J., Maevskiy, A., Magdalinski, D., Maisuzenko, D., Majewski, M. W., Malczewski, J. J., Malde, S., Malentacca, L., Malinin, A., Maltsev, T., Manca, G., Mancinelli, G., Mancuso, C., Escalero, R. Manera, Manganella, F. M., Manuzzi, D., Marangotto, D., Marchand, J. F., Marchevski, R., Marconi, U., Mariani, E., Mariani, S., Benito, C. Marin, Marks, J., Marshall, A. M., Martel, L., Martelli, G., Martellotti, G., Martinazzoli, L., Martinelli, M., Gomez, D. Martinez, Santos, D. Martinez, Vidal, F. Martinez, Granollers, A. Martorell i, Massafferri, A., Matev, R., Mathad, A., Matiunin, V., Matteuzzi, C., Mattioli, K. R., Mauri, A., Maurice, E., Mauricio, J., Mayencourt, P., de Cos, J. Mazorra, Mazurek, M., McCann, M., Mcconnell, L., McGrath, T. H., McHugh, N. T., McNab, A., McNulty, R., Meadows, B., Meier, G., Melnychuk, D., Meng, F. M., Merk, M., Merli, A., Garcia, L. Meyer, Miao, D., Miao, H., Mikhasenko, M., Milanes, D. A., Minotti, A., Minucci, E., Miralles, T., Mitreska, B., Mitzel, D. S., Modak, A., Mohammed, R. A., Moise, R. D., Mokhnenko, S., Cardenas, E. F. Molina, Mombächer, T., Monk, M., Monteil, S., Gomez, A. Morcillo, Morello, G., Morello, M. J., Morgenthaler, M. P., Moron, J., Morren, W., Morris, A. B., Morris, A. G., Mountain, R., Mu, H., Mu, Z. M., Muhammad, E., Muheim, F., Mulder, M., Müller, K., Muñoz-Rojas, F., Murta, R., Naik, P., Nakada, T., Nandakumar, R., Nanut, T., Nasteva, I., Needham, M., Neri, N., Neubert, S., Neufeld, N., Neustroev, P., Nicolini, J., Nicotra, D., Niel, E. M., Nikitin, N., Niu, Q., Nogarolli, P., Nogga, P., Normand, C., Fernandez, J. Novoa, Nowak, G., Nunez, C., Nur, H. N., Oblakowska-Mucha, A., Obraztsov, V., Oeser, T., Okamura, S., Okhotnikov, A., Okhrimenko, O., Oldeman, R., Oliva, F., Olocco, M., Onderwater, C. J. G., O'Neil, R. H., Osthues, D., Goicochea, J. M. Otalora, Owen, P., Oyanguren, A., Ozcelik, O., Paciolla, F., Padee, A., Padeken, K. O., Pagare, B., Pais, P. R., Pajero, T., Palano, A., Palutan, M., Pan, X., Panshin, G., Paolucci, L., Papanestis, A., Pappagallo, M., Pappalardo, L. 
L., Pappenheimer, C., Parkes, C., Parmar, D., Passalacqua, B., Passaleva, G., Passaro, D., Pastore, A., Patel, M., Patoc, J., Patrignani, C., Paul, A., Pawley, C. J., Pellegrino, A., Peng, J., Altarelli, M. Pepe, Perazzini, S., Pereima, D., Da Costa, H. Pereira, Castro, A. Pereiro, Perret, P., Perrevoort, A., Perro, A., Peters, M. J., Petridis, K., Petrolini, A., Pfaller, J. P., Pham, H., Pica, L., Piccini, M., Piccolo, L., Pietrzyk, B., Pietrzyk, G., Pilato, R. N., Pinci, D., Pisani, F., Pizzichemi, M., Placinta, V., Casasus, M. Plo, Poeschl, T., Polci, F., Lener, M. Poli, Poluektov, A., Polukhina, N., Polyakov, I., Polycarpo, E., Ponce, S., Popov, D., Poslavskii, S., Prasanth, K., Prouve, C., Provenzano, D., Pugatch, V., Punzi, G., Qasim, S., Qian, Q. Q., Qian, W., Qin, N., Qu, S., Quagliani, R., Trejo, R. I. Rabadan, Rademacker, J. H., Rama, M., García, M. Ramírez, De Oliveira, V. Ramos, Pernas, M. Ramos, Rangel, M. S., Ratnikov, F., Raven, G., De Miguel, M. Rebollo, Redi, F., Reich, J., Reiss, F., Ren, Z., Resmi, P. K., Ribatti, R., Ricart, G. R., Riccardi, D., Ricciardi, S., Richardson, K., Richardson-Slipper, M., Rinnert, K., Robbe, P., Robertson, G., Rodrigues, E., Alvarez, A. Rodriguez, Fernandez, E. Rodriguez, Lopez, J. A. Rodriguez, Rodriguez, E. Rodriguez, Roensch, J., Rogachev, A., Rogovskiy, A., Rolf, D. L., Roloff, P., Romanovskiy, V., Vidal, A. Romero, Romolini, G., Ronchetti, F., Rong, T., Rotondo, M., Roy, S. R., Rudolph, M. S., Diaz, M. Ruiz, Fernandez, R. A. Ruiz, Vidal, J. Ruiz, Ryzhikov, A., Ryzka, J., Saavedra-Arias, J. J., Silva, J. J. Saborido, Sadek, R., Sagidova, N., Sahoo, D., Sahoo, N., Saitta, B., Salomoni, M., Sanderswood, I., Santacesaria, R., Rios, C. Santamarina, Santimaria, M., Santoro, L., Santovetti, E., Saputi, A., Saranin, D., Sarnatskiy, A., Sarpis, G., Sarpis, M., Satriano, C., Satta, A., Saur, M., Savrina, D., Sazak, H., Sborzacchi, F., Smead, L. G. Scantlebury, Scarabotto, A., Schael, S., Scherl, S., Schiller, M., Schindler, H., Schmelling, M., Schmidt, B., Schmitt, S., Schmitz, H., Schneider, O., Schopper, A., Schulte, N., Schulte, S., Schune, M. H., Schwemmer, R., Schwering, G., Sciascia, B., Sciuccati, A., Segal, I., Sellam, S., Semennikov, A., Senger, T., Soares, M. Senghi, Sergi, A., Serra, N., Sestini, L., Seuthe, A., Shang, Y., Shangase, D. M., Shapkin, M., Sharma, R. S., Shchemerov, I., Shchutska, L., Shears, T., Shekhtman, L., Shen, Z., Sheng, S., Shevchenko, V., Shi, B., Shi, Q., Shimizu, Y., Shmanin, E., Shorkin, R., Shupperd, J. D., Coutinho, R. Silva, Simi, G., Simone, S., Skidmore, N., Skwarnicki, T., Slater, M. W., Smallwood, J. C., Smith, E., Smith, K., Smith, M., Snoch, A., Lavra, L. Soares, Sokoloff, M. D., Soler, F. J. P., Solomin, A., Solovev, A., Solovyev, I., Sommerfeld, N. S., Song, R., Song, Y., Song, Y. S., De Almeida, F. L. Souza, De Paula, B. Souza, Norella, E. Spadaro, Spedicato, E., Speer, J. G., Spiridenkov, E., Spradlin, P., Sriskaran, V., Stagni, F., Stahl, M., Stahl, S., Stanislaus, S., Stefaniak, M., Stein, E. N., Steinkamp, O., Stenyakin, O., Stevens, H., Strekalina, D., Su, Y., Suljik, F., Sun, J., Sun, L., Sundfeld, D., Sutcliffe, W., Swallow, P. N., Swientek, K., Swystun, F., Szabelski, A., Szumlak, T., Tan, Y., Tang, Y., Tat, M. D., Terentev, A., Terzuoli, F., Teubert, F., Thomas, E., Thompson, D. J. D., Tilquin, H., Tisserand, V., T'Jampens, S., Tobin, M., Tomassetti, L., Tonani, G., Tong, X., Tork, T., Machado, D. Torres, Toscano, L., Tou, D. Y., Trippl, C., Tuci, G., Tuning, N., Uecker, L. 
H., Ukleja, A., Unverzagt, D. J., Urbach, B., Usachov, A., Ustyuzhanin, A., Uwer, U., Vagnoni, V., Cadenas, V. Valcarce, Valenti, G., Canudas, N. Valls, van Eldik, J., Van Hecke, H., van Herwijnen, E., Van Hulse, C. B., Van Laak, R., van Veghel, M., Vasquez, G., Gomez, R. Vazquez, Regueiro, P. Vazquez, Sierra, C. Vázquez, Vecchi, S., Velthuis, J. J., Veltri, M., Venkateswaran, A., Verdoglia, M., Vesterinen, M., Benet, D. Vico, Villalba, P. Vidrier, Diaz, M. Vieites, Vilasis-Cardona, X., Figueras, E. Vilella, Villa, A., Vincent, P., Volle, F. C., Bruch, D. vom, Voropaev, N., Vos, K., Vrahas, C., Wagner, J., Walsh, J., Walton, E. J., Wan, G., Wang, C., Wang, G., Wang, H., Wang, J., Wang, M., Wang, N. W., Wang, R., Wang, X., Wang, X. W., Wang, Y., Wang, Y. W., Wang, Z., Ward, J. A., Waterlaat, M., Watson, N. K., Websdale, D., Wei, Y., Wendel, J., Westhenry, B. D. C., White, C., Whitehead, M., Whiter, E., Wiederhold, A. R., Wiedner, D., Wilkinson, G., Wilkinson, M. K., Williams, M., Williams, M. J., Williams, M. R. J., Williams, R., Williams, Z., Wilson, F. F., Winn, M., Wislicki, W., Witek, M., Witola, L., Wormser, G., Wotton, S. A., Wu, H., Wu, J., Wu, X., Wu, Y., Wu, Z., Wyllie, K., Xian, S., Xiang, Z., Xie, Y., Xing, T. X., Xu, A., Xu, L., Xu, M., Xu, Z., Yang, K., Yang, S., Yang, X., Yang, Y., Yang, Z., Yeroshenko, V., Yeung, H., Yin, H., Yin, X., Yu, C. Y., Yu, J., Yuan, X., Yuan, Y, Zaffaroni, E., Zavertyaev, M., Zdybal, M., Zenesini, F., Zeng, C., Zeng, M., Zhang, C., Zhang, D., Zhang, J., Zhang, L., Zhang, S., Zhang, Y., Zhang, Y. Z., Zhang, Z., Zhao, Y., Zhelezov, A., Zheng, S. Z., Zheng, X. Z., Zheng, Y., Zhou, T., Zhou, X., Zhou, Y., Zhovkovska, V., Zhu, L. Z., Zhu, X., Zhukov, V., Zhuo, J., Zou, Q., Zuliani, D., and Zunica, G. more...
- Subjects
High Energy Physics - Experiment - Abstract
An analysis of the flavour oscillations of the charmed neutral meson is presented. The ratio of $D^{0} \to K^{+} \pi^{-}$ and $D^{0} \to K^{-} \pi^{+}$ decay rates is measured as a function of the decay time of the $D^{0}$ meson and compared with the charge-conjugated system to search for charge-parity violation. The meson flavour at production is double-tagged by the charges of the muon and pion in the preceding $\overline{B} \to D^{*}(2010)^{+} \mu^{-} X$ and ${{D^{*}(2010)^{+}} \to D^{0}\pi^{+}}$ decays, respectively. These decays are selected from proton-proton collision data collected by the LHCb experiment at a centre-of-mass energy of ${13\,\text{TeV}}$ and corresponding to an integrated luminosity of ${5.4\,\text{fb}^{-1}}$. The flavour oscillation parameters, relating to the differences in mass and width of the mass eigenstates, are found to be ${y^\prime=(5.8\pm1.6)\times10^{-3}}$ and ${(x^\prime)^2=(0.0\pm1.2)\times10^{-4}}$. No evidence for charge-parity violation is seen either in the flavour oscillations or in the decay, where the direct charge-parity asymmetry is measured to be ${A_{D}=(2.3\pm1.7)\,{\%}}$., Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://lbfence.cern.ch/alcm/public/analysis/full-details/3260/ (LHCb public pages) more...
- Published
- 2025
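For context, analyses of this kind conventionally fit the time-dependent ratio of wrong-sign to right-sign decay rates with the standard second-order expansion in the mixing parameters (quoted here in its textbook form, not necessarily the exact fit model of the paper): $$R(t) \approx R_D + \sqrt{R_D}\,y'\,\frac{t}{\tau} + \frac{x'^2 + y'^2}{4}\left(\frac{t}{\tau}\right)^2,$$ where $R_D$ is the ratio of doubly Cabibbo-suppressed to Cabibbo-favoured decay rates, $\tau$ is the $D^{0}$ lifetime, and $x'$, $y'$ are the rotated mixing parameters reported above.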
38. TigerVector: Supporting Vector Search in Graph Databases for Advanced RAGs
- Author
-
Liu, Shige, Zeng, Zhifang, Chen, Li, Ainihaer, Adil, Ramasami, Arun, Chen, Songting, Xu, Yu, Wu, Mingxi, and Wang, Jianguo
- Subjects
Computer Science - Databases ,H.2 - Abstract
In this paper, we introduce TigerVector, a system that integrates vector search and graph query within TigerGraph, a Massively Parallel Processing (MPP) native graph database. We extend the vertex attribute type with the embedding type. To support fast vector search, we devise an MPP index framework that interoperates efficiently with the graph engine. The graph query language GSQL is enhanced to support vector type expressions and enable query compositions between vector search results and graph query blocks. These advancements elevate the expressive power and analytical capabilities of graph databases, enabling seamless fusion of unstructured and structured data in ways previously unattainable. Through extensive experiments, we demonstrate TigerVector's hybrid search capability, scalability, and superior performance compared to other graph databases (including Neo4j and Amazon Neptune) and a highly optimized specialized vector database (Milvus). TigerVector was integrated into TigerGraph v4.2, the latest release of TigerGraph, in December 2024., Comment: 13 pages, 11 figures more...
- Published
- 2025
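To make the kind of composition described in the TigerVector entry above concrete, here is a minimal, purely hypothetical Python sketch (not GSQL and not TigerGraph's API; the names hybrid_search, cosine and the toy data are invented for illustration): it runs a top-k vector search restricted to a graph neighbourhood, which is the essence of combining vector search results with graph query blocks.

import heapq
import numpy as np

embeddings = {                       # vertex id -> embedding (toy data)
    "p1": np.array([0.9, 0.1]),
    "p2": np.array([0.1, 0.9]),
    "p3": np.array([0.8, 0.2]),
}
edges = {"p1": {"p2"}, "p2": {"p1", "p3"}, "p3": {"p2"}}   # adjacency lists

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_search(query_vec, seed, k=2):
    """Top-k by cosine similarity, restricted to the 1-hop neighbourhood of `seed`."""
    candidates = edges.get(seed, set()) | {seed}
    scored = [(cosine(query_vec, embeddings[v]), v) for v in candidates]
    return heapq.nlargest(k, scored)

print(hybrid_search(np.array([1.0, 0.0]), seed="p2"))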
39. Can OpenAI o1 Reason Well in Ophthalmology? A 6,990-Question Head-to-Head Evaluation Study
- Author
-
Srinivasan, Sahana, Ai, Xuguang, Zou, Minjie, Zou, Ke, Kim, Hyunjae, Lo, Thaddaeus Wai Soon, Pushpanathan, Krithi, Kong, Yiming, Li, Anran, Singer, Maxwell, Jin, Kai, Antaki, Fares, Chen, David Ziyou, Liu, Dianbo, Adelman, Ron A., Chen, Qingyu, and Tham, Yih Chung more...
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
Question: What is the performance and reasoning ability of OpenAI o1 compared to other large language models in addressing ophthalmology-specific questions? Findings: This study evaluated OpenAI o1 and five LLMs using 6,990 ophthalmological questions from MedMCQA. O1 achieved the highest accuracy (0.88) and macro-F1 score but ranked third in reasoning capabilities based on text-generation metrics. Across subtopics, o1 ranked first in ``Lens'' and ``Glaucoma'' but second to GPT-4o in ``Corneal and External Diseases'', ``Vitreous and Retina'' and ``Oculoplastic and Orbital Diseases''. Subgroup analyses showed o1 performed better on queries with longer ground truth explanations. Meaning: O1's reasoning enhancements may not fully extend to ophthalmology, underscoring the need for domain-specific refinements to optimize performance in specialized fields like ophthalmology., Comment: 44 pages more...
- Published
- 2025
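The headline metrics in the entry above (accuracy and macro-F1) can be reproduced on any labelled question set with a few lines of scikit-learn; the toy labels below are illustrative and are not the study's data or evaluation code.

from sklearn.metrics import accuracy_score, f1_score

y_true = ["Lens", "Glaucoma", "Retina", "Lens", "Glaucoma"]
y_pred = ["Lens", "Glaucoma", "Lens",   "Lens", "Retina"]

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))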
40. Advancing General Multimodal Capability of Vision-language Models with Pyramid-descent Visual Position Encoding
- Author
-
Chen, Zhanpeng, Li, Mingxiao, Chen, Ziyang, Du, Nan, Li, Xiaolong, and Zou, Yuexian
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language - Abstract
Vision-language Models (VLMs) have shown remarkable capabilities in advancing general artificial intelligence, yet the irrational encoding of visual positions persists in inhibiting the models' comprehensive perception performance across different levels of granularity. In this work, we propose Pyramid-descent Visual Position Encoding (PyPE), a novel approach designed to enhance the perception of visual tokens within VLMs. By assigning visual position indexes from the periphery to the center and expanding the central receptive field incrementally, PyPE addresses the limitations of traditional raster-scan methods and mitigates the long-term decay effects induced by Rotary Position Embedding (RoPE). Our method reduces the relative distance between interrelated visual elements and instruction tokens, promoting a more rational allocation of attention weights, allowing for a multi-granularity perception of visual elements, and countering the over-reliance on anchor tokens. Extensive experimental evaluations demonstrate that PyPE consistently improves the general capabilities of VLMs across various sizes. Code is available at https://github.com/SakuraTroyChen/PyPE. more...
- Published
- 2025
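The periphery-to-center index assignment described in the PyPE entry above can be illustrated with a small grid: each cell's ring number is its distance to the nearest edge, and outer rings receive smaller position indexes. The sketch below is a hypothetical illustration in numpy (function name and details are ours), not the authors' implementation.

import numpy as np

def periphery_to_center_indices(h, w):
    # ring number of each cell = distance to the nearest grid edge
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    ring = np.minimum(np.minimum(rows, h - 1 - rows), np.minimum(cols, w - 1 - cols))
    # cells on outer rings receive smaller position indexes than central ones
    flat = np.empty(h * w, dtype=int)
    flat[np.argsort(ring.ravel(), kind="stable")] = np.arange(h * w)
    return flat.reshape(h, w)

print(periphery_to_center_indices(5, 5))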
41. Reliable Text-to-SQL with Adaptive Abstention
- Author
-
Chen, Kaiwen, Chen, Yueting, Yu, Xiaohui, and Koudas, Nick
- Subjects
Computer Science - Databases ,Computer Science - Artificial Intelligence - Abstract
Large language models (LLMs) have revolutionized natural language interfaces for databases, particularly in text-to-SQL conversion. However, current approaches often generate unreliable outputs when faced with ambiguity or insufficient context. We present Reliable Text-to-SQL (RTS), a novel framework that enhances query generation reliability by incorporating abstention and human-in-the-loop mechanisms. RTS focuses on the critical schema linking phase, which aims to identify the key database elements needed for generating SQL queries. It autonomously detects potential errors during the answer generation process and responds by either abstaining or engaging in user interaction. A vital component of RTS is the Branching Point Prediction (BPP), which applies statistical conformal techniques to the hidden layers of the LLM used for schema linking, providing probabilistic guarantees on schema linking accuracy. We validate our approach through comprehensive experiments on the BIRD benchmark, demonstrating significant improvements in robustness and reliability. Our findings highlight the potential of combining transparent-box LLMs with human-in-the-loop processes to create more robust natural language interfaces for databases. For the BIRD benchmark, our approach achieves near-perfect schema linking accuracy, autonomously involving a human when needed. Combined with query generation, we demonstrate that near-perfect schema linking and a small query generation model can almost match SOTA accuracy achieved with a model orders of magnitude larger than the one we use. more...
- Published
- 2025
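A minimal split-conformal abstention sketch in the spirit of the RTS entry above: calibrate a nonconformity-score threshold, then abstain (or escalate to a human) whenever the resulting prediction set is not a single schema element. The scores and function names are toy assumptions, not the paper's BPP implementation.

import numpy as np

def calibrate_threshold(cal_scores, alpha=0.1):
    """Split-conformal quantile of calibration nonconformity scores (true element's score per example)."""
    scores = np.sort(np.asarray(cal_scores))
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)   # finite-sample correction
    return scores[k - 1]

def predict_or_abstain(candidate_scores, threshold):
    """candidate_scores: {schema element: nonconformity score} for a new query."""
    prediction_set = [c for c, s in candidate_scores.items() if s <= threshold]
    if len(prediction_set) == 1:
        return prediction_set[0]          # confident: answer directly
    return None                           # ambiguous or empty set: abstain / involve a human

rng = np.random.default_rng(0)
thr = calibrate_threshold(rng.uniform(0, 1, 200), alpha=0.1)
print(predict_or_abstain({"orders.id": 0.05, "users.id": 0.72}, thr))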
42. Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary Feedback
- Author
-
Lin, Yen-Ting, Jin, Di, Xu, Tengyu, Wu, Tianhao, Sukhbaatar, Sainbayar, Zhu, Chen, He, Yun, Chen, Yun-Nung, Weston, Jason, Tian, Yuandong, Rahnama, Arash, Wang, Sinong, Ma, Hao, and Fang, Han
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
Large language models (LLMs) have recently demonstrated remarkable success in mathematical reasoning. Despite progress in methods like chain-of-thought prompting and self-consistency sampling, these advances often focus on final correctness without ensuring that the underlying reasoning process is coherent and reliable. This paper introduces Step-KTO, a training framework that combines process-level and outcome-level binary feedback to guide LLMs toward more trustworthy reasoning trajectories. By providing binary evaluations for both the intermediate reasoning steps and the final answer, Step-KTO encourages the model to adhere to logical progressions rather than relying on superficial shortcuts. Our experiments on challenging mathematical benchmarks show that Step-KTO significantly improves both final answer accuracy and the quality of intermediate reasoning steps. For example, on the MATH-500 dataset, Step-KTO achieves a notable improvement in Pass@1 accuracy over strong baselines. These results highlight the promise of integrating stepwise process feedback into LLM training, paving the way toward more interpretable and dependable reasoning capabilities. more...
- Published
- 2025
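As a toy illustration of the idea in the Step-KTO entry above, the snippet below combines per-step binary labels with an outcome-level binary label into a single scalar signal. This is only a weighting sketch under our own assumptions; the actual Step-KTO objective is a KTO-style training loss, not this formula.

def combined_feedback(step_labels, outcome_label, w_process=0.5):
    """step_labels: list of 0/1 judgments per intermediate step; outcome_label: 0/1 for the final answer."""
    process_score = sum(step_labels) / len(step_labels) if step_labels else 0.0
    return w_process * process_score + (1.0 - w_process) * outcome_label

print(combined_feedback([1, 1, 0, 1], outcome_label=1))   # 0.875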
43. High-Throughput, Energy-Efficient RRAM-Based In-Memory Computing LPN Accelerator
- Author
-
Yue, Hao, Chen, Yihao, Jiang, Zhelong, Li, Zhigang, Chen, Gang, and Lu, Huaxiang
- Subjects
Computer Science - Emerging Technologies - Abstract
As a strong candidate for the post-quantum cryptographic (PQC) era, Learning Parity with Noise (LPN) has been extensively studied in the field of cryptography. However, the data transfer bottleneck between the computation and memory modules has significantly hindered the development of LPN-based cryptographic techniques due to the large matrices involved. This work introduces an RRAM-based in-memory computing LPN accelerator aimed at overcoming data transfer bottlenecks, thereby executing LPN computation tasks efficiently. To ensure the high accuracy of the LPN AND operation, a folded current amplification circuit is proposed to address the leakage current issue caused by the limited high-resistance state of RRAM. Meanwhile, a Cumulative XOR Fast Computing Method is introduced to efficiently convert accumulated current values into LPN XOR operation results. more...
- Published
- 2025
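For readers unfamiliar with the workload named above: an LPN instance is a binary matrix-vector product, i.e. bitwise AND terms accumulated by XOR over GF(2), plus Bernoulli noise. The sketch below shows that computation at toy sizes in numpy; it says nothing about the paper's hardware mapping.

import numpy as np

rng = np.random.default_rng(0)
n, m, tau = 128, 256, 0.125                       # secret length, sample count, noise rate

s = rng.integers(0, 2, size=n)                    # secret vector
A = rng.integers(0, 2, size=(m, n))               # public binary matrix
e = (rng.random(m) < tau).astype(int)             # Bernoulli(tau) noise
b = (A @ s + e) % 2                               # b = A.s XOR e over GF(2)

# AND/XOR view of a single sample: XOR-accumulate the bitwise ANDs, then add the noise bit.
row0 = int(np.bitwise_xor.reduce(A[0] & s) ^ e[0])
assert row0 == b[0]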
44. Centenary progress from the Nernst theorem to the Nernst statement
- Author
-
Chen, Xiaohang, Su, Shanhe, Zhou, Yinghui, and Chen, Jincan
- Subjects
Condensed Matter - Statistical Mechanics ,Physics - Popular Physics - Abstract
Textbooks present different versions of the schematic diagram related to the Nernst equation; this prompts a discussion of the Nernst equation and leads to the discovery of other meaningful schematic diagrams that have not previously appeared in the literature. It is also found that, through the introduction of a new function, the schematic diagram of the Nernst equation in the isothermal process of any thermodynamic system can be generated in a unified way, and that the Nernst equation can be re-obtained from the experimental data of low-temperature chemical reactions without any artificial additional assumptions. The results show clearly that the centenary progress from the Nernst theorem to the Nernst statement is complete. more...
- Published
- 2025
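As a point of reference, the Nernst heat theorem is commonly stated as the vanishing of the entropy change of an isothermal process in the limit of absolute zero, $$\lim_{T \to 0}\,(\Delta S)_T = 0.$$ This is the textbook form only; how the paper distinguishes this theorem from the "Nernst statement" is developed in the full text.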
45. Optical Flares Detected on a Contact Binary: The First Photometric and Spectroscopic Analysis of a Long-period Low Mass Ratio Contact Binary HAT 307-0007476
- Author
-
Li, Ling-Zhi, Li, Kai, Gao, Xiang, Chen, Xiao-Dian, Feng, Shuai, Gao, Dong-Yang, Guo, Di-Fu, Chen, Xu, Gao, Xing, Sun, Guo-You, Yaqup, Shahidin, Bai, Chunhai, and Esamdin, Ali
- Subjects
Astrophysics - Solar and Stellar Astrophysics - Abstract
This paper presents the first photometric and spectroscopic analysis of the long-period totally eclipsing contact binary HAT 307-0007476. The system is a low mass ratio ($q\sim0.114$), medium contact binary ($f\sim37.1\%$). Two flare events were detected in multi-band observations in December 2022, separated by an interval of 4 days. The average duration of the two flares is about 2289 s, and both reach superflare energy levels. The excess emission of the H$_\alpha$ line in the LAMOST spectra of this object was analyzed, indicating chromospheric activity. The $O-C$ diagram shows a long-term orbital period increase, attributed to mass transfer between the two component stars. We conclude that HAT 307-0007476 is currently in a stable region, based on both $J_{spin}/J_{orb}$ and a comparison of the instability parameters with its current values., Comment: 17 pages, 11 figures, 7 tables, accepted by MNRAS more...
- Published
- 2025
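For context, a long-term period increase driven by mass transfer is usually interpreted with the textbook relation for conservative mass transfer (not necessarily the exact expression used by the authors), $$\frac{\dot P}{P} = 3\,\dot M_1\left(\frac{1}{M_2} - \frac{1}{M_1}\right),$$ with $\dot M_1 < 0$ for the donor: when the less massive component transfers mass to the more massive one, the orbital period increases, consistent with the $O-C$ trend described above.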
46. Picachv: Formally Verified Data Use Policy Enforcement for Secure Data Analytics
- Author
-
Chen, Haobin Hiroki, Chen, Hongbo, Sun, Mingshen, Wang, Chenghong, and Wang, XiaoFeng
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Databases ,Computer Science - Programming Languages - Abstract
Ensuring the proper use of sensitive data in analytics under complex privacy policies is an increasingly critical challenge. Many existing approaches lack portability, verifiability, and scalability across diverse data processing frameworks. We introduce Picachv, a novel security monitor that automatically enforces data use policies. It works on relational algebra as an abstraction for program semantics, enabling policy enforcement on query plans generated by programs during execution. This approach simplifies analysis across diverse analytical operations and supports various front-end query languages. By formalizing both data use policies and relational algebra semantics in Coq, we prove that Picachv correctly enforces policies. Picachv also leverages Trusted Execution Environments (TEEs) to enhance trust at runtime, providing stakeholders with provable assurance that analytical tasks comply with their data use policies. We integrated Picachv into Polars, a state-of-the-art data analytics framework, and evaluated its performance using the TPC-H benchmark. We also apply our approach to real-world use cases. Our work demonstrates the practical application of formal methods in securing data analytics, addressing key challenges in portability, verifiability, and scalability. more...
- Published
- 2025
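The core idea of policy enforcement over relational algebra, as described in the Picachv entry above, can be sketched in a few lines of hypothetical Python (not Picachv's Coq formalisation or API; the Node class, POLICY table, and check function are invented for illustration): walk the query-plan tree and reject it if any operator touches a column whose declared policy forbids that operation.

from dataclasses import dataclass, field

POLICY = {"ssn": {"project": False, "aggregate": True},   # toy column-level policies
          "age": {"project": True,  "aggregate": True}}

@dataclass
class Node:
    op: str                       # "project", "aggregate", "scan", ...
    columns: list
    children: list = field(default_factory=list)

def check(plan: Node):
    for col in plan.columns:
        rule = POLICY.get(col, {})
        if not rule.get(plan.op, True):
            raise PermissionError(f"{plan.op} on '{col}' violates its data use policy")
    for child in plan.children:
        check(child)

plan = Node("project", ["age"], [Node("aggregate", ["ssn"], [Node("scan", ["ssn", "age"])])])
check(plan)   # passes: ssn is only aggregated, never projected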
47. DiffVSR: Enhancing Real-World Video Super-Resolution with Diffusion Models for Advanced Visual Quality and Temporal Consistency
- Author
-
Li, Xiaohui, Liu, Yihao, Cao, Shuo, Chen, Ziyan, Zhuang, Shaobin, Chen, Xiangyu, He, Yinan, Wang, Yi, and Qiao, Yu
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Diffusion models have demonstrated exceptional capabilities in image generation and restoration, yet their application to video super-resolution faces significant challenges in maintaining both high fidelity and temporal consistency. We present DiffVSR, a diffusion-based framework for real-world video super-resolution that effectively addresses these challenges through key innovations. For intra-sequence coherence, we develop a multi-scale temporal attention module and temporal-enhanced VAE decoder that capture fine-grained motion details. To ensure inter-sequence stability, we introduce a noise rescheduling mechanism with an interweaved latent transition approach, which enhances temporal consistency without additional training overhead. We propose a progressive learning strategy that transitions from simple to complex degradations, enabling robust optimization despite limited high-quality video data. Extensive experiments demonstrate that DiffVSR delivers superior results in both visual quality and temporal consistency, setting a new performance standard in real-world video super-resolution., Comment: Project page: https://xh9998.github.io/DiffVSR-project/ more...
- Published
- 2025
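As a loose, hypothetical illustration of "multi-scale temporal attention" as named in the DiffVSR entry above, the numpy sketch below applies plain scaled dot-product self-attention over per-frame feature vectors at two temporal strides and averages the results. It is our own simplification, not the DiffVSR architecture or code.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(frames):
    # frames: (T, D), one feature vector per frame; self-attention over the time axis
    scores = frames @ frames.T / np.sqrt(frames.shape[1])
    return softmax(scores, axis=-1) @ frames

def multi_scale_temporal_attention(frames, strides=(1, 2)):
    # attend at several temporal strides, upsample, and average
    out = np.zeros_like(frames)
    for s in strides:
        sub = temporal_attention(frames[::s])
        out += np.repeat(sub, s, axis=0)[: len(frames)]
    return out / len(strides)

video = np.random.default_rng(0).normal(size=(8, 16))   # 8 frames, 16-dim features
print(multi_scale_temporal_attention(video).shape)      # (8, 16)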
48. Temporal refraction and reflection in modulated mechanical metabeams: theory and physical observation
- Author
-
Wang, Shaoyun, Shao, Nan, Chen, Hui, Chen, Jiaji, Qian, Honghua, Wu, Qian, Duan, Huiling, Alu, Andrea, and Huang, Guoliang
- Subjects
Physics - Optics ,Physics - Applied Physics - Abstract
Wave reflection and refraction at a time interface follow different conservation laws compared to conventional scattering at a spatial interface. This study presents the experimental demonstration of refraction and reflection of flexural waves across a temporal boundary in a continuum-based mechanical metabeam, and unveils opportunities that emerge by tailoring temporal scattering phenomena for phononic applications. We observe these phenomena in an elastic beam attached to an array of piezoelectric patches that can vary the effective elastic properties of the beam in time. Frequency conversion and phase conjugation are observed at a single temporal interface. These results are consistent with the temporal Snell law and Fresnel equations for temporal interfaces. Further, we illustrate the manipulation of amplitude and frequency spectra of flexural wave temporal refraction and reflection through multi-stepped temporal interfaces. Finally, by implementing a smooth time variation of wave impedance, we numerically and experimentally demonstrate the capabilities of the temporal metabeam to realize waveform morphing and information coding. Our findings lay the foundation for developing time mechanical metamaterials and time phononic crystals, offering new avenues for advanced phonon manipulation in both wave amplitude and frequency., Comment: 16 pages, 8 figures more...
- Published
- 2025
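As background for the conservation laws mentioned above (stated generically for time interfaces, not for the paper's specific flexural-beam dispersion): when a material property changes abruptly in time, the spatial wavenumber of a wave is conserved while its frequency is not, $$k_2 = k_1, \qquad \omega_2 = \omega(k_1;\ \text{new medium}) \neq \omega_1,$$ which is the converse of a spatial interface, where the frequency is conserved and the wavenumber changes. The frequency conversion reported above follows from this rule applied to the modulated beam's dispersion relation.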
49. Uncertainty-Aware Digital Twins: Robust Model Predictive Control using Time-Series Deep Quantile Learning
- Author
-
Chen, Yi-Ping, Tsai, Ying-Kuan, Karkaria, Vispi, and Chen, Wei
- Subjects
Electrical Engineering and Systems Science - Systems and Control - Abstract
Digital Twins, virtual replicas of physical systems that enable real-time monitoring, model updates, predictions, and decision-making, present novel avenues for proactive control strategies for autonomous systems. However, achieving real-time decision-making in Digital Twins under uncertainty necessitates an efficient uncertainty quantification (UQ) approach and optimization driven by accurate predictions of system behaviors, which remains a challenge for learning-based methods. This paper presents a simultaneous multi-step robust model predictive control (MPC) framework that incorporates real-time decision-making with uncertainty awareness for Digital Twin systems. Leveraging a multi-step-ahead predictor named Time-Series Dense Encoder (TiDE) as the surrogate model, this framework differs from conventional MPC models that provide only one-step-ahead predictions. In contrast, TiDE can predict future states within the prediction horizon in one shot, significantly accelerating MPC. Furthermore, quantile regression is employed in the training of TiDE to perform flexible yet computationally efficient UQ on data uncertainty. Consequently, with the deep learning quantiles, the robust MPC problem is formulated as a deterministic optimization problem and provides a safety buffer that accommodates disturbances, enhancing the constraint satisfaction rate. As a result, the proposed method outperforms existing robust MPC methods by providing less-conservative UQ and has demonstrated efficacy in an engineering case study involving Directed Energy Deposition (DED) additive manufacturing. This proactive, uncertainty-aware control capability positions the proposed method as a potent tool for future Digital Twin applications and real-time process control in engineering systems. more...
- Published
- 2025
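The quantile regression mentioned in the entry above is typically trained with the pinball (quantile) loss; the minimal numpy sketch below shows that loss on toy data and is not the authors' implementation.

import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Average quantile loss at level q in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y_true = np.array([1.0, 2.0, 3.0])
y_med  = np.array([1.2, 1.8, 3.5])
print(pinball_loss(y_true, y_med, q=0.5))   # q=0.5 recovers half the mean absolute error
print(pinball_loss(y_true, y_med, q=0.9))   # higher q penalises under-prediction more heavily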
50. Resource-Efficient Compilation of Distributed Quantum Circuits for Solving Large-Scale Wireless Communication Network Problems
- Author
-
Chen, Kuan-Cheng, Burt, Felix, Yu, Shang, Liu, Chen-Yu, Hsieh, Min-Hsiu, and Leung, Kin K.
- Subjects
Quantum Physics ,Computer Science - Distributed, Parallel, and Cluster Computing - Abstract
Optimizing routing in Wireless Sensor Networks (WSNs) is pivotal for minimizing energy consumption and extending network lifetime. This paper introduces a resource-efficient compilation method for distributed quantum circuits tailored to address large-scale WSN routing problems. Leveraging a hybrid classical-quantum framework, we employ spectral clustering for network partitioning and the Quantum Approximate Optimization Algorithm (QAOA) for optimizing routing within manageable subgraphs. We formulate the routing problem as a Quadratic Unconstrained Binary Optimization (QUBO) problem, providing comprehensive mathematical formulations and complexity analyses. Comparative evaluations against traditional classical algorithms demonstrate significant energy savings and enhanced scalability. Our approach underscores the potential of integrating quantum computing techniques into wireless communication networks, offering a scalable and efficient solution for future network optimization challenges. more...
- Published
- 2025
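A QUBO, as mentioned in the entry above, encodes a cost as $x^{T} Q x$ over binary variables. The toy Python sketch below evaluates such a cost and solves a tiny instance by brute force, which is the classical baseline that QAOA would approximate on larger subgraphs; the matrix Q is an invented example (reward selecting a node, penalise selecting adjacent ones), not the paper's formulation.

import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0],    # toy upper-triangular QUBO: -1 per selected node,
              [ 0.0, -1.0,  2.0],    # +2 penalty for selecting two adjacent nodes
              [ 0.0,  0.0, -1.0]])

def qubo_cost(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

best = min(itertools.product([0, 1], repeat=Q.shape[0]), key=lambda x: qubo_cost(x, Q))
print(best, qubo_cost(best, Q))      # (1, 0, 1) with cost -2.0 for this toy instance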