1,387,849 results for "Liu, P."
Search Results
2. The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education. EdWorkingPaper No. 24-948
- Author
Annenberg Institute for School Reform at Brown University, Paiheng Xu, Jing Liu, Nathan Jones, Julie Cohen, and Wei Ai
- Abstract
Assessing instruction quality is a fundamental component of any improvement effort in the education system. However, traditional manual assessments are expensive, subjective, and heavily dependent on observers' expertise and idiosyncratic factors, preventing teachers from getting timely and frequent feedback. Unlike prior research, which focuses on low-inference instructional practices, this paper presents the first study that leverages Natural Language Processing (NLP) techniques to assess multiple high-inference instructional practices in two distinct educational settings: in-person K-12 classrooms and simulated performance tasks for pre-service teachers. It is also the first study to apply NLP to measure a teaching practice that has been demonstrated to be particularly effective for students with special needs. We confront two challenges inherent in NLP-based instructional analysis: noisy, lengthy input data and highly skewed distributions of human ratings. Our results suggest that pretrained Language Models (PLMs) achieve performance comparable to the agreement level of human raters for variables that are more discrete and require lower inference, but their efficacy diminishes for more complex teaching practices. Interestingly, using only teachers' utterances as input yields strong results for student-centered variables, alleviating common concerns over the difficulty of collecting and transcribing high-quality student speech data in in-person teaching settings. Our findings highlight both the potential and the limitations of current NLP techniques in the education domain, opening avenues for further exploration.
- Published
- 2024
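The study above rates long, noisy classroom transcripts with pretrained Language Models. One common way to handle inputs longer than a model's context window (an illustrative assumption here, not necessarily the authors' exact procedure) is to keep only teacher utterances, split them into overlapping windows, and average per-window scores. All names below (`teacher_only`, `windows`, `rate_transcript`, `score_window`) are hypothetical.

```python
# Illustrative sketch (not the paper's code): rating a long classroom
# transcript with a fixed-context model by scoring overlapping windows
# and averaging. `score_window` stands in for a fine-tuned PLM head.

def teacher_only(transcript):
    """Keep only teacher utterances, in line with the paper's finding
    that teacher speech alone can suffice for student-centered variables."""
    return [u["text"] for u in transcript if u["speaker"] == "teacher"]

def windows(tokens, size=512, stride=256):
    """Split a token list into overlapping windows of length `size`."""
    if len(tokens) <= size:
        return [tokens]
    return [tokens[i:i + size] for i in range(0, len(tokens) - size + stride, stride)]

def rate_transcript(tokens, score_window, size=512, stride=256):
    """Average per-window scores into one transcript-level rating."""
    ws = windows(tokens, size, stride)
    return sum(score_window(w) for w in ws) / len(ws)
```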
3. Deep Learning Enhanced Quantum Holography with Undetected Photons
- Author
Fan, Weiru, Qian, Gewei, Wang, Yutong, Xu, Chen-Ran, Chen, Ziyang, Liu, Xun, Li, Wei, Liu, Xu, Liu, Feng, Xu, Xingqi, Wang, Da-Wei, and Yakovlev, Vladislav V.
- Subjects
Physics - Optics
- Abstract
Holography is an essential technique for generating three-dimensional images. Recently, quantum holography with undetected photons (QHUP) has emerged as a groundbreaking method capable of capturing complex amplitude images. Despite its potential, the practical application of QHUP has been limited by susceptibility to phase disturbances, low interference visibility, and limited spatial resolution. Deep learning, recognized for its ability to process complex data, holds significant promise for addressing these challenges. In this report, we present a substantial advancement in QHUP achieved by harnessing deep learning to extract images from single-shot holograms, resulting in vastly reduced noise and distortion alongside a notable enhancement in spatial resolution. The proposed and demonstrated deep learning QHUP (DL-QHUP) methodology delivers high-speed imaging, improved spatial resolution, and superior noise resilience, making it suitable for diverse applications across research fields stretching from biomedical imaging to remote sensing. DL-QHUP marks a crucial step forward for holography, with the potential to broaden imaging capabilities in challenging environments across scientific disciplines.
- Published
- 2024
4. Emu3: Next-Token Prediction is All You Need
- Author
Wang, Xinlong, Zhang, Xiaosong, Luo, Zhengxiong, Sun, Quan, Cui, Yufeng, Wang, Jinsheng, Zhang, Fan, Wang, Yueze, Li, Zhen, Yu, Qiying, Zhao, Yingli, Ao, Yulong, Min, Xuebin, Li, Tao, Wu, Boya, Zhao, Bo, Zhang, Bowen, Wang, Liangdong, Liu, Guang, He, Zheqi, Yang, Xi, Liu, Jingjing, Lin, Yonghua, Huang, Tiejun, and Wang, Zhongyuan
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
While next-token prediction is considered a promising path towards artificial general intelligence, it has struggled to excel in multimodal tasks, which are still dominated by diffusion models (e.g., Stable Diffusion) and compositional approaches (e.g., CLIP combined with LLMs). In this paper, we introduce Emu3, a new suite of state-of-the-art multimodal models trained solely with next-token prediction. By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences. Emu3 outperforms several well-established task-specific models in both generation and perception tasks, surpassing flagship models such as SDXL and LLaVA-1.6, while eliminating the need for diffusion or compositional architectures. Emu3 is also capable of generating high-fidelity video via predicting the next token in a video sequence. We simplify complex multimodal model designs by converging on a singular focus: tokens, unlocking great potential for scaling both during training and inference. Our results demonstrate that next-token prediction is a promising path towards building general multimodal intelligence beyond language. We open-source key techniques and models to support further research in this direction., Comment: Project Page: https://emu.baai.ac.cn
- Published
- 2024
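Emu3's central idea is a single discrete token space shared by text, images, and video, so one transformer can be trained with plain next-token prediction. A minimal sketch of that interface is below; the vocabulary sizes, the offset scheme, and the delimiter tokens are hypothetical stand-ins, not Emu3's actual tokenizers.

```python
# Sketch of the unified-vocabulary idea behind next-token multimodal
# training (hypothetical sizes; Emu3's real tokenizers differ).
TEXT_VOCAB = 32_000      # assumed text tokenizer size
IMAGE_VOCAB = 8_192      # assumed VQ codebook size
BOI = TEXT_VOCAB + IMAGE_VOCAB       # begin-of-image delimiter
EOI = TEXT_VOCAB + IMAGE_VOCAB + 1   # end-of-image delimiter

def to_shared(text_ids, image_codes):
    """Interleave text and image tokens in one id space: image VQ codes
    are offset by TEXT_VOCAB so the two vocabularies never collide."""
    return text_ids + [BOI] + [c + TEXT_VOCAB for c in image_codes] + [EOI]

def next_token_pairs(seq):
    """Standard next-token supervision: predict seq[t+1] from seq[:t+1]."""
    return [(seq[:t + 1], seq[t + 1]) for t in range(len(seq) - 1)]
```

With a shared id space, the same cross-entropy loss covers generation and perception; no diffusion head or separate vision encoder is needed at the objective level.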
5. Evaluation of OpenAI o1: Opportunities and Challenges of AGI
- Author
Zhong, Tianyang, Liu, Zhengliang, Pan, Yi, Zhang, Yutong, Zhou, Yifan, Liang, Shizhe, Wu, Zihao, Lyu, Yanjun, Shu, Peng, Yu, Xiaowei, Cao, Chao, Jiang, Hanqi, Chen, Hanxu, Li, Yiwei, Chen, Junhao, Hu, Huawen, Liu, Yihen, Zhao, Huaqin, Xu, Shaochen, Dai, Haixing, Zhao, Lin, Zhang, Ruidong, Zhao, Wei, Yang, Zhenyuan, Chen, Jingyuan, Wang, Peilong, Ruan, Wei, Wang, Hui, Zhao, Huan, Zhang, Jing, Ren, Yiming, Qin, Shihuan, Chen, Tong, Li, Jiaxi, Zidan, Arif Hassan, Jahin, Afrar, Chen, Minheng, Xia, Sichen, Holmes, Jason, Zhuang, Yan, Wang, Jiaqi, Xu, Bochen, Xia, Weiran, Yu, Jichao, Tang, Kaibo, Yang, Yaxuan, Sun, Bolun, Yang, Tao, Lu, Guoyu, Wang, Xianqiao, Chai, Lilong, Li, He, Lu, Jin, Sun, Lichao, Zhang, Xin, Ge, Bao, Hu, Xintao, Zhang, Lian, Zhou, Hua, Zhang, Lu, Zhang, Shu, Liu, Ninghao, Jiang, Bei, Kong, Linglong, Xiang, Zhen, Ren, Yudan, Liu, Jun, Jiang, Xi, Bao, Yu, Zhang, Wei, Li, Xiang, Li, Gang, Liu, Wei, Shen, Dinggang, Sikora, Andrea, Zhai, Xiaoming, Zhu, Dajiang, and Liu, Tianming
- Subjects
Computer Science - Computation and Language
- Abstract
This comprehensive study evaluates the performance of OpenAI's o1-preview large language model across a diverse array of complex reasoning tasks spanning multiple domains, including computer science, mathematics, natural sciences, medicine, linguistics, and social sciences. Through rigorous testing, o1-preview demonstrated remarkable capabilities, often achieving human-level or superior performance in areas ranging from coding challenges to scientific reasoning and from language processing to creative problem-solving. Key findings include:
- An 83.3% success rate in solving complex competitive programming problems, surpassing many human experts.
- Superior ability in generating coherent and accurate radiology reports, outperforming other evaluated models.
- 100% accuracy in high school-level mathematical reasoning tasks, providing detailed step-by-step solutions.
- Advanced natural language inference capabilities across general and specialized domains like medicine.
- Impressive performance in chip design tasks, outperforming specialized models in areas such as EDA script generation and bug analysis.
- Remarkable proficiency in anthropology and geology, demonstrating deep understanding and reasoning in these specialized fields.
- Strong capabilities in quantitative investing, with comprehensive financial knowledge and statistical modeling skills.
- Effective performance in social media analysis, including sentiment analysis and emotion recognition.
The model excelled particularly in tasks requiring intricate reasoning and knowledge integration across various fields. While some limitations were observed, including occasional errors on simpler problems and challenges with certain highly specialized concepts, the overall results indicate significant progress towards artificial general intelligence.
- Published
- 2024
6. Just say what you want: only-prompting self-rewarding online preference optimization
- Author
Xu, Ruijie, Liu, Zhihan, Liu, Yongfei, Yan, Shipeng, Wang, Zhaoran, Zhang, Zhi, and He, Xuming
- Subjects
Computer Science - Artificial Intelligence
- Abstract
We address the challenge of online Reinforcement Learning from Human Feedback (RLHF) with a focus on self-rewarding alignment methods. In online RLHF, obtaining feedback requires interaction with the environment, which can be costly when using additional reward models or the GPT-4 API. Current self-rewarding approaches rely heavily on the discriminator's judgment capabilities, which are effective for large-scale models but difficult to transfer to smaller ones. To address these limitations, we propose a novel, only-prompting self-rewarding online algorithm that generates preference datasets without relying on judgment capabilities. Additionally, we employ fine-grained arithmetic control over the optimality gap between positive and negative examples, generating more hard negatives in the later stages of training to help the model better capture subtle human preferences. Finally, we conduct extensive experiments on two base models, Mistral-7B and Mistral-Instruct-7B, which significantly improve over the reference model, achieving a 34.5% length-controlled win rate on AlpacaEval 2.0.
- Published
- 2024
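The abstract above describes arithmetic control over the optimality gap between positive and negative examples, with harder negatives appearing later in training. One way to read that idea (a hypothetical sketch, not the authors' algorithm; the linear schedule and all function names are assumptions) is to anneal a target score gap and pick, from scored candidate responses, the rejected one whose distance to the best response is closest to that target.

```python
# Hypothetical sketch of the "fine-grained arithmetic control" idea:
# shrink the target score gap between chosen and rejected responses as
# training progresses, so later pairs are harder negatives.
def target_gap(step, total_steps, gap_start=0.8, gap_end=0.1):
    """Linearly anneal the desired score gap from gap_start to gap_end."""
    frac = step / max(total_steps - 1, 1)
    return gap_start + frac * (gap_end - gap_start)

def pick_pair(scored, gap):
    """From (response, score) candidates, take the top-scored response as
    the positive and the candidate whose score difference to it is
    closest to the target gap as the negative."""
    ranked = sorted(scored, key=lambda rs: rs[1], reverse=True)
    pos = ranked[0]
    neg = min(ranked[1:], key=lambda rs: abs((pos[1] - rs[1]) - gap))
    return pos[0], neg[0]
```

Early in training the wide gap yields easy pairs; as the gap shrinks, the chosen negative moves closer in score to the positive, exposing the model to subtler preference distinctions.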
7. The hypothetical track-length fitting algorithm for energy measurement in liquid argon TPCs
- Author
DUNE Collaboration, Abud, A. Abed, Abi, B., Acciarri, R., Acero, M. A., Adames, M. R., Adamov, G., Adamowski, M., Adams, D., Adinolfi, M., Adriano, C., Aduszkiewicz, A., Aguilar, J., Akbar, F., Alex, N. S., Allison, K., Monsalve, S. Alonso, Alrashed, M., Alton, A., Alvarez, R., Alves, T., Amar, H., Amedo, P., Anderson, J., Andreopoulos, C., Andreotti, M., Andrews, M. P., Andrianala, F., Andringa, S., Anfimov, N., Ankowski, A., Antic, D., Antoniassi, M., Antonova, M., Antoshkin, A., Aranda-Fernandez, A., Arellano, L., Diaz, E. Arrieta, Arroyave, M. A., Asaadi, J., Ashkenazi, A., Asner, D., Asquith, L., Atkin, E., Auguste, D., Aurisano, A., Aushev, V., Autiero, D., Azam, M. B., Azfar, F., Back, A., Back, H., Back, J. J., Bagaturia, I., Bagby, L., Balashov, N., Balasubramanian, S., Baldi, P., Baldini, W., Baldonedo, J., Baller, B., Bambah, B., Banerjee, R., Barao, F., Barbu, D., Barenboim, G., Alzás, P. Barham, Barker, G. J., Barkhouse, W., Barr, G., Monarca, J. Barranco, Barros, A., Barros, N., Barrow, D., Barrow, J. L., Basharina-Freshville, A., Bashyal, A., Basque, V., Batchelor, C., Bathe-Peters, L., Battat, J. B. R., Battisti, F., Bay, F., Bazetto, M. C. Q., Alba, J. L. L. Bazo, Beacom, J. F., Bechetoille, E., Behera, B., Belchior, E., Bell, G., Bellantoni, L., Bellettini, G., Bellini, V., Beltramello, O., Benekos, N., Montiel, C. Benitez, Benjamin, D., Neves, F. Bento, Berger, J., Berkman, S., Bernal, J., Bernardini, P., Bersani, A., Bertolucci, S., Betancourt, M., Rodríguez, A. Betancur, Bevan, A., Bezawada, Y., Bezerra, A. T., Bezerra, T. J., Bhat, A., Bhatnagar, V., Bhatt, J., Bhattacharjee, M., Bhattacharya, M., Bhuller, S., Bhuyan, B., Biagi, S., Bian, J., Biery, K., Bilki, B., Bishai, M., Bitadze, A., Blake, A., Blaszczyk, F. D., Blazey, G. C., Blucher, E., Bodek, A., Bogenschuetz, J., Boissevain, J., Bolognesi, S., Bolton, T., Bomben, L., Bonesini, M., Bonilla-Diaz, C., Bonini, F., Booth, A., Boran, F., Bordoni, S., Merlo, R. 
Borges, Borkum, A., Bostan, N., Bouet, R., Boza, J., Bracinik, J., Brahma, B., Brailsford, D., Bramati, F., Branca, A., Brandt, A., Bremer, J., Brew, C., Brice, S. J., Brio, V., Brizzolari, C., Bromberg, C., Brooke, J., Bross, A., Brunetti, G., Brunetti, M., Buchanan, N., Budd, H., Buergi, J., Bundock, A., Burgardt, D., Butchart, S., V., G. Caceres, Cagnoli, I., Cai, T., Calabrese, R., Calcutt, J., Calivers, L., Calvo, E., Caminata, A., Camino, A. F., Campanelli, W., Campani, A., Benitez, A. Campos, Canci, N., Capó, J., Caracas, I., Caratelli, D., Carber, D., Carceller, J. M., Carini, G., Carlus, B., Carneiro, M. F., Carniti, P., Terrazas, I. Caro, Carranza, H., Carrara, N., Carroll, L., Carroll, T., Carter, A., Casarejos, E., Casazza, D., Forero, J. F. Castaño, Castaño, F. A., Castillo, A., Castromonte, C., Catano-Mur, E., Cattadori, C., Cavalier, F., Cavanna, F., Centro, S., Cerati, G., Cerna, C., Cervelli, A., Villanueva, A. Cervera, Chakraborty, K., Chalifour, M., Chappell, A., Charitonidis, N., Chatterjee, A., Chen, H., Chen, M., Chen, W. C., Chen, Y., Chen-Wishart, Z., Cherdack, D., Chi, C., Chiapponi, F., Chirco, R., Chitirasreemadam, N., Cho, K., Choate, S., Choi, G., Chokheli, D., Chong, P. S., Chowdhury, B., Christian, D., Chukanov, A., Chung, M., Church, E., Cicala, M. F., Cicerchia, M., Cicero, V., Ciolini, R., Clarke, P., Cline, G., Coan, T. E., Cocco, A. G., Coelho, J. A. B., Cohen, A., Collazo, J., Collot, J., Conley, E., Conrad, J. M., Convery, M., Copello, S., Cova, P., Cox, C., Cremaldi, L., Cremonesi, L., Crespo-Anadón, J. I., Crisler, M., Cristaldo, E., Crnkovic, J., Crone, G., Cross, R., Cudd, A., Cuesta, C., Cui, Y., Curciarello, F., Cussans, D., Dai, J., Dalager, O., Dallavalle, R., Dallaway, W., D'Amico, R., da Motta, H., Dar, Z. A., Darby, R., Peres, L. Da Silva, David, Q., Davies, G. S., Davini, S., Dawson, J., De Aguiar, R., De Almeida, P., Debbins, P., De Bonis, I., Decowski, M. P., de Gouvêa, A., De Holanda, P. C., Astiz, I. L. 
De Icaza, De Jong, P., Sanchez, P. Del Amo, De la Torre, A., De Lauretis, G., Delbart, A., Delepine, D., Delgado, M., Dell'Acqua, A., Monache, G. Delle, Delmonte, N., De Lurgio, P., Demario, R., De Matteis, G., Neto, J. R. T. de Mello, DeMuth, D. M., Dennis, S., Densham, C., Denton, P., Deptuch, G. W., De Roeck, A., De Romeri, V., Detje, J. P., Devine, J., Dharmapalan, R., Dias, M., Diaz, A., Díaz, J. S., Díaz, F., Di Capua, F., Di Domenico, A., Di Domizio, S., Di Falco, S., Di Giulio, L., Ding, P., Di Noto, L., Diociaiuti, E., Distefano, C., Diurba, R., Diwan, M., Djurcic, Z., Doering, D., Dolan, S., Dolek, F., Dolinski, M. J., Domenici, D., Domine, L., Donati, S., Donon, Y., Doran, S., Douglas, D., Doyle, T. A., Dragone, A., Drielsma, F., Duarte, L., Duchesneau, D., Duffy, K., Dugas, K., Dunne, P., Dutta, B., Duyang, H., Dwyer, D. A., Dyshkant, A. S., Dytman, S., Eads, M., Earle, A., Edayath, S., Edmunds, D., Eisch, J., Englezos, P., Ereditato, A., Erjavec, T., Escobar, C. O., Evans, J. J., Ewart, E., Ezeribe, A. C., Fahey, K., Fajt, L., Falcone, A., Fani', M., Farnese, C., Farrell, S., Farzan, Y., Fedoseev, D., Felix, J., Feng, Y., Fernandez-Martinez, E., Ferry, G., Fialova, E., Fields, L., Filip, P., Filkins, A., Filthaut, F., Fine, R., Fiorillo, G., Fiorini, M., Fogarty, S., Foreman, W., Fowler, J., Franc, J., Francis, K., Franco, D., Franklin, J., Freeman, J., Fried, J., Friedland, A., Fuess, S., Furic, I. K., Furman, K., Furmanski, A. P., Gaba, R., Gabrielli, A., Gago, A. M., Galizzi, F., Gallagher, H., Gallice, N., Galymov, V., Gamberini, E., Gamble, T., Ganacim, F., Gandhi, R., Ganguly, S., Gao, F., Gao, S., Garcia-Gamez, D., García-Peris, M. Á., Gardim, F., Gardiner, S., Gastler, D., Gauch, A., Gauvreau, J., Gauzzi, P., Gazzana, S., Ge, G., Geffroy, N., Gelli, B., Gent, S., Gerlach, L., Ghorbani-Moghaddam, Z., Giammaria, T., Gibin, D., Gil-Botella, I., Gilligan, S., Gioiosa, A., Giovannella, S., Girerd, C., Giri, A. 
K., Giugliano, C., Giusti, V., Gnani, D., Gogota, O., Gollapinni, S., Gollwitzer, K., Gomes, R. A., Bermeo, L. V. Gomez, Fajardo, L. S. Gomez, Gonnella, F., Gonzalez-Diaz, D., Gonzalez-Lopez, M., Goodman, M. C., Goswami, S., Gotti, C., Goudeau, J., Goudzovski, E., Grace, C., Gramellini, E., Gran, R., Granados, E., Granger, P., Grant, C., Gratieri, D. R., Grauso, G., Green, P., Greenberg, S., Greer, J., Griffith, W. C., Groetschla, F. T., Grzelak, K., Gu, L., Gu, W., Guarino, V., Guarise, M., Guenette, R., Guerzoni, M., Guffanti, D., Guglielmi, A., Guo, B., Guo, F. Y., Gupta, A., Gupta, V., Gurung, G., Gutierrez, D., Guzowski, P., Guzzo, M. M., Gwon, S., Habig, A., Hadavand, H., Haegel, L., Haenni, R., Hagaman, L., Hahn, A., Haiston, J., Hakenmüller, J., Hamernik, T., Hamilton, P., Hancock, J., Happacher, F., Harris, D. A., Hart, A. L., Hartnell, J., Hartnett, T., Harton, J., Hasegawa, T., Hasnip, C. M., Hatcher, R., Hayrapetyan, K., Hays, J., Hazen, E., He, M., Heavey, A., Heeger, K. M., Heise, J., Hellmuth, P., Henry, S., Herner, K., Hewes, V., Higuera, A., Hilgenberg, C., Hillier, S. J., Himmel, A., Hinkle, E., Hirsch, L. R., Ho, J., Hoff, J., Holin, A., Holvey, T., Hoppe, E., Horiuchi, S., Horton-Smith, G. A., Houdy, T., Howard, B., Howell, R., Hristova, I., Hronek, M. S., Huang, J., Huang, R. G., Hulcher, Z., Ibrahim, M., Iles, G., Ilic, N., Iliescu, A. M., Illingworth, R., Ingratta, G., Ioannisian, A., Irwin, B., Isenhower, L., Oliveira, M. Ismerio, Itay, R., Jackson, C. M., Jain, V., James, E., Jang, W., Jargowsky, B., Jena, D., Jentz, I., Ji, X., Jiang, C., Jiang, J., Jiang, L., Jipa, A., Jo, J. H., Joaquim, F. R., Johnson, W., Jollet, C., Jones, B., Jones, R., Jovancevic, N., Judah, M., Jung, C. K., Jung, K. Y., Junk, T., Jwa, Y., Kabirnezhad, M., Kaboth, A. C., Kadenko, I., Kakorin, I., Kalitkina, A., Kalra, D., Kandemir, M., Kaplan, D. M., Karagiorgi, G., Karaman, G., Karcher, A., Karyotakis, Y., Kasai, S., Kasetti, S. 
P., Kashur, L., Katsioulas, I., Kauther, A., Kazaryan, N., Ke, L., Kearns, E., Keener, P. T., Kelly, K. J., Kemp, E., Kemularia, O., Kermaidic, Y., Ketchum, W., Kettell, S. H., Khabibullin, M., Khan, N., Khvedelidze, A., Kim, D., Kim, J., Kim, M. J., King, B., Kirby, B., Kirby, M., Kish, A., Klein, J., Kleykamp, J., Klustova, A., Kobilarcik, T., Koch, L., Koehler, K., Koerner, L. W., Koh, D. H., Kolupaeva, L., Korablev, D., Kordosky, M., Kosc, T., Kose, U., Kostelecký, V. A., Kothekar, K., Kotler, I., Kovalcuk, M., Kozhukalov, V., Krah, W., Kralik, R., Kramer, M., Kreczko, L., Krennrich, F., Kreslo, I., Kroupova, T., Kubota, S., Kubu, M., Kudenko, Y., Kudryavtsev, V. A., Kufatty, G., Kuhlmann, S., Kulagin, S., Kumar, J., Kumar, P., Kumaran, S., Kunzmann, J., Kuravi, R., Kurita, N., Kuruppu, C., Kus, V., Kutter, T., Kvasnicka, J., Labree, T., Lackey, T., Lalău, I., Lambert, A., Land, B. J., Lane, C. E., Lane, N., Lang, K., Langford, T., Langstaff, M., Lanni, F., Lantwin, O., Larkin, J., Lasorak, P., Last, D., Laudrain, A., Laundrie, A., Laurenti, G., Lavaut, E., Laycock, P., Lazanu, I., LaZur, R., Lazzaroni, M., Le, T., Leardini, S., Learned, J., LeCompte, T., Legin, V., Miotto, G. Lehmann, Lehnert, R., de Oliveira, M. A. Leigui, Leitner, M., Silverio, D. Leon, Lepin, L. M., Li, J. -Y, Li, S. W., Li, Y., Liao, H., Lin, C. S., Lindebaum, D., Linden, S., Lineros, R. A., Lister, A., Littlejohn, B. R., Liu, H., Liu, J., Liu, Y., Lockwitz, S., Lokajicek, M., Lomidze, I., Long, K., Lopes, T. V., Lopez, J., de Rego, I. López, López-March, N., Lord, T., LoSecco, J. M., Louis, W. C., Sanchez, A. Lozano, Lu, X. -G., Luk, K. B., Lunday, B., Luo, X., Luppi, E., MacFarlane, D., Machado, A. A., Machado, P., Macias, C. T., Macier, J. R., MacMahon, M., Maddalena, A., Madera, A., Madigan, P., Magill, S., Magueur, C., Mahn, K., Maio, A., Major, A., Majumdar, K., Mameli, S., Man, M., Mandujano, R. C., Maneira, J., Manly, S., Mann, A., Manolopoulos, K., Plata, M. Manrique, Corchado, S. 
Manthey, Manyam, V. N., Marchan, M., Marchionni, A., Marciano, W., Marfatia, D., Mariani, C., Maricic, J., Marinho, F., Marino, A. D., Markiewicz, T., Marques, F. Das Chagas, Marquet, C., Marshak, M., Marshall, C. M., Marshall, J., Martina, L., Martín-Albo, J., Martinez, N., Caicedo, D. A. Martinez, López, F. Martínez, Miravé, P. Martínez, Martynenko, S., Mascagna, V., Massari, C., Mastbaum, A., Matichard, F., Matsuno, S., Matteucci, G., Matthews, J., Mauger, C., Mauri, N., Mavrokoridis, K., Mawby, I., Mazza, R., McAskill, T., McConkey, N., McFarland, K. S., McGrew, C., McNab, A., Meazza, L., Meddage, V. C. N., Mefodiev, A., Mehta, B., Mehta, P., Melas, P., Mena, O., Mendez, H., Mendez, P., Méndez, D. P., Menegolli, A., Meng, G., Mercuri, A. C. E. A., Meregaglia, A., Messier, M. D., Metallo, S., Metcalf, W., Mewes, M., Meyer, H., Miao, T., Micallef, J., Miccoli, A., Michna, G., Milincic, R., Miller, F., Miller, G., Miller, W., Mineev, O., Minotti, A., Miralles, L., Mironov, C., Miryala, S., Miscetti, S., Mishra, C. S., Mishra, P., Mishra, S. R., Mislivec, A., Mitchell, M., Mladenov, D., Mocioiu, I., Mogan, A., Moggi, N., Mohanta, R., Mohayai, T. A., Mokhov, N., Molina, J., Bueno, L. Molina, Montagna, E., Montanari, A., Montanari, C., Montanari, D., Montanino, D., Zetina, L. M. Montaño, Mooney, M., Moor, A. F., Moore, Z., Moreno, D., Moreno-Palacios, O., Morescalchi, L., Moretti, D., Moretti, R., Morris, C., Mossey, C., Moura, C. A., Mouster, G., Mu, W., Mualem, L., Mueller, J., Muether, M., Muheim, F., Muir, A., Mukhamejanov, Y., Mulhearn, M., Munford, D., Munteanu, L. J., Muramatsu, H., Muraz, J., Murphy, M., Murphy, T., Muse, J., Mytilinaki, A., Nachtman, J., Nagai, Y., Nagu, S., Nandakumar, R., Naples, D., Narita, S., Navrer-Agasson, A., Nayak, N., Nebot-Guinot, M., Nehm, A., Nelson, J. 
K., Neogi, O., Nesbit, J., Nessi, M., Newbold, D., Newcomer, M., Nichol, R., Nicolas-Arnaldos, F., Nikolica, A., Nikolov, J., Niner, E., Nishimura, K., Norman, A., Norrick, A., Novella, P., Nowak, A., Nowak, J. A., Oberling, M., Ochoa-Ricoux, J. P., Oh, S., Oh, S. B., Olivier, A., Olshevskiy, A., Olson, T., Onel, Y., Onishchuk, Y., Oranday, A., Osbiston, M., Vélez, J. A. Osorio, O'Sullivan, L., Ormachea, L. Otiniano, Ott, J., Pagani, L., Palacio, G., Palamara, O., Palestini, S., Paley, J. M., Pallavicini, M., Palomares, C., Pan, S., Panda, P., Vazquez, W. Panduro, Pantic, E., Paolone, V., Papaleo, R., Papanestis, A., Papoulias, D., Paramesvaran, S., Paris, A., Parke, S., Parozzi, E., Parsa, S., Parsa, Z., Parveen, S., Parvu, M., Pasciuto, D., Pascoli, S., Pasqualini, L., Pasternak, J., Patrick, C., Patrizii, L., Patterson, R. B., Patzak, T., Paudel, A., Paulucci, L., Pavlovic, Z., Pawloski, G., Payne, D., Pec, V., Pedreschi, E., Peeters, S. J. M., Pellico, W., Perez, A. Pena, Pennacchio, E., Penzo, A., Peres, O. L. G., Gonzalez, Y. F. Perez, Pérez-Molina, L., Pernas, C., Perry, J., Pershey, D., Pessina, G., Petrillo, G., Petta, C., Petti, R., Pfaff, M., Pia, V., Pickering, L., Pietropaolo, F., Pimentel, V. L., Pinaroli, G., Pincha, S., Pinchault, J., Pitts, K., Plows, K., Pollack, C., Pollman, T., Pompa, F., Pons, X., Poonthottathil, N., Popov, V., Poppi, F., Porter, J., Paixão, L. G. Porto, Potekhin, M., Potenza, R., Pozzato, M., Prakash, T., Pratt, C., Prest, M., Psihas, F., Pugnere, D., Qian, X., Queen, J., Raaf, J. L., Radeka, V., Rademacker, J., Radics, B., Raffaelli, F., Rafique, A., Raguzin, E., Rahaman, U., Rai, M., Rajagopalan, S., Rajaoalisoa, M., Rakhno, I., Rakotondravohitra, L., Ralte, L., Delgado, M. A. Ramirez, Ramson, B., Rappoldi, A., Raselli, G., Ratoff, P., Ray, R., Razafinime, H., Razakamiandra, R. F., Rea, E. M., Real, J. S., Rebel, B., Rechenmacher, R., Reichenbacher, J., Reitzner, S. D., Sfar, H. 
Rejeb, Renner, E., Renshaw, A., Rescia, S., Resnati, F., Restrepo, Diego, Reynolds, C., Ribas, M., Riboldi, S., Riccio, C., Riccobene, G., Ricol, J. S., Rigan, M., Rincón, E. V., Ritchie-Yates, A., Ritter, S., Rivera, D., Rivera, R., Robert, A., Rocha, J. L. Rocabado, Rochester, L., Roda, M., Rodrigues, P., Alonso, M. J. Rodriguez, Rondon, J. Rodriguez, Rosauro-Alcaraz, S., Rosier, P., Ross, D., Rossella, M., Rossi, M., Ross-Lonergan, M., Roy, N., Roy, P., Rubbia, C., Ruggeri, A., Ferreira, G. Ruiz, Russell, B., Ruterbories, D., Rybnikov, A., Sacerdoti, S., Saha, S., Sahoo, S. K., Sahu, N., Sala, P., Samios, N., Samoylov, O., Sanchez, M. C., Bravo, A. Sánchez, Sánchez-Castillo, A., Sanchez-Lucas, P., Sandberg, V., Sanders, D. A., Sanfilippo, S., Sankey, D., Santoro, D., Saoulidou, N., Sapienza, P., Sarasty, C., Sarcevic, I., Sarra, I., Savage, G., Savinov, V., Scanavini, G., Scaramelli, A., Scarff, A., Schefke, T., Schellman, H., Schifano, S., Schlabach, P., Schmitz, D., Schneider, A. W., Scholberg, K., Schukraft, A., Schuld, B., Segade, A., Segreto, E., Selyunin, A., Senadheera, D., Senise, C. R., Sensenig, J., Shaevitz, M. H., Shanahan, P., Sharma, P., Kumar, R., Poudel, S. Sharma, Shaw, K., Shaw, T., Shchablo, K., Shen, J., Shepherd-Themistocleous, C., Sheshukov, A., Shi, J., Shi, W., Shin, S., Shivakoti, S., Shoemaker, I., Shooltz, D., Shrock, R., Siddi, B., Siden, M., Silber, J., Simard, L., Sinclair, J., Sinev, G., Singh, Jaydip, Singh, J., Singh, L., Singh, P., Singh, V., Chauhan, S. Singh, Sipos, R., Sironneau, C., Sirri, G., Siyeon, K., Skarpaas, K., Smedley, J., Smith, E., Smith, J., Smith, P., Smolik, J., Smy, M., Snape, M., Snider, E. L., Snopok, P., Snowden-Ifft, D., Nunes, M. Soares, Sobel, H., Soderberg, M., Sokolov, S., Salinas, C. J. Solano, Söldner-Rembold, S., Solomey, N., Solovov, V., Sondheim, W. E., Sorel, M., Sotnikov, A., Soto-Oton, J., Sousa, A., Soustruznik, K., Spinella, F., Spitz, J., Spooner, N. J. 
C., Spurgeon, K., Stalder, D., Stancari, M., Stanco, L., Steenis, J., Stein, R., Steiner, H. M., Lisbôa, A. F. Steklain, Stepanova, A., Stewart, J., Stillwell, B., Stock, J., Stocker, F., Stokes, T., Strait, M., Strauss, T., Strigari, L., Stuart, A., Suarez, J. G., Subash, J., Surdo, A., Suter, L., Sutera, C. M., Sutton, K., Suvorov, Y., Svoboda, R., Swain, S. K., Szczerbinska, B., Szelc, A. M., Sztuc, A., Taffara, A., Talukdar, N., Tamara, J., Tanaka, H. A., Tang, S., Taniuchi, N., Casanova, A. M. Tapia, Oregui, B. Tapia, Tapper, A., Tariq, S., Tarpara, E., Tatar, E., Tayloe, R., Tedeschi, D., Teklu, A. M., Vidal, J. Tena, Tennessen, P., Tenti, M., Terao, K., Terranova, F., Testera, G., Thakore, T., Thea, A., Thomas, S., Thompson, A., Thorn, C., Timm, S. C., Tiras, E., Tishchenko, V., Tiwari, S., Todorović, N., Tomassetti, L., Tonazzo, A., Torbunov, D., Torti, M., Tortola, M., Tortorici, F., Tosi, N., Totani, D., Toups, M., Touramanis, C., Tran, D., Travaglini, R., Trevor, J., Triller, E., Trilov, S., Truchon, J., Truncali, D., Trzaska, W. H., Tsai, Y., Tsai, Y. -T., Tsamalaidze, Z., Tsang, K. V., Tsverava, N., Tu, S. Z., Tufanli, S., Tunnell, C., Turnberg, S., Turner, J., Tuzi, M., Tyler, J., Tyley, E., Tzanov, M., Uchida, M. A., González, J. Ureña, Urheim, J., Usher, T., Utaegbulam, H., Uzunyan, S., Vagins, M. R., Vahle, P., Valder, S., Valdiviesso, G. A., Valencia, E., Valentim, R., Vallari, Z., Vallazza, E., Valle, J. W. F., Van Berg, R., Van de Water, R. G., Forero, D. V., Vannozzi, A., Van Nuland-Troost, M., Varanini, F., Oliva, D. Vargas, Vasina, S., Vaughan, N., Vaziri, K., Vázquez-Ramos, A., Vega, J., Ventura, S., Verdugo, A., Vergani, S., Verzocchi, M., Vetter, K., Vicenzi, M., de Souza, H. Vieira, Vignoli, C., Vilela, C., Villa, E., Viola, S., Viren, B., Vizarreta, R., Hernandez, A. P. Vizcaya, Vuong, Q., Waldron, A. V., Wallbank, M., Walsh, J., Walton, T., Wang, H., Wang, J., Wang, L., Wang, M. H. L. 
S., Wang, X., Wang, Y., Warburton, K., Warner, D., Warsame, L., Wascko, M. O., Waters, D., Watson, A., Wawrowska, K., Weber, A., Weber, C. M., Weber, M., Wei, H., Weinstein, A., Westerdale, S., Wetstein, M., Whalen, K., White, A., Whitehead, L. H., Whittington, D., Wilhlemi, J., Wilking, M. J., Wilkinson, A., Wilkinson, C., Wilson, F., Wilson, R. J., Winter, P., Wisniewski, W., Wolcott, J., Wolfs, J., Wongjirad, T., Wood, A., Wood, K., Worcester, E., Worcester, M., Wospakrik, M., Wresilo, K., Wret, C., Wu, S., Wu, W., Wurm, M., Wyenberg, J., Xiao, Y., Xiotidis, I., Yaeggy, B., Yahlali, N., Yandel, E., Yang, J., Yang, K., Yang, T., Yankelevich, A., Yershov, N., Yonehara, K., Young, T., Yu, B., Yu, H., Yu, J., Yu, Y., Yuan, W., Zaki, R., Zalesak, J., Zambelli, L., Zamorano, B., Zani, A., Zapata, O., Zazueta, L., Zeller, G. P., Zennamo, J., Zeug, K., Zhang, C., Zhang, S., Zhao, M., Zhivun, E., Zimmerman, E. D., Zucchelli, S., Zuklin, J., Zutshi, V., and Zwaska, R.
- Subjects
Physics - Instrumentation and Detectors, High Energy Physics - Experiment
- Abstract
This paper introduces the hypothetical track-length fitting algorithm, a novel method for measuring the kinetic energies of ionizing particles in liquid argon time projection chambers (LArTPCs). The algorithm finds the most probable offset in track length for a track-like object by comparing the measured ionization density as a function of position with a theoretical prediction of the energy loss as a function of energy, including models of electron recombination and detector response. The algorithm can be used to measure the energies of particles that interact before they stop, such as charged pions that are absorbed by argon nuclei. The algorithm's energy measurement resolutions and fractional biases are presented as functions of particle kinetic energy and number of track hits using samples of stopping secondary charged pions in data collected by the ProtoDUNE-SP detector, and also in a detailed simulation. Additional studies describe the impact of the dE/dx model on energy measurement performance. The method described in this paper to characterize the energy measurement performance can be repeated in any LArTPC experiment using stopping secondary charged pions.
- Published
- 2024
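The fit described above scans an offset in track length and compares the measured ionization profile with a theoretical dE/dx prediction. The toy sketch below illustrates that scan with a simple power-law stopping model (an illustrative textbook-style approximation, not the detector's actual dE/dx model with recombination and response corrections); function names and units are assumptions.

```python
import numpy as np

# Toy sketch of the hypothetical track-length fit: scan a length offset
# and keep the one whose predicted dE/dx profile best matches the
# measured one (least squares). The power-law stopping model below is
# only an illustrative approximation to a Bragg-like curve.
def dEdx_model(residual_range):
    """Approximate stopping power vs residual range (arbitrary units)."""
    return 17.0 * residual_range ** -0.42

def fit_offset(positions, measured, offsets):
    """Return the offset minimizing the sum of squared dE/dx residuals.
    `positions` are hit distances from the candidate track end."""
    best, best_chi2 = None, np.inf
    for off in offsets:
        rr = positions + off
        chi2 = np.sum((measured - dEdx_model(rr)) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = off, chi2
    return best

# Usage: simulate a track whose true end lies 5 cm beyond the assumed
# endpoint, then recover that offset from the profile alone.
pos = np.linspace(0.5, 50, 100)
true_offset = 5.0
meas = dEdx_model(pos + true_offset)
grid = np.arange(0.0, 10.5, 0.5)
recovered = fit_offset(pos, meas, grid)
```

Once the offset is known, the particle's kinetic energy at the first hit follows from integrating the stopping-power model over the corrected track length.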
8. Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction
- Author
He, Jing, Li, Haodong, Yin, Wei, Liang, Yixun, Li, Leheng, Zhou, Kaiqiang, Liu, Hongbo, Liu, Bingbing, and Chen, Ying-Cong
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Leveraging the visual priors of pre-trained text-to-image diffusion models offers a promising solution to enhance zero-shot generalization in dense prediction tasks. However, existing methods often uncritically use the original diffusion formulation, which may not be optimal given the fundamental differences between dense prediction and image generation. In this paper, we provide a systematic analysis of the diffusion formulation for dense prediction, focusing on both quality and efficiency. We find that the original parameterization type for image generation, which learns to predict noise, is harmful for dense prediction; the multi-step noising/denoising diffusion process is also unnecessary and challenging to optimize. Based on these insights, we introduce Lotus, a diffusion-based visual foundation model with a simple yet effective adaptation protocol for dense prediction. Specifically, Lotus is trained to directly predict annotations instead of noise, thereby avoiding harmful variance. We also reformulate the diffusion process into a single-step procedure, simplifying optimization and significantly boosting inference speed. Additionally, we introduce a novel tuning strategy called the detail preserver, which achieves more accurate and fine-grained predictions. Without scaling up the training data or model capacity, Lotus achieves SoTA performance in zero-shot depth and normal estimation across various datasets. It also significantly enhances efficiency, being hundreds of times faster than most existing diffusion-based methods., Comment: Project page: https://lotus3d.github.io/
- Published
- 2024
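The "harmful variance" of noise prediction mentioned above can be illustrated numerically with the standard DDPM forward relation x_t = sqrt(abar)·x0 + sqrt(1-abar)·eps: recovering the annotation x0 from a predicted noise eps scales the model's error by sqrt(1-abar)/sqrt(abar), which explodes at the high-noise timesteps a single-step model must use. This is a generic illustration of that algebra, not Lotus's code.

```python
import math

# Illustration (not Lotus's code) of why epsilon-prediction hurts
# single-step dense prediction: inverting the forward process amplifies
# noise-prediction error by sqrt(1-abar)/sqrt(abar).
def x0_from_eps(x_t, eps_hat, abar):
    """Invert x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps for x0."""
    return (x_t - math.sqrt(1 - abar) * eps_hat) / math.sqrt(abar)

def x0_error_amplification(abar):
    """Factor by which an error in eps_hat becomes an error in x0."""
    return math.sqrt(1 - abar) / math.sqrt(abar)

# At a low-noise step (abar ~ 0.99) the factor is ~0.1; at a high-noise
# step (abar ~ 0.01) it is ~10, i.e. the same eps error costs ~100x
# more in x0. Predicting the annotation (x0) directly keeps the
# factor at exactly 1, which is the parameterization Lotus adopts.
```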
9. EdgeRunner: Auto-regressive Auto-encoder for Artistic Mesh Generation
- Author
Tang, Jiaxiang, Li, Zhaoshuo, Hao, Zekun, Liu, Xian, Zeng, Gang, Liu, Ming-Yu, and Zhang, Qinsheng
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Current auto-regressive mesh generation methods suffer from issues such as incompleteness, insufficient detail, and poor generalization. In this paper, we propose an Auto-regressive Auto-encoder (ArAE) model capable of generating high-quality 3D meshes with up to 4,000 faces at a spatial resolution of $512^3$. We introduce a novel mesh tokenization algorithm that efficiently compresses triangular meshes into 1D token sequences, significantly enhancing training efficiency. Furthermore, our model compresses variable-length triangular meshes into a fixed-length latent space, enabling the training of latent diffusion models for better generalization. Extensive experiments demonstrate the superior quality, diversity, and generalization capabilities of our model in both point-cloud- and image-conditioned mesh generation tasks., Comment: Project Page: https://research.nvidia.com/labs/dir/edgerunner/
- Published
- 2024
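The mesh-tokenization idea in the EdgeRunner abstract above, serializing triangular faces into a 1D token sequence, can be illustrated generically. The paper's actual tokenizer is more elaborate and compression-oriented; this sketch only shows the basic flattening and its inverse, with all names being ours:

```python
# Generic sketch of serializing a triangular mesh into a 1D token
# sequence and back. This is only the basic idea of flattening faces
# into tokens, not the paper's compression algorithm.

def mesh_to_tokens(faces):
    """faces: list of (v0, v1, v2) vertex-index triples."""
    tokens = []
    for face in sorted(faces):  # canonical face order for determinism
        tokens.extend(face)
    return tokens

def tokens_to_mesh(tokens):
    """Inverse: regroup the flat token stream into index triples."""
    return [tuple(tokens[i:i + 3]) for i in range(0, len(tokens), 3)]

faces = [(0, 1, 2), (1, 2, 3)]
assert mesh_to_tokens(faces) == [0, 1, 2, 1, 2, 3]
assert tokens_to_mesh(mesh_to_tokens(faces)) == faces
```

Once a mesh is a 1D sequence like this, it becomes a target an auto-regressive model can be trained on token by token.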
10. EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions
- Author
-
Chen, Kai, Gou, Yunhao, Huang, Runhui, Liu, Zhili, Tan, Daxin, Xu, Jing, Wang, Chunwei, Zhu, Yi, Zeng, Yihan, Yang, Kuo, Wang, Dingdong, Xiang, Kun, Li, Haoyuan, Bai, Haoli, Han, Jianhua, Li, Xiaohui, Jin, Weike, Xie, Nian, Zhang, Yu, Kwok, James T., Zhao, Hengshuang, Liang, Xiaodan, Yeung, Dit-Yan, Chen, Xiao, Li, Zhenguo, Zhang, Wei, Liu, Qun, Hong, Lanqing, Hou, Lu, and Xu, Hang
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Computation and Language - Abstract
GPT-4o, an omni-modal model that enables vocal conversations with diverse emotions and tones, marks a milestone for omni-modal foundation models. However, empowering Large Language Models to perceive and generate images, text, and speech end-to-end with publicly available data remains challenging in the open-source community. Existing vision-language models rely on external tools for speech processing, while speech-language models still suffer from limited or even absent vision-understanding abilities. To address this gap, we propose EMOVA (EMotionally Omni-present Voice Assistant) to equip Large Language Models with end-to-end speech capabilities while maintaining leading vision-language performance. With a semantic-acoustic disentangled speech tokenizer, we surprisingly find that omni-modal alignment can further enhance vision-language and speech abilities compared with the corresponding bi-modally aligned counterparts. Moreover, a lightweight style module is proposed for flexible speech style control (e.g., emotions and pitches). For the first time, EMOVA achieves state-of-the-art performance on both vision-language and speech benchmarks, while supporting omni-modal spoken dialogue with vivid emotions., Comment: Project Page: https://emova-ollm.github.io/
- Published
- 2024
11. GRB 240529A: A Tale of Two Shocks
- Author
-
Sun, Tian-Rui, Geng, Jin-Jun, Yan, Jing-Zhi, Hu, You-Dong, Wu, Xue-Feng, Castro-Tirado, Alberto J., Yang, Chao, Ping, Yi-Ding, Hu, Chen-Ran, Xu, Fan, Gao, Hao-Xuan, Jiang, Ji-An, Zhu, Yan-Tian, Xue, Yongquan, Pérez-García, Ignacio, Wu, Si-Yu, Fernández-García, Emilio, Caballero-García, María D., Sánchez-Ramírez, Rubén, Guziy, Sergiy, Olivares, Ignacio, del Pulgar, Carlos Jesus Pérez, Castellón, A., Castillo, Sebastián, Xiong, Ding-Rong, Pandey, Shashi B., Hiriart, David, García-Segura, Guillermo, Lee, William H., Carrasco-García, I. M., Park, Il H., Meintjes, Petrus J., van Heerden, Hendrik J., Martín-Carrillo, Antonio, Hanlon, Lorraine, Zhang, Bin-Bin, Maury, Alain, Hernández-García, L., Gritsevich, Maria, Rossi, Andrea, Maiorano, Elisabetta, Cusano, Felice, D'Avanzo, Paolo, Ferro, Matteo, Melandri, Andrea, De Pasquale, Massimiliano, Brivio, Riccardo, Fang, Min, Fan, Lu-Lu, Hu, Wei-Da, Wan, Zhen, Hu, Lei, Zuo, Ying-Xi, Tang, Jin-Long, Zhang, Xiao-Ling, Zheng, Xian-Zhong, Li, Bin, Luo, Wen-Tao, Liu, Wei, Wang, Jian, Zhang, Hong-Fei, Liu, Hao, Gao, Jie, Liang, Ming, Wang, Hai-Ren, Yao, Da-Zhi, Cheng, Jing-Quan, Zhao, Wen, and Dai, Zi-Gao
- Subjects
Astrophysics - High Energy Astrophysical Phenomena - Abstract
Thanks to rapidly expanding time-domain facilities, we are entering a golden era of research on gamma-ray bursts (GRBs). In this Letter, we report our observations of GRB 240529A with the Burst Optical Observer and Transient Exploring System, the 1.5-meter telescope at Observatorio Sierra Nevada, the 2.5-meter Wide Field Survey Telescope of China, the Large Binocular Telescope, and the Telescopio Nazionale Galileo. The prompt emission of GRB 240529A shows two energetically comparable episodes separated by a quiescence time of roughly 400 s. Combining all available data from the GRB Coordinates Network, we reveal a simultaneous apparent X-ray plateau and optical re-brightening around $10^3-10^4$ s after the burst. Rather than energy injection from a magnetar, as widely invoked for similar GRBs, the multi-wavelength emission is better explained by two shocks launched separately from the central engine. The optical peak time and our numerical modeling suggest that the initial bulk Lorentz factor of the later shock is roughly 50, which indicates that the later jet should be accretion-driven and have a higher mass loading than a typical one. The quiescence time between the two prompt-emission episodes may be caused by a transition between different accretion states of a central magnetar or black hole, or by a fall-back accretion process. A sample of similar bursts with multiple emission episodes in the prompt phase and sufficient follow-up could help probe the underlying physics of GRB central engines., Comment: Resubmitted to ApJL after addressing the referee's comments; comments are welcome
- Published
- 2024
12. First Search for Light Dark Matter in the Neutrino Fog with XENONnT
- Author
-
Aprile, E., Aalbers, J., Abe, K., Maouloud, S. Ahmed, Althueser, L., Andrieu, B., Angelino, E., Martin, D. Antón, Arneodo, F., Baudis, L., Bazyk, M., Bellagamba, L., Biondi, R., Bismark, A., Boese, K., Brown, A., Bruno, G., Budnik, R., Cai, C., Capelli, C., Cardoso, J. M. R., Chávez, A. P. Cimental, Colijn, A. P., Conrad, J., Cuenca-García, J. J., D'Andrea, V., Garcia, L. C. Daniel, Decowski, M. P., Deisting, A., Di Donato, C., Di Gangi, P., Diglio, S., Eitel, K., Morabit, S. el, Elykov, A., Ferella, A. D., Ferrari, C., Fischer, H., Flehmke, T., Flierman, M., Fulgione, W., Fuselli, C., Gaemers, P., Gaior, R., Galloway, M., Gao, F., Ghosh, S., Giacomobono, R., Glade-Beucke, R., Grandi, L., Grigat, J., Guan, H., Guida, M., Gyorgy, P., Hammann, R., Higuera, A., Hils, C., Hoetzsch, L., Hood, N. F., Iacovacci, M., Itow, Y., Jakob, J., Joerg, F., Kaminaga, Y., Kara, M., Kavrigin, P., Kazama, S., Kobayashi, M., Koke, D., Kopec, A., Landsman, H., Lang, R. F., Levinson, L., Li, I., Li, S., Liang, S., Lin, Y. -T., Lindemann, S., Lindner, M., Liu, K., Liu, M., Loizeau, J., Lombardi, F., Long, J., Lopes, J. A. M., Luce, T., Ma, Y., Macolino, C., Mahlstedt, J., Mancuso, A., Manenti, L., Marignetti, F., Undagoitia, T. Marrodán, Martens, K., Masbou, J., Masson, E., Mastroianni, S., Melchiorre, A., Merz, J., Messina, M., Michael, A., Miuchi, K., Molinario, A., Moriyama, S., Morå, K., Mosbacher, Y., Murra, M., Müller, J., Ni, K., Oberlack, U., Paetsch, B., Pan, Y., Pellegrini, Q., Peres, R., Peters, C., Pienaar, J., Pierre, M., Plante, G., Pollmann, T. R., Principe, L., Qi, J., Qin, J., García, D. Ramírez, Rajado, M., Singh, R., Sanchez, L., Santos, J. M. F. dos, Sarnoff, I., Sartorelli, G., Schreiner, J., Schulte, P., Eißing, H. Schulze, Schumann, M., Lavina, L. Scotto, Selvi, M., Semeria, F., Shagin, P., Shi, S., Shi, J., Silva, M., Simgen, H., Szyszka, C., Takeda, A., Tan, P. -L., Thers, D., Toschi, F., Trinchero, G., Tunnell, C. 
D., Tönnies, F., Valerius, K., Vecchi, S., Vetter, S., Solar, F. I. Villazon, Volta, G., Weinheimer, C., Weiss, M., Wenz, D., Wittweg, C., Wu, V. H. S., Xing, Y., Xu, D., Xu, Z., Yamashita, M., Yang, L., Ye, J., Yuan, L., Zavattini, G., and Zhong, M.
- Subjects
High Energy Physics - Experiment - Abstract
We search for dark matter (DM) with a mass in the range $[3, 12]~\mathrm{GeV}/c^2$ using an exposure of 3.51 $\mathrm{t} \times \mathrm{y}$ with the XENONnT experiment. We consider spin-independent, spin-dependent, momentum-dependent, mirror DM, and self-interacting DM with a light mediator coupling to Standard Model particles. Using an energy threshold lowered relative to the previous WIMP search, a blind analysis of nuclear recoil events in the $[0.5, 5.0]~\mathrm{keV}$ range reveals no significant signal excess over the background. XENONnT excludes spin-independent DM-nucleon cross sections $>2.5 \times 10^{-45}~\mathrm{cm}^2$ at $90\%$ confidence level for $6~\mathrm{GeV}/c^2$ DM. The solar ${}^8\mathrm{B}$ neutrino coherent elastic neutrino-nucleus scattering background accounts for approximately half of the background in the signal region. In the considered mass range, the DM sensitivity approaches the 'neutrino fog', the limit at which neutrinos produce a signal indistinguishable from that of light DM-xenon nucleus scattering.
- Published
- 2024
13. Swarm-LIO2: Decentralized, Efficient LiDAR-inertial Odometry for UAV Swarms
- Author
-
Zhu, Fangcheng, Ren, Yunfan, Yin, Longji, Kong, Fanze, Liu, Qingbo, Xue, Ruize, Liu, Wenyi, Cai, Yixi, Lu, Guozheng, Li, Haotian, and Zhang, Fu
- Subjects
Computer Science - Robotics - Abstract
Aerial swarm systems possess immense potential in various applications, such as cooperative exploration, target tracking, and search and rescue. Efficient and accurate self- and mutual-state estimation is a critical precondition for completing these swarm tasks and remains a challenging research topic. This paper proposes Swarm-LIO2: a fully decentralized, plug-and-play, computationally efficient, and bandwidth-efficient LiDAR-inertial odometry for aerial swarm systems. Swarm-LIO2 uses a decentralized, plug-and-play network as the communication infrastructure. Only bandwidth-efficient and low-dimensional information is exchanged, including identity, ego-state, mutual observation measurements, and global extrinsic transformations. To support the plug-and-play joining of new teammates, Swarm-LIO2 detects potential teammate UAVs and automatically initializes the temporal offset and global extrinsic transformation. To enhance initialization efficiency, novel reflectivity-based UAV detection, trajectory matching, and factor graph optimization methods are proposed. For state estimation, Swarm-LIO2 fuses LiDAR, IMU, and mutual observation measurements within an efficient ESIKF framework, with careful compensation of temporal delay and modeling of measurements to enhance accuracy and consistency., Comment: 23 pages
- Published
- 2024
14. Digital simulation of zero-temperature spontaneous symmetry breaking in a superconducting lattice processor
- Author
-
Hu, Chang-Kang, Xie, Guixu, Poulsen, Kasper, Zhou, Yuxuan, Chu, Ji, Liu, Chilong, Zhou, Ruiyang, Yuan, Haolan, Shen, Yuecheng, Liu, Song, Zinner, Nikolaj T., Tan, Dian, Santos, Alan C., and Yu, Dapeng
- Subjects
Quantum Physics - Abstract
Quantum simulators are ideal platforms to investigate quantum phenomena that are inaccessible by conventional means, whether because classical computers lack the resources to address large quantum systems or because of constraints imposed by fundamental laws of nature. Here, through a digitized adiabatic evolution, we report an experimental simulation of antiferromagnetic (AFM) and ferromagnetic (FM) phase formation induced by spontaneous symmetry breaking (SSB) in a three-generation Cayley tree-like superconducting lattice. We develop a digital quantum annealing algorithm to mimic the system dynamics and observe the emergence of signatures of the SSB-induced phase transition through a connected correlation function. We demonstrate that the signature of the phase transition from classical AFM to quantum FM appears in systems undergoing zero-temperature adiabatic evolution with only nearest-neighbor interactions, the shortest possible range of interaction. By harnessing properties of the bipartite Rényi entropy as an entanglement witness, we observe the formation of entangled quantum FM and AFM phases. Our results open perspectives for new advances in condensed matter physics and digitized quantum annealing.
- Published
- 2024
15. Dr. GPT in Campus Counseling: Understanding Higher Education Students' Opinions on LLM-assisted Mental Health Services
- Author
-
Zhang, Owen Xingjian, Zhou, Shuyao, Geng, Jiayi, Liu, Yuhan, and Liu, Sunny Xun
- Subjects
Computer Science - Human-Computer Interaction ,Computer Science - Artificial Intelligence - Abstract
In response to the increasing mental health challenges faced by college students, we sought to understand their perspectives on how AI applications, particularly Large Language Models (LLMs), can be leveraged to enhance their mental well-being. Through pilot interviews with ten diverse students, we explored their opinions on the use of LLMs across five fictional scenarios: General Information Inquiry, Initial Screening, Reshaping Patient-Expert Dynamics, Long-term Care, and Follow-up Care. Our findings revealed that students' acceptance of LLMs varied by scenario, with participants highlighting both potential benefits, such as proactive engagement and personalized follow-up care, and concerns, including limitations in training data and emotional support. These insights inform how AI technology should be designed and implemented to effectively support and enhance students' mental well-being, particularly in scenarios where LLMs can complement traditional methods, while maintaining empathy and respecting individual preferences., Comment: 5 pages
- Published
- 2024
16. Optimizing Resource Allocation for Multi-modal Semantic Communication in Mobile AIGC Networks: A Diffusion-based Game Approach
- Author
-
Liu, Jian, Xiao, Ming, Wen, Jinbo, Kang, Jiawen, Zhang, Ruichen, Zhang, Tao, Niyato, Dusit, Zhang, Weiting, and Liu, Ying
- Subjects
Computer Science - Networking and Internet Architecture - Abstract
Mobile Artificial Intelligence-Generated Content (AIGC) networks enable massive numbers of users to obtain customized content generation services. However, users still need to download a large number of AIGC outputs from mobile AIGC service providers, which strains communication resources and increases the risk of transmission failures. Fortunately, Semantic Communication (SemCom) can improve transmission efficiency and reliability through semantic information processing. Moreover, recent advances in Generative Artificial Intelligence (GAI) have further enhanced the effectiveness of SemCom through its powerful generative capabilities. However, striking a balance between high-quality content generation and the size of the transmitted semantic information remains a major challenge. In this paper, we propose a Generative Diffusion Model (GDM)-based multi-modal SemCom (GM-SemCom) framework. The framework improves the accuracy of information reconstruction by integrating GDMs with multi-modal semantic information, and it adopts a controllable extraction module to address the problems of unstable data recovery and slow decoding speed in GAI-enabled SemCom. We then introduce a novel metric called the Age of Semantic Information (AoSI), based on the concept of Age of Information (AoI), to quantify the freshness of semantic information. To address the resource trading problem within the framework, we propose a Stackelberg game model that integrates the AoSI with psychological factors to provide a comprehensive measure of user utility. Furthermore, we propose a GDM-based algorithm to solve the game under incomplete information. Numerical results demonstrate that, compared with traditional deep reinforcement learning algorithms, the proposed algorithm converges faster and is closer to the Stackelberg equilibrium.
- Published
- 2024
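The Age of Semantic Information (AoSI) metric in the abstract above builds on the classical Age of Information (AoI). A minimal sketch of plain AoI follows, with the field names and the linear-age assumption being ours rather than the paper's:

```python
from dataclasses import dataclass

# Minimal sketch of the Age-of-Information idea underlying AoSI.
# Field names and the linear-age model are our assumptions.

@dataclass
class Update:
    generated_at: float   # when the semantic information was produced
    received_at: float    # when it arrived at the receiver

def age_of_information(updates, now):
    """Age = time since generation of the freshest *received* update."""
    delivered = [u for u in updates if u.received_at <= now]
    if not delivered:
        return float("inf")   # nothing received yet: age is unbounded
    freshest = max(delivered, key=lambda u: u.generated_at)
    return now - freshest.generated_at

ups = [Update(0.0, 1.0), Update(2.0, 4.0)]
assert age_of_information(ups, 3.0) == 3.0  # only the first update arrived
assert age_of_information(ups, 5.0) == 3.0  # second update is now freshest
```

Note that age is measured from generation time, not arrival time, which is why fast decoding and transmission both matter for freshness.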
17. Towards More Relevant Product Search Ranking Via Large Language Models: An Empirical Study
- Author
-
Liu, Qi, Singh, Atul, Liu, Jingbo, Mu, Cun, and Yan, Zheng
- Subjects
Computer Science - Information Retrieval - Abstract
Training Learning-to-Rank models for e-commerce product search ranking can be challenging due to the lack of a gold standard of ranking relevance. In this paper, we decompose ranking relevance into content-based and engagement-based aspects, and we propose to leverage Large Language Models (LLMs) for both label and feature generation in model training, primarily aiming to improve the model's predictive capability for content-based relevance. Additionally, we introduce different sigmoid transformations on the LLM outputs to polarize relevance scores in labeling, enhancing the model's ability to balance content-based and engagement-based relevances and thus prioritize highly relevant items overall. Comprehensive online tests and offline evaluations are also conducted for the proposed design. Our work sheds light on advanced strategies for integrating LLMs into e-commerce product search ranking model training, offering a pathway to more effective and balanced models with improved ranking relevance., Comment: To be published in CIKM 2024 GenAIECommerce Workshop
- Published
- 2024
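The sigmoid transformation used above to polarize LLM relevance scores can be sketched as follows; the threshold and temperature values are illustrative assumptions, not the paper's settings:

```python
import math

# Hedged sketch of polarizing an LLM relevance score with a sigmoid.
# The threshold and temperature defaults are our assumptions.

def polarize(score, threshold=0.5, temperature=0.1):
    """Push scores away from `threshold` toward 0 or 1; the mapping
    gets steeper as `temperature` shrinks, so labels become more
    binary."""
    return 1.0 / (1.0 + math.exp(-(score - threshold) / temperature))

assert polarize(0.5) == 0.5   # exactly at the threshold: unchanged
assert polarize(0.6) > 0.7    # above threshold: pushed toward 1
assert polarize(0.4) < 0.3    # below threshold: pushed toward 0
```

Shrinking `temperature` sharpens the transition, so scores near the threshold are pushed harder toward 0 or 1, which is the polarization effect the abstract describes.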
18. Long or Short or Both? An Exploration on Lookback Time Windows of Behavioral Features in Product Search Ranking
- Author
-
Liu, Qi, Singh, Atul, Liu, Jingbo, Mu, Cun, Yan, Zheng, and Pedersen, Jan
- Subjects
Computer Science - Information Retrieval - Abstract
Customer shopping behavioral features are core to product search ranking models in eCommerce. In this paper, we investigate the effect of lookback time windows when aggregating these features at the (query, product) level over history. By studying the pros and cons of using long and short time windows, we propose a novel approach to integrating these historical behavioral features of different time windows. In particular, we address the criticality of using query-level vertical signals in ranking models to effectively aggregate all information from different behavioral features. Anecdotal evidence for the proposed approach is also provided using live product search traffic on Walmart.com., Comment: Published in ACM SIGIR Workshop on eCommerce 2024
- Published
- 2024
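Aggregating (query, product) behavioral features over lookback windows of different lengths, as studied above, can be sketched as follows; the event schema and the 7- and 90-day windows are our assumptions, not the paper's:

```python
from collections import defaultdict

# Illustrative sketch of aggregating (query, product) click counts
# over short and long lookback windows. The event schema and window
# lengths are our assumptions.

def window_features(events, now, windows=(7, 90)):
    """events: iterable of (query, product, day, clicks) tuples.
    Returns {(query, product): {window_days: total_clicks}}."""
    feats = defaultdict(lambda: defaultdict(int))
    for query, product, day, clicks in events:
        for w in windows:
            if now - w < day <= now:   # event falls inside this window
                feats[(query, product)][w] += clicks
    return feats

ev = [("shoes", "p1", 99, 3), ("shoes", "p1", 20, 5)]
f = window_features(ev, now=100)
assert f[("shoes", "p1")][7] == 3    # only the recent event counts
assert f[("shoes", "p1")][90] == 8   # both events fall within 90 days
```

The short window reacts quickly to trends while the long window is less noisy, which is the trade-off the abstract's integration approach targets.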
19. Optically Coherent Nitrogen-Vacancy Centers in HPHT Treated Diamonds
- Author
-
Tang, Yuan-Han, Zhang, Xiaoran, Liu, Kang-Yuan, Xia, Fan, Zheng, Huijie, Liu, Xiaobing, Pan, Xin-Yu, Fan, Heng, and Liu, Gang-Qin
- Subjects
Quantum Physics ,Condensed Matter - Mesoscale and Nanoscale Physics - Abstract
As a point defect with unique spin and optical properties, the nitrogen-vacancy (NV) center in diamond has attracted much attention in the fields of quantum sensing, quantum simulation, and quantum networks. The optical properties of an NV center are crucial for all these quantum applications. However, NV centers fabricated by destructive methods such as electron irradiation or ion implantation usually exhibit poor optical coherence. In this work, we demonstrate a non-destructive method to fabricate optically coherent NV centers. High-purity single-crystal diamonds are annealed under high pressure and high temperature (1700 $^{\circ}$C, 5.5 GPa), and individually resolvable NV centers with narrow PLE linewidths (<100 MHz) are produced. The high-pressure condition prevents the conversion of diamond to graphite during high-temperature annealing, significantly expanding the parameter space for creating high-performance artificial defects for quantum information science. These findings deepen our understanding of NV center formation in diamond and have implications for the optimization of color centers in solids, including silicon carbide and hexagonal boron nitride., Comment: 11 pages, 4 figures
- Published
- 2024
20. Disco4D: Disentangled 4D Human Generation and Animation from a Single Image
- Author
-
Pang, Hui En, Liu, Shuai, Cai, Zhongang, Yang, Lei, Zhang, Tianwei, and Liu, Ziwei
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
We present \textbf{Disco4D}, a novel Gaussian Splatting framework for 4D human generation and animation from a single image. Different from existing methods, Disco4D distinctively disentangles clothing (with Gaussian models) from the human body (with the SMPL-X model), significantly enhancing the generation details and flexibility. It has the following technical innovations. \textbf{1)} Disco4D learns to efficiently fit the clothing Gaussians over the SMPL-X Gaussians. \textbf{2)} It adopts diffusion models to enhance the 3D generation process, \textit{e.g.}, modeling occluded parts not visible in the input image. \textbf{3)} It learns an identity encoding for each clothing Gaussian to facilitate the separation and extraction of clothing assets. Furthermore, Disco4D naturally supports 4D human animation with vivid dynamics. Extensive experiments demonstrate the superiority of Disco4D on 4D human generation and animation tasks. Our visualizations can be found in \url{https://disco-4d.github.io/}.
- Published
- 2024
21. Search for $B_{(s)}^{*0}\to\mu^+\mu^-$ in $B_c^+\to\pi^+\mu^+\mu^-$ decays
- Author
-
LHCb collaboration, Aaij, R., Abdelmotteleb, A. S. W., Beteta, C. Abellan, Abudinén, F., Ackernley, T., Adefisoye, A. A., Adeva, B., Adinolfi, M., Adlarson, P., Agapopoulou, C., Aidala, C. A., Ajaltouni, Z., Akar, S., Akiba, K., Albicocco, P., Albrecht, J., Alessio, F., Alexander, M., Aliouche, Z., Cartelle, P. Alvarez, Amalric, R., Amato, S., Amey, J. L., Amhis, Y., An, L., Anderlini, L., Andersson, M., Andreianov, A., Andreola, P., Andreotti, M., Andreou, D., Anelli, A., Ao, D., Archilli, F., Argenton, M., Cuendis, S. Arguedas, Artamonov, A., Artuso, M., Aslanides, E., Da Silva, R. Ataíde, Atzeni, M., Audurier, B., Bacher, D., Perea, I. Bachiller, Bachmann, S., Bachmayer, M., Back, J. J., Rodriguez, P. Baladron, Balagura, V., Baldini, W., Balzani, L., Bao, H., Leite, J. Baptista de Souza, Pretel, C. Barbero, Barbetti, M., Barbosa, I. R., Barlow, R. J., Barnyakov, M., Barsuk, S., Barter, W., Bartolini, M., Bartz, J., Basels, J. M., Bashir, S., Bassi, G., Batsukh, B., Battista, P. B., Bay, A., Beck, A., Becker, M., Bedeschi, F., Bediaga, I. B., Behling, N. A., Belin, S., Bellee, V., Belous, K., Belov, I., Belyaev, I., Benane, G., Bencivenni, G., Ben-Haim, E., Berezhnoy, A., Bernet, R., Andres, S. Bernet, Bertolin, A., Betancourt, C., Betti, F., Bex, J., Bezshyiko, Ia., Bhom, J., Bieker, M. S., Biesuz, N. V., Billoir, P., Biolchini, A., Birch, M., Bishop, F. C. R., Bitadze, A., Bizzeti, A., Blake, T., Blanc, F., Blank, J. E., Blusk, S., Bocharnikov, V., Boelhauve, J. A., Garcia, O. Boente, Boettcher, T., Bohare, A., Boldyrev, A., Bolognani, C. S., Bolzonella, R., Bondar, N., Bordelius, A., Borgato, F., Borghi, S., Borsato, M., Borsuk, J. T., Bouchiba, S. A., Bovill, M., Bowcock, T. J. V., Boyer, A., Bozzi, C., Rodriguez, A. Brea, Breer, N., Brodzicka, J., Gonzalo, A. Brossa, Brown, J., Brundu, D., Buchanan, E., Buonaura, A., Buonincontri, L., Burke, A. T., Burr, C., Butkevich, A., Butter, J. S., Buytaert, J., Byczynski, W., Cadeddu, S., Cai, H., Caillet, A. 
C., Calabrese, R., Ramirez, S. Calderon, Calefice, L., Cali, S., Calvi, M., Gomez, M. Calvo, Magalhaes, P. Camargo, Bouzas, J. I. Cambon, Campana, P., Perez, D. H. Campora, Quezada, A. F. Campoverde, Capelli, S., Capriotti, L., Caravaca-Mora, R., Carbone, A., Salgado, L. Carcedo, Cardinale, R., Cardini, A., Carniti, P., Carus, L., Vidal, A. Casais, Caspary, R., Casse, G., Godinez, J. Castro, Cattaneo, M., Cavallero, G., Cavallini, V., Celani, S., Cervenkov, D., Cesare, S., Chadwick, A. J., Chahrour, I., Charles, M., Charpentier, Ph., Chatzianagnostou, E., Barajas, C. A. Chavez, Chefdeville, M., Chen, C., Chen, S., Chen, Z., Chernov, A., Chernyshenko, S., Chiotopoulos, X., Chobanova, V., Cholak, S., Chrzaszcz, M., Chubykin, A., Chulikov, V., Ciambrone, P., Vidal, X. Cid, Ciezarek, G., Cifra, P., Clarke, P. E. L., Clemencic, M., Cliff, H. V., Closier, J., Toapaxi, C. Cocha, Coco, V., Cogan, J., Cogneras, E., Cojocariu, L., Collins, P., Colombo, T., Colonna, M. C., Comerma-Montells, A., Congedo, L., Contu, A., Cooke, N., Corredoira, I., Correia, A., Corti, G., Meldrum, J. J. Cottee, Couturier, B., Craik, D. C., Torres, M. Cruz, Rivera, E. Curras, Currie, R., Da Silva, C. L., Dadabaev, S., Dai, L., Dai, X., Dall'Occo, E., Dalseno, J., D'Ambrosio, C., Daniel, J., Danilina, A., d'Argent, P., Davidson, A., Davies, J. E., Davis, A., Francisco, O. De Aguiar, De Angelis, C., De Benedetti, F., de Boer, J., De Bruyn, K., De Capua, S., De Cian, M., Da Graca, U. De Freitas Carneiro, De Lucia, E., De Miranda, J. M., De Paula, L., De Serio, M., De Simone, P., De Vellis, F., de Vries, J. A., Deacon, S., Debernardis, F., Decamp, D., Dedu, V., Dekkers, S., Del Buono, L., Delaney, B., Dembinski, H. -P., Deng, J., Denysenko, V., Deschamps, O., Dettori, F., Dey, B., Di Nezza, P., Diachkov, I., Didenko, S., Ding, S., Dittmann, L., Dobishuk, V., Docheva, A. D., Dong, C., Donohoe, A. M., Dordei, F., Reis, A. C. dos, Dowling, A. D., Duan, W., Duda, P., Dudek, M. 
W., Dufour, L., Duk, V., Durante, P., Duras, M. M., Durham, J. M., Durmus, O. D., Dziurda, A., Dzyuba, A., Easo, S., Eckstein, E., Egede, U., Egorychev, A., Egorychev, V., Eisenhardt, S., Ejopu, E., Eklund, L., Elashri, M., Ellbracht, J., Ely, S., Ene, A., Epple, E., Eschle, J., Esen, S., Evans, T., Fabiano, F., Falcao, L. N., Fan, Y., Fang, B., Fantini, L., Faria, M., Farmer, K., Fazzini, D., Felkowski, L., Feng, M., Feo, M., Casani, A. Fernandez, Gomez, M. Fernandez, Fernez, A. D., Ferrari, F., Rodrigues, F. Ferreira, Ferrillo, M., Ferro-Luzzi, M., Filippov, S., Fini, R. A., Fiorini, M., Fischer, K. L., Fitzgerald, D. S., Fitzpatrick, C., Fleuret, F., Fontana, M., Foreman, L. F., Forty, R., Foulds-Holt, D., Lima, V. Franco, Sevilla, M. Franco, Frank, M., Franzoso, E., Frau, G., Frei, C., Friday, D. A., Fu, J., Fuehring, Q., Fujii, Y., Fulghesu, T., Gabriel, E., Galati, G., Galati, M. D., Torreira, A. Gallas, Galli, D., Gambetta, S., Gandelman, M., Gandini, P., Ganie, B., Gao, H., Gao, R., Gao, T. Q., Gao, Y., Garau, M., Martin, L. M. Garcia, Moreno, P. Garcia, Pardiñas, J. García, Garg, K. G., Garrido, L., Gaspar, C., Geertsema, R. E., Gerken, L. L., Gersabeck, E., Gersabeck, M., Gershon, T., Ghizzo, S. G., Ghorbanimoghaddam, Z., Giambastiani, L., Giasemis, F. I., Gibson, V., Giemza, H. K., Gilman, A. L., Giovannetti, M., Gioventù, A., Girardey, L., Gironell, P. Gironella, Giugliano, C., Giza, M. A., Gkougkousis, E. L., Glaser, F. C., Gligorov, V. V., Göbel, C., Golobardes, E., Golubkov, D., Golutvin, A., Gomes, A., Fernandez, S. Gomez, Abrantes, F. Goncalves, Goncerz, M., Gong, G., Gooding, J. A., Gorelov, I. V., Gotti, C., Grabowski, J. P., Cardoso, L. A. Granado, Graugés, E., Graverini, E., Grazette, L., Graziani, G., Grecu, A. T., Greeven, L. M., Grieser, N. A., Grillo, L., Gromov, S., Gu, C., Guarise, M., Guerry, L., Guittiere, M., Guliaeva, V., Günther, P. A., Guseinov, A. 
-K., Gushchin, E., Guz, Y., Gys, T., Habermann, K., Hadavizadeh, T., Hadjivasiliou, C., Haefeli, G., Haen, C., Haimberger, J., Hajheidari, M., Hallett, G., Halvorsen, M. M., Hamilton, P. M., Hammerich, J., Han, Q., Han, X., Hansmann-Menzemer, S., Hao, L., Harnew, N., Hartmann, M., Hashmi, S., He, J., Hemmer, F., Henderson, C., Henderson, R. D. L., Hennequin, A. M., Hennessy, K., Henry, L., Herd, J., Gascon, P. Herrero, Heuel, J., Hicheur, A., Mendizabal, G. Hijano, Hill, D., Hollitt, S. E., Horswill, J., Hou, R., Hou, Y., Howarth, N., Hu, J., Hu, W., Hu, X., Huang, W., Hulsbergen, W., Hunter, R. J., Hushchyn, M., Hutchcroft, D., Ilin, D., Ilten, P., Inglessi, A., Iniukhin, A., Ishteev, A., Ivshin, K., Jacobsson, R., Jage, H., Elles, S. J. Jaimes, Jakobsen, S., Jans, E., Jashal, B. K., Jawahery, A., Jevtic, V., Jiang, E., Jiang, X., Jiang, Y., Jiang, Y. J., John, M., Rajan, A. John Rubesh, Johnson, D., Jones, C. R., Jones, T. P., Joshi, S., Jost, B., Castella, J. Juan, Jurik, N., Juszczak, I., Kaminaris, D., Kandybei, S., Kane, M., Kang, Y., Kar, C., Karacson, M., Karpenkov, D., Kauniskangas, A., Kautz, J. W., Kazanecki, M. K., Keizer, F., Kenzie, M., Ketel, T., Khanji, B., Kharisova, A., Kholodenko, S., Khreich, G., Kirn, T., Kirsebom, V. S., Kitouni, O., Klaver, S., Kleijne, N., Klimaszewski, K., Kmiec, M. R., Koliiev, S., Kolk, L., Konoplyannikov, A., Kopciewicz, P., Koppenburg, P., Korolev, M., Kostiuk, I., Kot, O., Kotriakhova, S., Kozachuk, A., Kravchenko, P., Kravchuk, L., Kreps, M., Krokovny, P., Krupa, W., Krzemien, W., Kshyvanskyi, O. K., Kubat, J., Kubis, S., Kucharczyk, M., Kudryavtsev, V., Kulikova, E., Kupsc, A., Kutsenko, B. K., Lacarrere, D., Gonzalez, P. Laguarta, Lai, A., Lampis, A., Lancierini, D., Gomez, C. Landesa, Lane, J. J., Lane, R., Lanfranchi, G., Langenbruch, C., Langer, J., Lantwin, O., Latham, T., Lazzari, F., Lazzeroni, C., Gac, R. Le, Lee, H., Lefèvre, R., Leflat, A., Legotin, S., Lehuraux, M., Cid, E. 
Lemos, Leroy, O., Lesiak, T., Lesser, E., Leverington, B., Li, A., Li, C., Li, H., Li, K., Li, L., Li, P., Li, P. -R., Li, Q., Li, S., Li, T., Li, Y., Lian, Z., Liang, X., Libralon, S., Lin, C., Lin, T., Lindner, R., Lisovskyi, V., Litvinov, R., Liu, F. L., Liu, G., Liu, K., Liu, S., Liu, W., Liu, Y., Liu, Y. L., Salvia, A. Lobo, Loi, A., Castro, J. Lomba, Long, T., Lopes, J. H., Huertas, A. Lopez, Soliño, S. López, Lu, Q., Lucarelli, C., Lucchesi, D., Martinez, M. Lucio, Lukashenko, V., Luo, Y., Lupato, A., Luppi, E., Lynch, K., Lyu, X. -R., Ma, G. M., Ma, R., Maccolini, S., Machefert, F., Maciuc, F., Mack, B., Mackay, I., Mackey, L. M., Mohan, L. R. Madhan, Madurai, M. J., Maevskiy, A., Magdalinski, D., Mahajan, V., Maisuzenko, D., Majewski, M. W., Malczewski, J. J., Malde, S., Malentacca, L., Malinin, A., Maltsev, T., Manca, G., Mancinelli, G., Mancuso, C., Escalero, R. Manera, Manuzzi, D., Marangotto, D., Marchand, J. F., Marchevski, R., Marconi, U., Mariani, E., Mariani, S., Benito, C. Marin, Marks, J., Marshall, A. M., Martel, L., Martelli, G., Martellotti, G., Martinazzoli, L., Martinelli, M., Santos, D. Martinez, Vidal, F. Martinez, Massafferri, A., Matev, R., Mathad, A., Matiunin, V., Matteuzzi, C., Mattioli, K. R., Mauri, A., Maurice, E., Mauricio, J., Mayencourt, P., de Cos, J. Mazorra, Mazurek, M., McCann, M., Mcconnell, L., McGrath, T. H., McHugh, N. T., McNab, A., McNulty, R., Meadows, B., Meier, G., Melnychuk, D., Meng, F. M., Merk, M., Merli, A., Garcia, L. Meyer, Miao, D., Miao, H., Mikhasenko, M., Milanes, D. A., Minotti, A., Minucci, E., Miralles, T., Mitreska, B., Mitzel, D. S., Modak, A., Mohammed, R. A., Moise, R. D., Mokhnenko, S., Cardenas, E. F. Molina, Mombächer, T., Monk, M., Monteil, S., Gomez, A. Morcillo, Morello, G., Morello, M. J., Morgenthaler, M. P., Morris, A. B., Morris, A. G., Mountain, R., Mu, H., Mu, Z. 
M., Muhammad, E., Muheim, F., Mulder, M., Müller, K., Muñoz-Rojas, F., Murta, R., Naik, P., Nakada, T., Nandakumar, R., Nanut, T., Nasteva, I., Needham, M., Neri, N., Neubert, S., Neufeld, N., Neustroev, P., Nicolini, J., Nicotra, D., Niel, E. M., Nikitin, N., Nogarolli, P., Nogga, P., Nolte, N. S., Normand, C., Fernandez, J. Novoa, Nowak, G., Nunez, C., Nur, H. N., Oblakowska-Mucha, A., Obraztsov, V., Oeser, T., Okamura, S., Okhotnikov, A., Okhrimenko, O., Oldeman, R., Oliva, F., Olocco, M., Onderwater, C. J. G., O'Neil, R. H., Osthues, D., Goicochea, J. M. Otalora, Owen, P., Oyanguren, A., Ozcelik, O., Paciolla, F., Padee, A., Padeken, K. O., Pagare, B., Pais, P. R., Pajero, T., Palano, A., Palutan, M., Panshin, G., Paolucci, L., Papanestis, A., Pappagallo, M., Pappalardo, L. L., Pappenheimer, C., Parkes, C., Passalacqua, B., Passaleva, G., Passaro, D., Pastore, A., Patel, M., Patoc, J., Patrignani, C., Paul, A., Pawley, C. J., Pellegrino, A., Peng, J., Altarelli, M. Pepe, Perazzini, S., Pereima, D., Da Costa, H. Pereira, Castro, A. Pereiro, Perret, P., Perro, A., Petridis, K., Petrolini, A., Pfaller, J. P., Pham, H., Pica, L., Piccini, M., Pietrzyk, B., Pietrzyk, G., Pinci, D., Pisani, F., Pizzichemi, M., Placinta, V., Casasus, M. Plo, Poeschl, T., Polci, F., Lener, M. Poli, Poluektov, A., Polukhina, N., Polyakov, I., Polycarpo, E., Ponce, S., Popov, D., Poslavskii, S., Prasanth, K., Prouve, C., Pugatch, V., Punzi, G., Qasim, S., Qian, Q. Q., Qian, W., Qin, N., Qu, S., Quagliani, R., Trejo, R. I. Rabadan, Rademacker, J. H., Rama, M., García, M. Ramírez, De Oliveira, V. Ramos, Pernas, M. Ramos, Rangel, M. S., Ratnikov, F., Raven, G., De Miguel, M. Rebollo, Redi, F., Reich, J., Reiss, F., Ren, Z., Resmi, P. K., Ribatti, R., Ricart, G. R., Riccardi, D., Ricciardi, S., Richardson, K., Richardson-Slipper, M., Rinnert, K., Robbe, P., Robertson, G., Rodrigues, E., Fernandez, E. Rodriguez, Lopez, J. A. Rodriguez, Rodriguez, E. 
Rodriguez, Roensch, J., Rogachev, A., Rogovskiy, A., Rolf, D. L., Roloff, P., Romanovskiy, V., Lamas, M. Romero, Vidal, A. Romero, Romolini, G., Ronchetti, F., Rong, T., Rotondo, M., Roy, S. R., Rudolph, M. S., Diaz, M. Ruiz, Fernandez, R. A. Ruiz, Vidal, J. Ruiz, Ryzhikov, A., Ryzka, J., Saavedra-Arias, J. J., Silva, J. J. Saborido, Sadek, R., Sagidova, N., Sahoo, D., Sahoo, N., Saitta, B., Salomoni, M., Gras, C. Sanchez, Sanderswood, I., Santacesaria, R., Rios, C. Santamarina, Santimaria, M., Santoro, L., Santovetti, E., Saputi, A., Saranin, D., Sarnatskiy, A., Sarpis, G., Sarpis, M., Satriano, C., Satta, A., Saur, M., Savrina, D., Sazak, H., Sborzacchi, F., Smead, L. G. Scantlebury, Scarabotto, A., Schael, S., Scherl, S., Schiller, M., Schindler, H., Schmelling, M., Schmidt, B., Schmitt, S., Schmitz, H., Schneider, O., Schopper, A., Schulte, N., Schulte, S., Schune, M. H., Schwemmer, R., Schwering, G., Sciascia, B., Sciuccati, A., Sellam, S., Semennikov, A., Senger, T., Soares, M. Senghi, Sergi, A., Serra, N., Sestini, L., Seuthe, A., Shang, Y., Shangase, D. M., Shapkin, M., Sharma, R. S., Shchemerov, I., Shchutska, L., Shears, T., Shekhtman, L., Shen, Z., Sheng, S., Shevchenko, V., Shi, B., Shi, Q., Shimizu, Y., Shmanin, E., Shorkin, R., Shupperd, J. D., Coutinho, R. Silva, Simi, G., Simone, S., Skidmore, N., Skwarnicki, T., Slater, M. W., Smallwood, J. C., Smith, E., Smith, K., Smith, M., Snoch, A., Lavra, L. Soares, Sokoloff, M. D., Soler, F. J. P., Solomin, A., Solovev, A., Solovyev, I., Song, R., Song, Y., Song, Y. S., De Almeida, F. L. Souza, De Paula, B. Souza, Norella, E. Spadaro, Spedicato, E., Speer, J. G., Spiridenkov, E., Spradlin, P., Sriskaran, V., Stagni, F., Stahl, M., Stahl, S., Stanislaus, S., Stein, E. N., Steinkamp, O., Stenyakin, O., Stevens, H., Strekalina, D., Su, Y., Suljik, F., Sun, J., Sun, L., Sun, Y., Sundfeld, D., Sutcliffe, W., Swallow, P. N., Swystun, F., Szabelski, A., Szumlak, T., Tan, Y., Tat, M. 
D., Terentev, A., Terzuoli, F., Teubert, F., Thomas, E., Thompson, D. J. D., Tilquin, H., Tisserand, V., T'Jampens, S., Tobin, M., Tomassetti, L., Tonani, G., Tong, X., Machado, D. Torres, Toscano, L., Tou, D. Y., Trippl, C., Tuci, G., Tuning, N., Uecker, L. H., Ukleja, A., Unverzagt, D. J., Ursov, E., Usachov, A., Ustyuzhanin, A., Uwer, U., Vagnoni, V., Cadenas, V. Valcarce, Valenti, G., Canudas, N. Valls, Van Hecke, H., van Herwijnen, E., Van Hulse, C. B., Van Laak, R., van Veghel, M., Vasquez, G., Gomez, R. Vazquez, Regueiro, P. Vazquez, Sierra, C. Vázquez, Vecchi, S., Velthuis, J. J., Veltri, M., Venkateswaran, A., Vesterinen, M., Benet, D. Vico, Villalba, P. V. Vidrier, Diaz, M. Vieites, Vilasis-Cardona, X., Figueras, E. Vilella, Villa, A., Vincent, P., Volle, F. C., Bruch, D. vom, Voropaev, N., Vos, K., Vouters, G., Vrahas, C., Wagner, J., Walsh, J., Walton, E. J., Wan, G., Wang, C., Wang, G., Wang, J., Wang, M., Wang, N. W., Wang, R., Wang, X., Wang, X. W., Wang, Y., Wang, Z., Ward, J. A., Waterlaat, M., Watson, N. K., Websdale, D., Wei, Y., Wendel, J., Westhenry, B. D. C., White, C., Whitehead, M., Whiter, E., Wiederhold, A. R., Wiedner, D., Wilkinson, G., Wilkinson, M. K., Williams, M., Williams, M. R. J., Williams, R., Williams, Z., Wilson, F. F., Wislicki, W., Witek, M., Witola, L., Wormser, G., Wotton, S. A., Wu, H., Wu, J., Wu, Y., Wu, Z., Wyllie, K., Xian, S., Xiang, Z., Xie, Y., Xu, A., Xu, J., Xu, L., Xu, M., Xu, Z., Yang, D., Yang, K., Yang, S., Yang, X., Yang, Y., Yang, Z., Yeroshenko, V., Yeung, H., Yin, H., Yu, C. Y., Yu, J., Yuan, X., Yuan, Y, Zaffaroni, E., Zavertyaev, M., Zdybal, M., Zenesini, F., Zeng, C., Zeng, M., Zhang, C., Zhang, D., Zhang, J., Zhang, L., Zhang, S., Zhang, Y., Zhang, Y. Z., Zhao, Y., Zharkova, A., Zhelezov, A., Zheng, S. Z., Zheng, X. Z., Zheng, Y., Zhou, T., Zhou, X., Zhou, Y., Zhovkovska, V., Zhu, L. Z., Zhu, X., Zhukov, V., Zhuo, J., Zou, Q., Zuliani, D., and Zunica, G.
- Subjects
High Energy Physics - Experiment - Abstract
A search for the very rare $B^{*0}\to\mu^+\mu^-$ and $B_{s}^{*0}\to\mu^+\mu^-$ decays is conducted by analysing the $B_c^+\to \pi^+\mu^+\mu^-$ process. The analysis uses proton-proton collision data collected with the LHCb detector between 2011 and 2018, corresponding to an integrated luminosity of 9$\text{\,fb}^{-1}$. The signal signatures correspond to simultaneous peaks in the $\mu^+\mu^-$ and $\pi^+\mu^+\mu^-$ invariant masses. No evidence for an excess of events over background is observed for either signal decay mode. Upper limits at the $90\%$ confidence level are set on the branching fractions relative to that for $B_c^+\to J\mskip -3mu/\mskip -2mu\psi\pi^+$ decays, \begin{align*} {\cal R}_{B^{*0}(\mu^+\mu^-)\pi^+/J\mskip -3mu/\mskip -2mu\psi\pi^+} &< 3.8\times 10^{-5}\ \text{ and } \\ {\cal R}_{B_{s}^{*0}(\mu^+\mu^-)\pi^+/J\mskip -3mu/\mskip -2mu\psi\pi^+} &< 5.0\times 10^{-5}\,. \end{align*}, Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://lbfence.cern.ch/alcm/public/analysis/full-details/1796/ (LHCb public pages)
- Published
- 2024
22. Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Handy Appetizer
- Author
-
Peng, Benji, Pan, Xuanhe, Wen, Yizhu, Bi, Ziqian, Chen, Keyu, Li, Ming, Liu, Ming, Niu, Qian, Liu, Junyu, Wang, Jinlang, Zhang, Sen, Xu, Jiawei, and Feng, Pohsun
- Subjects
Computer Science - Computation and Language ,Computer Science - Machine Learning - Abstract
This book explores the role of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) in driving the progress of big data analytics and management. The book focuses on simplifying the complex mathematical concepts behind deep learning, offering intuitive visualizations and practical case studies to help readers understand how neural networks and technologies like Convolutional Neural Networks (CNNs) work. It introduces several classic models and technologies such as Transformers, GPT, ResNet, BERT, and YOLO, highlighting their applications in fields like natural language processing, image recognition, and autonomous driving. The book also emphasizes the importance of pre-trained models and how they can enhance model performance and accuracy, with instructions on how to apply these models in various real-world scenarios. Additionally, it provides an overview of key big data management technologies like SQL and NoSQL databases, as well as distributed computing frameworks such as Apache Hadoop and Spark, explaining their importance in managing and processing vast amounts of data. Ultimately, the book underscores the value of mastering deep learning and big data management skills as critical tools for the future workforce, making it an essential resource for both beginners and experienced professionals., Comment: This book contains 93 pages and 60 figures
- Published
- 2024
23. Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale
- Author
-
Zhou, Fan, Wang, Zengzhi, Liu, Qian, Li, Junlong, and Liu, Pengfei
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Large language model pre-training has traditionally relied on human experts to craft heuristics for improving corpus quality, resulting in numerous rules developed to date. However, these rules lack the flexibility to address the unique characteristics of each example effectively, while applying tailored rules to every example is impractical for human experts. In this paper, we demonstrate that even small language models, with as few as 0.3B parameters, can exhibit substantial data-refining capabilities comparable to those of human experts. We introduce Programming Every Example (ProX), a novel framework that treats data refinement as a programming task, enabling models to refine corpora by generating and executing fine-grained operations, such as string normalization, for each individual example at scale. Experimental results show that models pre-trained on ProX-curated data outperform those trained on the original data, or on data filtered by other selection methods, by more than 2% across various downstream benchmarks. Its effectiveness spans various model sizes and pre-training corpora, including C4, RedPajama-V2, and FineWeb. Furthermore, ProX exhibits significant potential in domain-specific continual pre-training: without domain-specific design, models trained on OpenWebMath refined by ProX outperform human-crafted rule-based methods, improving average accuracy by 7.6% over Mistral-7B, 14.6% for Llama-2-7B, and 20.3% for CodeLlama-7B, all within 10B tokens, making them comparable to models like Llemma-7B trained on 200B tokens. Further analysis highlights that ProX significantly saves training FLOPs, offering a promising path for efficient LLM pre-training. We are open-sourcing ProX with a >100B-token corpus and models, and sharing all training and implementation details for reproducible research and future innovation. Code: https://github.com/GAIR-NLP/ProX, Comment: 45 pages, 13 figures, 34 tables
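The refine-by-program idea the abstract describes can be sketched with a toy executor: a model (not shown here) would emit a small per-document program of cleaning operations, which is then executed against the document. The operation names and two-stage program below are illustrative assumptions, not ProX's actual API.

```python
# Toy per-example refinement executor in the spirit of ProX. The model-side
# generation step is omitted; we only execute an emitted "program".
def normalize_whitespace(doc):
    # Collapse runs of whitespace into single spaces.
    return " ".join(doc.split())

def remove_lines_containing(doc, needle):
    # Drop boilerplate lines matching a marker string.
    return "\n".join(l for l in doc.splitlines() if needle not in l)

OPS = {
    "normalize_whitespace": normalize_whitespace,
    "remove_lines_containing": remove_lines_containing,
}

def refine(doc, program):
    """Execute a list of (operation_name, args) pairs against one document."""
    for name, args in program:
        doc = OPS[name](doc, *args)
    return doc

doc = "Click here to subscribe!\nLarge   language models  learn from text."
program = [("remove_lines_containing", ["subscribe"]),
           ("normalize_whitespace", [])]
print(refine(doc, program))  # -> Large language models learn from text.
```

Because the "program" is plain data, it can be generated per example by a small model yet executed deterministically at corpus scale, which is the core of the framing.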
- Published
- 2024
24. BitQ: Tailoring Block Floating Point Precision for Improved DNN Efficiency on Resource-Constrained Devices
- Author
-
Xu, Yongqi, Lee, Yujian, Yi, Gao, Liu, Bosheng, Chen, Yucong, Liu, Peng, Wu, Jigang, Chen, Xiaoming, and Han, Yinhe
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Deep neural networks (DNNs) are powerful for cognitive tasks such as image classification, object detection, and scene segmentation. One drawback, however, is their high computational complexity and memory consumption, which make them infeasible to run in real time on embedded platforms with limited hardware resources. Block floating point (BFP) quantization is a representative compression approach for reducing the memory and computational burden, owing to its capability to effectively capture the broad data distributions of DNN models. Unfortunately, prior works on BFP-based quantization choose the block size and precision empirically to preserve accuracy. In this paper, we develop a BFP-based bitwidth-aware analytical modeling framework (called ``BitQ'') for the best BFP implementation of DNN inference on embedded platforms. We formulate and solve an optimization problem to identify the optimal BFP block size and bitwidth distribution by trading off accuracy against performance loss. Experimental results show that, compared with an equal-bitwidth setting, BFP DNNs with optimized bitwidth allocation provide efficient computation while preserving accuracy on well-known benchmarks. The source code and data are available at https://github.com/Cheliosoops/BitQ.
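For intuition, the block-size/precision trade-off that BitQ optimizes can be illustrated with a minimal sketch of BFP quantization itself: all values in a block share one exponent, and only the mantissas are stored at reduced precision. The function name and the 4-bit default are illustrative, not part of the BitQ framework.

```python
import math

def bfp_quantize(block, mantissa_bits=4):
    """Quantize a block of floats to block floating point (BFP):
    every value shares the exponent of the largest magnitude, and
    each mantissa is rounded to `mantissa_bits` bits."""
    max_mag = max(abs(x) for x in block)
    if max_mag == 0.0:
        return list(block)
    # math.frexp gives max_mag = m * 2**e with 0.5 <= m < 1,
    # so e serves as the shared block exponent.
    _, shared_exp = math.frexp(max_mag)
    scale = 2.0 ** (shared_exp - mantissa_bits)
    return [round(x / scale) * scale for x in block]

# Small values lose precision because they inherit the block's exponent:
print(bfp_quantize([0.9, 0.12, -0.5, 0.031]))  # -> [0.875, 0.125, -0.5, 0.0]
```

Larger blocks amortize the exponent storage over more values but force more values onto a possibly ill-fitting shared exponent, which is exactly the accuracy/efficiency trade-off the paper's analytical model resolves.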
- Published
- 2024
25. Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation
- Author
-
Wang, Yulin, Xiong, Honglin, Sun, Kaicong, Bai, Shuwei, Dai, Ling, Ding, Zhongxiang, Liu, Jiameng, Wang, Qian, Liu, Qian, and Shen, Dinggang
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology. However, due to the limited accessibility of MRI scanners and their lengthy acquisition times, multimodal MR images are not commonly available. Current MR image synthesis approaches are typically trained on independent datasets for specific tasks, leading to suboptimal performance when applied to novel datasets and tasks. Here, we present TUMSyn, a Text-guided Universal MR image Synthesis generalist model, which can flexibly generate brain MR images with the required imaging metadata from routinely acquired scans, guided by text prompts. To ensure TUMSyn's image synthesis precision, versatility, and generalizability, we first construct a brain MR database comprising 31,407 3D images with 7 MRI modalities from 13 centers. We then pre-train an MRI-specific text encoder using contrastive learning to effectively control MR image synthesis based on text prompts. Extensive experiments on diverse datasets and physician assessments indicate that TUMSyn can generate clinically meaningful MR images with specified imaging metadata in both supervised and zero-shot scenarios. Therefore, TUMSyn can be used alongside acquired MR scan(s) to facilitate large-scale MRI-based screening and diagnosis of brain diseases., Comment: 23 pages, 9 figures
- Published
- 2024
26. MBC: Multi-Brain Collaborative Control for Quadruped Robots
- Author
-
Liu, Hang, Cheng, Yi, Li, Rankun, Hu, Xiaowen, Ye, Linqi, and Liu, Houde
- Subjects
Computer Science - Robotics ,Electrical Engineering and Systems Science - Systems and Control - Abstract
In the locomotion of quadruped robots, the Blind Policy and the Perceptive Policy each have their own advantages and limitations. The Blind Policy relies on preset sensor information and algorithms, making it suitable for known and structured environments, but it lacks adaptability in complex or unknown environments. The Perceptive Policy uses visual sensors to obtain detailed environmental information, allowing it to adapt to complex terrains, but its effectiveness is limited under occluded conditions, especially when perception fails; unlike the Blind Policy, it is not robust in such cases. To address these challenges, we propose MBC, a Multi-Brain Collaborative control system that incorporates concepts from Multi-Agent Reinforcement Learning and introduces collaboration between the Blind Policy and the Perceptive Policy. By applying this multi-policy collaborative model to a quadruped robot, the robot can maintain stable locomotion even when the perceptual system is impaired or observational data is incomplete. Our simulations and real-world experiments demonstrate that this system significantly improves the robot's passability and robustness against perception failures in complex environments, validating the effectiveness of multi-policy collaboration in enhancing robotic motion performance., Comment: 18 pages, 9 figures, Website and Videos: https://quad-mbc.github.io/
- Published
- 2024
27. A Deep Learning Earth System Model for Stable and Efficient Simulation of the Current Climate
- Author
-
Cresswell-Clay, Nathaniel, Liu, Bowen, Durran, Dale, Liu, Andy, Espinosa, Zachary I., Moreno, Raul, and Karlbauer, Matthias
- Subjects
Physics - Atmospheric and Oceanic Physics - Abstract
A key challenge for computationally intensive state-of-the-art Earth-system models is to distinguish global warming signals from interannual variability. Recently, machine learning models have outperformed state-of-the-art numerical weather prediction models in medium-range forecasting. Here we introduce DLESyM, a parsimonious deep learning model that accurately simulates the Earth's current climate over 1000-year periods with negligible drift. DLESyM simulations equal or exceed key metrics of seasonal and interannual variability, such as tropical cyclone genesis and intensity and mid-latitude blocking frequency, for historical simulations from four leading models of the 6th Climate Model Intercomparison Project (CMIP6). DLESyM, trained on both historical reanalysis data and satellite observations, is a key step toward an accurate, highly efficient model of the coupled Earth system, empowering long-range sub-seasonal and seasonal forecasts., Comment: 24 Pages, 20 figures
- Published
- 2024
28. TiM4Rec: An Efficient Sequential Recommendation Model Based on Time-Aware Structured State Space Duality Model
- Author
-
Fan, Hao, Zhu, Mengyi, Hu, Yanrong, Feng, Hailin, He, Zhijie, Liu, Hongjiu, and Liu, Qingyang
- Subjects
Computer Science - Information Retrieval - Abstract
Sequential recommendation represents a pivotal branch of recommendation systems, centered around dynamically analyzing the sequential dependencies between user preferences and their interactive behaviors. Despite Transformer-based models achieving commendable performance in this domain, their quadratic computational complexity relative to the sequence dimension impedes efficient modeling. In response, the innovative Mamba architecture, characterized by linear computational complexity, has emerged. Mamba4Rec further pioneers the application of Mamba in sequential recommendation. Nonetheless, Mamba 1's hardware-aware algorithm struggles to efficiently leverage modern matrix computation units, which led to the proposal of the improved State Space Duality (SSD), also known as Mamba 2. While SSD4Rec successfully adapts the SSD architecture for sequential recommendation, showing promising results in high-dimensional contexts, it suffers significant performance drops in the low-dimensional scenarios crucial for pure ID sequential recommendation tasks. To address this challenge, we propose a novel sequential recommendation backbone model, TiM4Rec, which ameliorates the low-dimensional performance loss of the SSD architecture while preserving its computational efficiency. Drawing inspiration from TiSASRec, we develop a time-aware enhancement method tailored to the linear computation demands of the SSD architecture, thereby enhancing its adaptability and achieving state-of-the-art (SOTA) performance in both low- and high-dimensional modeling. The code for our model is publicly accessible at https://github.com/AlwaysFHao/TiM4Rec.
- Published
- 2024
29. Chromospheric modeling of the active M3V star G 80-21 with RH1.5D
- Author
-
Liu, Shuai, Wei, Huigang, Shi, Jianrong, Li, Wenxian, Han, Henggeng, Liu, Jifeng, and Yang, Shangbin
- Subjects
Astrophysics - Solar and Stellar Astrophysics ,Astrophysics - Earth and Planetary Astrophysics - Abstract
This study investigates the active regions of the M3.0V star G 80-21 by comparing observed data from the CARMENES project with synthetic spectra generated by the RH1.5D radiative transfer code. The CARMENES project aims to search for exoplanets around M dwarfs using high-resolution near-infrared and optical echelle spectrographs. By comparing the observed data with models for the chromospheric lines of H$_\alpha$ and the bluest Ca II infrared triplet line, we obtain the best-fit models for this star. The optimal fit to the observed spectrum of G 80-21 is achieved by employing two active areas in conjunction with an inactive region, with a calcium abundance of [Ca/H] = $-$0.4. This combination successfully fits all the observed data across varying ratios. The minor active component consistently comprises approximately 18\% of the total (ranging from 14\% to 20\%), which suggests that it is likely located in the polar regions, while the major active component occupies a variable proportion, ranging from 51\% to 82\%. Our method allows the structure and size of stellar chromospheric active regions to be determined by analyzing high-resolution observed spectra., Comment: Accepted for publication in The Astrophysical Journal
- Published
- 2024
30. AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment
- Author
-
Chen, Nuo, Liu, Jiqun, Dong, Xiaoyu, Liu, Qijiong, Sakai, Tetsuya, and Wu, Xiao-Ming
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
Cognitive biases are systematic deviations in thinking that lead to irrational judgments and problematic decision-making, and have been extensively studied across various fields. Recently, large language models (LLMs) have shown advanced understanding capabilities but may inherit human biases from their training data. While social biases in LLMs have been well studied, cognitive biases have received less attention, with existing research focusing on specific scenarios. The broader impact of cognitive biases on LLMs in various decision-making contexts remains underexplored. We investigated whether LLMs are influenced by the threshold priming effect in relevance judgments, a core task and widely discussed research topic in the Information Retrieval (IR) community. The priming effect occurs when exposure to certain stimuli unconsciously affects subsequent behavior and decisions. Our experiment employed 10 topics from the TREC 2019 Deep Learning passage track collection, and tested AI judgments under different document relevance scores, batch lengths, and LLM models, including GPT-3.5, GPT-4, LLaMa2-13B, and LLaMa2-70B. Results showed that LLMs tend to give lower scores to later documents if earlier ones have high relevance, and vice versa, regardless of the combination and model used. Our findings demonstrate that LLM judgments, similar to human judgments, are also influenced by threshold priming biases, and suggest that researchers and system engineers should take potential human-like cognitive biases into account when designing, evaluating, and auditing LLMs in IR tasks and beyond.
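The priming analysis described above can be illustrated with a minimal sketch: group batches by whether their earlier documents were highly relevant, then compare the judged score of each batch's final document across the two groups. The function name, the 0-3 scale, the high/low threshold, and the toy data are all illustrative assumptions, not the paper's pipeline.

```python
def last_item_means(batches):
    """Each batch is a list of (gold_relevance, judged_score) pairs.
    Split batches by whether their earlier documents were highly relevant,
    then average the judged score of each batch's final document. A lower
    mean after high-relevance contexts is consistent with threshold priming."""
    high, low = [], []
    for batch in batches:
        earlier_gold = [g for g, _ in batch[:-1]]
        is_high_context = sum(earlier_gold) / len(earlier_gold) >= 2
        (high if is_high_context else low).append(batch[-1][1])
    mean = lambda xs: sum(xs) / len(xs)
    return mean(high), mean(low)

# Toy judgments on a 0-3 scale: the final document is judged lower after a
# run of highly relevant documents, and higher after irrelevant ones.
batches = [
    [(3, 3), (3, 3), (2, 1)],
    [(3, 2), (2, 3), (2, 2)],
    [(0, 0), (1, 1), (2, 3)],
    [(0, 1), (0, 0), (2, 2)],
]
print(last_item_means(batches))  # -> (1.5, 2.5)
```

A gap between the two means, as in this toy data, is the signature the study looks for in the LLMs' batch judgments.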
- Published
- 2024
31. FSF-Net: Enhance 4D Occupancy Forecasting with Coarse BEV Scene Flow for Autonomous Driving
- Author
-
Guo, Erxin, An, Pei, Yang, You, Liu, Qiong, and Liu, An-An
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
4D occupancy forecasting is an important technique for autonomous driving, as it can help avoid potential risks in complex traffic scenes. Scene flow is a crucial element for describing the tendency of a 4D occupancy map. However, accurate scene flow is difficult to predict in real scenes. In this paper, we find that BEV scene flow can approximately represent 3D scene flow in most traffic scenes, and that coarse BEV scene flow is easy to generate. Based on this insight, we propose FSF-Net, a 4D occupancy forecasting method built on coarse BEV scene flow. First, we develop a general occupancy forecasting architecture based on coarse BEV scene flow. Then, to further enhance the 4D occupancy feature representation, we propose a vector-quantized Mamba (VQ-Mamba) network to mine spatial-temporal structural scene features. Finally, to effectively fuse the coarse occupancy maps forecasted from BEV scene flow with latent features, we design a U-Net-based quality fusion (UQF) network to generate the fine-grained forecasting result. Extensive experiments are conducted on the public Occ3D dataset, where FSF-Net achieves IoU and mIoU scores 9.56% and 10.87% higher than the state-of-the-art method. We therefore believe that the proposed FSF-Net benefits the safety of autonomous driving.
- Published
- 2024
32. Search for $C\!P$ violation in $D^+_{(s)}\to{}K_{S}^{0}K^{-}\pi^{+}\pi^{+}$ decays using triple and quadruple products
- Author
-
Belle, Collaborations, Belle II, Aggarwal, L., Ahmed, H., Aihara, H., Akopov, N., Aloisio, A., Althubiti, N., Ky, N. Anh, Asner, D. M., Atmacan, H., Aushev, V., Aversano, M., Ayad, R., Babu, V., Bae, H., Baghel, N. K., Bahinipati, S., Bambade, P., Banerjee, Sw., Baudot, J., Baur, A., Beaubien, A., Becherer, F., Becker, J., Bennett, J. V., Bernlochner, F. U., Bertacchi, V., Bertemes, M., Bertholet, E., Bessner, M., Bettarini, S., Bhardwaj, V., Bianchi, F., Bilka, T., Biswas, D., Bobrov, A., Bodrov, D., Boschetti, A., Bozek, A., Bračko, M., Branchini, P., Briere, R. A., Browder, T. E., Budano, A., Bussino, S., Campajola, M., Cao, L., Casarosa, G., Cecchi, C., Cerasoli, J., Chang, M. -C., Chang, P., Cheema, P., Cheon, B. G., Chilikin, K., Chirapatpimol, K., Cho, H. -E., Cho, K., Cho, S. -J., Choi, S. -K., Choudhury, S., Cochran, J., Corona, L., Cui, J. X., De La Cruz-Burelo, E., De La Motte, S. A., De Nardo, G., De Pietro, G., de Sangro, R., Destefanis, M., Dhamija, R., Di Canto, A., Di Capua, F., Dingfelder, J., Doležal, Z., Dong, T. V., Dorigo, M., Dubey, S., Dugic, K., Dujany, G., Ecker, P., Epifanov, D., Eppelt, J., Feichtinger, P., Ferber, T., Fillinger, T., Finck, C., Finocchiaro, G., Fodor, A., Forti, F., Fulsom, B. G., Gabrielli, A., Ganiev, E., Garcia-Hernandez, M., Garg, R., Gaudino, G., Gaur, V., Gaz, A., Gellrich, A., Ghevondyan, G., Ghosh, D., Ghumaryan, H., Giakoustidis, G., Giordano, R., Giri, A., Gironell, P. Gironella, Gobbo, B., Godang, R., Gogota, O., Goldenzweig, P., Gradl, W., Graziani, E., Gruberová, Z., Guan, Y., Gudkova, K., Haide, I., Han, Y., Hara, T., Hayashii, H., Hazra, S., Hearty, C., Heidelbach, A., de la Cruz, I. Heredia, Higuchi, T., Hoek, M., Hohmann, M., Hoppe, R., Horak, P., Hsu, C. -L., Humair, T., Iijima, T., Ipsita, N., Ishikawa, A., Itoh, R., Iwasaki, M., Jackson, P., Jacobs, W. W., Jang, E. -J., Ji, Q. P., Jia, S., Jin, Y., Johnson, A., Joo, K. K., Junkerkalefeld, H., Kandra, J., Kang, K. 
H., Kang, S., Karyan, G., Kawasaki, T., Keil, F., Ketter, C., Kiesling, C., Kim, C. -H., Kim, D. Y., Kim, J. -Y., Kim, K. -H., Kim, Y. -K., Kinoshita, K., Kodyš, P., Koga, T., Kohani, S., Kojima, K., Korobov, A., Korpar, S., Kovalenko, E., Kowalewski, R., Križan, P., Krokovny, P., Kuhr, T., Kulii, Y., Kumar, R., Kumara, K., Kunigo, T., Kuzmin, A., Kwon, Y. -J., Lai, Y. -T., Lalwani, K., Lam, T., Lau, T. S., Laurenza, M., Leboucher, R., Diberder, F. R. Le, Lee, M. J., Lemettais, C., Leo, P., Li, C., Li, L. K., Li, Q. M., Li, W. Z., Li, Y., Li, Y. B., Liao, Y. P., Libby, J., Lin, J., Liu, M. H., Liu, Q. Y., Liu, Y., Liu, Z. Q., Liventsev, D., Longo, S., Lueck, T., Lyu, C., Maggiora, M., Maharana, S. P., Maiti, R., Mancinelli, G., Manfredi, R., Manoni, E., Mantovano, M., Marcantonio, D., Marcello, S., Marinas, C., Martellini, C., Martens, A., Martini, A., Martinov, T., Massaccesi, L., Maurya, S. K., McKenna, J. A., Mehta, R., Meier, F., Merola, M., Miller, C., Mirra, M., Mitra, S., Mondal, S., Moneta, S., Moser, H. -G., Nakamura, I., Nakao, M., Naruki, M., Natkaniec, Z., Natochii, A., Nayak, M., Nazaryan, G., Neu, M., Nishida, S., Ogawa, S., Ono, H., Otani, F., Oxford, E. R., Pakhlova, G., Paoloni, E., Pardi, S., Park, H., Park, J., Park, K., Park, S. -H., Passeri, A., Pedlar, T. K., Peruzzi, I., Pestotnik, R., Piccolo, M., Piilonen, L. E., Podobnik, T., Pokharel, S., Praz, C., Prell, S., Prencipe, E., Prim, M. T., Prudiiev, I., Purwar, H., Rados, P., Raeuber, G., Raiz, S., Rauls, N., Reif, M., Reiter, S., Remnev, M., Reuter, L., Ripp-Baudot, I., Rizzo, G., Roehrken, M., Roney, J. M., Rostomyan, A., Rout, N., Sakai, Y., Sanders, D. A., Sandilya, S., Santelj, L., Savinov, V., Scavino, B., Schneider, S., Schnepf, M., Schwanda, C., Schwartz, A. J., Seino, Y., Selce, A., Senyo, K., Serrano, J., Sevior, M. E., Sfienti, C., Shan, W., Sharma, C., Shi, X. D., Shillington, T., Shimasaki, T., Shiu, J. -G., Shtol, D., Shwartz, B., Sibidanov, A., Simon, F., Singh, J. 
B., Skorupa, J., Sobie, R. J., Sobotzik, M., Soffer, A., Sokolov, A., Solovieva, E., Spataro, S., Spruck, B., Song, W., Starič, M., Stavroulakis, P., Stefkova, S., Stroili, R., Strube, J., Sue, Y., Sumihama, M., Sumisawa, K., Sutcliffe, W., Suwonjandee, N., Svidras, H., Takizawa, M., Tamponi, U., Tanida, K., Tenchini, F., Thaller, A., Tittel, O., Tiwary, R., Torassa, E., Trabelsi, K., Tsaklidis, I., Ueda, I., Unger, K., Unno, Y., Uno, K., Uno, S., Urquijo, P., Vahsen, S. E., van Tonder, R., Veronesi, M., Vismaya, V. S., Vitale, L., Vobbilisetti, V., Volpe, R., Wakai, M., Wallner, S., Wang, M. -Z., Wang, X. L., Wang, Z., Warburton, A., Watanuki, S., Wessel, C., Xu, X. P., Yabsley, B. D., Yamada, S., Yan, W., Yelton, J., Yin, J. H., Yuan, C. Z., Zani, L., Zeng, F., Zhou, J. S., Zhou, Q. D., Zhukova, V. I., and Žlebčík, R.
- Subjects
High Energy Physics - Experiment - Abstract
We perform the first search for $C\!P$ violation in ${D_{(s)}^{+}\to{}K_{S}^{0}K^{-}\pi^{+}\pi^{+}}$ decays. We use a combined data set from the Belle and Belle II experiments, which study $e^+e^-$ collisions at center-of-mass energies at or near the $\Upsilon(4S)$ resonance. We use 980 fb$^{-1}$ of data from Belle and 428 fb$^{-1}$ of data from Belle~II. We measure six $C\!P$-violating asymmetries that are based on triple products and quadruple products of the momenta of final-state particles, and also the particles' helicity angles. We obtain a precision at the level of 0.5% for $D^+\to{}K_{S}^{0}K^{-}\pi^{+}\pi^{+}$ decays, and better than 0.3% for $D^+_{s}\to{}K_{S}^{0}K^{-}\pi^{+}\pi^{+}$ decays. No evidence of $C\!P$ violation is found. Our results for the triple-product asymmetries are the most precise to date for singly-Cabibbo-suppressed $D^+$ decays. Our results for the other asymmetries are the first such measurements performed for charm decays., Comment: 21 pages, 10 figures
- Published
- 2024
33. Spatial-Temporal Mixture-of-Graph-Experts for Multi-Type Crime Prediction
- Author
-
Wu, Ziyang, Liu, Fan, Han, Jindong, Liang, Yuxuan, and Liu, Hao
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
As various types of crime continue to threaten public safety and economic development, predicting the occurrence of multiple types of crimes becomes increasingly vital for effective prevention measures. Although extensive efforts have been made, most of them overlook the heterogeneity of different crime categories and fail to address the issue of imbalanced spatial distribution. In this work, we propose a Spatial-Temporal Mixture-of-Graph-Experts (ST-MoGE) framework for collective multiple-type crime prediction. To enhance the model's ability to identify diverse spatial-temporal dependencies and mitigate potential conflicts caused by the spatial-temporal heterogeneity of different crime categories, we introduce an attentive-gated Mixture-of-Graph-Experts (MGEs) module to capture the distinctive and shared crime patterns of each crime category. Then, we propose Cross-Expert Contrastive Learning (CECL) to update the MGEs and force each expert to focus on specific pattern modeling, thereby reducing blending and redundancy. Furthermore, to address the issue of imbalanced spatial distribution, we propose a Hierarchical Adaptive Loss Re-weighting (HALR) approach to eliminate biases and the insufficient learning of data-scarce regions. To evaluate the effectiveness of our methods, we conduct comprehensive experiments on two real-world crime datasets and compare our results with twelve advanced baselines. The experimental results demonstrate the superiority of our methods.
- Published
- 2024
34. Ring Artifacts Removal Based on Implicit Neural Representation of Sinogram Data
- Author
-
Shi, Ligen, Jiang, Xu, Liu, YunZe, Liu, Chang, Yang, Ping, Guo, Shifeng, and Zhao, Xing
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,68U05, 65D18 ,I.4.5 ,I.4.10 - Abstract
Inconsistent responses of X-ray detector elements lead to stripe artifacts in the sinogram data, which manifest as ring artifacts in the reconstructed CT images, severely degrading image quality. This paper proposes a method for correcting stripe artifacts in the sinogram data. The proposed method leverages implicit neural representation (INR) to correct defective pixel response values using implicit continuous functions and simultaneously learns stripe features in the angular direction of the sinogram data. These two components are combined within an optimization constraint framework, achieving unsupervised iterative correction of stripe artifacts in the projection domain. Experimental results demonstrate that the proposed method significantly outperforms current state-of-the-art techniques in removing ring artifacts while maintaining the clarity of CT images., Comment: 10 pages, 11 figures
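For intuition, a stripe artifact can be seen as a fixed additive offset in one detector column of the sinogram. The sketch below is a classical moving-average baseline for this model of the artifact, not the paper's INR-based method, and the function name is ours.

```python
def remove_stripes(sinogram):
    """Crude stripe correction for a sinogram given as a list of rows
    (projection angles) of detector readings. Estimate each detector
    column's mean over all angles, then subtract its deviation from a
    3-tap smoothed version of the column-mean profile."""
    n_rows, n_cols = len(sinogram), len(sinogram[0])
    col_mean = [sum(row[j] for row in sinogram) / n_rows for j in range(n_cols)]
    # 3-tap moving average of the column profile, with clamped edges.
    smooth = [sum(col_mean[max(0, j - 1):j + 2]) /
              len(col_mean[max(0, j - 1):j + 2]) for j in range(n_cols)]
    offset = [c - s for c, s in zip(col_mean, smooth)]
    return [[row[j] - offset[j] for j in range(n_cols)] for row in sinogram]

# Column 1 carries a stripe (a large fixed offset); the correction pulls
# every column back toward the smoothed profile.
sinogram = [[1.0, 12.0, 3.0, 4.0] for _ in range(5)]
corrected = remove_stripes(sinogram)
```

Such per-column baselines also distort genuine column-to-column variation, which is one motivation for the paper's unsupervised, optimization-constrained correction in the projection domain.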
- Published
- 2024
35. OmniBench: Towards The Future of Universal Omni-Language Models
- Author
-
Li, Yizhi, Zhang, Ge, Ma, Yinghao, Yuan, Ruibin, Zhu, Kang, Guo, Hangyu, Liang, Yiming, Liu, Jiaheng, Yang, Jian, Wu, Siwei, Qu, Xingwei, Shi, Jinjie, Zhang, Xinyue, Yang, Zhenzhu, Wang, Xiangzhou, Zhang, Zhaoxiang, Liu, Zachary, Benetos, Emmanouil, Huang, Wenhao, and Lin, Chenghua
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Recent advancements in multimodal large language models (MLLMs) have aimed to integrate and interpret data across diverse modalities. However, the capacity of these models to concurrently process and reason about multiple modalities remains inadequately explored, partly due to the lack of comprehensive modality-wise benchmarks. We introduce OmniBench, a novel benchmark designed to rigorously evaluate models' ability to recognize, interpret, and reason across visual, acoustic, and textual inputs simultaneously. We define models capable of such tri-modal processing as omni-language models (OLMs). OmniBench is distinguished by high-quality human annotations, ensuring that accurate responses require integrated understanding and reasoning across all three modalities. Our main findings reveal that: i) most OLMs exhibit critical limitations in instruction-following and reasoning capabilities within tri-modal contexts; and ii) most baseline models perform poorly (below 50\% accuracy) even when provided with alternative textual representations of images and/or audio. These results suggest that the ability to construct a consistent context from text, image, and audio is often overlooked in existing MLLM training paradigms. We advocate for future research to focus on developing more robust tri-modal integration techniques and training strategies to enhance OLM performance across diverse modalities. The code and live leaderboard can be found at https://m-a-p.ai/OmniBench.
- Published
- 2024
36. Controllable Traffic Simulation through LLM-Guided Hierarchical Chain-of-Thought Reasoning
- Author
-
Liu, Zhiyuan, Li, Leheng, Wang, Yuning, Lin, Haotian, Liu, Zhizhe, He, Lei, and Wang, Jianqiang
- Subjects
Computer Science - Robotics - Abstract
Evaluating autonomous driving systems in complex and diverse traffic scenarios through controllable simulation is essential to ensure their safety and reliability. However, existing traffic simulation methods offer limited controllability. To address this, we propose a novel diffusion-based and LLM-enhanced traffic simulation framework. Our approach incorporates a unique chain-of-thought (CoT) mechanism, which systematically examines the hierarchical structure of traffic elements and guides LLMs to thoroughly analyze traffic scenario descriptions step by step, enhancing their understanding of complex situations. Furthermore, we propose a Frenet-frame-based cost function framework that provides LLMs with geometrically meaningful quantities, improving their grasp of spatial relationships in a scenario and enabling more accurate cost function generation. Experiments on the Waymo Open Motion Dataset (WOMD) demonstrate that our method handles more intricate descriptions, generates a broader range of scenarios in a controllable manner, and outperforms existing diffusion-based methods in terms of efficiency.
- Published
- 2024
37. TSCLIP: Robust CLIP Fine-Tuning for Worldwide Cross-Regional Traffic Sign Recognition
- Author
-
Zhao, Guoyang, Ma, Fulong, Qi, Weiqing, Zhang, Chenguang, Liu, Yuxuan, Liu, Ming, and Ma, Jun
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Traffic signs are critical map features for navigation and traffic control. Nevertheless, current methods for traffic sign recognition rely on traditional deep learning models, which typically suffer from significant performance degradation under variations in data distribution across different regions. In this paper, we propose TSCLIP, a robust fine-tuning approach with the contrastive language-image pre-training (CLIP) model for worldwide cross-regional traffic sign recognition. We first curate a cross-regional traffic sign benchmark dataset by combining data from ten different sources. Then, we propose a prompt engineering scheme tailored to the characteristics of traffic signs, which involves specific scene descriptions and corresponding rules to generate targeted text descriptions for optimizing the model training process. During the TSCLIP fine-tuning process, we implement adaptive dynamic weight ensembling (ADWE) to seamlessly incorporate outcomes from each training iteration with the zero-shot CLIP model. This approach ensures that the model retains its ability to generalize while acquiring new knowledge about traffic signs. Our method surpasses conventional classification benchmark models in cross-regional traffic sign evaluations and achieves state-of-the-art performance compared to existing CLIP fine-tuning techniques. To the best of the authors' knowledge, TSCLIP is the first contrastive language-image model used for the worldwide cross-regional traffic sign recognition task. The project website is available at: https://github.com/guoyangzhao/TSCLIP.
- Published
- 2024
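The TSCLIP abstract above does not spell out ADWE's update rule, but ensembling a fine-tuned model with its zero-shot initialization is commonly done as a convex combination of the two parameter sets. The sketch below uses hypothetical names (`theta_zeroshot`, `theta_finetuned`, `alpha`) and a fixed mixing weight; it is a generic illustration of weight ensembling, not TSCLIP's actual adaptive scheme.

```python
import numpy as np

def ensemble_weights(theta_zeroshot, theta_finetuned, alpha):
    """Interpolate zero-shot and fine-tuned parameters layer by layer.

    alpha=0 keeps the zero-shot CLIP weights (maximum generalization);
    alpha=1 keeps the fine-tuned weights (maximum in-domain accuracy);
    intermediate values trade one for the other. An adaptive scheme
    would instead adjust alpha per training iteration.
    """
    return {k: (1 - alpha) * theta_zeroshot[k] + alpha * theta_finetuned[k]
            for k in theta_zeroshot}

zs = {"w": np.array([1.0, 0.0])}   # toy zero-shot parameters
ft = {"w": np.array([0.0, 2.0])}   # toy fine-tuned parameters
mixed = ensemble_weights(zs, ft, 0.5)
print(mixed["w"])                  # halfway between the two models
```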
38. FisheyeDepth: A Real Scale Self-Supervised Depth Estimation Model for Fisheye Camera
- Author
-
Zhao, Guoyang, Liu, Yuxuan, Qi, Weiqing, Ma, Fulong, Liu, Ming, and Ma, Jun
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Robotics - Abstract
Accurate depth estimation is crucial for 3D scene comprehension in robotics and autonomous vehicles. Fisheye cameras, known for their wide field of view, have inherent geometric benefits. However, their use in depth estimation is restricted by a scarcity of ground truth data and image distortions. We present FisheyeDepth, a self-supervised depth estimation model tailored for fisheye cameras. We incorporate a fisheye camera model into the projection and reprojection stages during training to handle image distortions, thereby improving depth estimation accuracy and training stability. Furthermore, we incorporate real-scale pose information into the geometric projection between consecutive frames, replacing the poses estimated by the conventional pose network. Essentially, this method offers the necessary physical depth for robotic tasks, and also streamlines the training and inference procedures. Additionally, we devise a multi-channel output strategy to improve robustness by adaptively fusing features at various scales, which reduces the noise from real pose data. We demonstrate the superior performance and robustness of our model in fisheye image depth estimation through evaluations on public datasets and real-world scenarios. The project website is available at: https://github.com/guoyangzhao/FisheyeDepth.
- Published
- 2024
39. Search for $D^0\to K^-\eta e^+\nu_e$, $D^+\to K_S^0 \eta e^+\nu_e$ and $D^+\to \eta\eta e^+\nu_e$ decays
- Author
-
BESIII Collaboration, Ablikim, M., Achasov, M. N., Adlarson, P., Afedulidis, O., Ai, X. C., Aliberti, R., Amoroso, A., An, Q., Bai, Y., Bakina, O., Balossino, I., Ban, Y., Bao, H. -R., Batozskaya, V., Begzsuren, K., Berger, N., Berlowski, M., Bertani, M., Bettoni, D., Bianchi, F., Bianco, E., Bortone, A., Boyko, I., Briere, R. A., Brueggemann, A., Cai, H., Cai, X., Calcaterra, A., Cao, G. F., Cao, N., Cetin, S. A., Chang, J. F., Che, G. R., Chelkov, G., Chen, C., Chen, C. H., Chen, Chao, Chen, G., Chen, H. S., Chen, H. Y., Chen, M. L., Chen, S. J., Chen, S. L., Chen, S. M., Chen, T., Chen, X. R., Chen, X. T., Chen, Y. B., Chen, Y. Q., Chen, Z. J., Chen, Z. Y., Choi, S. K., Cibinetto, G., Cossio, F., Cui, J. J., Dai, H. L., Dai, J. P., Dbeyssi, A., de Boer, R. E., Dedovich, D., Deng, C. Q., Deng, Z. Y., Denig, A., Denysenko, I., Destefanis, M., De Mori, F., Ding, B., Ding, X. X., Ding, Y., Dong, J., Dong, L. Y., Dong, M. Y., Dong, X., Du, M. C., Du, S. X., Duan, Y. Y., Duan, Z. H., Egorov, P., Fan, Y. H., Fang, J., Fang, S. S., Fang, W. X., Fang, Y., Fang, Y. Q., Farinelli, R., Fava, L., Feldbauer, F., Felici, G., Feng, C. Q., Feng, J. H., Feng, Y. T., Fritsch, M., Fu, C. D., Fu, J. L., Fu, Y. W., Gao, H., Gao, X. B., Gao, Y. N., Gao, Yang, Garbolino, S., Garzia, I., Ge, L., Ge, P. T., Ge, Z. W., Geng, C., Gersabeck, E. M., Gilman, A., Goetzen, K., Gong, L., Gong, W. X., Gradl, W., Gramigna, S., Greco, M., Gu, M. H., Gu, Y. T., Guan, C. Y., Guo, A. Q., Guo, L. B., Guo, M. J., Guo, R. P., Guo, Y. P., Guskov, A., Gutierrez, J., Han, K. L., Han, T. T., Hanisch, F., Hao, X. Q., Harris, F. A., He, K. K., He, K. L., Heinsius, F. H., Heinz, C. H., Heng, Y. K., Herold, C., Holtmann, T., Hong, P. C., Hou, G. Y., Hou, X. T., Hou, Y. R., Hou, Z. L., Hu, B. Y., Hu, H. M., Hu, J. F., Hu, S. L., Hu, T., Hu, Y., Huang, G. S., Huang, K. X., Huang, L. Q., Huang, X. T., Huang, Y. P., Huang, Y. S., Hussain, T., Hölzken, F., Hüsken, N., der Wiesche, N. 
in, Jackson, J., Janchiv, S., Jeong, J. H., Ji, Q., Ji, Q. P., Ji, W., Ji, X. B., Ji, X. L., Ji, Y. Y., Jia, X. Q., Jia, Z. K., Jiang, D., Jiang, H. B., Jiang, P. C., Jiang, S. S., Jiang, T. J., Jiang, X. S., Jiang, Y., Jiao, J. B., Jiao, J. K., Jiao, Z., Jin, S., Jin, Y., Jing, M. Q., Jing, X. M., Johansson, T., Kabana, S., Kalantar-Nayestanaki, N., Kang, X. L., Kang, X. S., Kavatsyuk, M., Ke, B. C., Khachatryan, V., Khoukaz, A., Kiuchi, R., Kolcu, O. B., Kopf, B., Kuessner, M., Kui, X., Kumar, N., Kupsc, A., Kühn, W., Lane, J. J., Lavezzi, L., Lei, T. T., Lei, Z. H., Lellmann, M., Lenz, T., Li, C., Li, C. H., Li, Cheng, Li, D. M., Li, F., Li, G., Li, H. B., Li, H. J., Li, H. N., Li, Hui, Li, J. R., Li, J. S., Li, K., Li, L. J., Li, L. K., Li, Lei, Li, M. H., Li, P. R., Li, Q. M., Li, Q. X., Li, R., Li, S. X., Li, T., Li, W. D., Li, W. G., Li, X., Li, X. H., Li, X. L., Li, X. Y., Li, X. Z., Li, Y. G., Li, Z. J., Li, Z. Y., Liang, C., Liang, H., Liang, Y. F., Liang, Y. T., Liao, G. R., Liao, Y. P., Libby, J., Limphirat, A., Lin, C. C., Lin, D. X., Lin, T., Liu, B. J., Liu, B. X., Liu, C., Liu, C. X., Liu, F., Liu, F. H., Liu, Feng, Liu, G. M., Liu, H., Liu, H. B., Liu, H. H., Liu, H. M., Liu, Huihui, Liu, J. B., Liu, J. Y., Liu, K., Liu, K. Y., Liu, Ke, Liu, L., Liu, L. C., Liu, Lu, Liu, M. H., Liu, P. L., Liu, Q., Liu, S. B., Liu, T., Liu, W. K., Liu, W. M., Liu, X., Liu, Y., Liu, Y. B., Liu, Z. A., Liu, Z. D., Liu, Z. Q., Lou, X. C., Lu, F. X., Lu, H. J., Lu, J. G., Lu, X. L., Lu, Y., Lu, Y. P., Lu, Z. H., Luo, C. L., Luo, J. R., Luo, M. X., Luo, T., Luo, X. L., Lyu, X. R., Lyu, Y. F., Ma, F. C., Ma, H., Ma, H. L., Ma, J. L., Ma, L. L., Ma, M. M., Ma, Q. M., Ma, R. Q., Ma, T., Ma, X. T., Ma, X. Y., Ma, Y., Ma, Y. M., Maas, F. E., Maggiora, M., Malde, S., Mao, Y. J., Mao, Z. P., Marcello, S., Meng, Z. X., Messchendorp, J. G., Mezzadri, G., Miao, H., Min, T. J., Mitchell, R. E., Mo, X. H., Moses, B., Muchnoi, N. Yu., Muskalla, J., Nefedov, Y., Nerling, F., Nie, L. 
S., Nikolaev, I. B., Ning, Z., Nisar, S., Niu, Q. L., Niu, W. D., Niu, Y., Olsen, S. L., Ouyang, Q., Pacetti, S., Pan, X., Pan, Y., Pathak, A., Pei, Y. P., Pelizaeus, M., Peng, H. P., Peng, Y. Y., Peters, K., Ping, J. L., Ping, R. G., Plura, S., Prasad, V., Qi, F. Z., Qi, H., Qi, H. R., Qi, M., Qi, T. Y., Qian, S., Qian, W. B., Qiao, C. F., Qiao, X. K., Qin, J. J., Qin, L. Q., Qin, L. Y., Qin, X. P., Qin, X. S., Qin, Z. H., Qiu, J. F., Qu, Z. H., Redmer, C. F., Ren, K. J., Rivetti, A., Rolo, M., Rong, G., Rosner, Ch., Ruan, S. N., Salone, N., Sarantsev, A., Schelhaas, Y., Schoenning, K., Scodeggio, M., Shan, K. Y., Shan, W., Shan, X. Y., Shang, Z. J., Shangguan, J. F., Shao, L. G., Shao, M., Shen, C. P., Shen, H. F., Shen, W. H., Shen, X. Y., Shi, B. A., Shi, H., Shi, H. C., Shi, J. L., Shi, J. Y., Shi, Q. Q., Shi, S. Y., Shi, X., Song, J. J., Song, T. Z., Song, W. M., Song, Y. J., Song, Y. X., Sosio, S., Spataro, S., Stieler, F., Su, Y. J., Sun, G. B., Sun, G. X., Sun, H., Sun, H. K., Sun, J. F., Sun, K., Sun, L., Sun, S. S., Sun, T., Sun, W. Y., Sun, Y., Sun, Y. J., Sun, Y. Z., Sun, Z. Q., Sun, Z. T., Tang, C. J., Tang, G. Y., Tang, J., Tang, M., Tang, Y. A., Tao, L. Y., Tao, Q. T., Tat, M., Teng, J. X., Thoren, V., Tian, W. H., Tian, Y., Tian, Z. F., Uman, I., Wan, Y., Wang, S. J., Wang, B., Wang, B. L., Wang, Bo, Wang, D. Y., Wang, F., Wang, H. J., Wang, J. J., Wang, J. P., Wang, K., Wang, L. L., Wang, M., Wang, N. Y., Wang, S., Wang, T., Wang, T. J., Wang, W., Wang, W. P., Wang, X., Wang, X. F., Wang, X. J., Wang, X. L., Wang, X. N., Wang, Y., Wang, Y. D., Wang, Y. F., Wang, Y. L., Wang, Y. N., Wang, Y. Q., Wang, Yaqian, Wang, Yi, Wang, Z., Wang, Z. L., Wang, Z. Y., Wang, Ziyi, Wei, D. H., Weidner, F., Wen, S. P., Wen, Y. R., Wiedner, U., Wilkinson, G., Wolke, M., Wollenberg, L., Wu, C., Wu, J. F., Wu, L. H., Wu, L. J., Wu, X., Wu, X. H., Wu, Y., Wu, Y. H., Wu, Y. J., Wu, Z., Xia, L., Xian, X. M., Xiang, B. H., Xiang, T., Xiao, D., Xiao, G. Y., Xiao, S. 
Y., Xiao, Y. L., Xiao, Z. J., Xie, C., Xie, X. H., Xie, Y., Xie, Y. G., Xie, Y. H., Xie, Z. P., Xing, T. Y., Xu, C. F., Xu, C. J., Xu, G. F., Xu, H. Y., Xu, M., Xu, Q. J., Xu, Q. N., Xu, W., Xu, W. L., Xu, X. P., Xu, Y. C., Xu, Z. S., Yan, F., Yan, L., Yan, W. B., Yan, W. C., Yan, X. Q., Yang, H. J., Yang, H. L., Yang, H. X., Yang, T., Yang, Y., Yang, Y. F., Yang, Y. X., Yang, Z. W., Yao, Z. P., Ye, M., Ye, M. H., Yin, J. H., You, Z. Y., Yu, B. X., Yu, C. X., Yu, G., Yu, J. S., Yu, T., Yu, X. D., Yu, Y. C., Yuan, C. Z., Yuan, J., Yuan, L., Yuan, S. C., Yuan, Y., Yuan, Z. Y., Yue, C. X., Zafar, A. A., Zeng, F. R., Zeng, S. H., Zeng, X., Zeng, Y., Zeng, Y. J., Zhai, X. Y., Zhai, Y. C., Zhan, Y. H., Zhang, A. Q., Zhang, B. L., Zhang, B. X., Zhang, D. H., Zhang, G. Y., Zhang, H., Zhang, H. C., Zhang, H. H., Zhang, H. Q., Zhang, H. R., Zhang, H. Y., Zhang, J., Zhang, J. J., Zhang, J. L., Zhang, J. Q., Zhang, J. S., Zhang, J. W., Zhang, J. X., Zhang, J. Y., Zhang, J. Z., Zhang, Jianyu, Zhang, L. M., Zhang, Lei, Zhang, P., Zhang, Q. Y., Zhang, R. Y., Zhang, S. H., Zhang, Shulei, Zhang, X. D., Zhang, X. M., Zhang, X. Y., Zhang, Y., Zhang, Y. T., Zhang, Y. H., Zhang, Y. M., Zhang, Yan, Zhang, Z. D., Zhang, Z. H., Zhang, Z. L., Zhang, Z. Y., Zhang, Z. Z., Zhao, G., Zhao, J. Y., Zhao, J. Z., Zhao, L., Zhao, Lei, Zhao, M. G., Zhao, N., Zhao, R. P., Zhao, S. J., Zhao, Y. B., Zhao, Y. X., Zhao, Z. G., Zhemchugov, A., Zheng, B., Zheng, B. M., Zheng, J. P., Zheng, W. J., Zheng, Y. H., Zhong, B., Zhong, X., Zhou, H., Zhou, J. Y., Zhou, L. P., Zhou, S., Zhou, X., Zhou, X. K., Zhou, X. R., Zhou, X. Y., Zhou, Y. Z., Zhu, J., Zhu, K., Zhu, K. J., Zhu, K. S., Zhu, L., Zhu, L. X., Zhu, S. H., Zhu, T. J., Zhu, W. D., Zhu, Y. C., Zhu, Z. A., Zou, J. H., and Zu, J.
- Subjects
High Energy Physics - Experiment - Abstract
By analyzing $e^+e^-$ annihilation data corresponding to an integrated luminosity of 7.93 fb$^{-1}$, collected at the center-of-mass energy of 3.773 GeV with the BESIII detector, we search for the semileptonic decays $D^0\to K^-\eta e^+\nu_e$, $D^+\to K_S^0 \eta e^+\nu_e$ and $D^+\to \eta\eta e^+\nu_e$ for the first time. We present evidence for $D^0\to K^-\eta e^+\nu_e$ with a significance of $3.3\sigma$. The branching fraction of $D^0\to K^-\eta e^+\nu_e$ is measured to be $(0.84_{-0.34}^{+0.29}\pm0.22)\times 10^{-4}$, where the first uncertainty is statistical and the second is systematic. No significant signals are observed for the decays $D^+\to K_S^0 \eta e^+\nu_e$ and $D^+\to \eta\eta e^+\nu_e$, and we set upper limits on their branching fractions., Comment: 10 pages, 4 figures
- Published
- 2024
40. PackageIntel: Leveraging Large Language Models for Automated Intelligence Extraction in Package Ecosystems
- Author
-
Guo, Wenbo, Liu, Chengwei, Wang, Limin, Wu, Jiahui, Xu, Zhengzi, Huang, Cheng, Fang, Yong, and Liu, Yang
- Subjects
Computer Science - Software Engineering - Abstract
The rise of malicious packages in public registries poses a significant threat to software supply chain (SSC) security. Although academia and industry employ methods like software composition analysis (SCA) to address this issue, existing approaches often lack timely and comprehensive intelligence updates. This paper introduces PackageIntel, a novel platform that revolutionizes the collection, processing, and retrieval of malicious package intelligence. By utilizing exhaustive search techniques, snowball sampling from diverse sources, and large language models (LLMs) with specialized prompts, PackageIntel ensures enhanced coverage, timeliness, and accuracy. We have developed a comprehensive database containing 20,692 malicious NPM and PyPI packages sourced from 21 distinct intelligence repositories. Empirical evaluations demonstrate that PackageIntel achieves a precision of 98.6% and an F1 score of 92.0 in intelligence extraction. Additionally, it detects threats on average 70% earlier than leading databases like Snyk and OSV, and operates cost-effectively at $0.094 per intelligence piece. The platform has successfully identified and reported over 1,000 malicious packages in downstream package manager mirror registries. This research provides a robust, efficient, and timely solution for identifying and mitigating threats within the software supply chain ecosystem.
- Published
- 2024
41. Observe Then Act: Asynchronous Active Vision-Action Model for Robotic Manipulation
- Author
-
Wang, Guokang, Li, Hang, Zhang, Shuyuan, Liu, Yanhong, and Liu, Huaping
- Subjects
Computer Science - Robotics, Computer Science - Computer Vision and Pattern Recognition - Abstract
In real-world scenarios, many robotic manipulation tasks are hindered by occlusions and limited fields of view, posing significant challenges for passive observation-based models that rely on fixed or wrist-mounted cameras. In this paper, we investigate the problem of robotic manipulation under limited visual observation and propose a task-driven asynchronous active vision-action model. Our model serially connects a camera Next-Best-View (NBV) policy with a gripper Next-Best-Pose (NBP) policy, and trains them in a sensor-motor coordination framework using few-shot reinforcement learning. This approach allows the agent to adjust a third-person camera to actively observe the environment based on the task goal, and subsequently infer the appropriate manipulation actions. We trained and evaluated our model on 8 viewpoint-constrained tasks in RLBench. The results demonstrate that our model consistently outperforms baseline algorithms, showcasing its effectiveness in handling visual constraints in manipulation tasks.
- Published
- 2024
42. Creation of independently controllable and long lifetime polar skyrmion textures in ferroelectric-metallic heterostructures
- Author
-
Sun, Fei, Ren, Jianhua, Li, Hongfang, Wu, Yiwei, Liang, Jianwei, Yang, Hui, Zhang, Yi, Liu, Jianyi, Liu, Linjie, Wu, Mengjun, Zhang, Xiaoyue, Zhu, Wenpeng, Chen, Weijin, and Zheng, Yue
- Subjects
Condensed Matter - Materials Science - Abstract
Topological textures like vortices, labyrinths and skyrmions formed in ferroic materials have attracted extensive interest during the past decade for their fundamental physics, intriguing topology, and technological prospects. So far, polar skyrmions remain scarce in ferroelectrics as they require a delicate balance between various dipolar interactions. Here, we report that PbTiO3 thin films in metallic contact undergo a topological phase transition and stabilize a broad family of skyrmion-like textures (e.g., skyrmion bubbles, multiple π-twist target skyrmions, and skyrmion bags) with independent controllability, analogous to those reported in magnetic systems. Weakly interacting skyrmion arrays with a density over 300 Gb/inch² are successfully written, erased and read out by local electrical and mechanical stimuli of a scanning probe. Interestingly, in contrast to the relatively short lifetime (<20 hours) of the skyrmion bubbles, the multiple π-twist target skyrmions and skyrmion bags show topology-enhanced stability with lifetimes over two weeks. Experimental and theoretical analysis implies that the heterostructures carry an electric Dzyaloshinskii-Moriya interaction mediated by oxygen octahedral tiltings. Our results demonstrate ferroelectric-metallic heterostructures as a fertile playground for topological states and emergent phenomena.
- Published
- 2024
43. MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI Understanding
- Author
-
Wu, Qinzhuo, Xu, Weikai, Liu, Wei, Tan, Tao, Liu, Jianfeng, Li, Ang, Luan, Jian, Wang, Bin, and Shang, Shuo
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence - Abstract
Recently, mobile AI agents based on VLMs have been gaining increasing attention. These works typically utilize a VLM as a foundation, fine-tuning it with instruction-based mobile datasets. However, these VLMs are typically pre-trained on general-domain data, which often results in a lack of fundamental capabilities specific to the mobile domain. Therefore, they may struggle to recognize specific UI elements and understand intra-UI fine-grained information. In addition, the current fine-tuning task focuses on interacting with the most relevant element for the given instruction. These fine-tuned VLMs may still ignore the relationships between UI pages, neglect the roles of elements in page transitions, and lack inter-UI understanding. To address these issues, we propose a VLM called MobileVLM, which includes two additional pre-training stages to enhance both intra- and inter-UI understanding. We define four UI-based pre-training tasks, enabling the model to better perceive fine-grained elements and capture page transition actions. To address the lack of mobile pre-training data, we built a large Chinese mobile dataset, Mobile3M, from scratch, which contains 3 million UI pages and real-world transition actions, forming a directed graph structure. Experimental results show MobileVLM excels on both our test set and public mobile benchmarks, outperforming existing VLMs.
- Published
- 2024
44. Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation
- Author
-
Li, Li, Cheng, Mingyue, Liu, Zhiding, Zhang, Hao, Liu, Qi, and Chen, Enhong
- Subjects
Computer Science - Information Retrieval, Computer Science - Machine Learning - Abstract
Sequential recommendation models user interests based on historical behaviors to provide personalized recommendations. Previous sequential recommendation algorithms primarily employ neural networks to extract features of user interests, achieving good performance. However, due to the sparsity of recommendation system datasets, these algorithms often employ small-scale network frameworks, resulting in weaker generalization capability. Recently, a series of sequential recommendation algorithms based on large pre-trained language models have been proposed. Nonetheless, given the real-time demands of recommendation systems, the challenge remains in applying pre-trained language models for rapid recommendations in real scenarios. To address this, we propose a sequential recommendation algorithm based on a pre-trained language model and knowledge distillation. The key idea of the proposed algorithm is to transfer pre-trained knowledge across domains and achieve lightweight inference through knowledge distillation. The algorithm operates in two stages: in the first stage, we fine-tune the pre-trained language model on the recommendation dataset to transfer the pre-trained knowledge to the recommendation task; in the second stage, we distill the trained language model to transfer the learned knowledge to a lightweight model. Extensive experiments on multiple public recommendation datasets show that the proposed algorithm enhances recommendation accuracy and provides timely recommendation services., Comment: in Chinese language
- Published
- 2024
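The second stage described in the abstract above is standard knowledge distillation. A minimal sketch of the usual temperature-scaled distillation loss follows; the function names and the temperature value are illustrative, and the paper's exact objective is not given here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a vector of logits."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened item scores,
    scaled by T^2 as in standard knowledge distillation. The teacher
    here would be the fine-tuned language model, the student the
    lightweight recommender."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    return T * T * np.sum(p * (np.log(p) - np.log(q)))

# A perfectly matching student incurs zero distillation loss.
assert abs(distill_loss([1.0, 2.0], [1.0, 2.0])) < 1e-9
```

In practice this loss is combined with the ordinary recommendation loss on hard labels, with a weighting hyperparameter between the two terms.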
45. Neural refractive index field: Unlocking the Potential of Background-oriented Schlieren Tomography in Volumetric Flow Visualization
- Author
-
He, Yuanzhe, Zheng, Yutao, Xu, Shijie, Liu, Chang, Peng, Di, Liu, Yingzheng, and Cai, Weiwei
- Subjects
Physics - Fluid Dynamics, Computer Science - Human-Computer Interaction, Computer Science - Machine Learning, Physics - Optics - Abstract
Background-oriented Schlieren tomography (BOST) is a prevalent method for visualizing intricate turbulent flows, valued for its ease of implementation and capacity to capture three-dimensional distributions of a multitude of flow parameters. However, the voxel-based meshing scheme leads to significant challenges, such as inadequate spatial resolution, substantial discretization errors, poor noise immunity, and excessive computational costs. This work presents an innovative reconstruction approach termed neural refractive index field (NeRIF), which implicitly represents the flow field with a neural network trained with tailored strategies. Both numerical simulations and experimental demonstrations on turbulent Bunsen flames suggest that our approach can significantly improve reconstruction accuracy and spatial resolution while concurrently reducing computational expenses. Although showcased here in the context of background-oriented Schlieren tomography, the key idea embedded in NeRIF can be readily adapted to various other tomographic modalities, including tomographic absorption spectroscopy and tomographic particle imaging velocimetry, broadening its potential impact across different domains of flow visualization and analysis., Comment: 10 pages, 5 figures
- Published
- 2024
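The core idea in the NeRIF abstract above, replacing a voxel grid with an implicit neural representation, amounts to a coordinate network that maps a spatial position to a refractive index value. The tiny sketch below uses random placeholder weights and made-up sizes; in the actual method the weights would be optimized so that ray-traced deflections through the field match the measured BOS images.

```python
import numpy as np

rng = np.random.default_rng(0)

class CoordinateMLP:
    """Tiny coordinate network: (x, y, z) -> scalar refractive index.

    Weights are random placeholders here. Because the field is a
    continuous function rather than a voxel grid, it can be queried at
    arbitrary resolution without discretization error."""
    def __init__(self, hidden=32):
        self.W1 = rng.standard_normal((3, hidden)) * 0.5
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, 1)) * 0.5
        self.b2 = np.zeros(1)

    def __call__(self, xyz):
        h = np.tanh(xyz @ self.W1 + self.b1)     # smooth hidden features
        return (h @ self.W2 + self.b2).squeeze(-1)

field = CoordinateMLP()
pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.2, 0.3]])                # two query points
print(field(pts))                                # one index value per point
```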
46. Simultaneous Multiband Photometry of the Early Optical Afterglow of GRB 240825A with Mephisto
- Author
-
Cheng, Yehao, Pan, Yu, Yang, Yuan-Pei, Zhang, Jinghua, Du, Guowang, Fang, Yuan, Kumar, Brajesh, Guo, Helong, Er, Xinzhong, Chen, Xinlei, Liu, Chenxu, Wang, Tao, Qin, Zhenfei, Jin, Yicheng, Zou, Xingzhu, Han, Xuhui, Zhang, Pinpin, Xin, Liping, Wu, Chao, Lian, Jianhui, Liu, Xiangkun, and Liu, Xiaowei
- Subjects
Astrophysics - High Energy Astrophysical Phenomena - Abstract
Gamma-ray bursts (GRBs) are the most luminous transients in the universe. The interaction of the relativistic jet with the circumburst medium produces an afterglow and generates multiwavelength emission. In this work, we present simultaneous multiband photometry of GRB~240825A with the Multi-channel Photometric Survey Telescope (Mephisto) and analyze its temporal and spectral properties. The measurement began 128 seconds after the GRB trigger and continued until the fourth day when the afterglow essentially diminished and the measured brightness was close to that of the host galaxy. Based on the multiband light curves in the $uvgriz$ bands, we find that the optical flux density satisfies $F_{\nu,{\rm obs}}\propto t^{-1.34}\nu^{-2.48}$ with a spectral index of $2.48$ much larger than those of most other GRBs. To reconcile the measured much softer spectral energy distribution (SED) with that predicted by the standard afterglow model, an extra host-galaxy extinction of $E_{B-V}\sim(0.37-0.57)$ mag is required. We interpreted this excess as arising from a dense circumburst medium. We further find that the SED of the optical afterglow hardened as the afterglow decayed and the color excess $E_{B-V}$ decreased $\sim0.21$ mag in the first 3000 seconds. Finally, we analyze the properties of the host galaxy of GRB~240825A based on data from the SDSS, PanSTARRS and HSC-SSP surveys. For a host redshift of $z=0.659$, the stellar mass and star formation rate of the host galaxy are estimated to be $\log(M_*/M_\odot)=10.0^{+0.3}_{-0.3}$ and $\log({\rm SFR}/M_{\odot}{\rm yr}^{-1})= 0.6^{+0.8}_{-3.3}$, respectively, pointing to a gas-rich, star-forming, medium-size galaxy., Comment: 15 pages, 5 figures, 1 table. Comments welcome!
- Published
- 2024
47. M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning
- Author
-
Wang, Taowen, Liu, Yiyang, Liang, James Chenhao, Zhao, Junhan, Cui, Yiming, Mao, Yuning, Nie, Shaoliang, Liu, Jiahao, Feng, Fuli, Xu, Zenglin, Han, Cheng, Huang, Lifu, Wang, Qifan, and Liu, Dongfang
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning - Abstract
Multimodal Large Language Models (MLLMs) demonstrate remarkable performance across a wide range of domains, with increasing emphasis on enhancing their zero-shot generalization capabilities for unseen tasks across various modalities. Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks. As the scale of MLLMs continues to grow, parameter-efficient finetuning becomes increasingly critical. However, most existing parameter-efficient approaches focus only on single modalities and often overlook the multimodal characteristics during finetuning. In this work, we introduce a novel Multimodal Prompt Tuning (M$^2$PT) approach for efficient instruction tuning of MLLMs. M$^2$PT effectively integrates visual and textual prompts into the vision encoder and language processor respectively during finetuning, facilitating the extraction and alignment of features across modalities. Empirical results on various multimodal evaluation datasets demonstrate the superior performance of our approach compared to several state-of-the-art baselines. A comprehensive set of ablation studies validates the effectiveness of our prompt design and the efficiency of our approach., Comment: EMNLP 2024
- Published
- 2024
48. Resolving the Valence of Iron Oxides by Resonant Photoemission Spectroscopy
- Author
-
Chen, Hao, Liu, Yun, Zhang, Hexin, Zhao, Shengdi, Nemsak, Slavomir, Liu, Haishan, and Salmeron, Miquel
- Subjects
Condensed Matter - Materials Science - Abstract
Precisely determining the oxidation states of metal cations within variable-valence transition metal oxides remains a significant challenge, yet it is crucial for understanding and predicting the properties of these technologically important materials. Iron oxides, in particular, exhibit a remarkable diversity of electronic structures due to the variable valence states of iron (Fe2+ and Fe3+); however, quantitative analysis using conventional X-ray photoelectron spectroscopy (XPS) is challenging because of significant overlap of the Fe2p spectra among different oxidation states. In this study, we leverage the intriguing case of a Pt-supported FeO2 phase of monolayer (ML) thickness as a model system and employ Resonant Photoemission Spectroscopy (ResPES) to directly quantify the cation valence states and compositional ratios in this complex Fe oxide. Our results reveal that this ultrathin FeO2 film (Pt-O-Fe-O), contrary to the +3 valence predicted by density functional theory (DFT), consists of an equal mixture of Fe2+ and Fe3+ cations, yielding an average valence of +2.5. Structurally, FeO2 is likely derived from the Fe3O4 sublattice, featuring an octahedral Fe layer (50% Fe3+ and 50% Fe2+) bonded to upper and lower oxygen layers., Comment: 14 pages, 4 figures, in submission
- Published
- 2024
49. $Mesiri$: Mephisto Early Supernovae Ia Rapid Identifier
- Author
-
Zhang, Lunwei, Wang, Zhenyu, Liu, Dezi, Fang, Yuan, Chen, Bingqiu, Kumar, Brajesh, Er, Xinzhong, and Liu, Xiaowei
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics - Abstract
The early-time observations of Type Ia supernovae (SNe Ia) play a crucial role in investigating and resolving longstanding questions about the progenitor stars and explosion mechanisms of these events. Colors of supernovae (SNe) in the initial days after the explosion can help differentiate between different types of SNe. However, the use of true color information to identify SNe Ia at the early-time explosion is still in its infancy. The Multi-channel Photometric Survey Telescope (Mephisto) is a photometric survey telescope equipped with three CCD cameras, capable of simultaneously imaging the same patch of sky in three bands (\emph{u, g, i} or \emph{v, r, z}), yielding real-time colors of astronomical objects. In this paper, we introduce a new time-series classification tool named Mephisto Early Supernovae Ia Rapid Identifier (\emph{\texttt{Mesiri}}), which, for the first time, utilizes real-time color information to distinguish early-time SNe Ia from core-collapse supernovae (CCSNe). \emph{\texttt{Mesiri}} is based on a deep learning approach and can achieve an accuracy of $96.75\pm0.79$\% and an AUC of $98.87\pm0.53$\% in the case of a single random-epoch observation before peak brightness. These values approach unity if additional data points from several nights of observation are considered. Classification with real-time color significantly outperforms that with pseudo-color, especially at early times, i.e., with only a few observation points. Among the architectures tested in this work, the BiLSTM shows the best performance., Comment: 30 pages, 17 figures, 4 tables, accepted for publication in RAA
- Published
- 2024
50. ALMASOP. The Localized and Chemically rich Features near the Bases of the Protostellar Jet in HOPS 87
- Author
-
Hsu, Shih-Ying, Lee, Chin-Fei, Liu, Sheng-Yuan, Johnstone, Doug, Liu, Tie, Takahashi, Satoko, Bronfman, Leonardo, Chen, Huei-Ru Vivien, Dutta, Somnath, Eden, David J., Evans II, Neal J., Hirano, Naomi, Juvela, Mika, Kuan, Yi-Jehng, Kwon, Woojin, Lee, Chang Won, Lee, Jeong-Eun, Li, Shanghuo, Liu, Chun-Fan, Liu, Xunchuan, Luo, Qiuyi, Qin, Sheng-Li, Sahu, Dipen, Sanhueza, Patricio, Shang, Hsien, Tatematsu, Kenichi, and Yang, Yao-Lun
- Subjects
Astrophysics - Solar and Stellar Astrophysics, Astrophysics - Astrophysics of Galaxies - Abstract
HOPS 87 is a Class 0 protostellar core known to harbor an extremely young bipolar outflow and a hot corino. We report the discovery of localized, chemically rich regions near the bases of the two-lobe bipolar molecular outflow in HOPS 87 containing molecules such as H$_2$CO, $^{13}$CS, H$_2$S, OCS, and CH$_3$OH, the simplest complex organic molecule (COM). The locations and kinematics suggest that these localized features are due to jet-driven shocks rather than being part of the hot corino region encasing the protostar. The COM compositions of the molecular gas in these jet-localized regions are relatively simpler than those in the hot corino zone. We speculate that this simplicity is due to either the liberation of ice with a less complex chemical history or the effects of shock chemistry. Our study highlights the dynamic interplay between the protostellar bipolar outflow, disk, inner core environment, and the surrounding medium, contributing to our understanding of molecular complexity in solar-like young stellar objects., Comment: 16 pages, 6+2 figures, accepted by ApJ
- Published
- 2024