183,116 results for "LI, WEI"
Search Results
302. Remote Sensing Scene Classification Method Based on Multi-scale Local Attention Network
- Author
-
Miao, Yi, Wang, JunJie, Zhang, MengMeng, Xie, XiaoMing, Li, Wei, Li, Gang, Series Editor, Filipe, Joaquim, Series Editor, Ghosh, Ashish, Series Editor, Xu, Zhiwei, Series Editor, Wang, Yongtian, editor, and Huang, Hua, editor
- Published
- 2025
- Full Text
- View/download PDF
303. The Influencing Factors of Young Designers’ Intentions to Continue Using Artificial Intelligence Generated Content Platforms
- Author
-
Peng, Xiangbin, Li, Junjie, Li, Wei, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Coman, Adela, editor, Vasilache, Simona, editor, Fui-Hoon Nah, Fiona, editor, Siau, Keng Leng, editor, Wei, June, editor, and Margetis, George, editor
- Published
- 2025
- Full Text
- View/download PDF
304. Research on the Influence of Main Flux Path Saturation on Induction Motor Drive Systems
- Author
-
Li, Wei, Dang, Kuan, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Yang, Qingxin, editor, and Li, Jian, editor
- Published
- 2025
- Full Text
- View/download PDF
305. Optimization Design of Brushless DC Motor Based on Combined Scanning Method and Genetic Algorithm
- Author
-
Li, Wei, Tian, Jiahe, He, Tianran, Zhang, Xiangjun, Gao, Xiaoyan, Yang, Qingxin, editor, and Li, Jian, editor
- Published
- 2025
- Full Text
- View/download PDF
306. Analysis of Worst-Case Energy Absorption by EM Arrester in ±800 kV HVDC System
- Author
-
Li, Wei, Liang, Tao, Yin, Ziqian, Wang, Sen, Han, Yanhua, Zhu, Mingxi, Mu, Yaru, Guo, Jie, Xie, Yan-zhao, Yang, Qingxin, editor, and Li, Jian, editor
- Published
- 2025
- Full Text
- View/download PDF
307. Efficient Long-Range Context Modeling for Motion Forecasting with State Space Models
- Author
-
Dong, Zhiwei, Ding, Ran, Wang, Jiaxiang, Li, Wei, Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
308. Application Research of Pile Bottom Detection in Bridge Survey in Karst Areas
- Author
-
Li, Wei, Feng, Ming Yue, di Prisco, Marco, Series Editor, Chen, Sheng-Hong, Series Editor, Vayas, Ioannis, Series Editor, Kumar Shukla, Sanjay, Series Editor, Sharma, Anuj, Series Editor, Kumar, Nagesh, Series Editor, Wang, Chien Ming, Series Editor, Cui, Zhen-Dong, Series Editor, Lu, Xinzheng, Series Editor, and Jeon, Han-Yong, editor
- Published
- 2025
- Full Text
- View/download PDF
309. Optimal Scheduling of Virtual Power Plant Based on Information Gap Decision Theory and Demand Response
- Author
-
Jin, Xurong, Yin, Jiang, Yang, Guohua, Li, Wei, Wang, Guobin, Wang, Lele, Yang, Na, Zhou, Xuenian, Yang, Qingxin, editor, and Li, Jian, editor
- Published
- 2025
- Full Text
- View/download PDF
310. Rotor Position Estimation Algorithm Based on Improved Luenberger Observer for PMSM with Hall Sensors
- Author
-
Li, Wei, Zhang, Yang, Yang, Qingxin, editor, and Li, Jian, editor
- Published
- 2025
- Full Text
- View/download PDF
311. Study on Multicolumn Current Imbalance on Energy Absorption of EM Arrester in ±800 kV HVDC System
- Author
-
Li, Wei, Liang, Tao, Yin, Ziqian, Mu, Yaru, Wang, Sen, Han, Yanhua, Zhu, Mingxi, Guo, Jie, Xie, Yan-zhao, Yang, Qingxin, editor, and Li, Jian, editor
- Published
- 2025
- Full Text
- View/download PDF
312. Optimization Scheduling of Virtual Power Plant Alliances Based on Cooperative Game Theory
- Author
-
Jin, Xurong, Zhou, Xuenian, Li, Wei, Wang, Guobin, Wang, Lele, Yang, Guohua, Yang, Na, Yin, Jiang, Yang, Qingxin, editor, and Li, Jian, editor
- Published
- 2025
- Full Text
- View/download PDF
313. Research on Train Schedule Under the Flexible Train Composition Mode with Online Coupling/Decoupling for Y-Shaped Urban Rail Transit Lines
- Author
-
Huang, Jiaqi, Li, Wei, Luo, Qin, Meng, Lingyun, editor, Qian, Yongsheng, editor, Bai, Yun, editor, Lv, Bin, editor, and Tang, Yuanjie, editor
- Published
- 2025
- Full Text
- View/download PDF
314. Double-Mix-Net: A Multimodal Music Emotion Recognition Network with Multi-layer Feature Mixing
- Author
-
Li, Peilin, Chen, Kairan, Wei, Weixin, Zhao, Jiahao, Li, Wei, Qian, Kun, editor, Wang, Xin, editor, Meng, Qinglin, editor, and Chen, Mingzhi, editor
- Published
- 2025
- Full Text
- View/download PDF
315. Multiband Kick Drum Separation with Genre-Guided Optimization
- Author
-
Li, Si, Jiang, Yuanfeng, Wang, Zhaowen, Qian, Jiale, Li, Wei, Qian, Kun, editor, Wang, Xin, editor, Meng, Qinglin, editor, and Chen, Mingzhi, editor
- Published
- 2025
- Full Text
- View/download PDF
316. DiffTimb: Diffusion Models for Many-to-Many Timbre Transfer
- Author
-
Gan, Xu, Wu, Yifei, Liu, Xinlu, Duan, Ruilei, Li, Wei, Qian, Kun, editor, Wang, Xin, editor, Meng, Qinglin, editor, and Chen, Mingzhi, editor
- Published
- 2025
- Full Text
- View/download PDF
317. XBeat: A Hybrid CNN-Transformer Model for Beat and Downbeat Tracking
- Author
-
Wu, YiFei, Liu, Xinlu, Gan, Xu, Li, Wei, Qian, Kun, editor, Wang, Xin, editor, Meng, Qinglin, editor, and Chen, Mingzhi, editor
- Published
- 2025
- Full Text
- View/download PDF
318. A Holistic Evaluation of Piano Sound Quality
- Author
-
Zhou, Monan, Wu, Shangda, Ji, Shaohua, Li, Zijin, Li, Wei, Qian, Kun, editor, Wang, Xin, editor, Meng, Qinglin, editor, and Chen, Mingzhi, editor
- Published
- 2025
- Full Text
- View/download PDF
319. GAN-Diffusion Relay Model: Advancing Semantic Image Synthesis
- Author
-
Jia, Jinyin, Yang, Jun, Fan, Anfei, Chen, Junfan, Cao, Peng, Zhang, Chiyu, Li, Wei, Lin, Zhouchen, editor, Cheng, Ming-Ming, editor, He, Ran, editor, Ubul, Kurban, editor, Silamu, Wushouer, editor, Zha, Hongbin, editor, Zhou, Jie, editor, and Liu, Cheng-Lin, editor
- Published
- 2025
- Full Text
- View/download PDF
320. HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation
- Author
-
Guan, Shanyan, Ge, Yanhao, Tai, Ying, Yang, Jian, Li, Wei, You, Mingyu, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
321. MVSGaussian: Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo
- Author
-
Liu, Tianqi, Wang, Guangcong, Hu, Shoukang, Shen, Liao, Ye, Xinyi, Zang, Yuhang, Cao, Zhiguo, Li, Wei, Liu, Ziwei, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
322. CMD: A Cross Mechanism Domain Adaptation Dataset for 3D Object Detection
- Author
-
Deng, Jinhao, Ye, Wei, Wu, Hai, Huang, Xun, Xia, Qiming, Li, Xin, Fang, Jin, Li, Wei, Wen, Chenglu, Wang, Cheng, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
323. FoundaBench: Evaluating Chinese Fundamental Knowledge Capabilities of Large Language Models
- Author
-
Li, Wei, Ma, Ren, Wu, Jiang, Gu, Chenya, Peng, Jiahui, Len, Jinyang, Zhang, Songyang, Yan, Hang, Lin, Dahua, and He, Conghui
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence - Abstract
In the burgeoning field of large language models (LLMs), the assessment of fundamental knowledge remains a critical challenge, particularly for models tailored to Chinese language and culture. This paper introduces FoundaBench, a pioneering benchmark designed to rigorously evaluate the fundamental knowledge capabilities of Chinese LLMs. FoundaBench encompasses a diverse array of 3354 multiple-choice questions across common sense and K-12 educational subjects, meticulously curated to reflect the breadth and depth of everyday and academic knowledge. We present an extensive evaluation of 12 state-of-the-art LLMs using FoundaBench, employing both traditional assessment methods and our CircularEval protocol to mitigate potential biases in model responses. Our results highlight the superior performance of models pre-trained on Chinese corpora, and reveal a significant disparity between models' reasoning and memory recall capabilities. The insights gleaned from FoundaBench evaluations set a new standard for understanding the fundamental knowledge of LLMs, providing a robust framework for future advancements in the field.
- Published
- 2024
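The CircularEval protocol mentioned in the FoundaBench abstract can be illustrated with a short sketch: the answer options of a multiple-choice question are circularly rotated, the model is queried once per rotation, and credit is given only if every rotation is answered correctly, which penalizes position bias. The function names and the `ask_model` callback below are illustrative assumptions, not the paper's actual code.

```python
def circular_eval(question, options, correct_index, ask_model):
    """Return True only if the model picks the correct option under
    every circular rotation of the option list."""
    n = len(options)
    for shift in range(n):
        rotated = options[shift:] + options[:shift]   # rotate the options
        target = (correct_index - shift) % n          # where the answer moved
        predicted = ask_model(question, rotated)      # model returns an index
        if predicted != target:
            return False                              # fails on any rotation
    return True

# A toy model that answers by content (position-invariant) passes;
# a model that always picks the first option does not.
answer_by_content = lambda q, opts: opts.index("Paris")
always_first = lambda q, opts: 0
print(circular_eval("Capital of France?", ["Paris", "Rome", "Oslo", "Bern"],
                    0, answer_by_content))  # True
print(circular_eval("Capital of France?", ["Paris", "Rome", "Oslo", "Bern"],
                    0, always_first))       # False
```

The toy example shows why the abstract describes CircularEval as a bias-mitigation device: only answers stable under option reordering count as knowledge.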
324. How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites
- Author
-
Chen, Zhe, Wang, Weiyun, Tian, Hao, Ye, Shenglong, Gao, Zhangwei, Cui, Erfei, Tong, Wenwen, Hu, Kongzhi, Luo, Jiapeng, Ma, Zheng, Ma, Ji, Wang, Jiaqi, Dong, Xiaoyi, Yan, Hang, Guo, Hewei, He, Conghui, Shi, Botian, Jin, Zhenjiang, Xu, Chao, Wang, Bin, Wei, Xingjian, Li, Wei, Zhang, Wenjian, Zhang, Bo, Cai, Pinlong, Wen, Licheng, Yan, Xiangchao, Dou, Min, Lu, Lewei, Zhu, Xizhou, Lu, Tong, Lin, Dahua, Qiao, Yu, Dai, Jifeng, and Wang, Wenhai
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
In this report, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM) that bridges the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements: (1) Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model -- InternViT-6B, boosting its visual understanding capabilities and allowing it to be transferred and reused across different LLMs. (2) Dynamic High-Resolution: we divide images into 1 to 40 tiles of 448$\times$448 pixels according to the aspect ratio and resolution of the input images, supporting inputs up to 4K resolution. (3) High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset that covers common scenes and document images, annotated with English and Chinese question-answer pairs, significantly enhancing performance in OCR- and Chinese-related tasks. We evaluate InternVL 1.5 through a series of benchmarks and comparative studies. Compared to both open-source and proprietary models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 benchmarks. Code has been released at https://github.com/OpenGVLab/InternVL., Comment: Technical report
- Published
- 2024
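The "Dynamic High-Resolution" idea in the InternVL 1.5 abstract can be sketched as a grid-selection problem: pick a cols x rows grid of 448x448 tiles, with at most 40 tiles total, whose aspect ratio best matches the input image, then resize the image to fill that grid. The selection rule below (minimize aspect-ratio mismatch, prefer more tiles on ties) is a simplified illustration, not InternVL's exact algorithm.

```python
TILE = 448
MAX_TILES = 40

def choose_grid(width, height, max_tiles=MAX_TILES):
    """Pick a (cols, rows) tile grid with cols*rows <= max_tiles whose
    aspect ratio cols/rows is closest to the image's width/height;
    on ties, prefer the grid with more tiles (finer resolution)."""
    aspect = width / height
    candidates = [(c, r) for c in range(1, max_tiles + 1)
                  for r in range(1, max_tiles + 1) if c * r <= max_tiles]
    return min(candidates,
               key=lambda cr: (abs(cr[0] / cr[1] - aspect), -cr[0] * cr[1]))

cols, rows = choose_grid(1920, 1080)    # a 16:9 input
print(cols, rows, cols * rows)          # chosen grid and tile count (<= 40)
print(cols * TILE, rows * TILE)         # resized canvas size in pixels
```

With this rule a 16:9 image lands on a near-16:9 grid that still respects the 40-tile budget, which matches the abstract's "1 to 40 tiles ... according to the aspect ratio and resolution" description.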
325. BezierFormer: A Unified Architecture for 2D and 3D Lane Detection
- Author
-
Dong, Zhiwei, Zhu, Xi, Cao, Xiya, Ding, Ran, Li, Wei, Zhou, Caifa, Wang, Yongliang, and Liu, Qiangbo
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Lane detection has made significant progress in recent years, but there is no unified architecture for its two sub-tasks: 2D lane detection and 3D lane detection. To fill this gap, we introduce BézierFormer, a unified 2D and 3D lane detection architecture based on a Bézier-curve lane representation. BézierFormer formulates queries as Bézier control points and incorporates a novel Bézier curve attention mechanism, which enables comprehensive and accurate feature extraction for slender lane curves by sampling and fusing multiple reference points on each curve. In addition, we propose a novel Chamfer IoU-based loss that is better suited to Bézier control-point regression. The state-of-the-art performance of BézierFormer on widely used 2D and 3D lane detection benchmarks verifies its effectiveness and motivates further exploration., Comment: ICME 2024, 11 pages, 8 figures
- Published
- 2024
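The Bézier-curve lane representation in the BezierFormer abstract can be made concrete with De Casteljau's algorithm: a lane is stored as a handful of control points, and any point on the curve is recovered by repeated linear interpolation. This is a generic sketch of the representation, not BezierFormer's code; the sample lane coordinates are made up for illustration.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1].

    control_points: list of (x, y) tuples defining the curve.
    """
    pts = list(control_points)
    while len(pts) > 1:
        # One round of linear interpolation between consecutive points.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# A cubic lane described by 4 control points; sample it densely to render
# the full slender curve from just those 4 numbers per coordinate.
lane = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
samples = [de_casteljau(lane, i / 10) for i in range(11)]
print(samples[0], samples[-1])  # endpoints equal first/last control points
```

The compactness shown here (a whole curve from a few control points) is what makes control points a natural query format for a detection transformer.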
326. Double Magnon-Roton Excitations in the Triangular-Lattice Spin Supersolid
- Author
-
Gao, Yuan, Zhang, Chuandi, Xiang, Junsen, Yu, Dehong, Lu, Xingye, Sun, Peijie, Jin, Wentao, Su, Gang, and Li, Wei
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
Supersolid is an exotic quantum state of matter that spontaneously hosts the features of both solid and superfluid, which breaks the translation and U(1) gauge symmetries. Here we study the spin dynamics in the triangular-lattice compound Na$_2$BaCo(PO$_4$)$_2$, which is revealed in [Xiang et al., Nature 625, 270-275 (2024)] as a quantum magnetic analog of supersolid. We simulate the easy-axis Heisenberg model with tensor network approach and uncover unique dynamic traits. These features are manifested in two branches of excitations that can be associated with the spin solidity and superfluidity, respectively. One branch contains the U(1) Goldstone and roton modes, while the other comprises pseudo-Goldstone and roton modes. The gapless Goldstone modes of the in-plane superfluid order are confirmed by our inelastic neutron scattering measurements. Together with the evident out-of-plane solid order indicated by the magnetic Bragg peaks, our findings provide spectroscopic evidence for spin supersolidity in this easy-axis antiferromagnet. Akin to the role of phonon-roton modes -- Landau elementary excitations -- in shaping the helium superfluid thermodynamics, the intriguing double magnon-roton dispersion here determines the low-temperature thermodynamics of spin supersolid down to sub-Kelvin regime, explaining the recently observed giant magnetocaloric effect in Na$_2$BaCo(PO$_4$)$_2$., Comment: 8+9 pages, 4+9 figures
- Published
- 2024
- Full Text
- View/download PDF
327. Super-resolution imaging based on active optical intensity interferometry
- Author
-
Liu, Lu-Chuan, Wu, Cheng, Li, Wei, Chen, Yu-Ao, Wilczek, Frank, Shao, Xiao-Peng, Xu, Feihu, Zhang, Qiang, and Pan, Jian-Wei
- Subjects
Physics - Optics, Quantum Physics - Abstract
Long-baseline diffraction-limited optical aperture synthesis by interferometry plays an important role in scientific study and practical applications. In contrast to amplitude (phase) interferometry, intensity interferometry -- which exploits the quantum nature of light to measure the photon bunching effect in thermal light -- is robust against atmospheric turbulence and optical defects. However, a thermal light source typically has a significant divergence angle and a low average photon number per mode, precluding applicability over long ranges. Here, we propose and demonstrate active intensity interferometry for super-resolution imaging over the kilometer range. Our scheme exploits phase-independent multiple laser emitters to produce the thermal illumination and uses an elaborate computational algorithm to reconstruct the image. In outdoor environments, we image two-dimensional millimeter-level targets over 1.36 kilometers at 14 times the diffraction-limited resolution of a single telescope. Long-baseline active intensity interferometry thus promises high-resolution optical imaging and sensing across branches of physics and metrology., Comment: 42 pages, 11 figures
- Published
- 2024
328. High-order harmonic generation from laser induced plasma comprising CdSe/V2O5 Core/Shell quantum dots embedded on MoS2 nanosheets
- Author
-
Konda, Srinivasa Rao, Barik, Puspendu, Singh, Subshash, Mottamchetty, Venkatesh, Srivasthava, Amit, Kim, Vyacheslav V., Ganeev, Rashid A., Guo, Chunlei, and Li, Wei
- Subjects
Physics - Applied Physics - Abstract
Research on the nonlinear optical characteristics of transition metal dichalcogenides in the presence of photoactive particles, plasmonic nanocavities, waveguides, and metamaterials is still in its early stages. This investigation delves into high-order harmonic generation (HHG) from laser-induced plasma of MoS2 nanosheets in the presence of a semiconductor photoactive medium, namely CdSe and CdSe/V2O5 core/shell quantum dots. Our comprehensive findings shed light on the counteractive coupling impact of both bare and passivated quantum dots on MoS2 nanosheets, as evidenced by the emission of higher-order harmonics. Significantly, the intensity of the harmonics and their cut-off were notably enhanced in the MoS2-CdSe and MoS2-V-CdSe configurations compared to pristine MoS2 nanosheets. These advancements hold promise for applications requiring the emission of coherent short-wavelength radiation., Comment: 8 pages, 4 figures
- Published
- 2024
329. Charge transfer mechanism on MoS$_2$ nanosheets in the presence of a semiconductor photoactive media
- Author
-
Konda, Srinivasa Rao, Barik, Puspendu, Singh, Subshash, Mottamchetty, Venkatesh, Srivasthava, Amit, Ganeev, Rashid A., Rao, Soma Venugopal, Guo, Chunlei, and Li, Wei
- Subjects
Physics - Applied Physics - Abstract
Studies of the nonlinear optical (NLO) properties of transition metal dichalcogenides (TMDs) coupled with photoactive particles, plasmonic nanocavities, waveguides, and metamaterials remain in their infancy. This study investigates the third-order NLO properties of MoS$_2$ nanosheets in the presence of a semiconductor photoactive medium. Our extensive studies and the obtained results reveal the counteractive coupling effect of bare and passivated quantum dots on the MoS$_2$ nanosheets, as made evident by the analysis of the NLO processes. The enhanced NLO properties of MoS$_2$ nanosheets functionalized with CdSe and CdSe-V2O5 quantum dots are useful for saturable absorbers in laser applications and for the emission of coherent short-wavelength radiation. The multiphoton-excitation resonance energy transfer mechanism, exploiting remote dipole-dipole coupling and ultrafast charge transfer pathways, emerges as another plausible way to alter the NLO properties of TMDs., Comment: 16 pages, 4 figures
- Published
- 2024
330. Realization of Kagome Kondo lattice
- Author
-
Song, Boqin, Xie, Yuyang, Li, Wei-Jian, Liu, Hui, Zhang, Qinghua, Guo, Jian-gang, Zhao, Lin, Yu, Shun-Li, Zhou, Xingjiang, Chen, Xiaolong, and Ying, Tianping
- Subjects
Condensed Matter - Strongly Correlated Electrons ,Condensed Matter - Materials Science - Abstract
The Kondo lattice, describing a grid of local magnetic moments coupled to itinerant electrons, is a fertile ground for strongly correlated states in condensed matter physics. While the Kagome lattice has long been predicted to host Kondo physics with exotic magnetism and nontrivial topology, no experimental realization has been achieved. Here, we report the discovery of CsCr6Sb6, a van der Waals-like Kagome Kondo lattice featuring extremely flat, isolated bands at the Fermi level (EF) composed entirely of Cr-3d electrons. We observe heavy fermions with an effective mass over 100 times greater than in its vanadium counterpart. We also observe Kondo insulating behavior at an ultra-low carrier density of 10$^{19}$ cm$^{-3}$, as well as dimensionality-induced Kondo breakdown. More interestingly, the frustrated magnetism observed in the bulk gives way to a hidden A-type antiferromagnetic ordering in few-layer samples, in sharp contrast to the usual expectation that magnetism weakens upon thinning. The realization of Kondo physics in the Kagome lattice opens avenues for exploring diverse quantum criticalities in a strongly correlated frustrated system., Comment: 13 pages, 4 figures
- Published
- 2024
331. The Ninth NTIRE 2024 Efficient Super-Resolution Challenge Report
- Author
-
Ren, Bin, Li, Yawei, Mehta, Nancy, Timofte, Radu, Yu, Hongyuan, Wan, Cheng, Hong, Yuxin, Han, Bingnan, Wu, Zhuoyuan, Zou, Yajun, Liu, Yuqing, Li, Jizhe, He, Keji, Fan, Chao, Zhang, Heng, Zhang, Xiaolin, Yin, Xuanwu, Zuo, Kunlong, Liao, Bohao, Xia, Peizhe, Peng, Long, Du, Zhibo, Di, Xin, Li, Wangkai, Wang, Yang, Zhai, Wei, Pei, Renjing, Guo, Jiaming, Xu, Songcen, Cao, Yang, Zha, Zhengjun, Wang, Yan, Liu, Yi, Wang, Qing, Zhang, Gang, Zhang, Liou, Zhao, Shijie, Sun, Long, Pan, Jinshan, Dong, Jiangxin, Tang, Jinhui, Liu, Xin, Yan, Min, Wang, Qian, Zhou, Menghan, Yan, Yiqiang, Liu, Yixuan, Chan, Wensong, Tang, Dehua, Zhou, Dong, Wang, Li, Tian, Lu, Emad, Barsoum, Jia, Bohan, Qiao, Junbo, Zhou, Yunshuai, Zhang, Yun, Li, Wei, Lin, Shaohui, Zhou, Shenglong, Chen, Binbin, Liao, Jincheng, Zhao, Suiyi, Zhang, Zhao, Wang, Bo, Luo, Yan, Wei, Yanyan, Li, Feng, Wang, Mingshen, Guan, Jinhan, Hu, Dehua, Yu, Jiawei, Xu, Qisheng, Sun, Tao, Lan, Long, Xu, Kele, Lin, Xin, Yue, Jingtong, Yang, Lehan, Du, Shiyi, Qi, Lu, Ren, Chao, Han, Zeyu, Wang, Yuhan, Chen, Chaolin, Li, Haobo, Zheng, Mingjun, Yang, Zhongbao, Song, Lianhong, Yan, Xingzhuo, Fu, Minghan, Zhang, Jingyi, Li, Baiang, Zhu, Qi, Xu, Xiaogang, Guo, Dan, Guo, Chunle, Chen, Jiadi, Long, Huanhuan, Duanmu, Chunjiang, Lei, Xiaoyan, Liu, Jie, Jia, Weilin, Cao, Weifeng, Zhang, Wenlong, Mao, Yanyu, Guo, Ruilong, Zhang, Nihao, Pandey, Manoj, Chernozhukov, Maksym, Le, Giang, Cheng, Shuli, Wang, Hongyuan, Wei, Ziyan, Tang, Qingting, Wang, Liejun, Li, Yongming, Guo, Yanhui, Xu, Hao, Khatami-Rizi, Akram, Mahmoudi-Aznaveh, Ahmad, Hsu, Chih-Chung, Lee, Chia-Ming, Chou, Yi-Shiuan, Joshi, Amogh, Akalwadi, Nikhil, Malagi, Sampada, Yashaswini, Palani, Desai, Chaitra, Tabib, Ramesh Ashok, Patil, Ujwala, and Mudenagudi, Uma
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing - Abstract
This paper provides a comprehensive review of the NTIRE 2024 challenge, focusing on efficient single-image super-resolution (ESR) solutions and their outcomes. The task of this challenge is to super-resolve an input image with a magnification factor of x4 based on pairs of low-resolution and corresponding high-resolution images. The primary objective is to develop networks that optimize various aspects such as runtime, parameters, and FLOPs, while still maintaining a peak signal-to-noise ratio (PSNR) of approximately 26.90 dB on the DIV2K_LSDIR_valid dataset and 26.99 dB on the DIV2K_LSDIR_test dataset. In addition, this challenge has 4 tracks: the main track (overall performance), sub-track 1 (runtime), sub-track 2 (FLOPs), and sub-track 3 (parameters). In the main track, all three metrics (i.e., runtime, FLOPs, and parameter count) were considered, and the main-track ranking is calculated as a weighted sum of the scores of all the sub-tracks. In sub-track 1, the practical runtime performance of the submissions was evaluated and used to determine the ranking; in sub-tracks 2 and 3, the rankings were determined by scores computed from the FLOPs and parameter counts, respectively. RLFN is set as the baseline for efficiency measurement. The challenge had 262 registered participants, and 34 teams made valid submissions that together gauge the state of the art in efficient single-image super-resolution. To facilitate reproducibility and enable other researchers to build upon these findings, the code and the pre-trained models of validated solutions are publicly available at https://github.com/Amazingren/NTIRE2024_ESR/., Comment: The report paper of NTIRE2024 Efficient Super-resolution, accepted by CVPRW2024
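The main-track ranking described above, a weighted sum of the sub-track scores, can be sketched as follows; the weights, the score values, and the team names are illustrative assumptions, not the challenge's actual formula:

```python
# Hedged sketch of a weighted-sum main-track ranking. Equal weights and the
# per-team scores below are hypothetical; the challenge report defines the
# real scoring formula.

def main_track_score(sub_scores, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine per-sub-track scores (higher is better) into one main-track score."""
    return sum(w * s for w, s in zip(weights, sub_scores))

# Hypothetical normalized scores per team: (runtime, FLOPs, parameters).
teams = {
    "team_a": (0.9, 0.7, 0.8),
    "team_b": (0.8, 0.9, 0.85),
}
ranking = sorted(teams, key=lambda t: main_track_score(teams[t]), reverse=True)
print(ranking)  # ['team_b', 'team_a']
```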
- Published
- 2024
332. Polar vortex hidden in twisted bilayers of paraelectric SrTiO3
- Author
-
Sha, Haozhi, Zhang, Yixuan, Ma, Yunpeng, Li, Wei, Yang, Wenfeng, Cui, Jizhe, Li, Qian, Huang, Houbing, and Yu, Rong
- Subjects
Physics - Applied Physics ,Condensed Matter - Materials Science - Abstract
Polar topologies, such as vortices and skyrmions, have attracted significant interest due to their unique physical properties and promising applications in high-density memory devices. Currently, most polar vortices are observed in heterostructures containing ferroelectric materials and constrained by substrates. In this study, we unravel arrays of polar vortices formed in twisted freestanding bilayers of SrTiO3, a quantum-paraelectric material. Depth-resolved structures of the bilayers are measured with deep-sub-angstrom resolution and one-picometer accuracy using multislice ptychography, enabling identification of the three-dimensional variations of the polarization topology. Our findings reveal the evolution of the polar vortices in the twisted overlapping layers, demonstrating a reversal of the rotation sense along the depth direction. Twisted freestanding bilayers provide a unique platform for the exploration and modulation of novel polar topologies.
- Published
- 2024
333. InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD
- Author
-
Dong, Xiaoyi, Zhang, Pan, Zang, Yuhang, Cao, Yuhang, Wang, Bin, Ouyang, Linke, Zhang, Songyang, Duan, Haodong, Zhang, Wenwei, Li, Yining, Yan, Hang, Gao, Yang, Chen, Zhe, Zhang, Xinyue, Li, Wei, Li, Jingwen, Wang, Wenhai, Chen, Kai, He, Conghui, Zhang, Xingcheng, Dai, Jifeng, Qiao, Yu, Lin, Dahua, and Wang, Jiaqi
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Computation and Language - Abstract
The Large Vision-Language Model (LVLM) field has seen significant advancements, yet its progression has been hindered by challenges in comprehending fine-grained visual content due to limited resolution. Recent efforts have aimed to enhance the high-resolution understanding capabilities of LVLMs, yet they remain capped at approximately 1500 x 1500 pixels and constrained to a relatively narrow resolution range. This paper presents InternLM-XComposer2-4KHD, a groundbreaking exploration into elevating LVLM resolution capabilities up to 4K HD (3840 x 1600) and beyond. Concurrently, since ultra-high resolution may not be necessary in all scenarios, it supports a wide range of diverse resolutions from 336 pixels to the 4K standard, significantly broadening its scope of applicability. Specifically, this research advances the patch division paradigm by introducing a novel extension: dynamic resolution with automatic patch configuration. It maintains the training image aspect ratios while automatically varying patch counts and configuring layouts based on a pre-trained Vision Transformer (ViT) (336 x 336), leading to dynamic training resolution from 336 pixels to the 4K standard. Our research demonstrates that scaling training resolution up to 4K HD leads to consistent performance enhancements without hitting the ceiling of potential improvements. InternLM-XComposer2-4KHD shows superb capability, matching or even surpassing GPT-4V and Gemini Pro in 10 of the 16 benchmarks. The InternLM-XComposer2-4KHD model series with 7B parameters is publicly available at https://github.com/InternLM/InternLM-XComposer., Comment: Code and models are publicly available at https://github.com/InternLM/InternLM-XComposer
- Published
- 2024
334. LIPT: Latency-aware Image Processing Transformer
- Author
-
Qiao, Junbo, Li, Wei, Xie, Haizhen, Chen, Hanting, Zhou, Yunshuai, Tu, Zhijun, Hu, Jie, and Lin, Shaohui
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Transformers are leading a trend in the field of image processing. Despite the great success that existing lightweight image processing transformers have achieved, they are tailored to FLOPs or parameter reduction rather than practical inference acceleration. In this paper, we present a latency-aware image processing transformer, termed LIPT. We devise the low-latency proportion LIPT block, which substitutes memory-intensive operators with a combination of self-attention and convolutions to achieve a practical speedup. Specifically, we propose a novel non-volatile sparse masking self-attention (NVSM-SA) that utilizes a pre-computed sparse mask to capture contextual information from a larger window with no extra computational overhead. Besides, a high-frequency reparameterization module (HRM) is proposed to make the LIPT block reparameterization-friendly, which improves the model's detail reconstruction capability. Extensive experiments on multiple image processing tasks (e.g., image super-resolution (SR), JPEG artifact reduction, and image denoising) demonstrate the superiority of LIPT in both latency and PSNR. LIPT achieves real-time GPU inference with state-of-the-art performance on multiple image SR benchmarks.
- Published
- 2024
335. Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution
- Author
-
Li, Simiao, Zhang, Yun, Li, Wei, Chen, Hanting, Wang, Wenjia, Jing, Bingyi, Lin, Shaohui, and Hu, Jie
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Knowledge distillation (KD) is a promising yet challenging model compression technique that transfers rich learned representations from a well-performing but cumbersome teacher model to a compact student model. Previous methods for image super-resolution (SR) mostly compare feature maps directly or after standardizing their dimensions with basic algebraic operations (e.g., average, dot product). However, the intrinsic semantic differences among feature maps, caused by the disparate expressive capacity of the two networks, are overlooked. This work presents MiPKD, a multi-granularity mixture-of-priors KD framework, to facilitate efficient SR models through feature mixture in a unified latent space and stochastic network block mixture. Extensive experiments demonstrate the effectiveness of the proposed MiPKD method.
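As background, the baseline comparison the abstract criticizes, standardizing feature-map dimensions with a basic algebraic operation (here a channel average) before an L2 match, can be sketched as follows; this is not MiPKD itself, whose latent-space feature mixture is not reproduced here:

```python
import numpy as np

# Hedged sketch of the baseline KD feature comparison: teacher and student
# feature maps with different channel counts are channel-averaged to a common
# shape, then compared with a mean squared error.

def channel_avg_kd_loss(teacher_feat, student_feat):
    """teacher_feat: (C_t, H, W); student_feat: (C_s, H, W); C_t may differ from C_s."""
    t = teacher_feat.mean(axis=0)  # collapse channels -> (H, W)
    s = student_feat.mean(axis=0)
    return float(((t - s) ** 2).mean())

rng = np.random.default_rng(0)
loss = channel_avg_kd_loss(rng.normal(size=(64, 8, 8)), rng.normal(size=(16, 8, 8)))
```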
- Published
- 2024
336. Make Continual Learning Stronger via C-Flat
- Author
-
Bian, Ang, Li, Wei, Yuan, Hangjie, Yu, Chengrong, Wang, Mang, Zhao, Zixiang, Lu, Aojun, Ji, Pengliang, and Feng, Tao
- Subjects
Computer Science - Machine Learning ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Model generalization upon incrementally acquiring knowledge from sequentially arriving tasks is crucial to tackling the sensitivity-stability dilemma in Continual Learning (CL). Sharpness minimization of the weight-loss landscape, which seeks flat minima lying in neighborhoods of uniformly low loss or smooth gradients, has proven to be a strong training regime for improving model generalization compared with loss-minimization optimizers such as SGD. Yet only a few works have discussed this training regime for CL, showing that a dedicated zeroth-order sharpness optimizer can improve CL performance. In this work, we propose a Continual Flatness (C-Flat) method featuring a flatter loss landscape tailored for CL. C-Flat can be invoked with only one line of code and is plug-and-play with any CL method. We present a general framework applying C-Flat to all CL categories, along with a thorough comparison against loss-minimization optimizers and flat-minima-based CL approaches, showing that our method boosts CL performance in almost all cases. Code is available at https://github.com/WanNaa/C-Flat.
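For context, a generic sharpness-aware (SAM-style) two-step update illustrates the flat-minima training regime the abstract contrasts with plain SGD; this is a hedged sketch of that general family, not C-Flat's actual optimizer (see the linked repository for that):

```python
import numpy as np

# Generic sharpness-aware update: ascend to the worst nearby point within a
# rho-ball, then descend using the gradient evaluated there. Demonstrated on
# a toy quadratic loss L(w) = 0.5 * ||w||^2 (so grad(w) = w).

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp                      # descend with that gradient

w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w, lambda v: v)  # grad of the toy quadratic is the identity
```

On this convex toy problem the iterates shrink toward the flat minimum at the origin; the point of the perturbed gradient only shows up on non-convex landscapes, where it biases training away from sharp minima.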
- Published
- 2024
337. Structural, magnetic and magnetocaloric properties of triangular-lattice transition-metal phosphates
- Author
-
Zhang, Chuandi, Xiang, Junsen, Zhu, Quanliang, Wu, Longfei, Zhang, Shanfeng, Xu, Juping, Yin, Wen, Sun, Peijie, Li, Wei, Su, Gang, and Jin, Wentao
- Subjects
Condensed Matter - Strongly Correlated Electrons ,Condensed Matter - Materials Science - Abstract
The recent discovery of the spin supersolid candidate Na$_2$BaCo(PO$_4$)$_2$ has stimulated considerable research interest in triangular-lattice transition-metal phosphates. Here we report a comprehensive study of the structural, magnetic and magnetocaloric properties of polycrystalline Na$_2$$A$$T$(PO$_4$)$_2$ ($A$ = Ba, Sr; $T$ = Co, Ni, Mn). X-ray and neutron diffraction measurements confirm that Na$_2$Ba$T$(PO$_4$)$_2$ (NB$T$P) crystallizes in a trigonal structure, while Na$_2$Sr$T$(PO$_4$)$_2$ (NS$T$P) forms a monoclinic structure with a slight distortion of the triangular network of $T^{2+}$ ions. The dc magnetization data show that all six compounds order antiferromagnetically below 2 K, and the N\'{e}el temperatures of NS$T$P are consistently higher than those of NB$T$P for $T$ = Co, Ni, and Mn, due to the release of geometrical frustration by the monoclinic distortions. Further magnetocaloric measurements show that trigonal NB$T$P reaches a lower temperature in the quasi-adiabatic demagnetization process and thus performs better in magnetic refrigeration than monoclinic NS$T$P. Our findings highlight the outstanding magnetocaloric performance of the trigonal transition-metal phosphates, and disclose two necessary ingredients for a superior magnetic coolant that can reach ultra-low temperatures: a perfect geometrically frustrated lattice and a small effective spin number associated with the magnetic ions., Comment: 10 Pages, 6 figures, accepted for publication in Physical Review Materials
- Published
- 2024
- Full Text
- View/download PDF
338. IPT-V2: Efficient Image Processing Transformer using Hierarchical Attentions
- Author
-
Tu, Zhijun, Du, Kunpeng, Chen, Hanting, Wang, Hailing, Li, Wei, Hu, Jie, and Wang, Yunhe
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Recent advances have demonstrated the powerful capability of transformer architectures in image restoration. However, our analysis indicates that existing transformer-based methods cannot establish both exact global and local dependencies simultaneously, which are critical for restoring the details and missing content of degraded images. To this end, we present an efficient image processing transformer architecture with hierarchical attentions, called IPT-V2, adopting a focal context self-attention (FCSA) and a global grid self-attention (GGSA) to obtain adequate token interactions in local and global receptive fields. Specifically, FCSA applies the shifted-window mechanism to channel self-attention, which helps capture local context and mutual interaction across channels. GGSA constructs long-range dependencies in the cross-window grid, aggregating global information in the spatial dimension. Moreover, we introduce a structural re-parameterization technique in the feed-forward network to further improve model capability. Extensive experiments demonstrate that our proposed IPT-V2 achieves state-of-the-art results on various image processing tasks, covering denoising, deblurring and deraining, and obtains a much better trade-off between performance and computational complexity than previous methods. Besides, we extend our method to image generation as a latent diffusion backbone, where it significantly outperforms DiTs.
- Published
- 2024
339. Non-Abelian braiding of Fibonacci anyons with a superconducting processor
- Author
-
Xu, Shibo, Sun, Zheng-Zhi, Wang, Ke, Li, Hekang, Zhu, Zitian, Dong, Hang, Deng, Jinfeng, Zhang, Xu, Chen, Jiachen, Wu, Yaozu, Zhang, Chuanyu, Jin, Feitong, Zhu, Xuhao, Gao, Yu, Zhang, Aosai, Wang, Ning, Zou, Yiren, Tan, Ziqi, Shen, Fanhao, Zhong, Jiarun, Bao, Zehang, Li, Weikang, Jiang, Wenjie, Yu, Li-Wei, Song, Zixuan, Zhang, Pengfei, Xiang, Liang, Guo, Qiujiang, Wang, Zhen, Song, Chao, Wang, H., and Deng, Dong-Ling
- Subjects
Quantum Physics - Abstract
Non-Abelian topological orders offer an intriguing path towards fault-tolerant quantum computation, where information can be encoded and manipulated in a topologically protected manner immune to arbitrary local noise and perturbations. However, realizing non-Abelian topologically ordered states is notoriously challenging in both condensed matter and programmable quantum systems, and it was not until recently that signatures of non-Abelian statistics were observed through digital quantum simulation approaches. Despite this exciting progress, none of these works has demonstrated the appropriate type of topological order and the associated non-Abelian anyons whose braidings alone support universal quantum computation. Here, we report the realization of non-Abelian topologically ordered states of the Fibonacci string-net model and demonstrate braidings of Fibonacci anyons featuring universal computational power, with a superconducting quantum processor. We exploit efficient quantum circuits to prepare the desired states and verify their nontrivial topological nature by measuring the topological entanglement entropy. In addition, we create two pairs of Fibonacci anyons and demonstrate their fusion rule and non-Abelian braiding statistics by applying unitary gates to the underlying physical qubits. Our results establish a versatile digital approach to exploring exotic non-Abelian topological states and their associated braiding statistics with current noisy intermediate-scale quantum processors.
- Published
- 2024
- Full Text
- View/download PDF
340. Deriving Neutron Star Equation of State from AdS/QCD
- Author
-
Li, Wei, Wu, Jing-Yi, and Zhang, Kilar
- Subjects
High Energy Physics - Phenomenology ,Astrophysics - High Energy Astrophysical Phenomena ,General Relativity and Quantum Cosmology ,High Energy Physics - Theory - Abstract
Neutron stars are among the main targets for gravitational wave observatories; however, their equation of state is still not well established. Phenomenological models with many parameters are widely used so far, while theoretical models remain less practical. In arXiv:1902.08477, a theoretical equation of state with only one parameter was derived from the Witten-Sakai-Sugimoto model as an application of AdS/QCD, with the pointlike-instanton case taken into consideration. When the tidal deformability constraint from the gravitational-wave event is satisfied, the maximum mass is about 1.7 solar masses. Here we upgrade this model to an instanton gas, with one more variable, the instanton width. This is not naively a free parameter but a function of the chemical potential; thus we end up with a more complicated and accurate model that still has only one adjustable parameter. In this case, we find the maximum mass becomes 1.85 solar masses, an encouraging result for a theoretically derived model., Comment: 10 pages, 8 figures; v2: published version, with parameter sensitivity analysis added in Appendix C
- Published
- 2024
- Full Text
- View/download PDF
341. InternLM2 Technical Report
- Author
-
Cai, Zheng, Cao, Maosong, Chen, Haojiong, Chen, Kai, Chen, Keyu, Chen, Xin, Chen, Xun, Chen, Zehui, Chen, Zhi, Chu, Pei, Dong, Xiaoyi, Duan, Haodong, Fan, Qi, Fei, Zhaoye, Gao, Yang, Ge, Jiaye, Gu, Chenya, Gu, Yuzhe, Gui, Tao, Guo, Aijia, Guo, Qipeng, He, Conghui, Hu, Yingfan, Huang, Ting, Jiang, Tao, Jiao, Penglong, Jin, Zhenjiang, Lei, Zhikai, Li, Jiaxing, Li, Jingwen, Li, Linyang, Li, Shuaibin, Li, Wei, Li, Yining, Liu, Hongwei, Liu, Jiangning, Hong, Jiawei, Liu, Kaiwen, Liu, Kuikun, Liu, Xiaoran, Lv, Chengqi, Lv, Haijun, Lv, Kai, Ma, Li, Ma, Runyuan, Ma, Zerun, Ning, Wenchang, Ouyang, Linke, Qiu, Jiantao, Qu, Yuan, Shang, Fukai, Shao, Yunfan, Song, Demin, Song, Zifan, Sui, Zhihao, Sun, Peng, Sun, Yu, Tang, Huanze, Wang, Bin, Wang, Guoteng, Wang, Jiaqi, Wang, Jiayu, Wang, Rui, Wang, Yudong, Wang, Ziyi, Wei, Xingjian, Weng, Qizhen, Wu, Fan, Xiong, Yingtong, Xu, Chao, Xu, Ruiliang, Yan, Hang, Yan, Yirong, Yang, Xiaogui, Ye, Haochen, Ying, Huaiyuan, Yu, Jia, Yu, Jing, Zang, Yuhang, Zhang, Chuyu, Zhang, Li, Zhang, Pan, Zhang, Peng, Zhang, Ruijie, Zhang, Shuo, Zhang, Songyang, Zhang, Wenjian, Zhang, Wenwei, Zhang, Xingcheng, Zhang, Xinyue, Zhao, Hui, Zhao, Qian, Zhao, Xiaomeng, Zhou, Fengzhe, Zhou, Zaida, Zhuo, Jingming, Zou, Yicheng, Qiu, Xipeng, Qiao, Yu, and Lin, Dahua
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
The evolution of Large Language Models (LLMs) like ChatGPT and GPT-4 has sparked discussions on the advent of Artificial General Intelligence (AGI). However, replicating such advancements in open-source models has been challenging. This paper introduces InternLM2, an open-source LLM that outperforms its predecessors in comprehensive evaluations across 6 dimensions and 30 benchmarks, long-context modeling, and open-ended subjective evaluations through innovative pre-training and optimization techniques. The pre-training process of InternLM2 is meticulously detailed, highlighting the preparation of diverse data types including text, code, and long-context data. InternLM2 efficiently captures long-term dependencies, initially trained on 4k tokens before advancing to 32k tokens in pre-training and fine-tuning stages, exhibiting remarkable performance on the 200k ``Needle-in-a-Haystack" test. InternLM2 is further aligned using Supervised Fine-Tuning (SFT) and a novel Conditional Online Reinforcement Learning from Human Feedback (COOL RLHF) strategy that addresses conflicting human preferences and reward hacking. By releasing InternLM2 models in different training stages and model sizes, we provide the community with insights into the model's evolution.
- Published
- 2024
342. Deep learning-based predictive modelling of transonic flow over an aerofoil
- Author
-
Chen, Li-Wei and Thuerey, Nils
- Subjects
Physics - Fluid Dynamics ,Computer Science - Computational Engineering, Finance, and Science - Abstract
Effectively predicting transonic unsteady flow over an aerofoil poses inherent challenges. In this study, we harness the power of deep neural network (DNN) models using the attention U-Net architecture. Through efficient training of these models, we achieve the capability to capture the complexities of transonic and unsteady flow dynamics at high resolution, even when faced with previously unseen conditions. We demonstrate that by leveraging the differentiability inherent in neural network representations, our approach provides a framework for assessing fundamental physical properties via global instability analysis. This integration bridges deep neural network models and traditional modal analysis, offering valuable insights into transonic flow dynamics and enhancing the interpretability of neural network models in flowfield diagnostics.
- Published
- 2024
343. CodeS: Natural Language to Code Repository via Multi-Layer Sketch
- Author
-
Zan, Daoguang, Yu, Ailun, Liu, Wei, Chen, Dong, Shen, Bo, Li, Wei, Yao, Yafen, Gong, Yongshun, Chen, Xiaolin, Guan, Bei, Yang, Zhiguang, Wang, Yongji, Wang, Qianxiang, and Cui, Lizhen
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Software Engineering - Abstract
The impressive performance of large language models (LLMs) on code-related tasks has shown the potential of fully automated software development. In light of this, we introduce a new software engineering task, namely Natural Language to code Repository (NL2Repo). This task aims to generate an entire code repository from its natural language requirements. To address this task, we propose a simple yet effective framework CodeS, which decomposes NL2Repo into multiple sub-tasks by a multi-layer sketch. Specifically, CodeS includes three modules: RepoSketcher, FileSketcher, and SketchFiller. RepoSketcher first generates a repository's directory structure for given requirements; FileSketcher then generates a file sketch for each file in the generated structure; SketchFiller finally fills in the details for each function in the generated file sketch. To rigorously assess CodeS on the NL2Repo task, we carry out evaluations through both automated benchmarking and manual feedback analysis. For benchmark-based evaluation, we craft a repository-oriented benchmark, SketchEval, and design an evaluation metric, SketchBLEU. For feedback-based evaluation, we develop a VSCode plugin for CodeS and engage 30 participants in conducting empirical studies. Extensive experiments prove the effectiveness and practicality of CodeS on the NL2Repo task., Comment: https://github.com/NL2Code/CodeS
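The three-stage decomposition described above can be sketched as a pipeline; `call_llm` and the module functions below are hypothetical stand-ins for illustration, not the repository's actual API:

```python
# Hedged schematic of the CodeS decomposition: RepoSketcher -> FileSketcher ->
# SketchFiller. A real system would replace `call_llm` with an LLM completion
# call and use the prompts from the linked repository.

def call_llm(prompt):
    """Stand-in for an LLM completion API; echoes a tagged placeholder."""
    return f"<completion for: {prompt[:30]}...>"

def repo_sketcher(requirements):
    return call_llm(f"Generate a directory structure for: {requirements}")

def file_sketcher(requirements, structure, path):
    return call_llm(f"Sketch file {path} in {structure} for: {requirements}")

def sketch_filler(file_sketch):
    return call_llm(f"Fill in function bodies for: {file_sketch}")

def codes_pipeline(requirements, paths):
    structure = repo_sketcher(requirements)
    return {p: sketch_filler(file_sketcher(requirements, structure, p)) for p in paths}

repo = codes_pipeline("a CLI todo app", ["main.py", "storage.py"])
```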
- Published
- 2024
344. Distilling Semantic Priors from SAM to Efficient Image Restoration Models
- Author
-
Zhang, Quan, Liu, Xiaoyu, Li, Wei, Chen, Hanting, Liu, Junchao, Hu, Jie, Xiong, Zhiwei, Yuan, Chun, and Wang, Yunhe
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
In image restoration (IR), leveraging semantic priors from segmentation models has been a common approach to improving performance. The recent segment anything model (SAM) has emerged as a powerful tool for extracting advanced semantic priors to enhance IR tasks. However, the computational cost of SAM is prohibitive for IR compared to existing, smaller IR models, and incorporating it to extract semantic priors considerably hampers inference efficiency. To address this issue, we propose a general framework to distill SAM's semantic knowledge into existing IR models without interfering with their inference process. Specifically, the framework consists of a semantic priors fusion (SPF) scheme and a semantic priors distillation (SPD) scheme. SPF fuses the restored image predicted by the original IR model with the semantic mask predicted by SAM to obtain a refined restored image. SPD leverages self-distillation to transfer the fused semantic priors and boost the performance of the original IR models. Additionally, we design a semantic-guided relation (SGR) module for SPD, which enforces consistency of the semantic feature representation space to fully distill the priors. We demonstrate the effectiveness of our framework across multiple IR models and tasks, including deraining, deblurring, and denoising.
- Published
- 2024
345. pyKCN: A Python Tool for Bridging Scientific Knowledge
- Author
-
Lu, Zhenyuan, Li, Wei, Ozek, Burcu, Zhou, Haozhou, Radhakrishnan, Srinivasan, and Kamarthi, Sagar
- Subjects
Computer Science - Digital Libraries - Abstract
The study of research trends is pivotal for understanding scientific development on specific topics. Traditionally, this involves keyword analysis within scholarly literature, yet comprehensive tools for such analysis are scarce, especially ones capable of parsing large datasets with precision. pyKCN, a Python toolkit, addresses this gap by automating keyword cleaning, extraction and trend analysis for extensive academic corpora. It is equipped with modules for text processing, deduplication, extraction, and advanced keyword co-occurrence analysis, providing a granular view of research trends. The toolkit stands out by enabling researchers to visualize keyword relationships, thereby identifying seminal works and emerging trends. Its application spans diverse domains, enhancing scholars' capacity to understand developments within their fields. The implications of using pyKCN are significant: it offers an empirical basis for predicting research trends, which can inform funding directions, policy-making, and academic curricula. The source code and details can be found at: https://github.com/zhenyuanlu/pyKCN
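A minimal illustration of the keyword co-occurrence analysis that pyKCN automates, counting how often keyword pairs appear together across papers, using only the standard library; pyKCN's real pipeline adds cleaning, deduplication, and trend analysis on top of this:

```python
from collections import Counter
from itertools import combinations

# Build a keyword co-occurrence network: each paper contributes one count to
# every unordered pair of its (deduplicated) keywords.

def cooccurrence(papers):
    """papers: list of keyword lists -> Counter keyed by sorted keyword pairs."""
    pairs = Counter()
    for kws in papers:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

papers = [
    ["deep learning", "super-resolution", "distillation"],
    ["deep learning", "super-resolution"],
]
net = cooccurrence(papers)
print(net[("deep learning", "super-resolution")])  # 2
```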
- Published
- 2024
346. IS-Fusion: Instance-Scene Collaborative Fusion for Multimodal 3D Object Detection
- Author
-
Yin, Junbo, Shen, Jianbing, Chen, Runnan, Li, Wei, Yang, Ruigang, Frossard, Pascal, and Wang, Wenguan
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Bird's eye view (BEV) representation has emerged as a dominant solution for describing 3D space in autonomous driving scenarios. However, objects in the BEV representation typically exhibit small sizes, and the associated point cloud context is inherently sparse, which leads to great challenges for reliable 3D perception. In this paper, we propose IS-Fusion, an innovative multimodal fusion framework that jointly captures instance- and scene-level contextual information. IS-Fusion essentially differs from existing approaches that focus only on BEV scene-level fusion by explicitly incorporating instance-level multimodal information, thus facilitating instance-centric tasks such as 3D object detection. It comprises a Hierarchical Scene Fusion (HSF) module and an Instance-Guided Fusion (IGF) module. HSF applies Point-to-Grid and Grid-to-Region transformers to capture the multimodal scene context at different granularities. IGF mines instance candidates, explores their relationships, and aggregates the local multimodal context for each instance. These instances then serve as guidance to enhance the scene feature and yield an instance-aware BEV representation. On the challenging nuScenes benchmark, IS-Fusion outperforms all published multimodal works to date. Code is available at: https://github.com/yinjunbo/IS-Fusion., Comment: Accepted to CVPR 2024; Code: https://github.com/yinjunbo/IS-Fusion
- Published
- 2024
347. Benchmarking Chinese Commonsense Reasoning of LLMs: From Chinese-Specifics to Reasoning-Memorization Correlations
- Author
-
Sun, Jiaxing, Huang, Weiquan, Wu, Jiang, Gu, Chenya, Li, Wei, Zhang, Songyang, Yan, Hang, and He, Conghui
- Subjects
Computer Science - Computation and Language - Abstract
We introduce CHARM, the first benchmark for comprehensively and in-depth evaluating the commonsense reasoning ability of large language models (LLMs) in Chinese, covering both globally known and Chinese-specific commonsense. We evaluated 7 English and 12 Chinese-oriented LLMs on CHARM, employing 5 representative prompt strategies for improving LLMs' reasoning ability, such as Chain-of-Thought. Our findings indicate that the LLM's language orientation and the task's domain influence the effectiveness of the prompt strategy, which enriches previous research findings. We built closely interconnected reasoning and memorization tasks, and found that some LLMs struggle with memorizing Chinese commonsense, which affects their reasoning ability, while others show differences in reasoning despite similar memorization performance. We also evaluated the LLMs' memorization-independent reasoning abilities and analyzed the typical errors. Our study precisely identifies the LLMs' strengths and weaknesses, providing a clear direction for optimization. It can also serve as a reference for studies in other fields. We will release CHARM at https://github.com/opendatalab/CHARM ., Comment: Equal contribution: Jiaxing Sun, Weiquan Huang, Jiang Wu; Corresponding author: Conghui He
- Published
- 2024
- Full Text
- View/download PDF
348. Adaptive Finite Element Interpolated Neural Networks
- Author
-
Badia, Santiago, Li, Wei, and Martín, Alberto F.
- Subjects
Mathematics - Numerical Analysis - Abstract
The use of neural networks to approximate partial differential equations (PDEs) has gained significant attention in recent years. However, the approximation of PDEs with localised phenomena, e.g., sharp gradients and singularities, remains a challenge, due to ill-defined cost functions in terms of pointwise residual sampling or poor numerical integration. In this work, we introduce $h$-adaptive finite element interpolated neural networks. The method relies on the interpolation of a neural network onto a finite element space that is gradually adapted to the solution during the training process, so as to equidistribute an a posteriori error indicator. The use of adaptive interpolation is essential in preserving the non-linear approximation capabilities of neural networks, so that problems with localised features can be tackled effectively. The training relies on gradient-based optimisation of a loss function based on the (dual) norm of the finite element residual of the interpolated neural network. Automatic mesh adaptation (i.e., refinement and coarsening) is performed based on a posteriori error indicators until a prescribed level of accuracy is reached. The proposed methodology can be applied to indefinite and nonsymmetric problems. We carry out a detailed numerical analysis of the scheme and prove several a priori error estimates, depending on the expressiveness of the neural network compared to the interpolation mesh. Our numerical experiments confirm the effectiveness of the method in capturing sharp gradients and singularities in forward PDE problems, both in 2D and 3D scenarios. We also show that the proposed preconditioning strategy (i.e., using a dual norm of the residual as the cost function) enhances training robustness and accelerates convergence.
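The training objective described above, the (dual) norm of the finite element residual of the interpolated network, can be written compactly in notation assumed here rather than quoted from the paper:

$$\min_{\theta}\ \mathcal{L}(\theta) = \big\| \mathcal{R}_h\big(\pi_h u_\theta\big) \big\|_{V_h'},$$

where $u_\theta$ is the neural network with parameters $\theta$, $\pi_h$ is the interpolation operator onto the adaptively refined finite element space $V_h$, and $\mathcal{R}_h$ maps $V_h$ into its dual $V_h'$ as the discrete PDE residual. Mesh adaptation periodically refines or coarsens $V_h$ so that the a posteriori error indicator is equidistributed across elements.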
- Published
- 2024
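The mark-and-bisect equidistribution step at the heart of the adaptive loop can be illustrated in one dimension, under simplifying assumptions: the "neural network" is replaced by a fixed smooth function with a sharp internal layer, the interpolation is piecewise linear on the mesh nodes, and the per-element indicator is a midpoint estimate of the interpolation error rather than the paper's finite element residual norm. Coarsening is also omitted.

```python
import numpy as np

# 1D sketch of h-adaptive refinement driven by per-element indicators.
# Illustrative only: the real method interpolates a trained network and
# uses (dual) residual norms; here u is a fixed function with a steep
# layer and the indicator is a midpoint interpolation-error estimate.

def u(x):
    return np.tanh(50.0 * (x - 0.5))  # steep internal layer at x = 0.5

def element_indicators(nodes):
    """|u(mid) - I_h u(mid)| * h per element of the piecewise-linear mesh."""
    left, right = nodes[:-1], nodes[1:]
    mid = 0.5 * (left + right)
    interp_mid = 0.5 * (u(left) + u(right))  # linear interpolant at midpoint
    return np.abs(u(mid) - interp_mid) * (right - left)

def refine(nodes, frac=0.3):
    """Bisect the elements carrying the largest indicators (greedy marking)."""
    eta = element_indicators(nodes)
    n_mark = max(1, int(frac * len(eta)))
    marked = np.argsort(eta)[-n_mark:]
    mids = 0.5 * (nodes[marked] + nodes[marked + 1])
    return np.sort(np.concatenate([nodes, mids]))

nodes = np.linspace(0.0, 1.0, 11)
for _ in range(5):  # adapt: indicators concentrate new nodes in the layer
    nodes = refine(nodes)
```

Repeated marking concentrates nodes inside the layer, which is the equidistribution behaviour the abstract describes.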
349. RAR: Retrieving And Ranking Augmented MLLMs for Visual Recognition
- Author
-
Liu, Ziyu, Sun, Zeyi, Zang, Yuhang, Li, Wei, Zhang, Pan, Dong, Xiaoyi, Xiong, Yuanjun, Lin, Dahua, and Wang, Jiaqi
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
CLIP (Contrastive Language-Image Pre-training) uses contrastive learning on noisy image-text pairs to excel at recognizing a wide array of candidates, yet its focus on broad associations hinders its precision in distinguishing subtle differences among fine-grained items. Conversely, Multimodal Large Language Models (MLLMs) excel at classifying fine-grained categories, thanks to their substantial knowledge from pre-training on web-level corpora. However, the performance of MLLMs declines with an increase in category numbers, primarily due to growing complexity and the constraints of a limited context window. To synergize the strengths of both approaches and enhance the few-shot/zero-shot recognition abilities for datasets characterized by extensive and fine-grained vocabularies, this paper introduces RAR, a Retrieving And Ranking augmented method for MLLMs. We initially establish a multi-modal retriever based on CLIP to create and store explicit memory for different categories beyond the immediate context window. During inference, RAR retrieves the top-k similar results from the memory and uses MLLMs to rank and make the final predictions. Our proposed approach not only addresses the inherent limitations in fine-grained recognition but also preserves the model's comprehensive knowledge base, significantly boosting accuracy across a range of vision-language recognition tasks. Notably, our approach demonstrates a significant improvement in performance on 5 fine-grained visual recognition benchmarks, 11 few-shot image recognition datasets, and 2 object detection datasets under the zero-shot recognition setting., Comment: Project: https://github.com/Liuziyu77/RAR
- Published
- 2024
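The retrieve-then-rank pipeline in the abstract can be sketched schematically. In the paper the memory holds CLIP embeddings and the ranking step prompts an MLLM; in this assumed sketch the embeddings are plain vectors and `mllm_rank` is a trivial placeholder for the MLLM call.

```python
import numpy as np

# Schematic retrieve-and-rank: cosine retrieval over an explicit category
# memory, then a (placeholder) MLLM re-ranking of the shortlist.

def retrieve_topk(query, memory, names, k=3):
    """Return the k category names most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    sims = m @ q
    order = np.argsort(sims)[::-1][:k]
    return [names[i] for i in order]

def mllm_rank(image_repr, candidates):
    """Placeholder for the MLLM ranking step: keep retrieval order as-is."""
    return candidates  # a real system would prompt the MLLM with the shortlist

rng = np.random.default_rng(0)
memory = rng.normal(size=(100, 64))              # explicit memory: 100 categories
names = [f"class_{i}" for i in range(100)]
query = memory[42] + 0.01 * rng.normal(size=64)  # near-duplicate of class_42
shortlist = retrieve_topk(query, memory, names, k=5)
prediction = mllm_rank(query, shortlist)[0]
```

The point of the design is that retrieval keeps the candidate set inside the MLLM's context window regardless of how large the full vocabulary is.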
350. Thermal Tensor Network Approach for Spin-Lattice Relaxation in Quantum Magnets
- Author
-
Xi, Ning, Gao, Yuan, Li, Chengchen, Liang, Shuang, Yu, Rong, Wang, Xiaoqun, and Li, Wei
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
Low-dimensional quantum magnets, particularly those with strong spin frustration, are characterized by their notable spin fluctuations. Nuclear magnetic resonance (NMR) serves as a sensitive probe of low-energy fluctuations that offers valuable insight into rich magnetic phases and emergent phenomena in quantum magnets. Although experimentally accessible, the numerical simulation of NMR relaxation rates, specifically the spin-lattice relaxation rate $1/T_1$, remains a significant challenge. Analytic continuation based on Monte Carlo calculations is hampered by the notorious sign problem for frustrated systems, and real-time simulations incur significant costs to capture low-energy fluctuations. Here we propose computing the relaxation rate using thermal tensor networks (TTNs), which provides a streamlined approach by calculating its imaginary-time proxy. We showcase the accuracy and versatility of our methodology by applying it to one-dimensional spin chains and two-dimensional lattices, where we find that the critical exponents $\eta$ and $z\nu$ can be extracted from the low-temperature scalings of the simulated $1/T_1$ near quantum critical points. Our results also provide insights into low-dimensional and frustrated magnetic materials, elucidating universal scaling behaviors in the Ising chain compound CoNb$_2$O$_6$ and revealing the renormalized classical behaviors in the triangular-lattice antiferromagnet Ba$_8$CoNb$_6$O$_{24}$. We apply the approach to an effective model of the family of frustrated magnets AYbCh$_2$ (A = Na, K, Cs, and Ch = O, S, Se), and find dramatic changes from the spin-ordered to the proposed quantum-spin-liquid phase. Overall, with high reliability and accuracy, the TTN methodology offers a systematic strategy for studying the intricate dynamics observed across a broad spectrum of quantum magnets and related fields., Comment: 15 pages, 12 figures
- Published
- 2024
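The "imaginary-time proxy" idea can be illustrated by exact diagonalization on a toy system: a commonly used heuristic estimates $1/T_1$ (up to prefactors and hyperfine form factors) from the local correlator at $\tau = \beta/2$, which thermal tensor networks can evaluate directly without analytic continuation. The 4-site Heisenberg chain below is an assumed illustration, not one of the paper's models.

```python
import numpy as np

# Exact-diagonalization sketch of the imaginary-time proxy for 1/T_1:
# the local correlator G(tau) = Tr[e^{-(beta-tau)H} S_0^+ e^{-tau H} S_0^-]/Z
# evaluated at tau = beta/2. Toy 4-site Heisenberg chain, illustrative only.

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
L = 4

def site_op(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == i else I2)
    return out

# Nearest-neighbour Heisenberg Hamiltonian, diagonalized exactly.
H = sum(site_op(a, i) @ site_op(a, i + 1)
        for i in range(L - 1) for a in (sx, sy, sz))
E, V = np.linalg.eigh(H)

beta = 2.0
Sp, Sm = site_op(sx + 1j * sy, 0), site_op(sx - 1j * sy, 0)
Z = np.exp(-beta * E).sum()

def g_tau(tau):
    """Local imaginary-time correlator G(tau) at site 0."""
    left = V @ np.diag(np.exp(-(beta - tau) * E)) @ V.conj().T
    right = V @ np.diag(np.exp(-tau * E)) @ V.conj().T
    return np.trace(left @ Sp @ right @ Sm).real / Z

proxy = g_tau(beta / 2)  # the beta/2 proxy for 1/T_1 (up to prefactors)
```

In a TTN calculation the same correlator is obtained from the purified thermal state rather than from full diagonalization, which is what makes large systems tractable.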