24,429 results for "He, Wei"
Search Results
2. Towards Distributed Graph Representation Learning
- Author
Zhang, Hanlin, primary, Zhang, Yue, additional, He, Wei, additional, Xu, Yonghui, additional, and Cui, Lizhen, additional
- Published
- 2024
- Full Text
- View/download PDF
3. A Comprehensive Review of the Oversmoothing in Graph Neural Networks
- Author
Zhang, Xu, primary, Xu, Yonghui, additional, He, Wei, additional, Guo, Wei, additional, and Cui, Lizhen, additional
- Published
- 2024
- Full Text
- View/download PDF
4. Real Emotions in Virtual Play: The Impact of Honor of Kings on Players’ Attitudes Toward and Cognition of Historical Figures
- Author
He, Wei, primary and Li, Yue, additional
- Published
- 2024
- Full Text
- View/download PDF
5. Study on Corrosion of Gas Injection Well with Oxygen-Reduced Air Flooding in Gasikule E31 Reservoir
- Author
Cheng, Tao, primary, Hu, Fu-tang, additional, He, Wei-rong, additional, Xing, Zhan-long, additional, Dang, Yang-bin, additional, Yang, Hong-gang, additional, Ma, Sha-sha, additional, and Mao, Xiao-qian, additional
- Published
- 2024
- Full Text
- View/download PDF
6. Research and Application of 5G and Condition Monitoring in Predictive Maintenance of Ironmaking Blast Furnace
- Author
Zhu, Minjie, primary, Gao, Fan, additional, Guo, Lihong, additional, and He, Wei, additional
- Published
- 2024
- Full Text
- View/download PDF
7. Mean Reflected Backward Stochastic Differential Equations Driven by G-Brownian Motion with Double Constraints
- Author
He, Wei and Li, Hanwu
- Subjects
Mathematics - Probability
- Abstract
In this paper, we study backward stochastic differential equations driven by G-Brownian motion with double mean reflections, meaning that the constraints are imposed on the law of the solution. Making full use of the backward Skorokhod problem with two nonlinear reflecting boundaries and fixed-point theory, we establish the existence and uniqueness of solutions. We also consider the case where the coefficients satisfy a non-Lipschitz condition, using a Picard iteration argument for the Y component only. Moreover, we obtain some basic properties, including a new version of the comparison theorem and a connection with a deterministic optimization problem.
- Published
- 2024
8. Identifying every building's function in large-scale urban areas with multi-modality remote-sensing data
- Author
Li, Zhuohong, He, Wei, Li, Jiepan, and Zhang, Hongyan
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Buildings, as fundamental man-made structures in urban environments, serve as crucial indicators for understanding city function zones. Rapid urbanization has created an urgent need to efficiently survey building footprints and functions. In this study, we propose a semi-supervised framework to identify every building's function in large-scale urban areas with multi-modality remote-sensing data. Specifically, optical images, building height, and nighttime-light data are collected to describe the morphological attributes of buildings. Then, the area of interest (AOI) and building masks from volunteered geographic information (VGI) data are collected to form sparsely labeled samples. The multi-modality data and weak labels are then used to train a segmentation model with a semi-supervised strategy. Finally, the results are evaluated against 20,000 validation points and statistical survey reports from the government. The evaluations reveal that the produced function maps achieve an OA of 82% and a Kappa of 71% across 1,616,796 buildings in Shanghai, China. This study has the potential to support large-scale urban management and sustainable urban development. All collected data and produced maps are open access at https://github.com/LiZhuoHong/BuildingMap., Comment: 5 pages, 7 figures, accepted by IGARSS 2024
- Published
- 2024
9. Research Artifacts in Software Engineering Publications: Status and Trends
- Author
Liu, Mugeng, Huang, Xiaolong, He, Wei, Xie, Yibing, Zhang, Jie M., Jing, Xiang, Chen, Zhenpeng, and Ma, Yun
- Subjects
Computer Science - Software Engineering
- Abstract
The Software Engineering (SE) community has been embracing the open science policy and encouraging researchers to disclose artifacts in their publications. However, the status and trends of artifact practice and quality remain unclear, leaving little insight into how they might be improved. In this paper, we present an empirical study characterizing the research artifacts in SE publications. Specifically, we manually collect 1,487 artifacts from all 2,196 papers published in top-tier SE conferences (ASE, FSE, ICSE, and ISSTA) from 2017 to 2022. We investigate the common practices (e.g., URL location and format, storage websites), maintenance activities (e.g., last update time and URL validity), popularity (e.g., the number of stars on GitHub), and quality (e.g., documentation and code smells) of these artifacts. Our analysis reveals a rise in the number of publications providing artifacts. The usage of Zenodo for sharing artifacts has increased significantly. However, artifacts stored on GitHub tend to receive few stars, indicating limited influence on real-world SE applications. We summarize the results and provide suggestions to different stakeholders in conjunction with current guidelines., Comment: Accepted by the Journal of Systems and Software (JSS 2024). Please include JSS in any citations
- Published
- 2024
10. Witnessing Quantum Entanglement Using Resonant Inelastic X-ray Scattering
- Author
Ren, Tianhao, Shen, Yao, TenHuisen, Sophia F. R., Sears, Jennifer, He, Wei, Upton, Mary H., Casa, Diego, Becker, Petra, Mitrano, Matteo, Dean, Mark P. M., and Konik, Robert M.
- Subjects
Condensed Matter - Strongly Correlated Electrons, Quantum Physics
- Abstract
Although entanglement is both a central ingredient in our understanding of quantum many-body systems and an essential resource for quantum technologies, we only have a limited ability to quantify entanglement in real quantum materials. Thus far, entanglement metrology in quantum materials has been limited to measurements involving Hermitian operators, such as the detection of spin entanglement using inelastic neutron scattering. Here, we devise a method to extract the quantum Fisher information (QFI) from non-Hermitian operators and formulate an entanglement witness for resonant inelastic x-ray scattering (RIXS). Our approach is then applied to the model iridate dimer system Ba$_3$CeIr$_2$O$_9$ and used to directly test for entanglement of the electronic orbitals between neighboring Ir sites. We find that entanglement is challenging to detect under standard conditions, but that it could be achieved by analyzing the outgoing x-ray polarization or via specific choices of momentum and energy. Our protocol provides a new handle for entanglement detection, which offers routes to related types of entanglement witness (such as orbitally-resolved measurements) and to the generalization to out-of-equilibrium settings accessed in ultrafast settings., Comment: 10 pages, 8 figures
- Published
- 2024
11. Self-Demos: Eliciting Out-of-Demonstration Generalizability in Large Language Models
- Author
He, Wei, Liu, Shichun, Zhao, Jun, Ding, Yiwen, Lu, Yi, Xi, Zhiheng, Gui, Tao, Zhang, Qi, and Huang, Xuanjing
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
Large language models (LLMs) have shown promising abilities of in-context learning (ICL), adapting swiftly to new tasks with only few-shot demonstrations. However, current few-shot methods heavily depend on high-quality, query-specific demos, which are often lacking. When faced with out-of-demonstration (OOD) queries, methods that rely on hand-crafted demos or external retrievers might fail. To bridge the gap between limited demos and OOD queries, we propose Self-Demos, a novel prompting method that elicits the inherent generalizability in LLMs by query-aware demo generation. The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID. To evaluate the effectiveness of our approach, we manually constructed OOD-Toolset, a dataset in the tool-using scenario with over 300 real-world APIs and 1000 instances, each consisting of three tool-use cases as demos and an OOD query. Thorough experiments on our dataset and two public math benchmarks have shown that our method can outperform state-of-the-art baselines in the OOD setting. Moreover, we conduct a range of analyses to validate Self-Demos's generalization and provide more insights., Comment: Accepted to NAACL 2024 Findings
- Published
- 2024
12. Shear viscosity of quark-gluon plasma at finite temperature and chemical potential and QCD phase transitions
- Author
He, Wei-bo, Shao, Guo-yun, Xie, Chong-long, and Xu, Ren-xin
- Subjects
High Energy Physics - Phenomenology, Nuclear Theory
- Abstract
We explore the shear viscosity of quark-gluon plasma (QGP) across the full QCD phase diagram within the framework of kinetic theory in the relaxation time approximation, based on the $2 \leftrightarrow 2$ elastic scatterings of quark quasiparticles. The temperature- and chemical-potential-dependent masses of the particles, including $u, d, s$ quarks, their antiparticles, and the exchanged mesons, are calculated in the Polyakov-loop extended Nambu--Jona-Lasinio (PNJL) model. The results indicate that, at small chemical potential, the value of $\eta/s$ has a minimum near the Mott dissociation of mesons and increases rapidly on the lower-temperature side of the chiral crossover phase transition. At large chemical potential (high density), $\eta/s$ in the QGP phase is dominated by the temperature, and its value is greatly enhanced at lower temperature. At intermediate temperature and chemical potential near the QCD phase transition, the situation is more complicated: the behavior of $\eta/s$ is influenced by the competition between temperature, density effects, and the QCD phase transition., Comment: 12 pages, 10 figures
- Published
- 2024
13. Learning without Exact Guidance: Updating Large-scale High-resolution Land Cover Maps from Low-resolution Historical Labels
- Author
Li, Zhuohong, He, Wei, Li, Jiepan, Lu, Fangxiao, and Zhang, Hongyan
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
Large-scale high-resolution (HR) land-cover mapping is a vital task to survey the Earth's surface and resolve many challenges facing humanity. However, it is still a non-trivial task hindered by complex ground details, various landforms, and the scarcity of accurate training labels over a wide-span geographic area. In this paper, we propose an efficient, weakly supervised framework (Paraformer) to guide large-scale HR land-cover mapping with easy-access historical land-cover data of low resolution (LR). Specifically, existing land-cover mapping approaches reveal the dominance of CNNs in preserving local ground details but still suffer from insufficient global modeling in various landforms. Therefore, we design a parallel CNN-Transformer feature extractor in Paraformer, consisting of a downsampling-free CNN branch and a Transformer branch, to jointly capture local and global contextual information. Besides, facing the spatial mismatch of training data, a pseudo-label-assisted training (PLAT) module is adopted to reasonably refine LR labels for weakly supervised semantic segmentation of HR images. Experiments on two large-scale datasets demonstrate the superiority of Paraformer over other state-of-the-art methods for automatically updating HR land-cover maps from LR historical labels., Comment: 11 pages, 9 figures, accepted by CVPR 2024
- Published
- 2024
14. The $Z_b$ states as the mixture of the molecular and diquark-anti-diquark components within the effective field theory
- Author
He, Wei, Zhang, De-Shun, and Sun, Zhi-Feng
- Subjects
High Energy Physics - Phenomenology, High Energy Physics - Experiment
- Abstract
In this study, we reconsider the states $Z_b(10610)$ and $Z_b(10650)$ by investigating the presence of diquark-anti-diquark components as well as the hadronic molecule components in the framework of effective field theory. The different masses of pseudoscalar mesons such as $\pi^{0}$, $\eta_{8}$, and $\eta_{0}$, as well as vector mesons like $\rho^{0}$ and $\omega$ violate the OZI rule that is well depicted under the $[U(3)_L\otimes U(3)_R]_{global}\otimes [U(3)_V]_{local}$ symmetry. To account for the contribution of intermediate bosons of heavy masses within the OBE model, we introduce an exponential form factor instead of the commonly used monopole form factor in the past. By solving the coupled-channel Schr\"{o}dinger equation with the Gaussian expansion method, our numerical results indicate that the $Z_b(10610)$ and $Z_b(10650)$ states can be explained as hadronic molecules slightly mixing with diquark-anti-diquark states., Comment: 9 pages, 2 figures
- Published
- 2024
15. Fabrication of PVC Based Composites and Nanocomposites by Mechanical Mixing
- Author
Jiao, Zhiwei, primary and He, Wei, additional
- Published
- 2023
- Full Text
- View/download PDF
16. A Fast Algorithm for Satellite Coverage Window Based on Segmented Dichotomy
- Author
Li, Fusheng, primary, He, Wei, additional, Chao, Tao, additional, Sun, Weibo, additional, and Quan, Shenming, additional
- Published
- 2023
- Full Text
- View/download PDF
17. Exploration and Reflection on the Construction of Embedded C Language Course
- Author
Liu, Danjuan, primary and He, Wei, additional
- Published
- 2023
- Full Text
- View/download PDF
18. Real-Time Adaptive Safety-Critical Control with Gaussian Processes in High-Order Uncertain Models
- Author
Zhang, Yu, Wen, Long, Yao, Xiangtong, Bing, Zhenshan, Kong, Linghuan, He, Wei, and Knoll, Alois
- Subjects
Computer Science - Machine Learning, Electrical Engineering and Systems Science - Systems and Control
- Abstract
This paper presents an adaptive online learning framework for systems with uncertain parameters to ensure safety-critical control in non-stationary environments. Our approach consists of two phases. The initial phase is centered on a novel sparse Gaussian process (GP) framework. We first integrate a forgetting factor to refine a variational sparse GP algorithm, thus enhancing its adaptability. Subsequently, the hyperparameters of the Gaussian model are trained with a specially designed compound kernel, and the model's online inferential capability and computational efficiency are strengthened by updating a solitary inducing point derived from new samples, in conjunction with the learned hyperparameters. In the second phase, we propose a safety filter based on high-order control barrier functions (HOCBFs), synergized with the previously trained learning model. By leveraging the compound kernel from the first phase, we effectively address the inherent limitations of GPs in handling high-dimensional problems for real-time applications. The derived controller ensures a rigorous lower bound on the probability of satisfying the safety specification. Finally, the efficacy of our proposed algorithm is demonstrated through real-time obstacle avoidance experiments executed on both a simulation platform and a real-world 7-DOF robot.
- Published
- 2024
19. DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models
- Author
He, Wei, Han, Kai, Tang, Yehui, Wang, Chengcheng, Yang, Yujie, Guo, Tianyu, and Wang, Yunhe
- Subjects
Computer Science - Computation and Language, Computer Science - Machine Learning
- Abstract
Large language models (LLMs) face a daunting challenge due to the excessive computational and memory requirements of the commonly used Transformer architecture. While the state space model (SSM) is a new type of foundational network architecture offering lower computational complexity, its performance has yet to fully rival that of Transformers. This paper introduces DenseSSM, a novel approach to enhance the flow of hidden information between layers in SSMs. By selectively integrating shallow-layer hidden states into deeper layers, DenseSSM retains fine-grained information crucial for the final output. Despite the dense connections, DenseSSM maintains the training parallelizability and inference efficiency of the original architecture. The proposed method is widely applicable to various SSM types such as RetNet and Mamba. At similar model sizes, DenseSSM achieves significant improvements, exemplified by DenseRetNet outperforming the original RetNet with up to 5% accuracy improvement on public benchmarks. Code is available at https://github.com/WailordHe/DenseSSM
- Published
- 2024
20. Online Efficient Safety-Critical Control for Mobile Robots in Unknown Dynamic Multi-Obstacle Environments
- Author
Zhang, Yu, Tian, Guangyao, Wen, Long, Yao, Xiangtong, Zhang, Liding, Bing, Zhenshan, He, Wei, and Knoll, Alois
- Subjects
Computer Science - Robotics, Computer Science - Artificial Intelligence
- Abstract
This paper proposes a LiDAR-based goal-seeking and exploration framework, addressing the efficiency of online obstacle avoidance in unstructured environments populated with static and moving obstacles. This framework addresses two significant challenges associated with traditional dynamic control barrier functions (D-CBFs): their online construction and the diminished real-time performance caused by utilizing multiple D-CBFs. To tackle the first challenge, the framework's perception component begins with clustering point clouds via the DBSCAN algorithm, followed by encapsulating these clusters with the minimum bounding ellipses (MBEs) algorithm to create elliptical representations. By comparing the current state of MBEs with those stored from previous moments, the differentiation between static and dynamic obstacles is realized, and the Kalman filter is utilized to predict the movements of the latter. Such analysis facilitates the D-CBF's online construction for each MBE. To tackle the second challenge, we introduce buffer zones, generating Type-II D-CBFs online for each identified obstacle. Utilizing these buffer zones as activation areas substantially reduces the number of D-CBFs that need to be activated. Upon entering these buffer zones, the system prioritizes safety, autonomously navigating safe paths, and hence referred to as the exploration mode. Exiting these buffer zones triggers the system's transition to goal-seeking mode. We demonstrate that the system's states under this framework achieve safety and asymptotic stabilization. Experimental results in simulated and real-world environments have validated our framework's capability, allowing a LiDAR-equipped mobile robot to efficiently and safely reach the desired location within dynamic environments containing multiple obstacles.
- Published
- 2024
21. LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration
- Author
Zhao, Jun, Zu, Can, Xu, Hao, Lu, Yi, He, Wei, Ding, Yiwen, Gui, Tao, Zhang, Qi, and Huang, Xuanjing
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
Large language models (LLMs) have demonstrated impressive performance in understanding language and executing complex reasoning tasks. However, LLMs with long context windows have been notorious for their expensive training costs and high inference latency. Even the most advanced models such as GPT-4 and Claude2 often make mistakes when processing inputs of over $100k$ tokens, a phenomenon also known as \textit{lost in the middle}. In this paper, we propose \textsc{LongAgent}, a method based on multi-agent collaboration, which scales LLMs (e.g., LLaMA) to a context of 128K and demonstrates potential superiority in long-text processing compared to GPT-4. In \textsc{LongAgent}, a leader is responsible for understanding user intent and directing team members to acquire information from documents. Due to members' hallucinations, it is non-trivial for a leader to obtain accurate information from the responses of dozens to hundreds of members. To address this, we develop an \textit{inter-member communication} mechanism to resolve response conflicts caused by hallucinations through information sharing. Our experimental results indicate that \textsc{LongAgent} offers a promising alternative for long-text processing. The agent team instantiated with LLaMA-7B achieves significant improvements in tasks such as 128k-long text retrieval, multi-hop question answering, compared to GPT-4.
- Published
- 2024
22. LongHeads: Multi-Head Attention is Secretly a Long Context Processor
- Author
Lu, Yi, Zhou, Xin, He, Wei, Zhao, Jun, Ji, Tao, Gui, Tao, Zhang, Qi, and Huang, Xuanjing
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
Large language models (LLMs) have achieved impressive performance in numerous domains but often struggle to process lengthy inputs effectively and efficiently due to limited length generalization and attention's quadratic computational demands. Many have sought to mitigate this by restricting the attention window within the pre-trained length. However, these methods introduce new issues such as ignoring the middle context and requiring additional training. To address these problems, we propose LongHeads, a training-free framework that enhances LLMs' long-context ability by unlocking multi-head attention's untapped potential. Instead of allowing each head to attend to the full sentence, which struggles to generalize to longer sequences due to out-of-distribution (OOD) issues, we allow each head to process an in-distribution length by selecting and attending to important context chunks. To this end, we propose a chunk selection strategy that relies on the inherent correlation between the query and the key representations, efficiently distributing context chunks to different heads. In this way, each head ensures it can effectively process attended tokens within the trained length, while different heads in different layers can collectively process longer contexts. LongHeads works efficiently in linear time and fits seamlessly with many LLMs that use relative positional encoding. LongHeads achieves 100% accuracy at the 128k length on the passkey retrieval task, verifying its efficacy in extending the usable context window for existing models. We release our code at https://github.com/LuLuLuyi/LongHeads .
- Published
- 2024
23. Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning
- Author
Xi, Zhiheng, Chen, Wenxiang, Hong, Boyang, Jin, Senjie, Zheng, Rui, He, Wei, Ding, Yiwen, Liu, Shichun, Guo, Xin, Wang, Junzhe, Guo, Honglin, Shen, Wei, Fan, Xiaoran, Zhou, Yuhao, Dou, Shihan, Wang, Xiao, Zhang, Xinbo, Sun, Peng, Gui, Tao, Zhang, Qi, and Huang, Xuanjing
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning
- Abstract
In this paper, we propose R$^3$: Learning Reasoning through Reverse Curriculum Reinforcement Learning (RL), a novel method that employs only outcome supervision to achieve the benefits of process supervision for large language models. The core challenge in applying RL to complex reasoning is to identify a sequence of actions that results in positive rewards and to provide appropriate supervision for optimization. Outcome supervision provides sparse rewards for final results without identifying error locations, whereas process supervision offers step-wise rewards but requires extensive manual annotation. R$^3$ overcomes these limitations by learning from correct demonstrations. Specifically, R$^3$ progressively slides the start state of reasoning from a demonstration's end to its beginning, facilitating easier model exploration at all stages. Thus, R$^3$ establishes a step-wise curriculum, allowing outcome supervision to offer step-level signals and precisely pinpoint errors. Using Llama2-7B, our method surpasses the RL baseline on eight reasoning tasks by $4.1$ points on average. Notably, in program-based reasoning on GSM8K, it exceeds the baseline by $4.2$ points across three backbone models, and without any extra data, Codellama-7B + R$^3$ performs comparably to larger or closed-source models., Comment: Preprint. Code released at https://github.com/WooooDyy/LLM-Reverse-Curriculum-RL
- Published
- 2024
24. Adaptive Regularized Low-Rank Tensor Decomposition for Hyperspectral Image Denoising and Destriping
- Author
Li, Dongyi, Chu, Dong, Guan, Xiaobin, He, Wei, and Shen, Huanfeng
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Hyperspectral images (HSIs) are inevitably degraded by a mixture of various types of noise, such as Gaussian noise, impulse noise, stripe noise, and dead pixels, which greatly limits subsequent applications. Although various denoising methods have been developed, accurately recovering the spatial-spectral structure of HSIs remains a challenging problem. Furthermore, serious stripe noise, which is common in real HSIs, is still not fully separated by previous models. In this paper, we propose an adaptive hyper-Laplacian regularized low-rank tensor decomposition (LRTDAHL) method for HSI denoising and destriping. On the one hand, the stripe noise is separately modeled by the tensor decomposition, which can effectively encode its spatial-spectral correlation. On the other hand, adaptive hyper-Laplacian spatial-spectral regularization is introduced to represent the distribution structure of different HSI gradient data by adaptively estimating the optimal hyper-Laplacian parameter, which reduces the spatial information loss and over-smoothing caused by previous total variation regularization. The proposed model is solved using the alternating direction method of multipliers (ADMM) algorithm. Extensive simulation and real-data experiments demonstrate the effectiveness and superiority of the proposed method.
- Published
- 2024
25. LightCLIP: Learning Multi-Level Interaction for Lightweight Vision-Language Models
- Author
Nie, Ying, He, Wei, Han, Kai, Tang, Yehui, Guo, Tianyu, Du, Fanyi, and Wang, Yunhe
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Vision-language pre-training like CLIP has shown promising performance on various downstream tasks such as zero-shot image classification and image-text retrieval. Most existing CLIP-like works adopt relatively large image encoders such as ResNet50 and ViT, while the lightweight counterparts are rarely discussed. In this paper, we propose a multi-level interaction paradigm for training lightweight CLIP models. Firstly, to mitigate the problem that some image-text pairs are not in strict one-to-one correspondence, we improve the conventional global instance-level alignment objective by progressively softening the labels of negative samples. Secondly, a relaxed bipartite-matching-based token-level alignment objective is introduced for finer-grained alignment between image patches and textual words. Moreover, based on the observation that the accuracy of a CLIP model does not increase correspondingly as the parameters of the text encoder increase, an extra masked language modeling (MLM) objective is leveraged to maximize the potential of the shortened text encoder. In practice, an auxiliary fusion module injecting unmasked image embeddings into masked text embeddings at different network stages is proposed to enhance the MLM. Extensive experiments show that, without introducing additional computational cost during inference, the proposed method achieves higher performance on multiple downstream tasks.
- Published
- 2023
26. Evaluation of European-based polygenic risk score for breast cancer in Ashkenazi Jewish women in Israel
- Author
Levi, Hagai, Carmi, Shai, Rosset, Saharon, Yerushalmi, Rinat, Zick, Aviad, Yablonski-Peretz, Tamar, Consortium, The BCAC, Wang, Qin, Bolla, Manjeet K, Dennis, Joe, Michailidou, Kyriaki, Lush, Michael, Ahearn, Thomas, Andrulis, Irene L, Anton-Culver, Hoda, Antoniou, Antonis C, Arndt, Volker, Augustinsson, Annelie, Auvinen, Päivi, Freeman, Laura Beane, Beckmann, Matthias, Behrens, Sabine, Bermisheva, Marina, Bodelon, Clara, Bogdanova, Natalia V, Bojesen, Stig E, Brenner, Hermann, Byers, Helen, Camp, Nicola, Castelao, Jose, Chang-Claude, Jenny, Chirlaque, María-Dolores, Chung, Wendy, Clarke, Christine, Collaborators, NBCS, Collee, Margriet J, Colonna, Sarah, Consortium, CTS, Couch, Fergus, Cox, Angela, Cross, Simon S, Czene, Kamila, Daly, Mary, Devilee, Peter, Dork, Thilo, Dossus, Laure, Eccles, Diana M, Eliassen, A Heather, Eriksson, Mikael, Evans, Gareth, Fasching, Peter, Fletcher, Olivia, Flyger, Henrik, Fritschi, Lin, Gabrielson, Marike, Gago-Dominguez, Manuela, García-Closas, Montserrat, Garcia-Saenz, Jose Angel, Genkinger, Jeanine, Giles, Graham G, Goldberg, Mark, Guénel, Pascal, Hall, Per, Hamann, Ute, He, Wei, Hillemanns, Peter, Hollestelle, Antoinette, Hoppe, Reiner, Hopper, John, Investigators, ABCTB, Jakovchevska, Simona, Jakubowska, Anna, Jernström, Helena, John, Esther, Johnson, Nichola, Jones, Michael, Vijai, Joseph, Kaaks, Rudolf, Khusnutdinova, Elza, Kitahara, Cari, Koutros, Stella, Kristensen, Vessela, Kurian, Allison W, Lacey, James, Lambrechts, Diether, Le Marchand, Loic, Lejbkowicz, Flavio, Lindblom, Annika, Loibl, Sibylle, Lori, Adriana, Lubinski, Jan, Mannermaa, Arto, Manoochehri, Mehdi, Mavroudis, Dimitrios, Menon, Usha, Mulligan, AnnaMarie, Murphy, Rachel, Nevelsteen, Ines, Newman, William G, and Obi, Nadia
- Subjects
Biological Sciences, Biomedical and Clinical Sciences, Genetics, Oncology and Carcinogenesis, Breast Cancer, Prevention, Cancer, Humans, Female, Breast Neoplasms, Genome-Wide Association Study, Jews, Israel, Genetic Predisposition to Disease, Risk Factors, Multifactorial Inheritance, Transcription Factors, Genomics, Polymorphism, Genetic, BCAC Consortium, NBCS Collaborators, CTS Consortium, ABCTB Investigators, Polymorphism, Genetic, Medical and Health Sciences, Genetics & Heredity, Clinical sciences
- Abstract
Background: Polygenic risk scores (PRSs), calculated based on genome-wide association studies (GWASs), can improve breast cancer (BC) risk assessment. To date, most BC GWASs have been performed in individuals of European (EUR) ancestry, and the generalisation of EUR-based PRSs to other populations is a major challenge. In this study, we examined the performance of EUR-based BC PRS models in Ashkenazi Jewish (AJ) women. Methods: We generated PRSs based on data on EUR women from the Breast Cancer Association Consortium (BCAC). We tested the performance of the PRSs in a cohort of 2161 AJ women from Israel (1437 cases and 724 controls) from BCAC (BCAC cohort from Israel (BCAC-IL)). In addition, we tested the performance of these EUR-based BC PRSs, as well as the established 313-SNP EUR BC PRS, in an independent cohort of 181 AJ women from Hadassah Medical Center (HMC) in Israel. Results: In the BCAC-IL cohort, the highest OR per 1 SD was 1.56 (±0.09). The OR for AJ women in the top 10% of the PRS distribution compared with the middle quintile was 2.10 (±0.24). In the HMC cohort, the OR per 1 SD of the EUR-based PRS that performed best in the BCAC-IL cohort was 1.58 (±0.27). The OR per 1 SD of the commonly used 313-SNP BC PRS was 1.64 (±0.28). Conclusions: Extant EUR GWAS data can be used to generate PRSs that identify AJ women at markedly elevated risk of BC and therefore hold promise for improving BC risk assessment in AJ women.
- Published
- 2023
27. Quark matter with an anisotropic momentum distribution
- Author
-
He, Wei-bo and Shao, Guo-yun
- Subjects
High Energy Physics - Phenomenology - Abstract
Motivated by the anisotropic momentum distribution of particles in heavy-ion collisions, we study the angular dependence of the quark average momentum and the quark distribution function in the Polyakov-Nambu-Jona-Lasinio (PNJL) quark model. We also investigate the phase transitions and net baryon number fluctuations in anisotropic quark matter. The numerical results suggest that the QCD phase structure and isentropic trajectories are sensitive to the anisotropy parameter at finite density, in particular near the critical region and the first-order phase transition. Compared with isotropic quark matter, the values of baryon number kurtosis and skewness at lower collision energies may be enhanced when the anisotropic momentum distribution is squeezed along the nucleus-nucleus collision direction in experiments., Comment: 8 pages, 6 figures
- Published
- 2023
- Full Text
- View/download PDF
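The baryon number kurtosis and skewness named in the abstract above are standard moment ratios of a fluctuating quantity. As a generic illustration (plain sample statistics on arbitrary samples, not the PNJL-model calculation itself):

```python
# Generic sample skewness and excess kurtosis, the fluctuation observables
# named in the abstract. This is plain statistics on arbitrary samples,
# not the PNJL-model calculation itself.
import math
import random

def central_moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m2, m3, m4

def skewness(xs):
    m2, m3, _ = central_moments(xs)
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    m2, _, m4 = central_moments(xs)
    return m4 / m2 ** 2 - 3.0  # zero for a Gaussian distribution

random.seed(0)
gauss = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(round(skewness(gauss), 3), round(excess_kurtosis(gauss), 3))
```

Deviations of these ratios from their Gaussian baseline are what make them useful probes of the critical region.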
28. Local strain inhomogeneities during the electrical triggering of a metal-insulator transition revealed by x-ray microscopy
- Author
-
Salev, Pavel, Kisiel, Elliot, Sasaki, Dayne, Gunn, Brandon, He, Wei, Feng, Mingzhen, Li, Junjie, Tamura, Nobumichi, Poudyal, Ishwor, Islam, Zahir, Takamura, Yayoi, Frano, Alex, and Schuller, Ivan K.
- Subjects
Condensed Matter - Materials Science - Abstract
Electrical triggering of a metal-insulator transition (MIT) often results in the formation of characteristic spatial patterns such as a metallic filament percolating through an insulating matrix or an insulating barrier splitting a conducting matrix. When the MIT triggering is driven by electrothermal effects, the temperature of the filament or barrier can be substantially higher than that of the rest of the material. Using x-ray microdiffraction and dark-field x-ray microscopy, we show that electrothermal MIT triggering leads to the development of an inhomogeneous strain profile across the switching device, even when the material does not undergo a first-order structural phase transition coinciding with the MIT. Diffraction measurements further reveal evidence of lattice distortions and twinning occurring within the MIT switching device, highlighting a qualitative distinction between the electrothermal process and equilibrium thermal lattice expansion in nonlinear electrical systems. Electrically induced strain development, lattice distortions, and twinning could have important contributions to the MIT triggering process and could drive the material into non-equilibrium states, providing an unconventional pathway to explore the phase space of strongly correlated electronic systems.
- Published
- 2023
29. Species196: A One-Million Semi-supervised Dataset for Fine-grained Species Recognition
- Author
-
He, Wei, Han, Kai, Nie, Ying, Wang, Chengcheng, and Wang, Yunhe
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence - Abstract
The development of foundation vision models has pushed general visual recognition to a high level, but these models cannot adequately address fine-grained recognition in specialized domains such as invasive species classification. Identifying and managing invasive species has strong social and ecological value. Currently, most invasive species datasets are limited in scale and cover a narrow range of species, which restricts the development of deep-learning-based invasion biometrics systems. To fill this gap, we introduce Species196, a large-scale semi-supervised dataset of 196 categories of invasive species. It collects over 19K images with expert-level annotations (Species196-L) and 1.2M unlabeled images of invasive species (Species196-U). The dataset provides four experimental settings for benchmarking existing models and algorithms: supervised learning, semi-supervised learning, self-supervised pretraining, and the zero-shot inference ability of large multi-modal models. To facilitate future research on these four learning paradigms, we conduct an empirical study of representative methods on the introduced dataset. The dataset is publicly available at https://species-dataset.github.io/., Comment: Accepted by NeurIPS 2023 Track Datasets and Benchmarks
- Published
- 2023
30. Gold-YOLO: Efficient Object Detector via Gather-and-Distribute Mechanism
- Author
-
Wang, Chengcheng, He, Wei, Nie, Ying, Guo, Jianyuan, Liu, Chuanjian, Han, Kai, and Wang, Yunhe
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence - Abstract
In the past years, YOLO-series models have emerged as the leading approaches in the area of real-time object detection. Many studies have pushed the baseline to a higher level by modifying the architecture, augmenting data, and designing new losses. However, we find that previous models still suffer from the information fusion problem, although the Feature Pyramid Network (FPN) and Path Aggregation Network (PANet) have alleviated it. Therefore, this study proposes an advanced Gather-and-Distribute (GD) mechanism, which is realized with convolution and self-attention operations. The newly designed model, named Gold-YOLO, boosts the multi-scale feature fusion capabilities and achieves an ideal balance between latency and accuracy across all model scales. Additionally, we implement MAE-style pretraining in the YOLO series for the first time, allowing YOLO-series models to benefit from unsupervised pretraining. Gold-YOLO-N attains an outstanding 39.9% AP on the COCO val2017 dataset and 1030 FPS on a T4 GPU, outperforming the previous SOTA model YOLOv6-3.0-N with similar FPS by +2.4%. The PyTorch code is available at https://github.com/huawei-noah/Efficient-Computing/tree/master/Detection/Gold-YOLO, and the MindSpore code is available at https://gitee.com/mindspore/models/tree/master/research/cv/Gold_YOLO., Comment: Accepted by NeurIPS 2023
- Published
- 2023
31. Large Nonreciprocity of Shear-Horizontal Surface Acoustic Waves induced by Magnetoelastic Bilayers
- Author
-
Huang, Mingxian, Liu, Yuanyuan, Hu, Wenbin, Wu, Yutong, Wang, Wen, He, Wei, Zhang, Huaiwu, and Bai, Feiming
- Subjects
Physics - Applied Physics - Abstract
We report large nonreciprocity in the transmission of shear-horizontal surface acoustic waves (SAWs) on a LiTaO3 substrate coated with a FeCoSiB/NiFeCu magnetoelastic bilayer. The large difference in the saturation magnetization of the two layers not only brings nonreciprocal spin waves (SWs) but also ensures phonon-magnon (SAW-SW) coupling at relatively low wavenumbers. It is found that the angle between the magnetization and the wavevector plays an important role in simultaneously determining the strength of the magnetoelastic coupling and the nonreciprocity. A large nonreciprocal transmission of SAWs of about 30 dB (i.e., 60 dB/mm) is demonstrated at 2.33 GHz. In addition, the dispersion relation between coupled SH-SAWs and nonreciprocal SWs is developed, which provides good insight into the observed phenomena. Our results offer a convenient approach to implementing nonreciprocal SAW isolators or circulators.
- Published
- 2023
32. A New Adaptive Phase-locked Loop for Synchronization of a Grid-Connected Voltage Source Converter: Simulation and Experimental Results
- Author
-
He, Wei, Yan, Jiachen, Ortega, Romeo, Zonetti, Daniele, and Zhou, Wangping
- Subjects
Electrical Engineering and Systems Science - Systems and Control - Abstract
In [1], a new adaptive phase-locked loop scheme for synchronization of a grid-connected voltage source converter with guaranteed (almost) global stability properties was reported. To guarantee suitable synchronization with the angle of the three-phase grid voltage, we design an adaptive observer for this signal that requires measurements only at the point of common coupling. An interesting feature of the scheme is its ability to synchronize under the challenging condition of connection to a grid with a reduced short-circuit ratio. In this paper we present simulation and experimental illustrations of the excellent performance of the proposed solution., Comment: Something needs to be modified so that this paper is more clear
- Published
- 2023
33. The Rise and Potential of Large Language Model Based Agents: A Survey
- Author
-
Xi, Zhiheng, Chen, Wenxiang, Guo, Xin, He, Wei, Ding, Yiwen, Hong, Boyang, Zhang, Ming, Wang, Junzhe, Jin, Senjie, Zhou, Enyu, Zheng, Rui, Fan, Xiaoran, Wang, Xiao, Xiong, Limao, Zhou, Yuhao, Wang, Weiran, Jiang, Changhao, Zou, Yicheng, Liu, Xiangyang, Yin, Zhangyue, Dou, Shihan, Weng, Rongxiang, Cheng, Wensen, Zhang, Qi, Qin, Wenjuan, Zheng, Yongyan, Qiu, Xipeng, Huang, Xuanjing, and Gui, Tao
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Computation and Language - Abstract
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. What the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository of the related papers is available at https://github.com/WooooDyy/LLM-Agent-Paper-List., Comment: 86 pages, 12 figures
- Published
- 2023
34. Real-time Monitoring for the Next Core-Collapse Supernova in JUNO
- Author
-
Abusleme, Angel, Adam, Thomas, Ahmad, Shakeel, Ahmed, Rizwan, Aiello, Sebastiano, Akram, Muhammad, Aleem, Abid, An, Fengpeng, An, Qi, Andronico, Giuseppe, Anfimov, Nikolay, Antonelli, Vito, Antoshkina, Tatiana, Asavapibhop, Burin, de André, João Pedro Athayde Marcondes, Auguste, Didier, Bai, Weidong, Balashov, Nikita, Baldini, Wander, Barresi, Andrea, Basilico, Davide, Baussan, Eric, Bellato, Marco, Beretta, Marco, Bergnoli, Antonio, Bick, Daniel, Bieger, Lukas, Biktemerova, Svetlana, Birkenfeld, Thilo, Morton-Blake, Iwan, Blum, David, Blyth, Simon, Bolshakova, Anastasia, Bongrand, Mathieu, Bordereau, Clément, Breton, Dominique, Brigatti, Augusto, Brugnera, Riccardo, Bruno, Riccardo, Budano, Antonio, Busto, Jose, Cabrera, Anatael, Caccianiga, Barbara, Cai, Hao, Cai, Xiao, Cai, Yanke, Cai, Zhiyan, Callier, Stéphane, Cammi, Antonio, Campeny, Agustin, Cao, Chuanya, Cao, Guofu, Cao, Jun, Caruso, Rossella, Cerna, Cédric, Cerrone, Vanessa, Chan, Chi, Chang, Jinfan, Chang, Yun, Chatrabhuti, Auttakit, Chen, Chao, Chen, Guoming, Chen, Pingping, Chen, Shaomin, Chen, Yixue, Chen, Yu, Chen, Zhangming, Chen, Zhiyuan, Chen, Zikang, Cheng, Jie, Cheng, Yaping, Cheng, Yu Chin, Chepurnov, Alexander, Chetverikov, Alexey, Chiesa, Davide, Chimenti, Pietro, Chin, Yen-Ting, Chu, Ziliang, Chukanov, Artem, Claverie, Gérard, Clementi, Catia, Clerbaux, Barbara, Molla, Marta Colomer, Di Lorenzo, Selma Conforti, Coppi, Alberto, Corti, Daniele, Csakli, Simon, Corso, Flavio Dal, Dalager, Olivia, Datta, Jaydeep, De La Taille, Christophe, Deng, Zhi, Deng, Ziyan, Ding, Xiaoyu, Ding, Xuefeng, Ding, Yayun, Dirgantara, Bayu, Dittrich, Carsten, Dmitrievsky, Sergey, Dohnal, Tadeas, Dolzhikov, Dmitry, Donchenko, Georgy, Dong, Jianmeng, Doroshkevich, Evgeny, Dou, Wei, Dracos, Marcos, Druillole, Frédéric, Du, Ran, Du, Shuxian, Dugas, Katherine, Dusini, Stefano, Duyang, Hongyue, Eck, Jessica, Enqvist, Timo, Fabbri, Andrea, Fahrendholz, Ulrike, Fan, Lei, Fang, Jian, Fang, Wenxing, Fargetta, Marco, Fedoseev, 
Dmitry, Fei, Zhengyong, Feng, Li-Cheng, Feng, Qichun, Ferraro, Federico, Fournier, Amélie, Gan, Haonan, Gao, Feng, Garfagnini, Alberto, Gavrikov, Arsenii, Giammarchi, Marco, Giudice, Nunzio, Gonchar, Maxim, Gong, Guanghua, Gong, Hui, Gornushkin, Yuri, Göttel, Alexandre, Grassi, Marco, Gromov, Maxim, Gromov, Vasily, Gu, Minghao, Gu, Xiaofei, Gu, Yu, Guan, Mengyun, Guan, Yuduo, Guardone, Nunzio, Guo, Cong, Guo, Wanlei, Guo, Xinheng, Hagner, Caren, Han, Ran, Han, Yang, He, Miao, He, Wei, Heinz, Tobias, Hellmuth, Patrick, Heng, Yuekun, Herrera, Rafael, Hor, YuenKeung, Hou, Shaojing, Hsiung, Yee, Hu, Bei-Zhen, Hu, Hang, Hu, Jianrun, Hu, Jun, Hu, Shouyang, Hu, Tao, Hu, Yuxiang, Hu, Zhuojun, Huang, Guihong, Huang, Hanxiong, Huang, Jinhao, Huang, Junting, Huang, Kaixuan, Huang, Wenhao, Huang, Xin, Huang, Xingtao, Huang, Yongbo, Hui, Jiaqi, Huo, Lei, Huo, Wenju, Huss, Cédric, Hussain, Safeer, Imbert, Leonard, Ioannisian, Ara, Isocrate, Roberto, Jafar, Arshak, Jelmini, Beatrice, Jeria, Ignacio, Ji, Xiaolu, Jia, Huihui, Jia, Junji, Jian, Siyu, Jiang, Cailian, Jiang, Di, Jiang, Wei, Jiang, Xiaoshan, Jing, Xiaoping, Jollet, Cécile, Kampmann, Philipp, Kang, Li, Karaparambil, Rebin, Kazarian, Narine, Khan, Ali, Khatun, Amina, Khosonthongkee, Khanchai, Korablev, Denis, Kouzakov, Konstantin, Krasnoperov, Alexey, Kuleshov, Sergey, Kutovskiy, Nikolay, Labit, Loïc, Lachenmaier, Tobias, Landini, Cecilia, Leblanc, Sébastien, Lebrin, Victor, Lefevre, Frederic, Lei, Ruiting, Leitner, Rupert, Leung, Jason, Li, Demin, Li, Fei, Li, Fule, Li, Gaosong, Li, Huiling, Li, Jiajun, Li, Mengzhao, Li, Min, Li, Nan, Li, Qingjiang, Li, Ruhui, Li, Rui, Li, Shanfeng, Li, Tao, Li, Teng, Li, Weidong, Li, Weiguo, Li, Xiaomei, Li, Xiaonan, Li, Xinglong, Li, Yi, Li, Yichen, Li, Yufeng, Li, Zhaohan, Li, Zhibing, Li, Ziyuan, Li, Zonghai, Liang, Hao, Liao, Jiajun, Limphirat, Ayut, Lin, Guey-Lin, Lin, Shengxin, Lin, Tao, Ling, Jiajie, Ling, Xin, Lippi, Ivano, Liu, Caimei, Liu, Fang, Liu, Fengcheng, Liu, Haidong, 
Liu, Haotian, Liu, Hongbang, Liu, Hongjuan, Liu, Hongtao, Liu, Hui, Liu, Jianglai, Liu, Jiaxi, Liu, Jinchang, Liu, Min, Liu, Qian, Liu, Qin, Liu, Runxuan, Liu, Shenghui, Liu, Shubin, Liu, Shulin, Liu, Xiaowei, Liu, Xiwen, Liu, Xuewei, Liu, Yankai, Liu, Zhen, Lokhov, Alexey, Lombardi, Paolo, Lombardo, Claudio, Loo, Kai, Lu, Chuan, Lu, Haoqi, Lu, Jingbin, Lu, Junguang, Lu, Peizhi, Lu, Shuxiang, Lu, Xianguo, Lubsandorzhiev, Bayarto, Lubsandorzhiev, Sultim, Ludhova, Livia, Lukanov, Arslan, Luo, Daibin, Luo, Fengjiao, Luo, Guang, Luo, Jianyi, Luo, Shu, Luo, Wuming, Luo, Xiaojie, Lyashuk, Vladimir, Ma, Bangzheng, Ma, Bing, Ma, Qiumei, Ma, Si, Ma, Xiaoyan, Ma, Xubo, Maalmi, Jihane, Magoni, Marco, Mai, Jingyu, Malyshkin, Yury, Mandujano, Roberto Carlos, Mantovani, Fabio, Mao, Xin, Mao, Yajun, Mari, Stefano M., Marini, Filippo, Martini, Agnese, Mayer, Matthias, Mayilyan, Davit, Mednieks, Ints, Meng, Yue, Meraviglia, Anita, Meregaglia, Anselmo, Meroni, Emanuela, Meyhöfer, David, Miramonti, Lino, Mohan, Nikhil, Montuschi, Michele, Müller, Axel, Nastasi, Massimiliano, Naumov, Dmitry V., Naumova, Elena, Navas-Nicolas, Diana, Nemchenok, Igor, Thi, Minh Thuan Nguyen, Nikolaev, Alexey, Ning, Feipeng, Ning, Zhe, Nunokawa, Hiroshi, Oberauer, Lothar, Ochoa-Ricoux, Juan Pedro, Olshevskiy, Alexander, Orestano, Domizia, Ortica, Fausto, Othegraven, Rainer, Paoloni, Alessandro, Parmeggiano, Sergio, Pei, Yatian, Pelicci, Luca, Peng, Anguo, Peng, Haiping, Peng, Yu, Peng, Zhaoyuan, Perrot, Frédéric, Petitjean, Pierre-Alexandre, Petrucci, Fabrizio, Pilarczyk, Oliver, Rico, Luis Felipe Piñeres, Popov, Artyom, Poussot, Pascal, Previtali, Ezio, Qi, Fazhi, Qi, Ming, Qi, Xiaohui, Qian, Sen, Qian, Xiaohui, Qian, Zhen, Qiao, Hao, Qin, Zhonghua, Qiu, Shoukang, Qu, Manhao, Qu, Zhenning, Ranucci, Gioacchino, Rasheed, Reem, Re, Alessandra, Rebii, Abdel, Redchuk, Mariia, Ren, Bin, Ren, Jie, Ricci, Barbara, Rientong, Komkrit, Rifai, Mariam, Roche, Mathieu, Rodphai, Narongkiat, Romani, Aldo, Roskovec, 
Bedřich, Ruan, Xichao, Rybnikov, Arseniy, Sadovsky, Andrey, Saggese, Paolo, Sandanayake, Deshan, Sangka, Anut, Sava, Giuseppe, Sawangwit, Utane, Schever, Michaela, Schwab, Cédric, Schweizer, Konstantin, Selyunin, Alexandr, Serafini, Andrea, Settimo, Mariangela, Sharov, Vladislav, Shaydurova, Arina, Shi, Jingyan, Shi, Yanan, Shutov, Vitaly, Sidorenkov, Andrey, Šimkovic, Fedor, Singhal, Apeksha, Sirignano, Chiara, Siripak, Jaruchit, Sisti, Monica, Smirnov, Mikhail, Smirnov, Oleg, Sogo-Bezerra, Thiago, Sokolov, Sergey, Songwadhana, Julanan, Soonthornthum, Boonrucksar, Sotnikov, Albert, Šrámek, Ondřej, Sreethawong, Warintorn, Stahl, Achim, Stanco, Luca, Stankevich, Konstantin, Steiger, Hans, Steinmann, Jochen, Sterr, Tobias, Stock, Matthias Raphael, Strati, Virginia, Studenikin, Alexander, Su, Aoqi, Su, Jun, Sun, Shifeng, Sun, Xilei, Sun, Yongjie, Sun, Yongzhao, Sun, Zhengyang, Suwonjandee, Narumon, Szelezniak, Michal, Takenaka, Akira, Tang, Jian, Tang, Qiang, Tang, Quan, Tang, Xiao, Hariharan, Vidhya Thara, Theisen, Eric, Tietzsch, Alexander, Tkachev, Igor, Tmej, Tomas, Torri, Marco Danilo Claudio, Tortorici, Francesco, Treskov, Konstantin, Triossi, Andrea, Triozzi, Riccardo, Trzaska, Wladyslaw, Tung, Yu-Chen, Tuve, Cristina, Ushakov, Nikita, Vedin, Vadim, Venettacci, Carlo, Verde, Giuseppe, Vialkov, Maxim, Viaud, Benoit, Vollbrecht, Cornelius Moritz, von Sturm, Katharina, Vorobel, Vit, Voronin, Dmitriy, Votano, Lucia, Walker, Pablo, Wang, Caishen, Wang, Chung-Hsiang, Wang, En, Wang, Guoli, Wang, Jian, Wang, Jun, Wang, Li, Wang, Lu, Wang, Meng, Wang, Ruiguang, Wang, Siguang, Wang, Wei, Wang, Wenshuai, Wang, Xi, Wang, Xiangyue, Wang, Yangfu, Wang, Yaoguang, Wang, Yi, Wang, Yifang, Wang, Yuanqing, Wang, Yuyi, Wang, Zhe, Wang, Zheng, Wang, Zhimin, Watcharangkool, Apimook, Wei, Wei, Wei, Wenlu, Wei, Yadong, Wei, Yuehuan, Wen, Kaile, Wen, Liangjian, Weng, Jun, Wiebusch, Christopher, Wirth, Rosmarie, Wonsak, Bjoern, Wu, Diru, Wu, Qun, Wu, Yiyang, Wu, Zhi, Wurm, Michael, 
Wurtz, Jacques, Wysotzki, Christian, Xi, Yufei, Xia, Dongmei, Xiao, Fei, Xiao, Xiang, Xie, Xiaochuan, Xie, Yuguang, Xie, Zhangquan, Xin, Zhao, Xing, Zhizhong, Xu, Benda, Xu, Cheng, Xu, Donglian, Xu, Fanrong, Xu, Hangkun, Xu, Jilei, Xu, Jing, Xu, Meihang, Xu, Xunjie, Xu, Yin, Xu, Yu, Yan, Baojun, Yan, Qiyu, Yan, Taylor, Yan, Xiongbo, Yan, Yupeng, Yang, Changgen, Yang, Chengfeng, Yang, Jie, Yang, Lei, Yang, Xiaoyu, Yang, Yifan, Yao, Haifeng, Ye, Jiaxuan, Ye, Mei, Ye, Ziping, Yermia, Frédéric, You, Zhengyun, Yu, Boxiang, Yu, Chiye, Yu, Chunxu, Yu, Guojun, Yu, Hongzhao, Yu, Miao, Yu, Xianghui, Yu, Zeyuan, Yu, Zezhong, Yuan, Cenxi, Yuan, Chengzhuo, Yuan, Ying, Yuan, Zhenxiong, Yue, Baobiao, Zafar, Noman, Zavadskyi, Vitalii, Zeng, Fanrui, Zeng, Shan, Zeng, Tingxuan, Zeng, Yuda, Zhan, Liang, Zhang, Aiqiang, Zhang, Bin, Zhang, Binting, Zhang, Feiyang, Zhang, Haosen, Zhang, Honghao, Zhang, Jialiang, Zhang, Jiawen, Zhang, Jie, Zhang, Jingbo, Zhang, Jinnan, ZHANG, Lei, Zhang, Mohan, Zhang, Peng, Zhang, Ping, Zhang, Qingmin, Zhang, Shiqi, Zhang, Shu, Zhang, Shuihan, Zhang, Siyuan, Zhang, Tao, Zhang, Xiaomei, Zhang, Xin, Zhang, Xuantong, Zhang, Yinhong, Zhang, Yiyu, Zhang, Yongpeng, Zhang, Yu, Zhang, Yuanyuan, Zhang, Yumei, Zhang, Zhenyu, Zhang, Zhijian, Zhao, Jie, Zhao, Rong, Zhao, Runze, Zhao, Shujun, Zheng, Dongqin, Zheng, Hua, Zheng, Yangheng, Zhong, Weirong, Zhou, Jing, Zhou, Li, Zhou, Nan, Zhou, Shun, Zhou, Tong, Zhou, Xiang, Zhu, Jingsen, Zhu, Kangfu, Zhu, Kejun, Zhu, Zhihang, Zhuang, Bo, Zhuang, Honglin, Zong, Liang, Zou, Jiaheng, and Züfle, Jan
- Subjects
High Energy Physics - Experiment ,Astrophysics - High Energy Astrophysical Phenomena ,High Energy Physics - Phenomenology - Abstract
The core-collapse supernova (CCSN) is considered one of the most energetic astrophysical events in the universe. The early and prompt detection of neutrinos before (pre-SN) and during the supernova (SN) burst presents a unique opportunity for multi-messenger observations of CCSN events. In this study, we describe the monitoring concept and present the sensitivity of the system to pre-SN and SN neutrinos at the Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton liquid scintillator detector currently under construction in South China. The real-time monitoring system is designed to ensure both prompt alert speed and comprehensive coverage of progenitor stars. It incorporates prompt monitors on the electronic board as well as online monitors at the data acquisition stage. Assuming a false alert rate of 1 per year, this monitoring system exhibits sensitivity to pre-SN neutrinos up to a distance of approximately 1.6 (0.9) kiloparsecs and SN neutrinos up to about 370 (360) kiloparsecs for a progenitor mass of 30 solar masses, considering both normal and inverted mass ordering scenarios. The pointing ability of the CCSN is evaluated by analyzing the accumulated event anisotropy of inverse beta decay interactions from pre-SN or SN neutrinos. This, along with the early alert, can play a crucial role in facilitating follow-up multi-messenger observations of the next galactic or nearby extragalactic CCSN., Comment: 24 pages, 9 figures, accepted for the publication at JCAP
- Published
- 2023
35. Fast and Parallel Algorithms for Orbit and Attitude Computation
- Author
-
Wang, Xuechuan, primary, Feng, Haoyang, additional, and He, Wei, additional
- Published
- 2023
- Full Text
- View/download PDF
36. Stability of $p$-adic valuations of Hecke L-values
- Author
-
He, Wei
- Subjects
Mathematics - Number Theory ,11F67, 11F41 - Abstract
In this paper, we prove $p$-stability results for the critical L-values of algebraic Hecke characters over CM fields in an $\ell$-adic anticyclotomic twist family with $\ell\neq p$., Comment: 41 pages
- Published
- 2023
37. Exploring wavefunction hybridization of magnon-magnon hybrid state
- Author
-
Hu, Bo, Xie, Zong-Kai, Lu, Jie, and He, Wei
- Subjects
Condensed Matter - Mesoscale and Nanoscale Physics ,Condensed Matter - Materials Science - Abstract
We investigate magnon-magnon hybrid states using a non-Hermitian two-band Hamiltonian and the concept of wavefunction hybridization. By comparing our model with micromagnetic simulations conducted on a synthetic antiferromagnet with strong magnon-magnon coupling, we successfully reproduce not only the resonance frequencies and linewidths but also the phases and amplitudes of the magnon wavefunction. The hybridization effect influences the dissipation rate, leading to the crossing of linewidths. Additionally, we quantify the magnon hybridization within a magnonic Bloch sphere, which enhances the ability to manipulate hybrid magnons for coherent information processing., Comment: 4 figures
- Published
- 2023
38. Local probe investigation of the spin dynamics in the kagome and inter-layers of orthorhombic barlowite Cu$_4$(OD)$_6$FBr: $^{79}$Br and $^{63}$Cu NQR study
- Author
-
Imai, Takashi, Wang, Jiaming, Smaha, Rebecca W., He, Wei, Wen, Jiajia, and Lee, Young S.
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
We report $^{79}$Br and $^{63}$Cu nuclear quadrupole resonance (NQR) in the paramagnetic state above $T_\text{N} = 15$ K of the antiferromagnetic orthorhombic phase of barlowite Cu$_4$(OD)$_6$FBr consisting of a layered kagome structure. The divergent behavior of the longitudinal $^{79}(1/T_{1})$ and transverse $^{79}(1/T_{2})$ relaxation rates observed at $^{79}$Br sites evidences that critical slowing down of Cu spin fluctuations sets in below $\sim20$ K. This means that one or more Cu sites, most likely at the interlayer Cu(3,4,5) sites between the kagome planes, undergo the antiferromagnetic phase transition in a fairly conventional way. On the other hand, the $^{63}$Cu NQR signal intensity is gradually wiped out below $\sim30$ K, pointing toward gradual spin freezing of the kagome layers instead. These contrasting findings suggest significant roles played by magnetic frustration effects within the kagome layers., Comment: Accepted for publication in Phys. Rev. Mater. 5 figures
- Published
- 2023
- Full Text
- View/download PDF
39. iEDA: An Open-Source Intelligent Physical Implementation Toolkit and Library
- Author
-
Li, Xingquan, Tao, Simin, Huang, Zengrong, Chen, Shijian, Zeng, Zhisheng, Ni, Liwei, Huang, Zhipeng, Zhuang, Chunan, Wu, Hongxi, Li, Weiguo, Zhao, Xueyan, Liu, He, Long, Shuaiying, He, Wei, Liu, Bojun, Gan, Sifeng, Yu, Zihao, Liu, Tong, Miao, Yuchi, Yan, Zhiyuan, Wang, Hao, Zhao, Jie, Li, Yifan, Liu, Ruizhi, Lin, Xiaoze, Yang, Bo, Xue, Zhen, Huang, Fuxing, Yang, Zonglin, Wu, Zhenggang, Li, Jiangkao, Liu, Yuezuo, Peng, Ming, Qiu, Yihang, Wu, Wenrui, Shao, Zheqing, Mo, Kai, Liu, Jikang, Liang, Yuyao, Zhang, Mingzhe, Ma, Zhuang, Cong, Xiang, Huang, Daxiang, Luo, Guojie, Li, Huawei, Shen, Haihua, Chen, Mingyu, Bu, Dongbo, Zhu, Wenxing, Cai, Ye, Xiong, Xiaoming, Jiang, Ying, Heng, Yi, Zhang, Peng, Xie, Biwei, and Bao, Yungang
- Subjects
Computer Science - Hardware Architecture - Abstract
Open-source EDA shows promising potential in unleashing EDA innovation and lowering the cost of chip design. This paper presents an open-source EDA project, iEDA, aiming to build a basic infrastructure for EDA technology evolution and to close the industrial-academic gap in the EDA area. iEDA now covers the whole flow of physical design (including Floorplan, Placement, CTS, Routing, Timing Optimization, etc.) and part of the analysis tools (Static Timing Analysis and Power Analysis). To demonstrate the effectiveness of iEDA, we implement and tape out three chips of different scales (from 700k to 1.5M gates) on different process nodes (110nm and 28nm) with iEDA. iEDA is publicly available from the project home page http://ieda.oscc.cc.
- Published
- 2023
40. Towards Understanding the Capability of Large Language Models on Code Clone Detection: A Survey
- Author
-
Dou, Shihan, Shan, Junjie, Jia, Haoxiang, Deng, Wenhao, Xi, Zhiheng, He, Wei, Wu, Yueming, Gui, Tao, Liu, Yang, and Huang, Xuanjing
- Subjects
Computer Science - Software Engineering - Abstract
Code cloning, the duplication of code fragments, is common in software development. While some reuse aids productivity, excessive cloning hurts maintainability and introduces bugs. Hence, automatic code clone detection is vital. Meanwhile, large language models (LLMs) possess diverse code-related knowledge, making them versatile for various software engineering challenges. However, LLMs' performance in code clone detection remains unclear and needs more study for accurate assessment. In this paper, we provide the first comprehensive evaluation of LLMs for clone detection, covering different clone types, languages, and prompts. We find that advanced LLMs excel in detecting complex semantic clones, surpassing existing methods. Adding intermediate reasoning steps via chain-of-thought prompts noticeably enhances performance. Additionally, representing code as vector embeddings, especially with text encoders, effectively aids clone detection. Lastly, the ability of LLMs to detect code clones differs among various programming languages. Our study suggests that LLMs have potential for clone detection due to their language capabilities, offering insights for developing robust LLM-based methods to enhance software engineering., Comment: 13 pages, 3 figures
- Published
- 2023
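The embedding-similarity approach mentioned in the abstract above can be sketched in miniature: represent each fragment as a vector and compare vectors by cosine similarity. The sketch below substitutes a character-trigram count vector for a learned text encoder, and the code snippets are hypothetical, not from the paper:

```python
# Toy embedding-based clone detection: code fragments become character
# trigram count vectors (a crude stand-in for a learned text encoder)
# and are compared by cosine similarity. The snippets are hypothetical.
import math
from collections import Counter

def embed(code, n=3):
    """Map a code string to a character n-gram count vector."""
    text = " ".join(code.split())  # collapse whitespace/newlines
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

a = "def add(a, b):\n    return a + b"
b = "def add(x, y):\n    return x + y"   # renamed-variable clone of a
c = "print('hello world')"               # unrelated fragment

print(cosine(embed(a), embed(b)) > cosine(embed(a), embed(c)))  # prints True
```

A learned encoder would additionally capture semantic clones that share no surface trigrams, which is exactly where the surveyed LLM-based methods are reported to help.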
41. Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network
- Author
-
He, Wei, Li, Jiepan, Cao, Weinan, Zhang, Liangpei, and Zhang, Hongyan
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Building extraction aims to segment building pixels from remote sensing images and plays an essential role in many applications, such as city planning and urban dynamic monitoring. Over the past few years, deep learning methods with encoder-decoder architectures have achieved remarkable performance due to their powerful feature representation capability. Nevertheless, due to the varying scales and styles of buildings, conventional deep learning models often suffer from uncertain predictions and cannot accurately distinguish the complete footprints of buildings from the complex distribution of ground objects, leading to a large degree of omission and commission errors. In this paper, we recognize the importance of uncertain predictions and propose a novel and straightforward Uncertainty-Aware Network (UANet) to alleviate this problem. To verify the performance of our proposed UANet, we conduct extensive experiments on three public building datasets, including the WHU building dataset, the Massachusetts building dataset, and the Inria aerial image dataset. Results demonstrate that the proposed UANet outperforms other state-of-the-art algorithms by a large margin.
- Published
- 2023
42. JUNO sensitivity to the annihilation of MeV dark matter in the galactic halo
- Author
-
JUNO Collaboration, Abusleme, Angel, Adam, Thomas, Ahmad, Shakeel, Ahmed, Rizwan, Aiello, Sebastiano, Akram, Muhammad, Aleem, Abid, Alexandros, Tsagkarakis, An, Fengpeng, An, Qi, Andronico, Giuseppe, Anfimov, Nikolay, Antonelli, Vito, Antoshkina, Tatiana, Asavapibhop, Burin, de André, João Pedro Athayde Marcondes, Auguste, Didier, Bai, Weidong, Balashov, Nikita, Baldini, Wander, Barresi, Andrea, Basilico, Davide, Baussan, Eric, Bellato, Marco, Bergnoli, Antonio, Bick, Daniel, Birkenfeld, Thilo, Blin, Sylvie, Blum, David, Blyth, Simon, Bolshakova, Anastasia, Bongrand, Mathieu, Bordereau, Clément, Breton, Dominique, Brigatti, Augusto, Brugnera, Riccardo, Bruno, Riccardo, Budano, Antonio, Busto, Jose, Butorov, Ilya, Cabrera, Anatael, Caccianiga, Barbara, Cai, Hao, Cai, Xiao, Cai, Yanke, Cai, Zhiyan, Callegari, Riccardo, Cammi, Antonio, Campeny, Agustin, Cao, Chuanya, Cao, Guofu, Cao, Jun, Caruso, Rossella, Cerna, Cédric, Chan, Chi, Chang, Jinfan, Chang, Yun, Chen, Guoming, Chen, Pingping, Chen, Po-An, Chen, Shaomin, Chen, Yixue, Chen, Yu, Chen, Zhiyuan, Chen, Zikang, Cheng, Jie, Cheng, Yaping, Cheng, Yu Chin, Chepurnov, Alexander, Chetverikov, Alexey, Chiesa, Davide, Chimenti, Pietro, Chu, Ziliang, Chukanov, Artem, Claverie, Gérard, Clementi, Catia, Clerbaux, Barbara, Molla, Marta Colomer, Di Lorenzo, Selma Conforti, Corti, Daniele, Corso, Flavio Dal, Dalager, Olivia, De La Taille, Christophe, Deng, Zhi, Deng, Ziyan, Depnering, Wilfried, Diaz, Marco, Ding, Xuefeng, Ding, Yayun, Dirgantara, Bayu, Dmitrievsky, Sergey, Dohnal, Tadeas, Dolzhikov, Dmitry, Donchenko, Georgy, Dong, Jianmeng, Doroshkevich, Evgeny, Dou, Wei, Dracos, Marcos, Druillole, Frédéric, Du, Ran, Du, Shuxian, Dusini, Stefano, Dvorak, Martin, Eck, Jessica, Enqvist, Timo, Fabbri, Andrea, Fahrendholz, Ulrike, Fan, Donghua, Fan, Lei, Fang, Jian, Fang, Wenxing, Fargetta, Marco, Fedoseev, Dmitry, Fei, Zhengyong, Feng, Li-Cheng, Feng, Qichun, Ford, Richard, Fournier, Amélie, Gan, Haonan, Gao, Feng, Garfagnini, 
Alberto, Gavrikov, Arsenii, Giammarchi, Marco, Giudice, Nunzio, Gonchar, Maxim, Gong, Guanghua, Gong, Hui, Gornushkin, Yuri, Göttel, Alexandre, Grassi, Marco, Gromov, Maxim, Gromov, Vasily, Gu, Minghao, Gu, Xiaofei, Gu, Yu, Guan, Mengyun, Guan, Yuduo, Guardone, Nunzio, Guo, Cong, Guo, Wanlei, Guo, Xinheng, Guo, Yuhang, Hagner, Caren, Han, Ran, Han, Yang, He, Miao, He, Wei, Heinz, Tobias, Hellmuth, Patrick, Heng, Yuekun, Herrera, Rafael, Hor, YuenKeung, Hou, Shaojing, Hsiung, Yee, Hu, Bei-Zhen, Hu, Hang, Hu, Jianrun, Hu, Jun, Hu, Shouyang, Hu, Tao, Hu, Yuxiang, Hu, Zhuojun, Huang, Guihong, Huang, Hanxiong, Huang, Kaixuan, Huang, Wenhao, Huang, Xin, Huang, Xingtao, Huang, Yongbo, Hui, Jiaqi, Huo, Lei, Huo, Wenju, Huss, Cédric, Hussain, Safeer, Ioannisian, Ara, Isocrate, Roberto, Jelmini, Beatrice, Jeria, Ignacio, Ji, Xiaolu, Jia, Huihui, Jia, Junji, Jian, Siyu, Jiang, Di, Jiang, Wei, Jiang, Xiaoshan, Jing, Xiaoping, Jollet, Cécile, Kalousis, Leonidas, Kampmann, Philipp, Kang, Li, Karaparambil, Rebin, Kazarian, Narine, Khatun, Amina, Khosonthongkee, Khanchai, Korablev, Denis, Kouzakov, Konstantin, Krasnoperov, Alexey, Kutovskiy, Nikolay, Kuusiniemi, Pasi, Lachenmaier, Tobias, Landini, Cecilia, Leblanc, Sébastien, Lebrin, Victor, Lefevre, Frederic, Lei, Ruiting, Leitner, Rupert, Leung, Jason, Li, Daozheng, Li, Demin, Li, Fei, Li, Fule, Li, Gaosong, Li, Huiling, Li, Mengzhao, Li, Min, Li, Nan, Li, Qingjiang, Li, Ruhui, Li, Rui, Li, Shanfeng, Li, Tao, Li, Teng, Li, Weidong, Li, Weiguo, Li, Xiaomei, Li, Xiaonan, Li, Xinglong, Li, Yi, Li, Yichen, Li, Yufeng, Li, Zepeng, Li, Zhaohan, Li, Zhibing, Li, Ziyuan, Li, Zonghai, Liang, Hao, Liao, Jiajun, Limphirat, Ayut, Lin, Guey-Lin, Lin, Shengxin, Lin, Tao, Ling, Jiajie, Lippi, Ivano, Liu, Fang, Liu, Haidong, Liu, Haotian, Liu, Hongbang, Liu, Hongjuan, Liu, Hongtao, Liu, Hui, Liu, Jianglai, Liu, Jinchang, Liu, Min, Liu, Qian, Liu, Qin, Liu, Runxuan, Liu, Shubin, Liu, Shulin, Liu, Xiaowei, Liu, Xiwen, Liu, Yan, Liu, Yunzhe, 
Lokhov, Alexey, Lombardi, Paolo, Lombardo, Claudio, Loo, Kai, Lu, Chuan, Lu, Haoqi, Lu, Jingbin, Lu, Junguang, Lu, Peizhi, Lu, Shuxiang, Lubsandorzhiev, Bayarto, Lubsandorzhiev, Sultim, Ludhova, Livia, Lukanov, Arslan, Luo, Daibin, Luo, Fengjiao, Luo, Guang, Luo, Shu, Luo, Wuming, Luo, Xiaojie, Lyashuk, Vladimir, Ma, Bangzheng, Ma, Bing, Ma, Qiumei, Ma, Si, Ma, Xiaoyan, Ma, Xubo, Maalmi, Jihane, Mai, Jingyu, Malyshkin, Yury, Mandujano, Roberto Carlos, Mantovani, Fabio, Mao, Xin, Mao, Yajun, Mari, Stefano M., Marini, Filippo, Martin-Chassard, Gisele, Martini, Agnese, Mayer, Matthias, Mayilyan, Davit, Mednieks, Ints, Meinusch, Artur, Meng, Yue, Meregaglia, Anselmo, Meroni, Emanuela, Meyhöfer, David, Mezzetto, Mauro, Miller, Jonathan, Miramonti, Lino, Montini, Paolo, Montuschi, Michele, Müller, Axel, Nastasi, Massimiliano, Naumov, Dmitry V., Naumova, Elena, Navas-Nicolas, Diana, Nemchenok, Igor, Thi, Minh Thuan Nguyen, Nikolaev, Alexey, Ning, Feipeng, Ning, Zhe, Nunokawa, Hiroshi, Oberauer, Lothar, Ochoa-Ricoux, Juan Pedro, Olshevskiy, Alexander, Orestano, Domizia, Ortica, Fausto, Othegraven, Rainer, Paoloni, Alessandro, Parmeggiano, Sergio, Pei, Yatian, Pelicci, Luca, Peng, Anguo, Peng, Haiping, Peng, Yu, Peng, Zhaoyuan, Perrot, Frédéric, Petitjean, Pierre-Alexandre, Petrucci, Fabrizio, Pilarczyk, Oliver, Rico, Luis Felipe Piñeres, Popov, Artyom, Poussot, Pascal, Previtali, Ezio, Qi, Fazhi, Qi, Ming, Qian, Sen, Qian, Xiaohui, Qian, Zhen, Qiao, Hao, Qin, Zhonghua, Qiu, Shoukang, Ranucci, Gioacchino, Rasheed, Reem, Re, Alessandra, Rebber, Henning, Rebii, Abdel, Redchuk, Mariia, Ren, Bin, Ren, Jie, Ricci, Barbara, Rifai, Mariam, Roche, Mathieu, Rodphai, Narongkiat, Romani, Aldo, Roskovec, Bedřich, Ruan, Xichao, Rybnikov, Arseniy, Sadovsky, Andrey, Saggese, Paolo, Sanfilippo, Simone, Sangka, Anut, Sawangwit, Utane, Sawatzki, Julia, Schever, Michaela, Schwab, Cédric, Schweizer, Konstantin, Selyunin, Alexandr, Serafini, Andrea, Settanta, Giulio, Settimo, Mariangela, Shao, 
Zhuang, Sharov, Vladislav, Shaydurova, Arina, Shi, Jingyan, Shi, Yanan, Shutov, Vitaly, Sidorenkov, Andrey, Šimkovic, Fedor, Sirignano, Chiara, Siripak, Jaruchit, Sisti, Monica, Slupecki, Maciej, Smirnov, Mikhail, Smirnov, Oleg, Sogo-Bezerra, Thiago, Sokolov, Sergey, Songwadhana, Julanan, Soonthornthum, Boonrucksar, Sotnikov, Albert, Šrámek, Ondřej, Sreethawong, Warintorn, Stahl, Achim, Stanco, Luca, Stankevich, Konstantin, Štefánik, Dušan, Steiger, Hans, Steinmann, Jochen, Sterr, Tobias, Stock, Matthias Raphael, Strati, Virginia, Studenikin, Alexander, Su, Jun, Sun, Shifeng, Sun, Xilei, Sun, Yongjie, Sun, Yongzhao, Sun, Zhengyang, Suwonjandee, Narumon, Szelezniak, Michal, Tang, Jian, Tang, Qiang, Tang, Quan, Tang, Xiao, Hariharan, Vidhya Thara, Theisen, Eric, Tietzsch, Alexander, Tkachev, Igor, Tmej, Tomas, Torri, Marco Danilo Claudio, Treskov, Konstantin, Triossi, Andrea, Troni, Giancarlo, Trzaska, Wladyslaw, Tung, Yu-Chen, Tuve, Cristina, Ushakov, Nikita, Vedin, Vadim, Verde, Giuseppe, Vialkov, Maxim, Viaud, Benoit, Vollbrecht, Cornelius Moritz, Volpe, Cristina, von Sturm, Katharina, Vorobel, Vit, Voronin, Dmitriy, Votano, Lucia, Walker, Pablo, Wang, Caishen, Wang, Chung-Hsiang, Wang, En, Wang, Guoli, Wang, Jian, Wang, Jun, Wang, Lu, Wang, Meifen, Wang, Meng, Wang, Ruiguang, Wang, Siguang, Wang, Wei, Wang, Wenshuai, Wang, Xi, Wang, Xiangyue, Wang, Yangfu, Wang, Yaoguang, Wang, Yi, Wang, Yifang, Wang, Yuanqing, Wang, Yuman, Wang, Zhe, Wang, Zheng, Wang, Zhimin, Wang, Zongyi, Watcharangkool, Apimook, Wei, Wei, Wei, Wenlu, Wei, Yadong, Wen, Kaile, Wen, Liangjian, Weng, Jun, Wiebusch, Christopher, Wonsak, Bjoern, Wu, Diru, Wu, Qun, Wu, Zhi, Wurm, Michael, Wurtz, Jacques, Wysotzki, Christian, Xi, Yufei, Xia, Dongmei, Xiao, Xiang, Xie, Xiaochuan, Xie, Yuguang, Xie, Zhangquan, Xin, Zhao, Xing, Zhizhong, Xu, Benda, Xu, Cheng, Xu, Donglian, Xu, Fanrong, Xu, Hangkun, Xu, Jilei, Xu, Jing, Xu, Meihang, Xu, Yin, Xu, Yu, Yan, Baojun, Yan, Qiyu, Yan, Taylor, Yan, Wenqi, Yan, 
Xiongbo, Yan, Yupeng, Yang, Changgen, Yang, Chengfeng, Yang, Huan, Yang, Jie, Yang, Lei, Yang, Xiaoyu, Yang, Yifan, Yao, Haifeng, Ye, Jiaxuan, Ye, Mei, Ye, Ziping, Yermia, Frédéric, You, Zhengyun, Yu, Boxiang, Yu, Chiye, Yu, Chunxu, Yu, Hongzhao, Yu, Miao, Yu, Xianghui, Yu, Zeyuan, Yu, Zezhong, Yuan, Cenxi, Yuan, Chengzhuo, Yuan, Ying, Yuan, Zhenxiong, Yue, Baobiao, Zafar, Noman, Zavadskyi, Vitalii, Zeng, Shan, Zeng, Tingxuan, Zeng, Yuda, Zhan, Liang, Zhang, Aiqiang, Zhang, Bin, Zhang, Binting, Zhang, Feiyang, Zhang, Guoqing, Zhang, Honghao, Zhang, Jialiang, Zhang, Jiawen, Zhang, Jie, Zhang, Jin, Zhang, Jingbo, Zhang, Jinnan, Zhang, Mohan, Zhang, Peng, Zhang, Qingmin, Zhang, Shiqi, Zhang, Shu, Zhang, Tao, Zhang, Xiaomei, Zhang, Xin, Zhang, Xuantong, Zhang, Yinhong, Zhang, Yiyu, Zhang, Yongpeng, Zhang, Yu, Zhang, Yuanyuan, Zhang, Yumei, Zhang, Zhenyu, Zhang, Zhijian, Zhao, Jie, Zhao, Rong, Zhao, Runze, Zhao, Shujun, Zheng, Dongqin, Zheng, Hua, Zheng, Yangheng, Zhong, Weirong, Zhou, Jing, Zhou, Li, Zhou, Nan, Zhou, Shun, Zhou, Tong, Zhou, Xiang, Zhu, Jingsen, Zhu, Kangfu, Zhu, Kejun, Zhu, Zhihang, Zhuang, Bo, Zhuang, Honglin, Zong, Liang, Zou, Jiaheng, and Zwickel, Sebastian
- Subjects
High Energy Physics - Experiment ,Astrophysics - High Energy Astrophysical Phenomena ,High Energy Physics - Phenomenology - Abstract
We discuss JUNO sensitivity to the annihilation of MeV dark matter in the galactic halo via detecting inverse beta decay reactions of electron anti-neutrinos resulting from the annihilation. We study possible backgrounds to the signature, including the reactor neutrinos, diffuse supernova neutrino background, charged- and neutral-current interactions of atmospheric neutrinos, backgrounds from muon-induced fast neutrons and cosmogenic isotopes. A fiducial volume cut, as well as the pulse shape discrimination and the muon veto are applied to suppress the above backgrounds. It is shown that JUNO sensitivity to the thermally averaged dark matter annihilation rate in 10 years of exposure would be significantly better than the present-day best limit set by Super-Kamiokande and would be comparable to that expected by Hyper-Kamiokande., Comment: 25 pages, 9 figures, matches the published version
- Published
- 2023
- Full Text
- View/download PDF
43. On the masses of light pseudoscalar mesons
- Author
-
Liu, Chang-Yong and He, Wei
- Subjects
Physics - General Physics - Abstract
We investigate the masses of light pseudoscalar mesons by a method based on a new anomaly-free condition for the axial vector current. From this viewpoint, the field theories discussed here do not have the $U(1)$ problem. We calculate the masses of nine light pseudoscalar mesons, with the theoretical results agreeing reasonably well with experiment., Comment: 22 pages, 9 figures
- Published
- 2023
44. Radiomics analysis of lung CT for multidrug resistance prediction in active tuberculosis: a multicentre study
- Author
-
Li, Ye, Xu, Zexuan, Lv, Xinna, Li, Chenghai, He, Wei, Lv, Yan, and Hou, Dailun
- Subjects
Tuberculosis ,Rare Diseases ,Infection ,Good Health and Well Being ,Humans ,Retrospective Studies ,Tomography ,X-Ray Computed ,Tuberculosis ,Multidrug-Resistant ,Lung ,Drug Resistance ,Multiple ,Pulmonary tuberculosis ,Drug resistance ,Radiomics ,Machine learning ,Clinical Sciences ,Nuclear Medicine & Medical Imaging - Abstract
Objectives: Multidrug-resistant TB (MDR-TB) is a severe burden and public health threat worldwide. This study aimed to develop a radiomics model based on the tree-in-bud (TIB) sign and nodules and validate its predictive performance for MDR-TB.
Methods: We retrospectively recruited 454 patients with proven active TB from two hospitals and classified them into three training and testing cohorts: TIB (n = 295, 102), nodules (n = 302, 97), and their combination (n = 261, 81). Radiomics features relating to TIB and nodules were separately extracted. The maximal information coefficient and recursive feature elimination were used to select informative features per the two signs. Two radiomics models were constructed to predict MDR-TB using a random forest classifier. Then, a combined model was built incorporating radiomics features based on these two signs. The capability of the models in the combined training and testing cohorts was validated with ROC curves.
Results: Sixteen features were extracted from TIB and 15 from nodules. The AUCs of the combined model were slightly higher than those of the TIB model in the combined training cohort (0.911 versus 0.877, p 0.05) and testing cohort (0.820 versus 0.786, p 0.05) and testing cohort (0.820 versus 0.855, p > 0.05).
Conclusions: The CT-based radiomics models hold promise for use as a non-invasive tool in the prediction of MDR-TB.
Clinical relevance statement: Our study revealed that complementary information regarding MDR-TB can be provided by radiomics based on the TIB sign and nodules. The proposed radiomics models may be new markers to predict MDR in active TB patients.
Key points: • This is the first study to build, validate, and apply radiomics based on tree-in-bud sign and nodules for the prediction of MDR-TB. • The radiomics model showed a favorable performance for the identification of MDR-TB. • The combined model holds potential to be used as a diagnostic tool in routine clinical practice.
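The modelling pipeline described in this abstract (feature selection by recursive feature elimination, a random forest classifier, and ROC-AUC validation) can be sketched with scikit-learn. This is an illustrative sketch on synthetic data only: the feature matrix, labels, cohort sizes, and hyperparameters below are assumptions for demonstration, not the paper's actual radiomics data or settings; only the choice of 16 selected features echoes the number reported for the TIB sign.

```python
# Hedged sketch of an RFE + random forest + ROC-AUC pipeline, as described
# in the abstract above. All data are synthetic; sizes and hyperparameters
# are illustrative assumptions, not values from the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features = 300, 100  # stand-in for radiomics features per sign
X = rng.normal(size=(n_patients, n_features))
# Synthetic binary labels (MDR vs drug-sensitive), weakly driven by a few features
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Recursive feature elimination down to 16 features, wrapped around a forest
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=16)
selector.fit(X_train, y_train)

# Final random forest trained on the selected features only
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_train), y_train)

# Validate with the area under the ROC curve on the held-out cohort
probs = clf.predict_proba(selector.transform(X_test))[:, 1]
auc = roc_auc_score(y_test, probs)
print(f"test AUC = {auc:.3f}")
```

In the study itself, feature selection also used the maximal information coefficient before RFE; that pre-filtering step is omitted here for brevity.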
- Published
- 2023
45. DPH Probe Method for Liposome-Membrane Fluidity Determination
- Author
-
He, Wei, primary
- Published
- 2023
- Full Text
- View/download PDF
46. RPnP Pose Estimation Optimized by Comprehensive Learning Pigeon-Inspired Optimization for Autonomous Aerial Refueling
- Author
-
Sun, Yongbin, primary, Xia, Xiaofeng, additional, Xin, Long, additional, and He, Wei, additional
- Published
- 2023
- Full Text
- View/download PDF
47. Domestic Heating, Cooking and Baseload Emissions and Life Cycle Cost Analysis of Technological Solutions
- Author
-
Ryland, Michael, primary and He, Wei, additional
- Published
- 2023
- Full Text
- View/download PDF
48. Behavior Analysis of Cooperative-Antagonistic Networks with Heterogeneous Delays
- Author
-
Zou, Yao, primary, Zhong, Liangyin, additional, and He, Wei, additional
- Published
- 2023
- Full Text
- View/download PDF
49. A Hyperparameter Quality Assessment Method for UAV Object Detection Based on IER Rule
- Author
-
Kang, Xiao, primary, Mu, Quanqi, additional, Han, Wence, additional, Zhu, Hailong, additional, He, Wei, additional, and Huang, Zhipeng, additional
- Published
- 2023
- Full Text
- View/download PDF
50. Wind load characteristics of single column three-sided billboards on sloping terrain
- Author
-
Zhang, Yu, primary, Yuan, Qi, additional, and He, Wei, additional
- Published
- 2023
- Full Text
- View/download PDF