13,604 results for "Han, Xu"
Search Results
52. A performance evaluation method based on combination of knowledge graph and surrogate model
- Author
- Han, Xu, Liu, Xinyu, Wang, Honghui, and Liu, Guijie
- Published
- 2024
- Full Text
- View/download PDF
53. New poly-types of LPSO structures in a non-equilibrium Mg97Zn1Y1.6Ca0.4 alloy
- Author
- Jin, Qian-qian, Tang, Zi-hui, Xiao, Wen-long, Qu, Xiu-yu, Han, Xu-hao, Mei, Lin, Shao, Xiao-hong, and Ma, Xiu-liang
- Published
- 2024
- Full Text
- View/download PDF
54. An Edge Algorithm for Assessing the Severity of Insulator Discharges Using a Lightweight Improved YOLOv8
- Author
- Yang, Yang, Geng, SanPing, Cheng, Chi, Yang, Xuan, Wu, PeiYao, Han, Xu, and Zhang, HangYuan
- Published
- 2024
- Full Text
- View/download PDF
55. Topological valley Hall polariton condensation
- Author
- Peng, Kai, Li, Wei, Sun, Meng, Rivero, Jose D. H., Ti, Chaoyang, Han, Xu, Ge, Li, Yang, Lan, Zhang, Xiang, and Bao, Wei
- Published
- 2024
- Full Text
- View/download PDF
56. Unified View of Grokking, Double Descent and Emergent Abilities: A Perspective from Circuits Competition
- Author
- Huang, Yufei, Hu, Shengding, Han, Xu, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Machine Learning
- Abstract
Recent studies have uncovered intriguing phenomena in deep learning, such as grokking, double descent, and emergent abilities in large language models, which challenge human intuition and are crucial for a deeper understanding of neural models. In this paper, we present a comprehensive framework that provides a unified view of these three phenomena, focusing on the competition between memorization and generalization circuits. This approach, initially employed to explain grokking, is extended in our work to encompass a wider range of model sizes and training data volumes. Our framework delineates four distinct training dynamics, each depending on varying combinations of model size and training data quantity. Utilizing this framework, we provide a detailed analysis of the double descent phenomenon and propose two verifiable predictions regarding its occurrence, both substantiated by our experimental results. Moreover, we expand our framework to the multi-task learning paradigm, demonstrating how algorithm tasks can be turned into emergent abilities. This offers a novel perspective to understand emergent abilities in Large Language Models., Comment: 13 pages, 10 figures
- Published
- 2024
57. Ouroboros: Generating Longer Drafts Phrase by Phrase for Faster Speculative Decoding
- Author
- Zhao, Weilin, Huang, Yuxiang, Han, Xu, Xu, Wang, Xiao, Chaojun, Zhang, Xinrong, Fang, Yewei, Zhang, Kaihuo, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language
- Abstract
Speculative decoding is a widely used method that accelerates the generation process of large language models (LLMs) with no compromise in model performance. It achieves this goal by using an existing smaller model for drafting and then employing the target LLM to verify the draft in a low-cost parallel manner. Under such a drafting-verification framework, drafting efficiency has become a bottleneck in the final speedup of speculative decoding. Therefore, generating longer drafts at less cost can lead to better decoding speedup. To achieve this, we introduce Ouroboros, which can generate draft phrases to parallelize the drafting process and meanwhile lengthen drafts in a training-free manner. The experimental results on various typical text generation tasks show that Ouroboros can achieve speedups of up to $2.8\times$ over speculative decoding and $3.9\times$ over vanilla decoding, without fine-tuning draft and target models. The source code of Ouroboros is available at https://github.com/thunlp/Ouroboros., Comment: EMNLP 2024
- Published
- 2024
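The abstract for entry 57 describes the general drafting-verification framework behind speculative decoding. As a rough, self-contained illustration of that framework only (not the Ouroboros implementation; `draft_next` and `target_next` are hypothetical toy stand-ins for a small draft model and the target LLM), a greedy draft-then-verify loop might look like this:

```python
def draft_next(ctx):                     # cheap draft model (toy rule)
    return (sum(ctx) * 7 + 3) % 11

def target_next(ctx):                    # expensive target model (toy rule)
    return (sum(ctx) * 7 + 3) % 11 if len(ctx) % 5 else (sum(ctx) + 1) % 11

def speculative_step(ctx, k=4):
    """Draft k tokens, then keep the longest prefix the target agrees with."""
    draft = []
    for _ in range(k):
        draft.append(draft_next(ctx + draft))
    accepted = []
    for tok in draft:                    # in practice verified in one parallel target pass
        expected = target_next(ctx + accepted)
        accepted.append(expected)
        if expected != tok:              # first disagreement: stop after the correction
            return accepted
    accepted.append(target_next(ctx + accepted))   # bonus token when all drafts pass
    return accepted

ctx = [1, 2, 3]
for _ in range(5):
    ctx += speculative_step(ctx)
print(ctx)
```

The paper's contribution concerns generating longer drafts phrase by phrase; the sketch above only shows why longer accepted drafts translate into fewer expensive target passes.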
58. $\infty$Bench: Extending Long Context Evaluation Beyond 100K Tokens
- Author
- Zhang, Xinrong, Chen, Yingfa, Hu, Shengding, Xu, Zihang, Chen, Junhao, Hao, Moo Khai, Han, Xu, Thai, Zhen Leng, Wang, Shuo, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language
- Abstract
Processing and reasoning over long contexts is crucial for many practical applications of Large Language Models (LLMs), such as document comprehension and agent construction. Despite recent strides in making LLMs process contexts with more than 100K tokens, there is currently a lack of a standardized benchmark to evaluate this long-context capability. Existing public benchmarks typically focus on contexts around 10K tokens, limiting the assessment and comparison of LLMs in processing longer contexts. In this paper, we propose $\infty$Bench, the first LLM benchmark featuring an average data length surpassing 100K tokens. $\infty$Bench comprises synthetic and realistic tasks spanning diverse domains, presented in both English and Chinese. The tasks in $\infty$Bench are designed to require well understanding of long dependencies in contexts, and make simply retrieving a limited number of passages from contexts not sufficient for these tasks. In our experiments, based on $\infty$Bench, we evaluate the state-of-the-art proprietary and open-source LLMs tailored for processing long contexts. The results indicate that existing long context LLMs still require significant advancements to effectively process 100K+ context. We further present three intriguing analyses regarding the behavior of LLMs processing long context.
- Published
- 2024
59. OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems
- Author
- He, Chaoqun, Luo, Renjie, Bai, Yuzhuo, Hu, Shengding, Thai, Zhen Leng, Shen, Junhao, Hu, Jinyi, Han, Xu, Huang, Yujie, Zhang, Yuxiang, Liu, Jie, Qi, Lei, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language
- Abstract
Recent advancements have seen Large Language Models (LLMs) and Large Multimodal Models (LMMs) surpassing general human capabilities in various tasks, approaching the proficiency level of human experts across multiple domains. With traditional benchmarks becoming less challenging for these models, new rigorous challenges are essential to gauge their advanced abilities. In this work, we present OlympiadBench, an Olympiad-level bilingual multimodal scientific benchmark, featuring 8,476 problems from Olympiad-level mathematics and physics competitions, including the Chinese college entrance exam. Each problem is detailed with expert-level annotations for step-by-step reasoning. Evaluating top-tier models on OlympiadBench, we implement a comprehensive assessment methodology to accurately evaluate model responses. Notably, the best-performing model, GPT-4V, attains an average score of 17.97% on OlympiadBench, with a mere 10.74% in physics, highlighting the benchmark rigor and the intricacy of physical reasoning. Our analysis orienting GPT-4V points out prevalent issues with hallucinations, knowledge omissions, and logical fallacies. We hope that our challenging benchmark can serve as a valuable resource for helping future AGI research endeavors. The data and evaluation code are available at \url{https://github.com/OpenBMB/OlympiadBench}, Comment: Accepted by ACL 2024 (main), update
- Published
- 2024
60. ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models
- Author
- Song, Chenyang, Han, Xu, Zhang, Zhengyan, Hu, Shengding, Shi, Xiyu, Li, Kuai, Chen, Chen, Liu, Zhiyuan, Li, Guangli, Yang, Tao, and Sun, Maosong
- Subjects
- Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, I.2.7
- Abstract
Activation sparsity refers to the existence of considerable weakly-contributed elements among activation outputs. As a prevalent property of the models using the ReLU activation function, activation sparsity has been proven a promising paradigm to boost model inference efficiency. Nevertheless, most large language models (LLMs) adopt activation functions without intrinsic activation sparsity (e.g., GELU and Swish). Some recent efforts have explored introducing ReLU or its variants as the substitutive activation function to help LLMs achieve activation sparsity and inference acceleration, but few can simultaneously obtain high sparsity and comparable model performance. This paper introduces a simple and effective sparsification method named "ProSparse" to push LLMs for higher activation sparsity while maintaining comparable performance. Specifically, after substituting the activation function of LLMs with ReLU, ProSparse adopts progressive sparsity regularization with a factor smoothly increasing along the multi-stage sine curves. This can enhance activation sparsity and mitigate performance degradation by avoiding radical shifts in activation distributions. With ProSparse, we obtain high sparsity of 89.32% for LLaMA2-7B, 88.80% for LLaMA2-13B, and 87.89% for end-size MiniCPM-1B, respectively, achieving comparable performance to their original Swish-activated versions. These present the most sparsely activated models among open-source LLaMA versions and competitive end-size models, considerably surpassing ReluLLaMA-7B (66.98%) and ReluLLaMA-13B (71.56%). Our inference acceleration experiments further demonstrate the significant practical acceleration potential of LLMs with higher activation sparsity, obtaining up to 4.52$\times$ inference speedup., Comment: 19 pages, 4 figures, 9 tables
- Published
- 2024
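Entry 60's abstract mentions a regularization factor that increases smoothly along multi-stage sine curves. A minimal sketch of such a schedule, with made-up stage boundaries and peak values rather than the paper's settings, could be:

```python
import math

def prosparse_factor(step, stages=((0, 1000, 1e-3), (1000, 3000, 5e-3), (3000, 6000, 1e-2))):
    """Sparsity-regularization factor at a training step (placeholder stages)."""
    prev_peak = 0.0
    for start, end, peak in stages:
        if step < end:
            t = (step - start) / (end - start)       # progress within the stage
            # half-sine ramp from the previous stage's peak to this stage's peak
            return prev_peak + (peak - prev_peak) * math.sin(0.5 * math.pi * t)
        prev_peak = peak
    return stages[-1][2]

# The factor would scale an L1 penalty on ReLU activations, e.g.
# loss = task_loss + prosparse_factor(step) * relu_acts.abs().mean()
print([round(prosparse_factor(s), 5) for s in (0, 500, 1000, 2000, 6000)])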
61. Shall We Team Up: Exploring Spontaneous Cooperation of Competing LLM Agents
- Author
- Wu, Zengqing, Peng, Run, Zheng, Shuyuan, Liu, Qianying, Han, Xu, Kwon, Brian Inhyuk, Onizuka, Makoto, Tang, Shaojie, and Xiao, Chuan
- Subjects
- Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Computers and Society, Computer Science - Multiagent Systems, Economics - General Economics
- Abstract
Large Language Models (LLMs) have increasingly been utilized in social simulations, where they are often guided by carefully crafted instructions to stably exhibit human-like behaviors during simulations. Nevertheless, we doubt the necessity of shaping agents' behaviors for accurate social simulations. Instead, this paper emphasizes the importance of spontaneous phenomena, wherein agents deeply engage in contexts and make adaptive decisions without explicit directions. We explored spontaneous cooperation across three competitive scenarios and successfully simulated the gradual emergence of cooperation, findings that align closely with human behavioral data. This approach not only aids the computational social science community in bridging the gap between simulations and real-world dynamics but also offers the AI community a novel method to assess LLMs' capability of deliberate reasoning., Comment: EMNLP 2024 Findings. Source codes available at https://github.com/wuzengqing001225/SABM_ShallWeTeamUp
- Published
- 2024
62. LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks
- Author
- Wang, Hanqing, Ping, Bowen, Wang, Shuo, Han, Xu, Chen, Yun, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language
- Abstract
LoRA employs lightweight modules to customize large language models (LLMs) for each downstream task or domain, where different learned additional modules represent diverse skills. Combining existing LoRAs to address new tasks can enhance the reusability of learned LoRAs, particularly beneficial for tasks with limited annotated data. Most prior works on LoRA combination primarily rely on task-level weights for each involved LoRA, making different examples and tokens share the same LoRA weights. However, in generative tasks, different tokens may necessitate diverse skills to manage. Taking the Chinese math task as an example, understanding the problem description may depend more on the Chinese LoRA, while the calculation part may rely more on the math LoRA. To this end, we propose LoRA-Flow, which utilizes dynamic weights to adjust the impact of different LoRAs. The weights at each step are determined by a fusion gate with extremely few parameters, which can be learned with only 200 training examples. Experiments across six generative tasks demonstrate that our method consistently outperforms baselines with task-level fusion weights. This underscores the necessity of introducing dynamic fusion weights for LoRA combination., Comment: Work in Progress
- Published
- 2024
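Entry 62 describes fusing several LoRA modules with per-token weights produced by a small gate. A minimal PyTorch sketch of that idea follows; the shapes and gate design are assumptions for illustration, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class DynamicLoRAFusion(nn.Module):
    """Fuse several LoRA deltas with per-token weights from a tiny gate."""
    def __init__(self, d_model, loras):
        super().__init__()
        self.loras = nn.ModuleList(loras)            # each maps d_model -> d_model
        self.gate = nn.Linear(d_model, len(loras))   # extremely few parameters

    def forward(self, h):                            # h: (batch, seq, d_model)
        w = torch.softmax(self.gate(h), dim=-1)      # per-token fusion weights
        deltas = torch.stack([m(h) for m in self.loras], dim=-1)
        return h + (deltas * w.unsqueeze(-2)).sum(dim=-1)

def make_lora(d, r=8):                               # low-rank adapter stand-in
    return nn.Sequential(nn.Linear(d, r, bias=False), nn.Linear(r, d, bias=False))

layer = DynamicLoRAFusion(16, [make_lora(16), make_lora(16)])
print(layer(torch.randn(2, 5, 16)).shape)            # torch.Size([2, 5, 16])
```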
63. MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization
- Author
- Yang, Zhiyu, Zhou, Zihan, Wang, Shuo, Cong, Xin, Han, Xu, Yan, Yukun, Liu, Zhenghao, Tan, Zhixing, Liu, Pengyuan, Yu, Dong, Liu, Zhiyuan, Shi, Xiaodong, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language
- Abstract
Scientific data visualization plays a crucial role in research by enabling the direct display of complex information and assisting researchers in identifying implicit patterns. Despite its importance, the use of Large Language Models (LLMs) for scientific data visualization remains rather unexplored. In this study, we introduce MatPlotAgent, an efficient model-agnostic LLM agent framework designed to automate scientific data visualization tasks. Leveraging the capabilities of both code LLMs and multi-modal LLMs, MatPlotAgent consists of three core modules: query understanding, code generation with iterative debugging, and a visual feedback mechanism for error correction. To address the lack of benchmarks in this field, we present MatPlotBench, a high-quality benchmark consisting of 100 human-verified test cases. Additionally, we introduce a scoring approach that utilizes GPT-4V for automatic evaluation. Experimental results demonstrate that MatPlotAgent can improve the performance of various LLMs, including both commercial and open-source models. Furthermore, the proposed evaluation method shows a strong correlation with human-annotated scores., Comment: Work in Progress
- Published
- 2024
64. OneBit: Towards Extremely Low-bit Large Language Models
- Author
- Xu, Yuzhuang, Han, Xu, Yang, Zonghan, Wang, Shuo, Zhu, Qingfu, Liu, Zhiyuan, Liu, Weidong, and Che, Wanxiang
- Subjects
- Computer Science - Computation and Language
- Abstract
Model quantification uses low bit-width values to represent the weight matrices of existing models to be quantized, which is a promising approach to reduce both storage and computational overheads of deploying highly anticipated LLMs. However, current quantization methods suffer severe performance degradation when the bit-width is extremely reduced, and thus focus on utilizing 4-bit or 8-bit values to quantize models. This paper boldly quantizes the weight matrices of LLMs to 1-bit, paving the way for the extremely low bit-width deployment of LLMs. For this target, we introduce a 1-bit model compressing framework named OneBit, including a novel 1-bit parameter representation method to better quantize LLMs as well as an effective parameter initialization method based on matrix decomposition to improve the convergence speed of the quantization framework. Sufficient experimental results indicate that OneBit achieves good performance (at least 81% of the non-quantized performance on LLaMA models) with robust training processes when only using 1-bit weight matrices., Comment: Accepted by NeurIPS 2024
- Published
- 2024
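Entry 64 mentions a 1-bit parameter representation and an initialization based on matrix decomposition. One plausible reading of that description, sketched with NumPy (a sign matrix plus two scale vectors taken from a rank-1 fit of the weight magnitudes; this is an illustration, not the paper's exact method):

```python
import numpy as np

def onebit_init(W):
    """Split W into 1-bit signs plus two scale vectors (rank-1 fit of |W|)."""
    S = np.sign(W)
    U, s, Vt = np.linalg.svd(np.abs(W), full_matrices=False)
    a = U[:, 0] * np.sqrt(s[0])                      # per-row scales
    b = Vt[0, :] * np.sqrt(s[0])                     # per-column scales
    return S, a, b

def onebit_matmul(x, S, a, b):
    # equals x @ (S * np.outer(a, b)).T, but the matrix it touches is only 1-bit
    return ((x * b) @ S.T) * a

W = np.random.randn(64, 32)
S, a, b = onebit_init(W)
x = np.random.randn(4, 32)
print(onebit_matmul(x, S, a, b).shape)               # (4, 64)
print(np.abs(W - S * np.outer(a, b)).mean())         # reconstruction error of the 1-bit form
```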
65. Slow-Wave Hybrid Magnonics
- Author
- Xu, Jing, Zhong, Changchun, Zhuang, Shihao, Qian, Chen, Jiang, Yu, Pishehvar, Amin, Han, Xu, Jin, Dafei, Jornet, Josep M., Zhen, Bo, Hu, Jiamian, Jiang, Liang, and Zhang, Xufeng
- Subjects
- Condensed Matter - Mesoscale and Nanoscale Physics, Physics - Applied Physics
- Abstract
Cavity magnonics is an emerging research area focusing on the coupling between magnons and photons. Despite its great potential for coherent information processing, it has been long restricted by the narrow interaction bandwidth. In this work, we theoretically propose and experimentally demonstrate a novel approach to achieve broadband photon-magnon coupling by adopting slow waves on engineered microwave waveguides. To the best of our knowledge, this is the first time that slow wave is combined with hybrid magnonics. Its unique properties promise great potentials for both fundamental research and practical applications, for instance, by deepening our understanding of the light-matter interaction in the slow wave regime and providing high-efficiency spin wave transducers. The device concept can be extended to other systems such as optomagnonics and magnomechanics, opening up new directions for hybrid magnonics., Comment: 16 pages, 10 figures
- Published
- 2024
66. InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory
- Author
- Xiao, Chaojun, Zhang, Pengle, Han, Xu, Xiao, Guangxuan, Lin, Yankai, Zhang, Zhengyan, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
Large language models (LLMs) have emerged as a cornerstone in real-world applications with lengthy streaming inputs (e.g., LLM-driven agents). However, existing LLMs, pre-trained on sequences with a restricted maximum length, cannot process longer sequences due to the out-of-domain and distraction issues. Common solutions often involve continual pre-training on longer sequences, which will introduce expensive computational overhead and uncontrollable change in model capabilities. In this paper, we unveil the intrinsic capacity of LLMs for understanding extremely long sequences without any fine-tuning. To this end, we introduce a training-free memory-based method, InfLLM. Specifically, InfLLM stores distant contexts into additional memory units and employs an efficient mechanism to lookup token-relevant units for attention computation. Thereby, InfLLM allows LLMs to efficiently process long sequences with a limited context window and well capture long-distance dependencies. Without any training, InfLLM enables LLMs that are pre-trained on sequences consisting of a few thousand tokens to achieve comparable performance with competitive baselines that continually train these LLMs on long sequences. Even when the sequence length is scaled to $1,024$K, InfLLM still effectively captures long-distance dependencies. Our code can be found in \url{https://github.com/thunlp/InfLLM}.
- Published
- 2024
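Entry 66 describes storing distant context in memory units and looking up only token-relevant units for attention. A toy sketch of that lookup idea, with placeholder unit size, scoring rule, and top-k (not the InfLLM code):

```python
import numpy as np

def lookup_units(q, keys, unit=16, topk=2):
    """Pick the memory units whose mean key is most similar to the query."""
    n_units = len(keys) // unit
    units = keys[: n_units * unit].reshape(n_units, unit, -1)
    scores = units.mean(axis=1) @ q                  # one representative key per unit
    chosen = np.argsort(scores)[-topk:]
    return units[chosen].reshape(-1, keys.shape[-1])

d, local = 32, 64
keys = np.random.randn(4096, d)                      # keys of a very long past context
q = np.random.randn(d)                               # current query
ctx_keys = np.concatenate([lookup_units(q, keys[:-local]), keys[-local:]])
scores = ctx_keys @ q                                # attention over a small set only
attn = np.exp(scores - scores.max()); attn /= attn.sum()
print(ctx_keys.shape, attn.shape)                    # (96, 32) (96,)
```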
67. UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset
- Author
- Wang, Haoyu, Wang, Shuo, Yan, Yukun, Wang, Xujia, Yang, Zhiyu, Xu, Yuzhuang, Liu, Zhenghao, Yang, Liner, Ding, Ning, Han, Xu, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language
- Abstract
Open-source large language models (LLMs) have gained significant strength across diverse fields. Nevertheless, the majority of studies primarily concentrate on English, with only limited exploration into the realm of multilingual abilities. In this work, we therefore construct an open-source multilingual supervised fine-tuning dataset. Different from previous works that simply translate English instructions, we consider both the language-specific and language-agnostic abilities of LLMs. Firstly, we introduce a knowledge-grounded data augmentation approach to elicit more language-specific knowledge of LLMs, improving their ability to serve users from different countries. Moreover, we find modern LLMs possess strong cross-lingual transfer capabilities, thus repeatedly learning identical content in various languages is not necessary. Consequently, we can substantially prune the language-agnostic supervised fine-tuning (SFT) data without any performance degradation, making multilingual SFT more efficient. The resulting UltraLink dataset comprises approximately 1 million samples across five languages (i.e., En, Zh, Ru, Fr, Es), and the proposed data construction method can be easily extended to other languages. UltraLink-LM, which is trained on UltraLink, outperforms several representative baselines across many tasks., Comment: Work in Progress
- Published
- 2024
68. ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs
- Author
- Zhang, Zhengyan, Song, Yixin, Yu, Guanghui, Han, Xu, Lin, Yankai, Xiao, Chaojun, Song, Chenyang, Liu, Zhiyuan, Mi, Zeyu, and Sun, Maosong
- Subjects
- Computer Science - Machine Learning, Computer Science - Artificial Intelligence
- Abstract
Sparse computation offers a compelling solution for the inference of Large Language Models (LLMs) in low-resource scenarios by dynamically skipping the computation of inactive neurons. While traditional approaches focus on ReLU-based LLMs, leveraging zeros in activation values, we broaden the scope of sparse LLMs beyond zero activation values. We introduce a general method that defines neuron activation through neuron output magnitudes and a tailored magnitude threshold, demonstrating that non-ReLU LLMs also exhibit sparse activation. To find the most efficient activation function for sparse computation, we propose a systematic framework to examine the sparsity of LLMs from three aspects: the trade-off between sparsity and performance, the predictivity of sparsity, and the hardware affinity. We conduct thorough experiments on LLMs utilizing different activation functions, including ReLU, SwiGLU, ReGLU, and ReLU$^2$. The results indicate that models employing ReLU$^2$ excel across all three evaluation aspects, highlighting its potential as an efficient activation function for sparse LLMs. We will release the code to facilitate future research.
- Published
- 2024
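Entry 68 defines neuron activation through output magnitudes and a tailored threshold, so that even non-ReLU activations can be treated as sparse. A small numerical illustration of that notion (the percentile-based threshold here is an assumption, not the paper's procedure):

```python
import numpy as np

def silu(x):                                         # a non-ReLU activation
    return x / (1.0 + np.exp(-x))

acts = silu(np.random.randn(10000))
threshold = np.percentile(np.abs(acts), 70)          # aim for ~70% sparsity (placeholder)
mask = np.abs(acts) > threshold                      # neurons whose computation is kept
sparsity = 1.0 - mask.mean()
error = np.abs(np.where(mask, acts, 0.0) - acts).mean()
print(f"sparsity={sparsity:.2f}, mean activation error={error:.4f}")
```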
69. MatSAM: Efficient Extraction of Microstructures of Materials via Visual Large Model
- Author
- Li, Changtai, Han, Xu, Yao, Chao, and Ban, Xiaojuan
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Efficient and accurate extraction of microstructures in micrographs of materials is essential in process optimization and the exploration of structure-property relationships. Deep learning-based image segmentation techniques that rely on manual annotation are laborious and time-consuming and hardly meet the demand for model transferability and generalization on various source images. Segment Anything Model (SAM), a large visual model with powerful deep feature representation and zero-shot generalization capabilities, has provided new solutions for image segmentation. In this paper, we propose MatSAM, a general and efficient microstructure extraction solution based on SAM. A simple yet effective point-based prompt generation strategy is designed, grounded on the distribution and shape of microstructures. Specifically, in an unsupervised and training-free way, it adaptively generates prompt points for different microscopy images, fuses the centroid points of the coarsely extracted region of interest (ROI) and native grid points, and integrates corresponding post-processing operations for quantitative characterization of microstructures of materials. For common microstructures including grain boundary and multiple phases, MatSAM achieves superior zero-shot segmentation performance to conventional rule-based methods and is even preferable to supervised learning methods evaluated on 16 microscopy datasets whose micrographs are imaged by the optical microscope (OM) and scanning electron microscope (SEM). Especially, on 4 public datasets, MatSAM shows unexpected competitive segmentation performance against their specialist models. We believe that, without the need for human labeling, MatSAM can significantly reduce the cost of quantitative characterization and statistical analysis of extensive microstructures of materials, and thus accelerate the design of new materials., Comment: 18 pages, 8 figures, and 5 tables. Updated with revision and code repository
- Published
- 2024
70. Under Einstein’s Microscope: Measuring Properties of Individual Rotating Massive Stars from Extragalactic Microcaustic Crossings
- Author
- Han, Xu and Dai, Liang
- Subjects
- Astronomical Sciences, Physical Sciences, Astronomical and Space Sciences, Atomic, Molecular, Nuclear, Particle and Plasma Physics, Physical Chemistry (incl. Structural), Astronomy & Astrophysics, Astronomical sciences, Particle and high energy physics, Space sciences
- Abstract
Highly magnified stars residing in caustic crossing lensed galaxies at z ≃ 0.7-1.5 in galaxy cluster lensing fields inevitably exhibit recurrent brightening events as they traverse a microcaustic network cast down by foreground intracluster stars. The detectable ones belong to nature’s most massive and luminous class of stars, with evolved blue supergiants being the brightest ones at optical wavelengths. Considering single stars in this work, we study to what extent intrinsic stellar parameters are measurable from multifilter light curves, which can be obtained with optical/near-IR space telescopes during one or multiple caustic crossing events. We adopt a realistic model for the axisymmetric surface brightness profiles of rotating O/B stars and develop a numerical lensing code that treats finite source size effects. With a single microcaustic crossing, the ratio of the surface rotation velocity to the breakup value is measurable to a precision of ∼0.1-0.2 for feasible observation parameters with current space telescopes, with all unknown intrinsic and extrinsic parameters marginalized over and without a degeneracy with inclination. Equatorial radius and bolometric luminosity can be measured to 1/3 and 2/3 of the fractional uncertainty in the microcaustic strength, for which the value is not known at each crossing but an informative prior can be obtained from theory. Parameter inference precision may be further improved if multiple caustic crossing events for the same lensed star are jointly analyzed. Our results imply new opportunities to survey individual massive stars in star formation sites at z ≃ 0.7-1.5 or beyond.
- Published
- 2024
71. Generating High-Precision Force Fields for Molecular Dynamics Simulations to Study Chemical Reaction Mechanisms using Molecular Configuration Transformer
- Author
- Yuan, Sihao, Han, Xu, Zhang, Jun, Xie, Zhaoxin, Fan, Cheng, Xiao, Yunlong, Gao, Yi Qin, and Yang, Yi Isaac
- Subjects
- Physics - Chemical Physics, Condensed Matter - Soft Condensed Matter, Computer Science - Artificial Intelligence
- Abstract
Theoretical studies on chemical reaction mechanisms have been crucial in organic chemistry. Traditionally, calculating the manually constructed molecular conformations of transition states for chemical reactions using quantum chemical calculations is the most commonly used method. However, this way is heavily dependent on individual experience and chemical intuition. In our previous study, we proposed a research paradigm that uses enhanced sampling in molecular dynamics simulations to study chemical reactions. This approach can directly simulate the entire process of a chemical reaction. However, the computational speed limits the use of high-precision potential energy functions for simulations. To address this issue, we present a scheme for training high-precision force fields for molecular modeling using a previously developed graph-neural-network-based molecular model, molecular configuration transformer. This potential energy function allows for highly accurate simulations at a low computational cost, leading to more precise calculations of the mechanism of chemical reactions. We applied this approach to study a Claisen rearrangement reaction and a Carbonyl insertion reaction catalyzed by Manganese.
- Published
- 2023
72. Cryogenic hybrid magnonic circuits based on spalled YIG thin films
- Author
- Xu, Jing, Horn, Connor, Jiang, Yu, Li, Xinhao, Rosenmann, Daniel, Han, Xu, Levy, Miguel, Guha, Supratik, and Zhang, Xufeng
- Subjects
- Condensed Matter - Mesoscale and Nanoscale Physics, Quantum Physics
- Abstract
Yttrium iron garnet (YIG) magnonics has sparked extensive research interests toward harnessing magnons (quasiparticles of collective spin excitation) for signal processing. In particular, YIG magnonics-based hybrid systems exhibit great potentials for quantum information science because of their wide frequency tunability and excellent compatibility with other platforms. However, the broad application and scalability of thin-film YIG devices in the quantum regime has been severely limited due to the substantial microwave loss in the host substrate for YIG, gadolinium gallium garnet (GGG), at cryogenic temperatures. In this study, we demonstrate that substrate-free YIG thin films can be obtained by introducing the controlled spalling and layer transfer technology to YIG/GGG samples. Our approach is validated by measuring a hybrid device consisting of a superconducting resonator and a spalled YIG film, which gives a strong coupling feature indicating the good coherence of our system. This advancement paves the way for enhanced on-chip integration and the scalability of YIG-based quantum devices., Comment: 10 pages, 8 figures
- Published
- 2023
73. Under Einstein's Microscope: Measuring Properties of Individual Rotating Massive Stars From Extragalactic Micro Caustic Crossings
- Author
- Han, Xu and Dai, Liang
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
Highly magnified stars residing in caustic crossing lensed galaxies at z ~ 0.7-1.5 in galaxy cluster lensing fields inevitably exhibit recurrent brightening events as they traverse a micro caustic network cast down by foreground intracluster stars. The detectable ones belong to Nature's most massive and luminous class of stars, with evolved blue supergiants being the brightest ones at optical wavelengths. Considering single stars in this work, we study to what extent intrinsic stellar parameters are measurable from multi-filter lightcurves, which can be obtained with optical/near-IR space telescopes during one or multiple caustic crossing events. We adopt a realistic model for the axisymmetric surface brightness profiles of rotating O/B stars and develop a numerical lensing code that treats finite-source-size effects. With a single micro caustic crossing, the ratio of the surface rotation velocity to the breakup value is measurable to an precision of ~ 0.1-0.2 for feasible observation parameters with current space telescopes, with all unknown intrinsic and extrinsic parameters marginalized over and without a degeneracy with inclination. Equatorial radius and bolometric luminosity can be measured to 1/3 and 2/3 of the fractional uncertainty in the micro caustic strength, for which the value is not known at each crossing but an informative prior can be obtained from theory. Parameter inference precision may be further improved if multiple caustic crossing events for the same lensed star are jointly analyzed. Our results imply new opportunities to survey individual massive stars in star-formation sites at z ~ 0.7-1.5 or beyond., Comment: 16 pages, 7 figures. It's accepted to the Astrophysical Journal
- Published
- 2023
74. Stochastic gravitational waves produced by the first-order QCD phase transition
- Author
- Han, Xu and Shao, Guoyun
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
We investigate the stochastic gravitational waves background arising from the first-order QCD chiral phase transition, considering three distinct sources: bubble collisions, sound waves, and fluid turbulence. Within the framework of the Polyakov-Nambu-Jona-Lasinio (PNJL) model, we calculate the parameters governing the intensity of the gravitational wave phase transition and determine their magnitudes along the adiabatic evolutionary path. We introduce the effective bag constant $B_{\mathrm{eff}}$ related to the dynamical evolution of quarks to evaluate the intensity of the phase transition. By calculating expanded potential at the point of false vacuum, we find that all the bubbles are in the mode of runaway, leading the velocity of the bubble wall to the speed of light. The resulting gravitational wave energy spectrum is estimated, revealing a characteristic amplitude of the generated gravitational waves within the centihertz frequency range. We present the gravitational wave spectrum and compare it with the sensitivity range of detectors, and find that the gravitational wave spectra generated by these sources have the potential to be detected by future detectors such as BBO and $\mu$ARES.
- Published
- 2023
75. Hypoxia drives estrogen receptor β-mediated cell growth via transcription activation in non-small cell lung cancer
- Author
- Su, Qi, Chen, Kun, Ren, Jiayan, Zhang, Yu, Han, Xu, Leong, Sze Wei, Wang, Jingjing, Wu, Qing, Tu, Kaihui, Sarwar, Ammar, and Zhang, Yanmin
- Published
- 2024
- Full Text
- View/download PDF
76. Spectral-Spatial Blockwise Masked Transformer With Contrastive Multi-View Learning for Hyperspectral Image Classification
- Author
- Hu, Han, Liu, Zhenhui, Xu, Ziqing, Wang, Haoyi, Li, Xianju, Han, Xu, Peng, Jianyi, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Lin, Zhouchen, editor, Cheng, Ming-Ming, editor, He, Ran, editor, Ubul, Kurban, editor, Silamu, Wushouer, editor, Zha, Hongbin, editor, Zhou, Jie, editor, and Liu, Cheng-Lin, editor
- Published
- 2025
- Full Text
- View/download PDF
77. Novel use of a − 20°C cryoprotectant yields high viability and improved aggregation of marine sponge cells
- Author
- Urban-Gedamke, Elizabeth, Conkling, Megan, Goodman, Cynthia, Han, Xu, and Pomponi, Shirley A.
- Published
- 2024
- Full Text
- View/download PDF
78. Structural and functional changes following brain surgery in pediatric patients with intracranial space-occupying lesions
- Author
- Guan, Xueyi, Zheng, Wenjian, Fan, Kaiyu, Han, Xu, Hu, Bohan, Li, Xiang, Yan, Zihan, Lu, Zheng, and Gong, Jian
- Published
- 2024
- Full Text
- View/download PDF
79. Recent progress in printing flexible electronics: A review
- Author
- Bi, Sheng, Gao, BuHan, Han, Xu, He, ZhengRan, Metts, Jacob, Jiang, ChengMing, and Asare-Yeboah, Kyeiwaa
- Published
- 2024
- Full Text
- View/download PDF
80. 2D metal azolate framework for efficient CO2 photoreduction
- Author
- Gu, Jianxia, Wang, Lingxin, Han, Xu, He, Jingting, You, Siqi, Dong, Man, Shan, Guogang, He, Danfeng, Zhou, Fujiang, Sun, Chunyi, and Su, Zhongmin
- Published
- 2024
- Full Text
- View/download PDF
81. The genetic basis and process of inbreeding depression in an elite hybrid rice
- Author
- Xu, Xiaodong, Xu, Yawen, Che, Jian, Han, Xu, Wang, Zhengji, Wang, Xianmeng, Zhang, Qinghua, Li, Xu, Zhang, Qinglu, Xiao, Jinghua, Li, Xianghua, Zhang, Qifa, and Ouyang, Yidan
- Published
- 2024
- Full Text
- View/download PDF
82. Parkin deficiency promotes liver cancer metastasis by TMEFF1 transcription activation via TGF-β/Smad2/3 pathway
- Author
- Su, Qi, Wang, Jing-jing, Ren, Jia-yan, Wu, Qing, Chen, Kun, Tu, Kai-hui, Zhang, Yu, Leong, Sze Wei, Sarwar, Ammar, Han, Xu, Zhang, Mi, Dai, Wei-feng, and Zhang, Yan-min
- Published
- 2024
- Full Text
- View/download PDF
83. PDE3B regulates KRT6B and increases the sensitivity of bladder cancer cells to copper ionophores
- Author
- Feng, Yuankang, Huang, Zhenlin, Song, Liang, Li, Ningyang, Li, Xiang, Shi, Huihui, Liu, Ruoyang, Lu, Fubo, Han, Xu, Ding, Yafei, Ding, Yinghui, Wang, Jun, Yang, Jinjian, and Jia, Zhankui
- Published
- 2024
- Full Text
- View/download PDF
84. Sulfonated poly (aryl ether ketone sulfone) modified by polyoxometalates LaW10 clusters for proton exchange membranes with high proton conduction performance
- Author
- Liu, Meng-Long, Han, Xu, He, Wen-Wen, Jiang, Feng-Yu, Ji, Fang, Shen, Wang-Wang, Zhou, Tao, Xu, Jing-Mei, and Lan, Ya-Qian
- Published
- 2024
- Full Text
- View/download PDF
85. Covalent triazine frameworks modified by ultrafine Pt nanoparticles for efficient photocatalytic hydrogen production
- Author
- Han, Xu, Ge, Xueying, He, Wen-Wen, Shen, Wangwang, Zhou, Tao, Wang, Jian-Sen, Zhong, Rong-Lin, Al-Enizi, Abdullah M., Nafady, Ayman, and Ma, Shengqian
- Published
- 2024
- Full Text
- View/download PDF
86. Traffic Sign Interpretation in Real Road Scene
- Author
- Yang, Chuang, Zhuang, Kai, Chen, Mulin, Ma, Haozhao, Han, Xu, Han, Tao, Guo, Changxing, Han, Han, Zhao, Bingxuan, and Wang, Qi
- Subjects
- Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
- Abstract
Most existing traffic sign-related works are dedicated to detecting and recognizing part of traffic signs individually, which fails to analyze the global semantic logic among signs and may convey inaccurate traffic instruction. Following the above issues, we propose a traffic sign interpretation (TSI) task, which aims to interpret global semantic interrelated traffic signs (e.g.,~driving instruction-related texts, symbols, and guide panels) into a natural language for providing accurate instruction support to autonomous or assistant driving. Meanwhile, we design a multi-task learning architecture for TSI, which is responsible for detecting and recognizing various traffic signs and interpreting them into a natural language like a human. Furthermore, the absence of a public TSI available dataset prompts us to build a traffic sign interpretation dataset, namely TSI-CN. The dataset consists of real road scene images, which are captured from the highway and the urban way in China from a driver's perspective. It contains rich location labels of texts, symbols, and guide panels, and the corresponding natural language description labels. Experiments on TSI-CN demonstrate that the TSI task is achievable and the TSI architecture can interpret traffic signs from scenes successfully even if there is a complex semantic logic among signs. The TSI-CN dataset and the source code of the TSI architecture will be publicly available after the revision process.
- Published
- 2023
87. MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation
- Author
- Wang, Xiaozhi, Peng, Hao, Guan, Yong, Zeng, Kaisheng, Chen, Jianhui, Hou, Lei, Han, Xu, Lin, Yankai, Liu, Zhiyuan, Xie, Ruobing, Zhou, Jie, and Li, Juanzi
- Subjects
- Computer Science - Computation and Language
- Abstract
Understanding events in texts is a core objective of natural language understanding, which requires detecting event occurrences, extracting event arguments, and analyzing inter-event relationships. However, due to the annotation challenges brought by task complexity, a large-scale dataset covering the full process of event understanding has long been absent. In this paper, we introduce MAVEN-Arg, which augments MAVEN datasets with event argument annotations, making the first all-in-one dataset supporting event detection, event argument extraction (EAE), and event relation extraction. As an EAE benchmark, MAVEN-Arg offers three main advantages: (1) a comprehensive schema covering 162 event types and 612 argument roles, all with expert-written definitions and examples; (2) a large data scale, containing 98,591 events and 290,613 arguments obtained with laborious human annotation; (3) the exhaustive annotation supporting all task variants of EAE, which annotates both entity and non-entity event arguments in document level. Experiments indicate that MAVEN-Arg is quite challenging for both fine-tuned EAE models and proprietary large language models (LLMs). Furthermore, to demonstrate the benefits of an all-in-one dataset, we preliminarily explore a potential application, future event prediction, with LLMs. MAVEN-Arg and codes can be obtained from https://github.com/THU-KEG/MAVEN-Argument., Comment: Accepted at ACL 2024. Camera-ready version
- Published
- 2023
88. Bayesian Conditional Diffusion Models for Versatile Spatiotemporal Turbulence Generation
- Author
- Gao, Han, Han, Xu, Fan, Xiantao, Sun, Luning, Liu, Li-Ping, Duan, Lian, and Wang, Jian-Xun
- Subjects
- Physics - Fluid Dynamics, Computer Science - Machine Learning
- Abstract
Turbulent flows have historically presented formidable challenges to predictive computational modeling. Traditional numerical simulations often require vast computational resources, making them infeasible for numerous engineering applications. As an alternative, deep learning-based surrogate models have emerged, offering data-drive solutions. However, these are typically constructed within deterministic settings, leading to shortfall in capturing the innate chaotic and stochastic behaviors of turbulent dynamics. We introduce a novel generative framework grounded in probabilistic diffusion models for versatile generation of spatiotemporal turbulence. Our method unifies both unconditional and conditional sampling strategies within a Bayesian framework, which can accommodate diverse conditioning scenarios, including those with a direct differentiable link between specified conditions and generated unsteady flow outcomes, and scenarios lacking such explicit correlations. A notable feature of our approach is the method proposed for long-span flow sequence generation, which is based on autoregressive gradient-based conditional sampling, eliminating the need for cumbersome retraining processes. We showcase the versatile turbulence generation capability of our framework through a suite of numerical experiments, including: 1) the synthesis of LES simulated instantaneous flow sequences from URANS inputs; 2) holistic generation of inhomogeneous, anisotropic wall-bounded turbulence, whether from given initial conditions, prescribed turbulence statistics, or entirely from scratch; 3) super-resolved generation of high-speed turbulent boundary layer flows from low-resolution data across a range of input resolutions. Collectively, our numerical experiments highlight the merit and transformative potential of the proposed methods, making a significant advance in the field of turbulence generation., Comment: 37 pages, 31 figures
- Published
- 2023
89. Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations
- Author
- Wu, Zengqing, Peng, Run, Han, Xu, Zheng, Shuyuan, Zhang, Yixin, and Xiao, Chuan
- Subjects
- Computer Science - Artificial Intelligence, Computer Science - Computational Engineering, Finance, and Science, Computer Science - Computation and Language, Computer Science - Multiagent Systems, Economics - General Economics
- Abstract
Computer simulations offer a robust toolset for exploring complex systems across various disciplines. A particularly impactful approach within this realm is Agent-Based Modeling (ABM), which harnesses the interactions of individual agents to emulate intricate system dynamics. ABM's strength lies in its bottom-up methodology, illuminating emergent phenomena by modeling the behaviors of individual components of a system. Yet, ABM has its own set of challenges, notably its struggle with modeling natural language instructions and common sense in mathematical equations or rules. This paper seeks to transcend these boundaries by integrating Large Language Models (LLMs) like GPT into ABM. This amalgamation gives birth to a novel framework, Smart Agent-Based Modeling (SABM). Building upon the concept of smart agents -- entities characterized by their intelligence, adaptability, and computation ability -- we explore in the direction of utilizing LLM-powered agents to simulate real-world scenarios with increased nuance and realism. In this comprehensive exploration, we elucidate the state of the art of ABM, introduce SABM's potential and methodology, and present three case studies (source codes available at https://github.com/Roihn/SABM), demonstrating the SABM methodology and validating its effectiveness in modeling real-world systems. Furthermore, we cast a vision towards several aspects of the future of SABM, anticipating a broader horizon for its applications. Through this endeavor, we aspire to redefine the boundaries of computer simulations, enabling a more profound understanding of complex systems., Comment: Source codes are available at https://github.com/Roihn/SABM
- Published
- 2023
90. Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules
- Author
- Xiao, Chaojun, Luo, Yuqi, Zhang, Wenbin, Zhang, Pengle, Han, Xu, Lin, Yankai, Zhang, Zhengyan, Xie, Ruobing, Liu, Zhiyuan, Sun, Maosong, and Zhou, Jie
- Subjects
- Computer Science - Computation and Language
- Abstract
Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs. In this paper, we propose Variator, a parameter-efficient acceleration method that enhances computational efficiency through plug-and-play compression plugins. Compression plugins are designed to reduce the sequence length via compressing multiple hidden vectors into one and trained with original PLMs frozen. Different from traditional model acceleration methods, which compress PLMs to smaller sizes, Variator offers two distinct advantages: (1) In real-world applications, the plug-and-play nature of our compression plugins enables dynamic selection of different compression plugins with varying acceleration ratios based on the current workload. (2) The compression plugin comprises a few compact neural network layers with minimal parameters, significantly saving storage and memory overhead, particularly in scenarios with a growing number of tasks. We validate the effectiveness of Variator on seven datasets. Experimental results show that Variator can save 53% computational costs using only 0.9% additional parameters with a performance drop of less than 2%. Moreover, when the model scales to billions of parameters, Variator matches the strong performance of uncompressed PLMs., Comment: Accepted by Findings of EMNLP
- Published
- 2023
91. Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models
- Author
- Chen, Weize, Xu, Xiaoyue, Han, Xu, Lin, Yankai, Xie, Ruobing, Liu, Zhiyuan, Sun, Maosong, and Zhou, Jie
- Subjects
- Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise. However, it is important to note that parameter sharing does not alleviate computational burdens associated with inference, thus impeding its practicality in situations characterized by limited stringent latency requirements or computational resources. Building upon neural ordinary differential equations (ODEs), we introduce a straightforward technique to enhance the inference efficiency of parameter-shared PLMs. Additionally, we propose a simple pre-training technique that leads to fully or partially shared models capable of achieving even greater inference acceleration. The experimental results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs, providing novel insights into more efficient utilization of parameter-shared models in resource-constrained settings., Comment: EMNLP 2023 Findings
- Published
- 2023
92. ARM: Refining Multivariate Forecasting with Adaptive Temporal-Contextual Learning
- Author
- Lu, Jiecheng, Han, Xu, and Yang, Shihao
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning
- Abstract
Long-term time series forecasting (LTSF) is important for various domains but is confronted by challenges in handling the complex temporal-contextual relationships. As multivariate input models underperforming some recent univariate counterparts, we posit that the issue lies in the inefficiency of existing multivariate LTSF Transformers to model series-wise relationships: the characteristic differences between series are often captured incorrectly. To address this, we introduce ARM: a multivariate temporal-contextual adaptive learning method, which is an enhanced architecture specifically designed for multivariate LTSF modelling. ARM employs Adaptive Univariate Effect Learning (AUEL), Random Dropping (RD) training strategy, and Multi-kernel Local Smoothing (MKLS), to better handle individual series temporal patterns and correctly learn inter-series dependencies. ARM demonstrates superior performance on multiple benchmarks without significantly increasing computational costs compared to vanilla Transformer, thereby advancing the state-of-the-art in LTSF. ARM is also generally applicable to other LTSF architecture beyond vanilla Transformer.
- Published
- 2023
93. Predicting Emergent Abilities with Infinite Resolution Evaluation
- Author
- Hu, Shengding, Liu, Xin, Han, Xu, Zhang, Xinrong, He, Chaoqun, Zhao, Weilin, Lin, Yankai, Ding, Ning, Ou, Zebin, Zeng, Guoyang, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language
- Abstract
The scientific scale-up of large language models (LLMs) necessitates a comprehensive understanding of their scaling properties. However, the existing literature on the scaling properties only yields an incomplete answer: optimization loss decreases predictably as the model size increases, in line with established scaling law; yet no scaling law for task performance has been established and the task performances are far from predictable during scaling. Task performances typically show minor gains on small models until they improve dramatically once models exceed a size threshold, exemplifying the ``emergent abilities''. In this study, we discover that small models, although they exhibit minor performance, demonstrate critical and consistent task performance improvements that are not captured by conventional evaluation strategies due to insufficient measurement resolution. To measure such improvements, we introduce PassUntil, an evaluation strategy with theoretically infinite resolution, through massive sampling in the decoding phase. With PassUntil, we conduct a quantitative investigation into the scaling law of task performance. The investigation contains two parts. Firstly, a strict task scaling law that is not conventionally known to exist is identified, enhancing the predictability of task performances. Remarkably, we are able to predict the performance of the 2.4B model on code generation with merely 0.05\% deviation before training starts, which is the first systematic attempt to verify predictable scaling proposed by GPT-4's report. Secondly, we are able to study emergent abilities quantitatively. We identify a kind of accelerated emergence whose scaling curve cannot be fitted by the standard scaling law function and has an increasing speed. We then examine two hypotheses and imply that the ``multiple circuits hypothesis'' might be responsible for the accelerated emergence., Comment: After revision
- Published
- 2023
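Entry 93 introduces PassUntil, which estimates very small task pass rates by massive sampling in the decoding phase. A toy simulation of that estimation idea (the simulated pass rate and cap below are arbitrary placeholders, not the paper's setup):

```python
import random

def decode_once_passes(p_true=2e-4):
    return random.random() < p_true                  # stand-in for "sample one answer, check it"

def pass_until(p_true=2e-4, cap=10**6):
    attempts = 0
    while attempts < cap:
        attempts += 1
        if decode_once_passes(p_true):
            return attempts
    return cap

random.seed(0)
trials = [pass_until() for _ in range(50)]
print(len(trials) / sum(trials))                     # estimated pass rate, on the 2e-4 scale
```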
94. ConPET: Continual Parameter-Efficient Tuning for Large Language Models
- Author
- Song, Chenyang, Han, Xu, Zeng, Zheni, Li, Kuai, Chen, Chen, Liu, Zhiyuan, Sun, Maosong, and Yang, Tao
- Subjects
- Computer Science - Computation and Language, I.2.7
- Abstract
Continual learning necessitates the continual adaptation of models to newly emerging tasks while minimizing the catastrophic forgetting of old ones. This is extremely challenging for large language models (LLMs) with vanilla full-parameter tuning due to high computation costs, memory consumption, and forgetting issue. Inspired by the success of parameter-efficient tuning (PET), we propose Continual Parameter-Efficient Tuning (ConPET), a generalizable paradigm for continual task adaptation of LLMs with task-number-independent training complexity. ConPET includes two versions with different application scenarios. First, Static ConPET can adapt former continual learning methods originally designed for relatively smaller models to LLMs through PET and a dynamic replay strategy, which largely reduces the tuning costs and alleviates the over-fitting and forgetting issue. Furthermore, to maintain scalability, Dynamic ConPET adopts separate PET modules for different tasks and a PET module selector for dynamic optimal selection. In our extensive experiments, the adaptation of Static ConPET helps multiple former methods reduce the scale of tunable parameters by over 3,000 times and surpass the PET-only baseline by at least 5 points on five smaller benchmarks, while Dynamic ConPET gains its advantage on the largest dataset. The codes and datasets are available at https://github.com/Raincleared-Song/ConPET., Comment: 12 pages, 3 figures. This work has been submitted to the IEEE for possible publication
- Published
- 2023
95. QASnowball: An Iterative Bootstrapping Framework for High-Quality Question-Answering Data Generation
- Author
- Zhu, Kunlun, Liang, Shihao, Han, Xu, Zheng, Zhi, Zeng, Guoyang, Liu, Zhiyuan, and Sun, Maosong
- Subjects
- Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
Recent years have witnessed the success of question answering (QA), especially its potential to be a foundation paradigm for tackling diverse NLP tasks. However, obtaining sufficient data to build an effective and stable QA system still remains an open problem. For this problem, we introduce an iterative bootstrapping framework for QA data augmentation (named QASnowball), which can iteratively generate large-scale high-quality QA data based on a seed set of supervised examples. Specifically, QASnowball consists of three modules, an answer extractor to extract core phrases in unlabeled documents as candidate answers, a question generator to generate questions based on documents and candidate answers, and a QA data filter to filter out high-quality QA data. Moreover, QASnowball can be self-enhanced by reseeding the seed set to fine-tune itself in different iterations, leading to continual improvements in the generation quality. We conduct experiments in the high-resource English scenario and the medium-resource Chinese scenario, and the experimental results show that the data generated by QASnowball can facilitate QA models: (1) training models on the generated data achieves comparable results to using supervised data, and (2) pre-training on the generated data and fine-tuning on supervised data can achieve better performance. Our code and generated data will be released to advance further work.
- Published
- 2023
96. Retraction Note: Visual system based on optical sensor in Wushu training image trajectory simulation
- Author
- Jia, Liang and Han, Xu
- Published
- 2024
- Full Text
- View/download PDF
97. Machine learning-guided synthesis of nanomaterials for breast cancer therapy
- Author
- Kun Zhou, Baoxing Tian, Ji Lu, Bing Dong, and Han Xu
- Subjects
- Breast cancer, Hydrogel, Machine learning, Medicine, Science
- Abstract
Abstract Breast cancer is a common malignant tumor, which mostly occurs in female population and is caused by excessive proliferation of breast epithelial cells. Breast cancer can cause nipple discharge, breast lumps and other symptoms, but these symptoms lack certain specificity and are easily confused with other diseases, thus affecting the early treatment of the disease. Once the tumor progresses to the advanced stage, distant metastasis can occur, leading to dysfunction of the affected organs, and even threatening the patients’ lives. In this study, we synthesized high drug-loading gel particles and applied them to control the release of insoluble drugs. This method is simple to prepare, cost-effective, and validates their potential in breast cancer therapy. We first characterized the morphology and physicochemical properties of gel loaded with newly synthesized compound 1 by scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FT-IR), and thermal gravimetric analysis (TGA). Using newly synthesized insoluble compound 1 as a model drug, its efficacy in treating breast cancer was investigated. The results showed that hydrogel@compound 1 was able to significantly inhibit the proliferation, migration and invasion of breast cancer cells. Additionally, we utilized machine learning to screen three structurally similar compounds, which showed promising therapeutic effects, providing a new approach for the development of novel small-molecule drugs.
- Published
- 2024
- Full Text
- View/download PDF
98. New Environmental/Thermal Barrier Coatings Suitable for Hydrogen Doped Gas Turbines
- Author
- WANG You, ZHANG Xiaodong, HAO Pei, HAN Xu, DENG Luwei, LI Guoqiang, WEI Fushuang, and JI Xiang
- Subjects
- gas turbine, hydrogen fuel, hydrogen doped gas turbines, thermal barrier coatings (tbc), corrosion prevention, structural design of coatings, Applications of electric power, TK4001-4102, Production of electric energy or power. Powerplants. Central stations, TK1001-1841, Science
- Abstract
Objectives: With the implementation of the national “Dual Carbon Strategy” (carbon peak and carbon neutrality), it is anticipated that the existing coating structures may not meet the requirements of future gas turbine thermal protection coatings. The concept of a new type of environmental/thermal barrier coating (E/TBC) structure with high temperature corrosion resistance has been proposed to meet the demand for thermal protection coatings in hybrid hydrogen combustion engines. Methods: The development history and research status of thermal barrier coating (TBC), environmental barrier coating (EBC), thermal/environmental barrier coating (T/EBC) and thermal and environmental barrier coating (TEBC) were reviewed and analyzed from the perspective of thermal protection coating materials and coating structures. Moreover, the gap between the above coating structures and the requirements of thermal protection coating for mixed hydrogen gas turbines was investigated. Results: It is reasonable to superimpose the function of EBC onto the thermal protection coating of current mixed hydrogen gas turbines, thereby forming a new type of E/TBC structure with high temperature corrosion resistance on the high-temperature alloy substrate. Conclusions: Through the preliminary test, it is proved that the new E/TBC structure is suitable for the thermal protection coating requirements of mixed hydrogen gas turbine against high temperature water oxygen corrosion, and it is pointed out that the theory and application research of this new E/TBC thermal protection coating should be vigorously carried out.
- Published
- 2024
- Full Text
- View/download PDF
99. Ligand engineering towards electrocatalytic urea synthesis on a molecular catalyst
- Author
- Han Li, Leitao Xu, Shuowen Bo, Yujie Wang, Han Xu, Chen Chen, Ruping Miao, Dawei Chen, Kefan Zhang, Qinghua Liu, Jingjun Shen, Huaiyu Shao, Jianfeng Jia, and Shuangyin Wang
- Subjects
- Science
- Abstract
Abstract Electrocatalytic C-N coupling from carbon dioxide and nitrate provides a sustainable alternative to the conventional energy-intensive urea synthetic protocol, enabling wastes upgrading and value-added products synthesis. The design of efficient and stable electrocatalysts is vital to promote the development of electrocatalytic urea synthesis. In this work, copper phthalocyanine (CuPc) is adopted as a modeling catalyst toward urea synthesis owing to its accurate and adjustable active configurations. Combining experimental and theoretical studies, it can be observed that the intramolecular Cu-N coordination can be strengthened with optimization in electronic structure by amino substitution (CuPc-Amino) and the electrochemically induced demetallation is efficiently suppressed, serving as the origination of its excellent activity and stability. Compared to that of CuPc (the maximum urea yield rate of 39.9 ± 1.9 mmol h−1 g−1 with 67.4% of decay in 10 test cycles), a high rate of 103.1 ± 5.3 mmol h−1 g−1 and remarkable catalytic durability have been achieved on CuPc-Amino. Isotope-labelling operando electrochemical spectroscopy measurements are performed to disclose reaction mechanisms and validate the C-N coupling processes. This work proposes a unique scheme for the rational design of molecular electrocatalysts for urea synthesis.
- Published
- 2024
- Full Text
- View/download PDF
100. Virulence plasmid with IroBCDN deletion promoted cross-regional transmission of ST11-KL64 carbapenem-resistant hypervirulent Klebsiella pneumoniae in central China
- Author
- Han-xu Hong, Bing-Hui Huo, Tian-Xin Xiang, Dan-Dan Wei, Qi-Sen Huang, Peng Liu, Wei Zhang, Ying Xu, and Yang Liu
- Subjects
- Carbapenem-resistant hypervirulent Klebsiella pneumoniae, Virulence plasmid, Whole genome sequencing, MLST, Capsular serotypes, Microbiology, QR1-502
- Abstract
Abstract Background Carbapenem-resistant and hypervirulent Klebsiella pneumoniae (CR-hvKP) caused infections of high mortality and brought a serious impact on public health. This study aims to evaluate the epidemiology, resistance and virulence characteristics of CR-hvKP and to identify potential drivers of cross-regional transmission in different regions of China, in order to provide a basis for developing targeted prevention measures. Methods Clinical K. pneumoniae strains were collected from Jiujiang and Nanchang in Jiangxi province between November 2021 to June 2022. Clinical data of patients (age, sex, source of infection, and diagnosis) were also gathered. We characterized these strains for their genetic relatedness using PFGE, antimicrobial and virulence plasmid structures using whole-genome sequencing, and toxicity using Galleria mellonella infection model. Results Among 609 strains, 45 (7.4%) CR-hvKP were identified, while the strains. isolated from Nanchang and Jiujiang accounted for 10.05% (36/358) and 3.59% (9/251). We observed that ST11-KL64 CR-hvKP had an overwhelming epidemic dominance in these two regions. Significant genetic diversity was identified among all ST11-KL64 CR-hvKP cross-regional transmission between Nanchang and Jiujiang and this diversity served as the primary driver of the dissemination of clonal groups. Virulence genes profile revealed that ST11-KL64 CR-hvKP might harbour incomplete pLVPK-like plasmids and primarily evolved from CRKP by acquiring the hypervirulence plasmid. We found the predominance of truncated-IncFIB/IncHI1B type virulence plasmids with a 25 kb fragment deletion that encoded iroBCDN clusters. Conclusion ST11-KL64 is the most cross-regional prevalent type CR-hvKPs in Jiangxi province, which mainly evolved from CRKPs by acquiring a truncated-IncHI1B/IncFIB virulence plasmid with the deletion of iroBCDN. Stricter surveillance and control measures are urgently needed to prevent the epidemic transmission of ST11-KL64 CR-hvKP.
- Published
- 2024
- Full Text
- View/download PDF