22,311 results for "Feng, Wei"
Search Results
2. “A false dance”: Rules and Freedom in the Ludic World of Cormac McCarthy’s Blood Meridian
- Author
- Feng, Wei
- Published
- 2023
- Full Text
- View/download PDF
3. Chinese Adaptations of Brecht: Appropriation and Intertextuality by Wei Zhang (review)
- Author
- Feng, Wei
- Published
- 2023
4. Transforming Tradition: The Reform of Chinese Theater in the 1950s and Early 1960s by Siyuan Liu (review)
- Author
- Feng, Wei
- Published
- 2023
- Full Text
- View/download PDF
5. Gravitational Waves from Primordial Black Hole Dark Matter Spikes
- Author
- Feng, Wei-Xiang, Bird, Simeon, and Yu, Hai-Bo
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - High Energy Astrophysical Phenomena, General Relativity and Quantum Cosmology, High Energy Physics - Phenomenology
- Abstract
The origin of the binary black hole mergers observed by LIGO-Virgo-KAGRA (LVK) remains an open question. We calculate the merger rate from primordial black holes (PBHs) within the density spike around supermassive black holes (SMBHs) at the center of galaxies. We show that the merger rate within the spike is comparable to that within the wider dark matter halo. We also calculate the extreme mass ratio inspiral (EMRI) signal from PBHs hosted within the density spike spiralling into their host SMBHs due to GW emission. We predict that LISA may detect $\sim10^4$ of these EMRIs with signal-to-noise ratio of 5 within a 4-year observation run, if all dark matter is made up of PBHs. Uncertainties in our rates come from the uncertain mass fraction of PBHs within the dark matter spike, relative to the host central SMBHs, which defines the parameter space LISA can constrain., Comment: 6 pages, 3 figures, plus Appendix (3 figures, 1 table)
- Published
- 2024
6. Super-resolution generalized eigenvalue method with truly sub-Nyquist sampling
- Author
- Liu, Baoguo, Zhang, Huiguang, Feng, Wei, Liu, Zongyao, Zhang, Zhen, and Liu, Yanxu
- Subjects
- Computer Science - Information Theory
- Abstract
The achievement of spectral super-resolution sensing is critically important for a variety of applications, such as radar, remote sensing, and wireless communication. However, in compressed spectrum sensing, challenges such as spectrum leakage and the picket-fence effect significantly complicate the accurate extraction of super-resolution signal components. Additionally, the practical implementation of random sampling poses a significant hurdle to the widespread adoption of compressed spectrum sensing techniques. To overcome these challenges, this study introduces a generalized eigenvalue method that leverages the incoherence between signal components and the linearity-preserving characteristics of differential operations. This method facilitates the precise extraction of signal component parameters with super-resolution capabilities under sub-Nyquist sampling conditions. The proposed technique is founded on uniform sub-Nyquist sampling, which represents a true sub-Nyquist approach and effectively mitigates the complexities associated with hardware implementation. Furthermore, the proposed method diverges from traditional compressed sensing techniques by operating outside the discrete Fourier transform framework. This departure successfully eliminates spectral leakage and the picket-fence effect. Moreover, it substantially reduces the detrimental impacts of random sampling on signal reconstruction and hardware implementation, thereby enhancing the overall effectiveness and feasibility of spectral super-resolution sensing.
- Published
- 2024
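A minimal Python sketch of the kind of generalized-eigenvalue line-spectrum estimation described above, recovering two tones spaced more closely than the Fourier resolution limit from uniform samples; the toy signal, Hankel window length, and model order are illustrative assumptions, not the authors' algorithm.

    import numpy as np
    from scipy.linalg import svd, eig

    fs = 100.0                              # uniform sampling rate (Hz)
    n = np.arange(128)
    f_true = [12.3, 12.9]                   # two tones closer than the DFT bin width
    x = sum(np.exp(2j * np.pi * f / fs * n) for f in f_true)

    p, L = len(f_true), 40                  # model order and Hankel window length (assumed)
    H = np.array([x[i:i + L] for i in range(len(x) - L + 1)])   # Hankel data matrix
    U, _, _ = svd(H, full_matrices=False)
    Us = U[:, :p]                           # signal subspace
    S1, S2 = Us[:-1], Us[1:]                # shift-invariance pair
    # Generalized eigenvalue problem (S1^H S2) v = z (S1^H S1) v yields the signal poles z_k
    z, _ = eig(S1.conj().T @ S2, S1.conj().T @ S1)
    f_est = np.sort(np.angle(z) * fs / (2 * np.pi))
    print(f_est)                            # approximately [12.3, 12.9]

With noiseless data the recovered frequencies match 12.3 Hz and 12.9 Hz even though their separation is below the DFT bin width of roughly 0.78 Hz, which is the sense of "super-resolution" used in the abstract.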
7. Sensing-Communication-Computing-Control Closed-Loop Optimization for 6G Unmanned Robotic Systems
- Author
- Fang, Xinran, Lei, Chengleyang, Feng, Wei, Chen, Yunfei, Xiao, Ming, Ge, Ning, and Wang, Chengxiang
- Subjects
- Electrical Engineering and Systems Science - Systems and Control, Electrical Engineering and Systems Science - Signal Processing
- Abstract
Rapid advancements in field robots have brought a new kind of cyber physical system (CPS)--unmanned robotic system--under the spotlight. In the upcoming sixth-generation (6G) era, these systems hold great potential to replace humans in hazardous tasks. This paper investigates an unmanned robotic system comprising a multi-functional unmanned aerial vehicle (UAV), sensors, and actuators. The UAV carries communication and computing modules, acting as an edge information hub (EIH) that transfers and processes information. During the task execution, the EIH gathers sensing data, calculates control commands, and transmits commands to actuators--leading to reflex-arc-like sensing-communication-computing-control ($\mathbf{SC}^3$) loops. Unlike existing studies that design $\mathbf{SC}^3$ loop components separately, we take each $\mathbf{SC}^3$ loop as an integrated structure and propose a goal-oriented closed-loop optimization scheme. This scheme jointly optimizes uplink and downlink (UL&DL) communication and computing within and across the $\mathbf{SC}^3$ loops to minimize the total linear quadratic regulator (LQR) cost. We derive optimal closed-form solutions for intra-loop allocation and propose an efficient iterative algorithm for inter-loop optimization. Under the condition of adequate CPU frequency availability, we derive an approximate closed-form solution for inter-loop bandwidth allocation. Simulation results demonstrate that the proposed scheme achieves a two-tier task-level balance within and across $\mathbf{SC}^3$ loops.
- Published
- 2024
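For reference, the linear quadratic regulator (LQR) cost that this scheme (and result 8 below) seeks to minimize can be evaluated for a toy plant as follows; this is a hedged sketch of standard LQR machinery with assumed illustrative matrices, not the paper's joint communication-computing optimization.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Toy discrete-time plant x_{k+1} = A x_k + B u_k (illustrative, not from the paper)
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.005],
                  [0.1]])
    Q = np.eye(2)                          # state weighting
    R = np.array([[0.1]])                  # control weighting

    P = solve_discrete_are(A, B, Q, R)     # discrete algebraic Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback u_k = -K x_k
    x0 = np.array([1.0, 0.0])
    J = float(x0 @ P @ x0)                 # infinite-horizon LQR cost from initial state x0
    print(K, J)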
8. Structured Connectivity for 6G Reflex Arc: Task-Oriented Virtual User and New Uplink-Downlink Tradeoff
- Author
- Fang, Xinran, Lei, Chengleyang, Feng, Wei, Chen, Yunfei, Ge, Ning, and Jin, Shi
- Subjects
- Electrical Engineering and Systems Science - Systems and Control
- Abstract
To accommodate the evolving demands of unmanned operations, the future sixth-generation (6G) network will support not only communication links but also sensing-communication-computing-control ($\mathbf{SC}^3$) loops. In each $\mathbf{SC}^3$ cycle, the sensor uploads sensing data to the computing center, and the computing center calculates the control command and sends it to the actuator to take action. To maintain the task-level connections between the sensor-computing center link and the computing center-actuator link, we propose to treat the sensor and actuator as a virtual user. In this way, the two communication links of the $\mathbf{SC}^3$ loop become the uplink and downlink (UL&DL) of the virtual user. Based on the virtual user, we propose a task-oriented UL&DL optimization scheme. This scheme jointly optimizes UL&DL transmit power, time, bandwidth, and CPU frequency to minimize the control linear quadratic regulator (LQR) cost. We decouple the complex problem into a convex UL&DL bandwidth allocation problem with the closed-form solution for the optimal time allocation. Simulation results demonstrate that the proposed scheme achieves a task-level balance between the UL&DL, surpassing conventional communication schemes that optimize each link separately.
- Published
- 2024
9. VERIFIED: A Video Corpus Moment Retrieval Benchmark for Fine-Grained Video Understanding
- Author
- Chen, Houlun, Wang, Xin, Chen, Hong, Zhang, Zeyang, Feng, Wei, Huang, Bin, Jia, Jia, and Zhu, Wenwu
- Subjects
- Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
- Abstract
Existing Video Corpus Moment Retrieval (VCMR) is limited to coarse-grained understanding, which hinders precise video moment localization when given fine-grained queries. In this paper, we propose a more challenging fine-grained VCMR benchmark requiring methods to localize the best-matched moment from the corpus with other partially matched candidates. To improve the dataset construction efficiency and guarantee high-quality data annotations, we propose VERIFIED, an automatic \underline{V}id\underline{E}o-text annotation pipeline to generate captions with \underline{R}el\underline{I}able \underline{FI}n\underline{E}-grained statics and \underline{D}ynamics. Specifically, we resort to large language models (LLM) and large multimodal models (LMM) with our proposed Statics and Dynamics Enhanced Captioning modules to generate diverse fine-grained captions for each video. To filter out the inaccurate annotations caused by the LLM hallucination, we propose a Fine-Granularity Aware Noise Evaluator where we fine-tune a video foundation model with disturbed hard-negatives augmented contrastive and matching losses. With VERIFIED, we construct a more challenging fine-grained VCMR benchmark containing Charades-FIG, DiDeMo-FIG, and ActivityNet-FIG which demonstrate a high level of annotation quality. We evaluate several state-of-the-art VCMR models on the proposed dataset, revealing that there is still significant scope for fine-grained video understanding in VCMR. Code and Datasets are in \href{https://github.com/hlchen23/VERIFIED}{https://github.com/hlchen23/VERIFIED}., Comment: Accepted by 38th NeurIPS Datasets & Benchmarks Track (NeurIPS 2024)
- Published
- 2024
10. VOVTrack: Exploring the Potentiality in Videos for Open-Vocabulary Object Tracking
- Author
- Qian, Zekun, Han, Ruize, Hou, Junhui, Song, Linqi, and Feng, Wei
- Subjects
- Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
- Abstract
Open-vocabulary multi-object tracking (OVMOT) represents a critical new challenge involving the detection and tracking of diverse object categories in videos, encompassing both seen categories (base classes) and unseen categories (novel classes). This issue amalgamates the complexities of open-vocabulary object detection (OVD) and multi-object tracking (MOT). Existing approaches to OVMOT often merge OVD and MOT methodologies as separate modules, predominantly focusing on the problem through an image-centric lens. In this paper, we propose VOVTrack, a novel method that integrates object states relevant to MOT and video-centric training to address this challenge from a video object tracking standpoint. First, we consider the tracking-related state of the objects during tracking and propose a new prompt-guided attention mechanism for more accurate localization and classification (detection) of the time-varying objects. Subsequently, we leverage raw video data without annotations for training by formulating a self-supervised object similarity learning technique to facilitate temporal object association (tracking). Experimental results underscore that VOVTrack outperforms existing methods, establishing itself as a state-of-the-art solution for open-vocabulary tracking task.
- Published
- 2024
11. Deep Correlated Prompting for Visual Recognition with Missing Modalities
- Author
- Hu, Lianyu, Shi, Tongkai, Feng, Wei, Shang, Fanhua, and Wan, Liang
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Large-scale multimodal models have shown excellent performance over a series of tasks powered by the large corpus of paired multimodal training data. Generally, they are always assumed to receive modality-complete inputs. However, this simple assumption may not always hold in the real world due to privacy constraints or collection difficulty, where models pretrained on modality-complete data easily demonstrate degraded performance on missing-modality cases. To handle this issue, we refer to prompt learning to adapt large pretrained multimodal models to handle missing-modality scenarios by regarding different missing cases as different types of input. Instead of only prepending independent prompts to the intermediate layers, we present to leverage the correlations between prompts and input features and excavate the relationships between different layers of prompts to carefully design the instructions. We also incorporate the complementary semantics of different modalities to guide the prompting design for each modality. Extensive experiments on three commonly-used datasets consistently demonstrate the superiority of our method compared to the previous approaches upon different missing scenarios. Plentiful ablations are further given to show the generalizability and reliability of our method upon different modality-missing ratios and types., Comment: NeurIPS 2024, add some results
- Published
- 2024
12. TorchTitan: One-stop PyTorch native solution for production ready LLM pre-training
- Author
- Liang, Wanchao, Liu, Tianyu, Wright, Less, Constable, Will, Gu, Andrew, Huang, Chien-Chin, Zhang, Iris, Feng, Wei, Huang, Howard, Wang, Junjie, Purandare, Sanket, Nadathur, Gokul, and Idreos, Stratos
- Subjects
- Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Machine Learning
- Abstract
The development of large language models (LLMs) has been instrumental in advancing state-of-the-art natural language processing applications. Training LLMs with billions of parameters and trillions of tokens require sophisticated distributed systems that enable composing and comparing several state-of-the-art techniques in order to efficiently scale across thousands of accelerators. However, existing solutions are complex, scattered across multiple libraries/repositories, lack interoperability, and are cumbersome to maintain. Thus, curating and empirically comparing training recipes require non-trivial engineering effort. This paper introduces TorchTitan, an open-source, PyTorch-native distributed training system that unifies state-of-the-art techniques, streamlining integration and reducing overhead. TorchTitan enables 3D parallelism in a modular manner with elastic scaling, providing comprehensive logging, checkpointing, and debugging tools for production-ready training. It also incorporates hardware-software co-designed solutions, leveraging features like Float8 training and SymmetricMemory. As a flexible test bed, TorchTitan facilitates custom recipe curation and comparison, allowing us to develop optimized training recipes for Llama 3.1 and provide guidance on selecting techniques for maximum efficiency based on our experiences. We thoroughly assess TorchTitan on the Llama 3.1 family of LLMs, spanning 8 billion to 405 billion parameters, and showcase its exceptional performance, modular composability, and elastic scalability. By stacking training optimizations, we demonstrate accelerations of 65.08% with 1D parallelism at the 128-GPU scale (Llama 3.1 8B), an additional 12.59% with 2D parallelism at the 256-GPU scale (Llama 3.1 70B), and an additional 30% with 3D parallelism at the 512-GPU scale (Llama 3.1 405B) on NVIDIA H100 GPUs over optimized baselines.
- Published
- 2024
13. ATOMS: ALMA Three-millimeter Observations of Massive Star-forming regions $-$ XVII. High-mass star-formation through a large-scale collapse in IRAS 15394$-$5358
- Author
- Das, Swagat R., Merello, Manuel, Bronfman, Leonardo, Liu, Tie, Garay, Guido, Stutz, Amelia, Mardones, Diego, Zhou, Jian-Wen, Sanhueza, Patricio, Liu, Hong-Li, Vázquez-Semadeni, Enrique, Gómez, Gilberto C., Palau, Aina, Tej, Anandmayee, Xu, Feng-Wei, Baug, Tapas, Dewangan, Lokesh K., He, Jinhua, Zhu, Lei, Li, Shanghuo, Juvela, Mika, Saha, Anindya, Issac, Namitha, Hwang, Jihye, Nazeer, Hafiz, and Toth, L. Viktor
- Subjects
- Astrophysics - Astrophysics of Galaxies, Astrophysics - Solar and Stellar Astrophysics
- Abstract
Hub-filament systems are considered as natural sites for high-mass star formation. Kinematic analysis of the surroundings of hub-filaments is essential to better understand high-mass star formation within such systems. In this work, we present a detailed study of the massive Galactic protocluster IRAS 15394$-$5358, using continuum and molecular line data from the ALMA Three-millimeter Observations of Massive Star-forming Regions (ATOMS) survey. The 3~mm dust continuum map reveals the fragmentation of the massive ($\rm M=843~M_{\odot}$) clump into six cores. The core C-1A is the largest (radius = 0.04~pc), the most massive ($\rm M=157~M_{\odot}$), and lies within the dense central region, along with two smaller cores ($\rm M=7~and~3~M_{\odot}$). The fragmentation process is consistent with the thermal Jeans fragmentation mechanism and virial analysis shows that all the cores have small virial parameter values ($\rm \alpha_{vir}<<2$), suggesting that the cores are gravitationally bound. The mass vs. radius relation indicates that three cores can potentially form at least a single massive star. The integrated intensity map of $\rm H^{13}CO^{+}$ shows that the massive clump is associated with a hub-filament system, where the central hub is linked with four filaments. A sharp velocity gradient is observed towards the hub, suggesting a global collapse where the filaments are actively feeding the hub. We discuss the role of global collapse and the possible driving mechanisms for the massive star formation activity in the protocluster., Comment: 23 pages, 19 figures, accepted for publication in MNRAS
- Published
- 2024
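For readers unfamiliar with the virial analysis mentioned above, the virial parameter has the standard definition below (textbook convention, not a formula quoted from the paper), where $\sigma_v$ is the velocity dispersion, $R$ the core radius, and $M$ the core mass; $\alpha_{\mathrm{vir}} \ll 2$ indicates a gravitationally bound core.

    \alpha_{\mathrm{vir}} = \frac{5\,\sigma_v^{2}\,R}{G\,M}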
14. Pose-Guided Fine-Grained Sign Language Video Generation
- Author
- Shi, Tongkai, Hu, Lianyu, Shang, Fanhua, Feng, Jichao, Liu, Peidong, and Feng, Wei
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Sign language videos are an important medium for spreading and learning sign language. However, most existing human image synthesis methods produce sign language images with details that are distorted, blurred, or structurally incorrect. They also produce sign language video frames with poor temporal consistency, with anomalies such as flickering and abrupt detail changes between the previous and next frames. To address these limitations, we propose a novel Pose-Guided Motion Model (PGMM) for generating fine-grained and motion-consistent sign language videos. Firstly, we propose a new Coarse Motion Module (CMM), which completes the deformation of features by optical flow warping, thus transferring the motion of coarse-grained structures without changing the appearance; Secondly, we propose a new Pose Fusion Module (PFM), which guides the modal fusion of RGB and pose features, thus completing the fine-grained generation. Finally, we design a new metric, Temporal Consistency Difference (TCD) to quantitatively assess the degree of temporal consistency of a video by comparing the difference between the frames of the reconstructed video and the previous and next frames of the target video. Extensive qualitative and quantitative experiments show that our method outperforms state-of-the-art methods in most benchmark tests, with visible improvements in details and temporal consistency., Comment: ECCV 2024
- Published
- 2024
15. Multi-Modal Generative AI: Multi-modal LLM, Diffusion and Beyond
- Author
- Chen, Hong, Wang, Xin, Zhou, Yuwei, Huang, Bin, Zhang, Yipeng, Feng, Wei, Chen, Houlun, Zhang, Zeyang, Tang, Siao, and Zhu, Wenwu
- Subjects
- Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition
- Abstract
Multi-modal generative AI has received increasing attention in both academia and industry. Particularly, two dominant families of techniques are: i) The multi-modal large language model (MLLM) such as GPT-4V, which shows impressive ability for multi-modal understanding; ii) The diffusion model such as Sora, which exhibits remarkable multi-modal powers, especially with respect to visual generation. As such, one natural question arises: Is it possible to have a unified model for both understanding and generation? To answer this question, in this paper, we first provide a detailed review of both MLLM and diffusion models, including their probabilistic modeling procedure, multi-modal architecture design, and advanced applications to image/video large language models as well as text-to-image/video generation. Then, we discuss the two important questions on the unified model: i) whether the unified model should adopt the auto-regressive or diffusion probabilistic modeling, and ii) whether the model should utilize a dense architecture or the Mixture of Experts(MoE) architectures to better support generation and understanding, two objectives. We further provide several possible strategies for building a unified model and analyze their potential advantages and disadvantages. We also summarize existing large-scale multi-modal datasets for better model pretraining in the future. To conclude the paper, we present several challenging future directions, which we believe can contribute to the ongoing advancement of multi-modal generative AI.
- Published
- 2024
16. Sight View Constraint for Robust Point Cloud Registration
- Author
- Zhang, Yaojie, Wang, Weijun, Huang, Tianlun, Wang, Zhiyong, and Feng, Wei
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Partial to Partial Point Cloud Registration (partial PCR) remains a challenging task, particularly when dealing with a low overlap rate. In comparison to the full-to-full registration task, we find that the objective of partial PCR is still not well-defined, indicating no metric can reliably identify the true transformation. We identify this as the most fundamental challenge in partial PCR tasks. In this paper, instead of directly seeking the optimal transformation, we propose a novel and general Sight View Constraint (SVC) to conclusively identify incorrect transformations, thereby enhancing the robustness of existing PCR methods. Extensive experiments validate the effectiveness of SVC on both indoor and outdoor scenes. On the challenging 3DLoMatch dataset, our approach increases the registration recall from 78\% to 82\%, achieving the state-of-the-art result. This research also highlights the significance of the decision version problem of partial PCR, which has the potential to provide novel insights into the partial PCR problem., Comment: 9 pages
- Published
- 2024
17. Towards Reliable Advertising Image Generation Using Human Feedback
- Author
- Du, Zhenbang, Feng, Wei, Wang, Haohan, Li, Yaoyu, Wang, Jingsen, Li, Jian, Zhang, Zheng, Lv, Jingjing, Zhu, Xin, Jin, Junsheng, Shen, Junjie, Lin, Zhangang, and Shao, Jingping
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
In the e-commerce realm, compelling advertising images are pivotal for attracting customer attention. While generative models automate image generation, they often produce substandard images that may mislead customers and require significant labor costs to inspect. This paper delves into increasing the rate of available generated images. We first introduce a multi-modal Reliable Feedback Network (RFNet) to automatically inspect the generated images. Combining the RFNet into a recurrent process, Recurrent Generation, results in a higher number of available advertising images. To further enhance production efficiency, we fine-tune diffusion models with an innovative Consistent Condition regularization utilizing the feedback from RFNet (RFFT). This results in a remarkable increase in the available rate of generated images, reducing the number of attempts in Recurrent Generation, and providing a highly efficient production process without sacrificing visual appeal. We also construct a Reliable Feedback 1 Million (RF1M) dataset which comprises over one million generated advertising images annotated by human, which helps to train RFNet to accurately assess the availability of generated images and faithfully reflect the human feedback. Generally speaking, our approach offers a reliable solution for advertising image generation., Comment: ECCV2024
- Published
- 2024
18. OCTrack: Benchmarking the Open-Corpus Multi-Object Tracking
- Author
- Qian, Zekun, Han, Ruize, Feng, Wei, Hou, Junhui, Song, Linqi, and Wang, Song
- Subjects
- Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
- Abstract
We study a novel yet practical problem of open-corpus multi-object tracking (OCMOT), which extends the MOT into localizing, associating, and recognizing generic-category objects of both seen (base) and unseen (novel) classes, but without the category text list as prompt. To study this problem, the top priority is to build a benchmark. In this work, we build OCTrackB, a large-scale and comprehensive benchmark, to provide a standard evaluation platform for the OCMOT problem. Compared to previous datasets, OCTrackB has more abundant and balanced base/novel classes and the corresponding samples for evaluation with less bias. We also propose a new multi-granularity recognition metric to better evaluate the generative object recognition in OCMOT. By conducting the extensive benchmark evaluation, we report and analyze the results of various state-of-the-art methods, which demonstrate the rationale of OCMOT, as well as the usefulness and advantages of OCTrackB.
- Published
- 2024
19. Multi-sentence Video Grounding for Long Video Generation
- Author
- Feng, Wei, Wang, Xin, Chen, Hong, Zhang, Zeyang, and Zhu, Wenwu
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Video generation has witnessed great success recently, but their application in generating long videos still remains challenging due to the difficulty in maintaining the temporal consistency of generated videos and the high memory cost during generation. To tackle the problems, in this paper, we propose a brave and new idea of Multi-sentence Video Grounding for Long Video Generation, connecting the massive video moment retrieval to the video generation task for the first time, providing a new paradigm for long video generation. The method of our work can be summarized as three steps: (i) We design sequential scene text prompts as the queries for video grounding, utilizing the massive video moment retrieval to search for video moment segments that meet the text requirements in the video database. (ii) Based on the source frames of retrieved video moment segments, we adopt video editing methods to create new video content while preserving the temporal consistency of the retrieved video. Since the editing can be conducted segment by segment, and even frame by frame, it largely reduces the memory cost. (iii) We also attempt video morphing and personalized generation methods to improve the subject consistency of long video generation, providing ablation experimental results for the subtasks of long video generation. Our approach seamlessly extends the development in image/video editing, video morphing and personalized generation, and video grounding to the long video generation, offering effective solutions for generating long videos at low memory cost.
- Published
- 2024
20. Towards stable training of parallel continual learning
- Author
- Yuepan, Li, Lyu, Fan, Li, Yuyang, Feng, Wei, Liu, Guangcan, and Shang, Fanhua
- Subjects
- Computer Science - Machine Learning, Computer Science - Artificial Intelligence
- Abstract
Parallel Continual Learning (PCL) tasks investigate the training methods for continual learning with multi-source input, where data from different tasks are learned as they arrive. PCL offers high training efficiency and is well-suited for complex multi-source data systems, such as autonomous vehicles equipped with multiple sensors. However, at any time, multiple tasks need to be trained simultaneously, leading to severe training instability in PCL. This instability manifests during both forward and backward propagation, where features are entangled and gradients conflict. This paper introduces Stable Parallel Continual Learning (SPCL), a novel approach that enhances the training stability of PCL for both forward and backward propagation. For the forward propagation, we apply Doubly-block Toeplitz (DBT) matrix-based orthogonality constraints to network parameters to ensure stable and consistent propagation. For the backward propagation, we employ orthogonal decomposition for gradient management, which stabilizes backpropagation and mitigates gradient conflicts across tasks. By optimizing gradients to ensure orthogonality and minimize the condition number, SPCL effectively stabilizes the gradient descent in complex optimization tasks. Experimental results demonstrate that SPCL outperforms state-of-the-art methods and achieves better training stability.
- Published
- 2024
21. Unsupervised 4D Cardiac Motion Tracking with Spatiotemporal Optical Flow Networks
- Author
- Teng, Long, Feng, Wei, Zhu, Menglong, and Li, Xinchao
- Subjects
- Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
Cardiac motion tracking from echocardiography can be used to estimate and quantify myocardial motion within a cardiac cycle. It is a cost-efficient and effective approach for assessing myocardial function. However, ultrasound imaging has the inherent characteristics of spatially low resolution and temporally random noise, which leads to difficulties in obtaining reliable annotation. Thus it is difficult to perform supervised learning for motion tracking. In addition, there is no end-to-end unsupervised method currently in the literature. This paper presents a motion tracking method where unsupervised optical flow networks are designed with spatial reconstruction loss and temporal-consistency loss. Our proposed loss functions make use of the pair-wise and temporal correlation to estimate cardiac motion from noisy background. Experiments using a synthetic 4D echocardiography dataset have shown the effectiveness of our approach, and its superiority over existing methods on both accuracy and running speed. To the best of our knowledge, this is the first work that uses an unsupervised end-to-end deep learning optical flow network for 4D cardiac motion tracking.
- Published
- 2024
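A minimal PyTorch sketch of the two unsupervised objectives named above: a spatial reconstruction loss that warps one frame toward the other with the predicted flow, and a simple temporal-consistency term between flows of successive frame pairs. The warping layout, loss weight, and the plain L1 consistency term are assumptions for illustration, not the paper's exact losses.

    import torch
    import torch.nn.functional as F

    def warp(img, flow):
        """Backward-warp img (N,C,H,W) by flow (N,2,H,W) given in pixels."""
        n, _, h, w = img.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), dim=0).float().to(img)       # (2,H,W) pixel coordinates
        coords = base.unsqueeze(0) + flow                          # sampling locations
        gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                    # normalize to [-1, 1]
        gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                       # (N,H,W,2) laid out as (x, y)
        return F.grid_sample(img, grid, mode="bilinear", align_corners=True)

    def unsupervised_losses(frame_t, frame_t1, flow_t, flow_t1):
        recon_t = warp(frame_t1, flow_t)               # reconstruct frame t from frame t+1
        spatial_loss = F.l1_loss(recon_t, frame_t)     # spatial reconstruction loss
        temporal_loss = F.l1_loss(flow_t, flow_t1)     # simple flow-consistency stand-in
        return spatial_loss + 0.1 * temporal_loss      # weighting is an assumption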
22. Direct observational evidence of multi-epoch massive star formation in G24.47+0.49
- Author
- Saha, Anindya, Tej, Anandmayee, Liu, Hong-Li, Liu, Tie, Garay, Guido, Goldsmith, Paul F., Lee, Chang Won, He, Jinhua, Juvela, Mika, Bronfman, Leonardo, Baug, Tapas, Vazquez-Semadeni, Enrique, Sanhueza, Patricio, Li, Shanghuo, Chibueze, James O., Bhadari, N. K., Dewangan, Lokesh K., Das, Swagat Ranjan, Xu, Feng-Wei, Issac, Namitha, Hwang, Jihye, and Toth, L. Viktor
- Subjects
- Astrophysics - Astrophysics of Galaxies, Astrophysics - Solar and Stellar Astrophysics
- Abstract
Using new continuum and molecular line data from the ALMA Three-millimeter Observations of Massive Star-forming Regions (ATOMS) survey and archival VLA, 4.86 GHz data, we present direct observational evidence of hierarchical triggering relating three epochs of massive star formation in a ring-like H II region, G24.47+0.49. We find from radio flux analysis that it is excited by a massive star(s) of spectral type O8.5V-O8V from the first epoch of star formation. The swept-up ionized ring structure shows evidence of secondary collapse, and within this ring a burst of massive star formation is observed in different evolutionary phases, which constitutes the second epoch. ATOMS spectral line (e.g., HCO$^+$(1-0)) observations reveal an outer concentric molecular gas ring expanding at a velocity of $\sim$ 9 $\rm km\,s^{-1}$, constituting the direct and unambiguous detection of an expanding molecular ring. It harbors twelve dense molecular cores with surface mass density greater than 0.05 $\rm g\,cm^{-2}$, a threshold typical of massive star formation. Half of them are found to be subvirial, and thus in gravitational collapse, making them third epoch of potential massive star-forming sites., Comment: 18 pages, 7 figures, accepted for publication in The Astrophysical Journal Letters
- Published
- 2024
23. Robust Multi-Robot Global Localization with Unknown Initial Pose based on Neighbor Constraints
- Author
- Zhang, Yaojie, Luo, Haowen, Wang, Weijun, and Feng, Wei
- Subjects
- Computer Science - Robotics
- Abstract
Multi-robot global localization (MR-GL) with unknown initial positions in a large scale environment is a challenging task. The key point is the data association between different robots' viewpoints. It also makes traditional Appearance-based localization methods unusable. Recently, researchers have utilized the object's semantic invariance to generate a semantic graph to address this issue. However, previous works lack robustness and are sensitive to overlap rate of maps, resulting in unpredictable performance in real-world environments. In this paper, we propose a data association algorithm based on neighbor constraints to improve the robustness of the system. We demonstrate the effectiveness of our method on three different datasets, indicating a significant improvement in robustness compared to previous works., Comment: 7 pages (6+1), accepted by ICRA 2024
- Published
- 2024
- Full Text
- View/download PDF
24. MFDNet: Multi-Frequency Deflare Network for Efficient Nighttime Flare Removal
- Author
- Jiang, Yiguo, Chen, Xuhang, Pun, Chi-Man, Wang, Shuqiang, and Feng, Wei
- Subjects
- Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
When light is scattered or reflected accidentally in the lens, flare artifacts may appear in the captured photos, affecting the photos' visual quality. The main challenge in flare removal is to eliminate various flare artifacts while preserving the original content of the image. To address this challenge, we propose a lightweight Multi-Frequency Deflare Network (MFDNet) based on the Laplacian Pyramid. Our network decomposes the flare-corrupted image into low and high-frequency bands, effectively separating the illumination and content information in the image. The low-frequency part typically contains illumination information, while the high-frequency part contains detailed content information. So our MFDNet consists of two main modules: the Low-Frequency Flare Perception Module (LFFPM) to remove flare in the low-frequency part and the Hierarchical Fusion Reconstruction Module (HFRM) to reconstruct the flare-free image. Specifically, to perceive flare from a global perspective while retaining detailed information for image restoration, LFFPM utilizes Transformer to extract global information while utilizing a convolutional neural network to capture detailed local features. Then HFRM gradually fuses the outputs of LFFPM with the high-frequency component of the image through feature aggregation. Moreover, our MFDNet can reduce the computational cost by processing in multiple frequency bands instead of directly removing the flare on the input image. Experimental results demonstrate that our approach outperforms state-of-the-art methods in removing nighttime flare on real-world and synthetic images from the Flare7K dataset. Furthermore, the computational complexity of our model is remarkably low., Comment: Accepted by The Visual Computer journal
- Published
- 2024
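A minimal PyTorch sketch of the low/high-frequency split that the network above builds on: a one-level Laplacian-pyramid-style decomposition in which the downsampled band carries mostly illumination (and flare) while the residual carries detail. This is only the generic decomposition under assumed bilinear resampling, not MFDNet itself.

    import torch
    import torch.nn.functional as F

    def split_frequencies(img):
        """img: (N, C, H, W). Returns (low, high) frequency bands."""
        low = F.interpolate(img, scale_factor=0.5, mode="bilinear", align_corners=False)
        up = F.interpolate(low, size=img.shape[-2:], mode="bilinear", align_corners=False)
        high = img - up                     # Laplacian residual: detail/content band
        return low, high

    def reconstruct(low, high):
        up = F.interpolate(low, size=high.shape[-2:], mode="bilinear", align_corners=False)
        return up + high                    # exact inverse of split_frequencies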
25. Estimation of Global Building Stocks by 2070: Unlocking Renovation Potential
- Author
- Zhang, Shufan, Ma, Minda, Zhou, Nan, Yan, Jinyue, Feng, Wei, Yan, Ran, You, Kairui, Zhang, Jingjing, and Ke, Jing
- Subjects
- Economics - General Economics
- Abstract
Buildings produce one-third of carbon emissions globally; however, the absence of data regarding global floorspace poses challenges in advancing building carbon neutrality. We compile the measured building stocks for 14 major economies and apply our global building stock model, GLOBUS, to evaluate future trends in stock turnover. Based on a scenario not considering renovation, by 2070 the building stock in developed economies will be ~1.4 times that of 2020 (100 billion m2); in developing economies it is expected to be 2.2 times that of 2020 (313 billion m2). Based on a techno-economic potential scenario, however, stocks in developed economies will decline to approximately 0.8 times the 2020 level, while stocks in developing economies will increase to nearly twice the 2020 level due to their fewer buildings currently. Overall, GLOBUS provides a way of calculating the global building stock, helping scientists, engineers, and policymakers conduct a range of investigations across various future scenarios., Comment: 25 pages, 4 figures
- Published
- 2024
26. The Invention of Body Representation in Modern China: Case Study of Liu Haisu and the “Model Event”
- Author
- Zhu, Guohua and Feng, Wei
- Published
- 2019
27. The Chairs by Eugène Ionesco (review)
- Author
- Feng, Wei
- Published
- 2019
- Full Text
- View/download PDF
28. Controllable Continual Test-Time Adaptation
- Author
- Shi, Ziqi, Lyu, Fan, Liu, Ye, Shang, Fanhua, Hu, Fuyuan, Feng, Wei, Zhang, Zhang, and Wang, Liang
- Subjects
- Computer Science - Machine Learning
- Abstract
Continual Test-Time Adaptation (CTTA) is an emerging and challenging task where a model trained in a source domain must adapt to continuously changing conditions during testing, without access to the original source data. CTTA is prone to error accumulation due to uncontrollable domain shifts, leading to blurred decision boundaries between categories. Existing CTTA methods primarily focus on suppressing domain shifts, which proves inadequate during the unsupervised test phase. In contrast, we introduce a novel approach that guides rather than suppresses these shifts. Specifically, we propose $\textbf{C}$ontrollable $\textbf{Co}$ntinual $\textbf{T}$est-$\textbf{T}$ime $\textbf{A}$daptation (C-CoTTA), which explicitly prevents any single category from encroaching on others, thereby mitigating the mutual influence between categories caused by uncontrollable shifts. Moreover, our method reduces the sensitivity of model to domain transformations, thereby minimizing the magnitude of category shifts. Extensive quantitative experiments demonstrate the effectiveness of our method, while qualitative analyses, such as t-SNE plots, confirm the theoretical validity of our approach.
- Published
- 2024
29. Edge Information Hub-Empowered 6G NTN: Latency-Oriented Resource Orchestration and Configuration
- Author
- Lin, Yueshan, Feng, Wei, Chen, Yunfei, Ge, Ning, Feng, Zhiyong, and Gao, Yue
- Subjects
- Computer Science - Networking and Internet Architecture, Electrical Engineering and Systems Science - Signal Processing
- Abstract
Quick response to disasters is crucial for saving lives and reducing loss. This requires low-latency uploading of situation information to the remote command center. Since terrestrial infrastructures are often damaged in disaster areas, non-terrestrial networks (NTNs) are preferable to provide network coverage, and mobile edge computing (MEC) could be integrated to improve the latency performance. Nevertheless, the communications and computing in MEC-enabled NTNs are strongly coupled, which complicates the system design. In this paper, an edge information hub (EIH) that incorporates communication, computing and storage capabilities is proposed to synergize communication and computing and enable systematic design. We first address the joint data scheduling and resource orchestration problem to minimize the latency for uploading sensing data. The problem is solved using an optimal resource orchestration algorithm. On that basis, we propose the principles for resource configuration of the EIH considering payload constraints on size, weight and energy supply. Simulation results demonstrate the superiority of our proposed scheme in reducing the overall upload latency, thus enabling quick emergency rescue.
- Published
- 2024
30. Overcoming Domain Drift in Online Continual Learning
- Author
- Lyu, Fan, Liu, Daofeng, Zhao, Linglan, Zhang, Zhang, Shang, Fanhua, Hu, Fuyuan, Feng, Wei, and Wang, Liang
- Subjects
- Computer Science - Machine Learning
- Abstract
Online Continual Learning (OCL) empowers machine learning models to acquire new knowledge online across a sequence of tasks. However, OCL faces a significant challenge: catastrophic forgetting, wherein the model learned in previous tasks is substantially overwritten upon encountering new tasks, leading to a biased forgetting of prior knowledge. Moreover, the continual domain drift in sequential learning tasks may entail the gradual displacement of the decision boundaries in the learned feature space, rendering the learned knowledge susceptible to forgetting. To address the above problem, in this paper, we propose a novel rehearsal strategy, termed Drift-Reducing Rehearsal (DRR), to anchor the domain of old tasks and reduce the negative transfer effects. First, we propose to select memory for more representative samples guided by constructed centroids in a data stream. Then, to keep the model from domain chaos in drifting, a two-level angular cross-task Contrastive Margin Loss (CML) is proposed, to encourage the intra-class and intra-task compactness, and increase the inter-class and inter-task discrepancy. Finally, to further suppress the continual domain drift, we present an optional Centroid Distillation Loss (CDL) on the rehearsal memory to anchor the knowledge in feature space for each previous old task. Extensive experimental results on four benchmark datasets validate that the proposed DRR can effectively mitigate the continual domain drift and achieve the state-of-the-art (SOTA) performance in OCL.
- Published
- 2024
31. Discrete nonlinear Schr\'odinger type equations: Solutions and continuum limits
- Author
- Zhao, Song-lin, Feng, Xiao-hui, and Feng, Wei
- Subjects
- Nonlinear Sciences - Exactly Solvable and Integrable Systems
- Abstract
As local and nonlocal reductions of a discrete second-order Ablowitz-Kaup-Newell-Segur equation, two discrete nonlinear Schr\"odinger type equations are considered. Through the bilinearization reduction method, we construct double Casoratian solutions of the reduced discrete nonlinear Schr\"odinger type equations, including soliton solutions and Jordan-block solutions. Dynamics of the obtained one-soliton and two-soliton solutions are analyzed and illustrated. Moreover, both the semi-continuous limit and the full continuous limit are applied to obtain solutions of the local and nonlocal semi-discrete nonlinear Schr\"odinger type equations, as well as the local and nonlocal continuous nonlinear Schr\"odinger type equations., Comment: This paper contains 24 pages and 37 figures
- Published
- 2024
32. CorrNet+: Sign Language Recognition and Translation via Spatial-Temporal Correlation
- Author
- Hu, Lianyu, Feng, Wei, Gao, Liqing, Liu, Zekang, and Wan, Liang
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
In sign language, the conveyance of human body trajectories predominantly relies upon the coordinated movements of hands and facial expressions across successive frames. Despite the recent advancements of sign language understanding methods, they often solely focus on individual frames, inevitably overlooking the inter-frame correlations that are essential for effectively modeling human body trajectories. To address this limitation, this paper introduces a spatial-temporal correlation network, denoted as CorrNet+, which explicitly identifies body trajectories across multiple frames. In specific, CorrNet+ employs a correlation module and an identification module to build human body trajectories. Afterwards, a temporal attention module is followed to adaptively evaluate the contributions of different frames. The resultant features offer a holistic perspective on human body movements, facilitating a deeper understanding of sign language. As a unified model, CorrNet+ achieves new state-of-the-art performance on two extensive sign language understanding tasks, including continuous sign language recognition (CSLR) and sign language translation (SLT). Especially, CorrNet+ surpasses previous methods equipped with resource-intensive pose-estimation networks or pre-extracted heatmaps for hand and facial feature extraction. Compared with CorrNet, CorrNet+ achieves a significant performance boost across all benchmarks while halving the computational overhead. A comprehensive comparison with previous spatial-temporal reasoning methods verifies the superiority of CorrNet+. Code is available at https://github.com/hulianyuyy/CorrNet_Plus., Comment: arXiv admin note: substantial text overlap with arXiv:2303.03202
- Published
- 2024
33. Arena: A Patch-of-Interest ViT Inference Acceleration System for Edge-Assisted Video Analytics
- Author
- Peng, Haosong, Feng, Wei, Li, Hao, Zhan, Yufeng, Jin, Ren, and Xia, Yuanqing
- Subjects
- Computer Science - Multimedia, Computer Science - Computer Vision and Pattern Recognition
- Abstract
The advent of edge computing has made real-time intelligent video analytics feasible. Previous works, based on traditional model architecture (e.g., CNN, RNN, etc.), employ various strategies to filter out non-region-of-interest content to minimize bandwidth and computation consumption but show inferior performance in adverse environments. Recently, visual foundation models based on transformers have shown great performance in adverse environments due to their amazing generalization capability. However, they require a large amount of computation power, which limits their applications in real-time intelligent video analytics. In this paper, we find visual foundation models like Vision Transformer (ViT) also have a dedicated acceleration mechanism for video analytics. To this end, we introduce Arena, an end-to-end edge-assisted video inference acceleration system based on ViT. We leverage the capability of ViT that can be accelerated through token pruning by only offloading and feeding Patches-of-Interest to the downstream models. Additionally, we design an adaptive keyframe inference switching algorithm tailored to different videos, capable of adapting to the current video content to jointly optimize accuracy and bandwidth. Through extensive experiments, our findings reveal that Arena can boost inference speeds by up to 1.58\(\times\) and 1.82\(\times\) on average while consuming only 47\% and 31\% of the bandwidth, respectively, all with high inference accuracy.
- Published
- 2024
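A minimal PyTorch sketch of the Patches-of-Interest idea described above: keep only the top-k patch tokens by an importance score before offloading them to a downstream ViT. The token-norm scoring rule and keep ratio are stand-in assumptions, not Arena's actual selection mechanism.

    import torch

    def prune_tokens(tokens, keep_ratio=0.3):
        """tokens: (N, T, D) patch embeddings. Returns the kept (N, k, D) tokens."""
        n, t, d = tokens.shape
        k = max(1, int(t * keep_ratio))
        scores = tokens.norm(dim=-1)                   # (N, T) importance proxy
        idx = scores.topk(k, dim=1).indices            # indices of the kept patches
        return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, d))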
34. Improving Continuous Sign Language Recognition with Adapted Image Models
- Author
- Hu, Lianyu, Shi, Tongkai, Gao, Liqing, Liu, Zekang, and Feng, Wei
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
The increase of web-scale weakly labelled image-text pairs has greatly facilitated the development of large-scale vision-language models (e.g., CLIP), which have shown impressive generalization performance over a series of downstream tasks. However, the massive model size and scarcity of available data limit their applications to fine-tune the whole model in downstream tasks. Besides, fully fine-tuning the model easily forgets the generic essential knowledge acquired in the pretraining stage and overfits the downstream data. To enable high efficiency when adapting these large vision-language models (e.g., CLIP) to performing continuous sign language recognition (CSLR) while preserving their generalizability, we propose a novel strategy (AdaptSign). Especially, CLIP is adopted as the visual backbone to extract frame-wise features whose parameters are fixed, and a set of learnable modules are introduced to model spatial sign variations or capture temporal sign movements. The introduced additional modules are quite lightweight, only owning 3.2% extra computations with high efficiency. The generic knowledge acquired in the pretraining stage is well-preserved in the frozen CLIP backbone in this process. Extensive experiments show that despite being efficient, AdaptSign is able to demonstrate superior performance across a series of CSLR benchmarks including PHOENIX14, PHOENIX14-T, CSL-Daily and CSL compared to existing methods. Visualizations show that AdaptSign could learn to dynamically pay major attention to the informative spatial regions and cross-frame trajectories in sign videos.
- Published
- 2024
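A minimal PyTorch sketch of the general pattern described above: freeze a large pretrained visual backbone and train only small residual bottleneck modules. The Adapter class below is a generic design, not AdaptSign's specific spatial/temporal modules.

    import torch.nn as nn

    class Adapter(nn.Module):
        """Generic residual bottleneck adapter placed alongside a frozen backbone."""
        def __init__(self, dim, bottleneck=64):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.act = nn.GELU()
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, x):
            return x + self.up(self.act(self.down(x)))   # residual update keeps pretrained features

    def freeze_backbone(backbone):
        for p in backbone.parameters():
            p.requires_grad = False        # only adapters (and the task head) are trained
        return backbone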
35. LRR: Language-Driven Resamplable Continuous Representation against Adversarial Tracking Attacks
- Author
- Chen, Jianlang, Ren, Xuhong, Guo, Qing, Juefei-Xu, Felix, Lin, Di, Feng, Wei, Ma, Lei, and Zhao, Jianjun
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Visual object tracking plays a critical role in visual-based autonomous systems, as it aims to estimate the position and size of the object of interest within a live video. Despite significant progress made in this field, state-of-the-art (SOTA) trackers often fail when faced with adversarial perturbations in the incoming frames. This can lead to significant robustness and security issues when these trackers are deployed in the real world. To achieve high accuracy on both clean and adversarial data, we propose building a spatial-temporal continuous representation using the semantic text guidance of the object of interest. This novel continuous representation enables us to reconstruct incoming frames to maintain semantic and appearance consistency with the object of interest and its clean counterparts. As a result, our proposed method successfully defends against different SOTA adversarial tracking attacks while maintaining high accuracy on clean data. In particular, our method significantly increases tracking accuracy under adversarial attacks with around 90% relative improvement on UAV123, which is even higher than the accuracy on clean data.
- Published
- 2024
36. Dynamic Spatial-Temporal Aggregation for Skeleton-Aware Sign Language Recognition
- Author
- Hu, Lianyu, Gao, Liqing, Liu, Zekang, and Feng, Wei
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Skeleton-aware sign language recognition (SLR) has gained popularity due to its ability to remain unaffected by background information and its lower computational requirements. Current methods utilize spatial graph modules and temporal modules to capture spatial and temporal features, respectively. However, their spatial graph modules are typically built on fixed graph structures such as graph convolutional networks or a single learnable graph, which only partially explore joint relationships. Additionally, a simple temporal convolution kernel is used to capture temporal information, which may not fully capture the complex movement patterns of different signers. To overcome these limitations, we propose a new spatial architecture consisting of two concurrent branches, which build input-sensitive joint relationships and incorporate specific domain knowledge for recognition, respectively. These two branches are followed by an aggregation process to distinguish important joint connections. We then propose a new temporal module to model multi-scale temporal information to capture complex human dynamics. Our method achieves state-of-the-art accuracy compared to previous skeleton-aware methods on four large-scale SLR benchmarks. Moreover, our method demonstrates superior accuracy compared to RGB-based methods in most cases while requiring much fewer computational resources, bringing a better accuracy-computation trade-off. Code is available at https://github.com/hulianyuyy/DSTA-SLR.
- Published
- 2024
37. Hunting Attributes: Context Prototype-Aware Learning for Weakly Supervised Semantic Segmentation
- Author
- Tang, Feilong, Xu, Zhongxing, Qu, Zhaojun, Feng, Wei, Jiang, Xingjian, and Ge, Zongyuan
- Subjects
- Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
- Abstract
Recent weakly supervised semantic segmentation (WSSS) methods strive to incorporate contextual knowledge to improve the completeness of class activation maps (CAM). In this work, we argue that the knowledge bias between instances and contexts affects the capability of the prototype to sufficiently understand instance semantics. Inspired by prototype learning theory, we propose leveraging prototype awareness to capture diverse and fine-grained feature attributes of instances. The hypothesis is that contextual prototypes might erroneously activate similar and frequently co-occurring object categories due to this knowledge bias. Therefore, we propose to enhance the prototype representation ability by mitigating the bias to better capture spatial coverage in semantic object regions. With this goal, we present a Context Prototype-Aware Learning (CPAL) strategy, which leverages semantic context to enrich instance comprehension. The core of this method is to accurately capture intra-class variations in object features through context-aware prototypes, facilitating the adaptation to the semantic attributes of various instances. We design feature distribution alignment to optimize prototype awareness, aligning instance feature distributions with dense features. In addition, a unified training framework is proposed to combine label-guided classification supervision and prototypes-guided self-supervision. Experimental results on PASCAL VOC 2012 and MS COCO 2014 show that CPAL significantly improves off-the-shelf methods and achieves state-of-the-art performance. The project is available at https://github.com/Barrett-python/CPAL.
- Published
- 2024
38. Edge Information Hub: Orchestrating Satellites, UAVs, MEC, Sensing and Communications for 6G Closed-Loop Controls
- Author
- Lei, Chengleyang, Feng, Wei, Wei, Peng, Chen, Yunfei, Ge, Ning, and Mao, Shiwen
- Subjects
- Electrical Engineering and Systems Science - Systems and Control
- Abstract
An increasing number of field robots would be used for mission-critical tasks in remote or post-disaster areas. Due to the limited individual abilities, these robots usually require an edge information hub (EIH), with not only communication but also sensing and computing functions. Such EIH could be deployed on a flexibly-dispatched unmanned aerial vehicle (UAV). Different from traditional aerial base stations or mobile edge computing (MEC), the EIH would direct the operations of robots via sensing-communication-computing-control ($\textbf{SC}^3$) closed-loop orchestration. This paper aims to optimize the closed-loop control performance of multiple $\textbf{SC}^3$ loops, with constraints on satellite-backhaul rate, computing capability, and on-board energy. Specifically, the linear quadratic regulator (LQR) control cost is used to measure the closed-loop utility, and a sum LQR cost minimization problem is formulated to jointly optimize the splitting of sensor data and allocation of communication and computing resources. We first derive the optimal splitting ratio of sensor data, and then recast the problem to a more tractable form. An iterative algorithm is finally proposed to provide a sub-optimal solution. Simulation results demonstrate the superiority of the proposed algorithm. We also uncover the influence of $\textbf{SC}^3$ parameters on closed-loop controls, highlighting more systematic understanding., Comment: 16pages, 11 figures
- Published
- 2024
39. Higher-order exceptional surface in a pseudo-Hermitian superconducting circuit
- Author
- Zhang, Guo-Qiang, Feng, Wei, Wang, Yu, and Yang, Chui-Ping
- Subjects
- Quantum Physics
- Abstract
In the last few years, much attention has been paid to exceptional surfaces (ESs) owing to various important physical phenomena and potential applications. However, high-order ESs in pseudo-Hermitian systems have not been reported until now. Here, we study the high-order ES in a pseudo-Hermitian superconducting (SC) circuit system. In our proposal, the SC circuit system is composed of three circularly coupled SC cavities, where the gain and loss are balanced. According to the eigenvalue properties of the pseudo-Hermitian Hamiltonian, we derive the general pseudo-Hermitian conditions for the ternary SC system. In the special pseudo-Hermitian case with parity-time symmetry, all third-order exceptional points (EP3s) of the SC system form a third-order exceptional line in the parameter space. Under the general pseudo-Hermitian conditions, more EP3s are found, and all EP3s are located on a surface, i.e., a third-order exceptional surface is constructed. Moreover, we also investigate the eigenvalues of the pseudo-Hermitian SC circuit around EP3s. Our work opens up a door for exploring high-order ESs and related applications in pseudo-Hermitian systems., Comment: 8 pages, 5 figures
- Published
- 2024
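For context, the pseudo-Hermiticity condition referred to above has the standard form below (textbook definition, not a formula taken from the paper): a Hamiltonian $H$ is pseudo-Hermitian if there exists a Hermitian, invertible metric operator $\eta$ such that

    \eta\, H\, \eta^{-1} = H^{\dagger}

which guarantees that the eigenvalues are either real or occur in complex-conjugate pairs; the parity-time-symmetric case discussed in the abstract is a special instance.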
40. EfficientDeRain+: Learning Uncertainty-Aware Filtering via RainMix Augmentation for High-Efficiency Deraining
- Author
- Guo, Qing, Qi, Hua, Sun, Jingyang, Juefei-Xu, Felix, Ma, Lei, Lin, Di, Feng, Wei, and Wang, Song
- Published
- 2024
- Full Text
- View/download PDF
41. A hybrid wavelet-deep learning approach for vibration-based damage detection in monopile offshore structures considering soil interaction
- Author
- Feng, Wei-Qiang, Mousavi, Zohreh, Farhadi, Mohammadreza, Bayat, Meysam, Ettefagh, Mir Mohammad, Varahram, Sina, and Sadeghi, Morteza H.
- Published
- 2024
- Full Text
- View/download PDF
42. Preparation technologies for polymer composites with high-directional thermal conductivity: A review
- Author
- Duan, Yanshuai, Yu, Huitao, Zhang, Fei, Qin, Mengmeng, and Feng, Wei
- Published
- 2024
- Full Text
- View/download PDF
43. Tailoring Iron-Ion Release of Cellulose-Based Aerogel-Coated Iron Foam for Long-Term High-Power Microbial Fuel Cells
- Author
- Ni, Zhengyang, Yu, Huitao, Wang, Haoran, Qin, Mengmeng, Li, Feng, Song, Hao, Chen, Xiangyu, Feng, Yiyu, and Feng, Wei
- Published
- 2024
- Full Text
- View/download PDF
44. Dynamic Crosslinked Phosphorescent Poly(vinyl alcohol)-Terpyridine Films with Enhanced Mechanical Properties and Tunable Shape Memory
- Author
- Wei, Meng, Feng, Wei-Hao, Yu, Chen, Jiang, Zhen-Yi, Yin, Guang-Qiang, Lu, Wei, and Chen, Tao
- Published
- 2024
- Full Text
- View/download PDF
45. Bi-functionalized MCM-41 for heavy metal ions removal: synthesis, enhanced performance and mechanism study
- Author
- Liao, Qingling, Ma, Fumin, Fu, Yongjun, Feng, Wei, and Lu, Ying
- Published
- 2024
- Full Text
- View/download PDF
46. Interval Type-2 Fuzzy Sampled-Data Control for Nonlinear Systems with Packet Dropouts via a Switched System Approach
- Author
- Ge, Chao, Sun, Rui, Liu, Yajuan, and Feng, Wei
- Published
- 2024
- Full Text
- View/download PDF
47. Insight into the Solution Self-Assembly of Amphiphilic Asymmetric Brush Copolymers via Computer Simulations
- Author
- Zeng, Wei-Ting, Feng, Wei-Sheng, Zhang, Xing, Yao, Yuan, Xu, Bin-Bin, and Lin, Shao-Liang
- Published
- 2024
- Full Text
- View/download PDF
48. Anti-PD-L1 blockade facilitates antitumor effects of radiofrequency ablation by improving tumor immune microenvironment in hepatocellular carcinoma
- Author
- Liang, Jiahua, Ma, Mingjian, Feng, Wei, Xu, Qiongcong, Chen, Dong, Lai, Jiaming, and Chen, Jiancong
- Published
- 2024
- Full Text
- View/download PDF
49. Enhancing Patient Satisfaction in Cross-Regional Healthcare: a Cross-Sectional Study in the Knowledge-Based Healthcare Landscape
- Author
- Li, Li, Cui, Xin, and Feng, Wei
- Published
- 2024
- Full Text
- View/download PDF
50. Moving and fusion of Majorana zero modes in the presence of nonadiabatic transitions
- Author
- Wang, Qiongyao, Bai, Jing, Xu, Luting, Feng, Wei, and Li, Xin-Qi
- Subjects
- Condensed Matter - Mesoscale and Nanoscale Physics
- Abstract
We perform simulations for moving and non-Abelian fusion of Majorana zero modes in topological superconducting quantum wires. We display interesting behaviors of nonadiabatic transition associated with the moving through mini-gate-controlled multiple-segments modulations. Owing to breaking of the initial fermion parity induced by nonadiabatic transitions, deviation from the standard fusion rule is analyzed. Moreover, we develop a measurement scheme to infer the amount of fermion parity breaking and nonadiabatic transition probability to excited states, based on the characteristic spectrum of measurement current by a quantum-point-contact detector, in measuring the charge occupation dynamics in a fusion-outcome-probing quantum dot., Comment: 10 pages, 6 figures
- Published
- 2024
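For context, simulations of this kind are commonly built on the one-dimensional Kitaev chain (the standard model for Majorana zero modes in superconducting wires, not necessarily the exact multi-segment, gate-controlled Hamiltonian used in the paper):

    H = \sum_j \Big[ -t\big(c_j^{\dagger} c_{j+1} + \mathrm{h.c.}\big) - \mu\, c_j^{\dagger} c_j + \Delta\big(c_j c_{j+1} + \mathrm{h.c.}\big) \Big]

Here $t$ is the hopping amplitude, $\mu$ the chemical potential (the quantity tuned segment by segment by mini-gates), and $\Delta$ the p-wave pairing; Majorana zero modes appear at the ends of topological segments with $|\mu| < 2t$ and $\Delta \neq 0$.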