273 results on '"Jianjun Lei"'
Search Results
2. Novel View Synthesis from a Single Unposed Image via Unsupervised Learning
- Author
-
Bingzheng Liu, Jianjun Lei, Bo Peng, Chuanbo Yu, Wanqing Li, and Nam Ling
- Subjects
Computer Networks and Communications ,Hardware and Architecture - Abstract
Novel view synthesis aims to generate novel views from one or more given source views. Although existing methods have achieved promising performance, they usually require paired views with different poses to learn a pixel transformation. This article proposes an unsupervised network to learn such a pixel transformation from a single source image. In particular, the network consists of a token transformation module that facilitates the transformation of the features extracted from a source image into an intrinsic representation with respect to a pre-defined reference pose, and a view generation module that synthesizes an arbitrary view from that representation. The learned transformation allows us to synthesize a novel view from any single source image of an unknown pose. Experiments on widely used view synthesis datasets demonstrate that the proposed network produces results comparable to state-of-the-art methods even though learning is unsupervised and only a single source image is required to generate a novel view. The code will be available upon the acceptance of the article.
- Published
- 2023
- Full Text
- View/download PDF
3. Recurrent Interaction Network for Stereoscopic Image Super-Resolution
- Author
-
Zhe Zhang, Bo Peng, Jianjun Lei, Haifeng Shen, and Qingming Huang
- Subjects
Media Technology ,Electrical and Electronic Engineering - Published
- 2023
- Full Text
- View/download PDF
4. Dual-attention assisted deep reinforcement learning algorithm for energy-efficient resource allocation in Industrial Internet of Things
- Author
-
Ying Wang, Fengjun Shang, Jianjun Lei, Xiangwei Zhu, Haoming Qin, and Jiayu Wen
- Subjects
Computer Networks and Communications ,Hardware and Architecture ,Software - Published
- 2023
- Full Text
- View/download PDF
5. Deep In-Loop Filtering via Multi-Domain Correlation Learning and Partition Constraint for Multiview Video Coding
- Author
-
Bo Peng, Renjie Chang, Zhaoqing Pan, Ge Li, Nam Ling, and Jianjun Lei
- Subjects
Media Technology ,Electrical and Electronic Engineering - Published
- 2023
- Full Text
- View/download PDF
6. Multi-Projection Fusion and Refinement Network for Salient Object Detection in 360° Omnidirectional Image
- Author
-
Runmin Cong, Ke Huang, Jianjun Lei, Yao Zhao, Qingming Huang, and Sam Kwong
- Subjects
Artificial Intelligence ,Computer Networks and Communications ,Software ,Computer Science Applications - Published
- 2023
- Full Text
- View/download PDF
7. Graph-Based Structural Deep Spectral-Spatial Clustering for Hyperspectral Image
- Author
-
Bo Peng, Yuxuan Yao, Jianjun Lei, Leyuan Fang, and Qingming Huang
- Subjects
Electrical and Electronic Engineering ,Instrumentation - Published
- 2023
- Full Text
- View/download PDF
8. Deep Gradual-Conversion and Cycle Network for Single-View Synthesis
- Author
-
Jianjun Lei, Bingzheng Liu, Bo Peng, Xiaochun Cao, Qingming Huang, and Nam Ling
- Subjects
Computational Mathematics ,Control and Optimization ,Artificial Intelligence ,Computer Science Applications - Published
- 2023
- Full Text
- View/download PDF
9. LVE-S2D: Low-Light Video Enhancement From Static to Dynamic
- Author
-
Bo Peng, Xuanyu Zhang, Jianjun Lei, Zhe Zhang, Nam Ling, and Qingming Huang
- Subjects
Media Technology ,Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
10. DACNN: Blind Image Quality Assessment via a Distortion-Aware Convolutional Neural Network
- Author
-
Zhaoqing Pan, Hao Zhang, Jianjun Lei, Yuming Fang, Xiao Shao, Nam Ling, and Sam Kwong
- Subjects
Media Technology ,Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
11. Reliability Optimization for Channel Resource Allocation in Multihop Wireless Network: A Multigranularity Deep Reinforcement Learning Approach
- Author
-
Ying Wang, Fengjun Shang, and Jianjun Lei
- Subjects
Computer Networks and Communications ,Hardware and Architecture ,Signal Processing ,Computer Science Applications ,Information Systems - Published
- 2022
- Full Text
- View/download PDF
12. Clustering information-constrained 3D U-Net subspace clustering for hyperspectral image
- Author
-
Bo Peng, Yuxuan Yao, Qunxia Li, Xinyu Li, Guoting Lin, Lin Chen, and Jianjun Lei
- Subjects
Earth and Planetary Sciences (miscellaneous) ,Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
13. Multiple Resolution Prediction With Deep Up-Sampling for Depth Video Coding
- Author
-
Ge Li, Jianjun Lei, Zhaoqing Pan, Bo Peng, and Nam Ling
- Subjects
Media Technology ,Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
14. RDEN: Residual Distillation Enhanced Network-Guided Lightweight Synthesized View Quality Enhancement for 3D-HEVC
- Author
-
Zhaoqing Pan, Feng Yuan, Weijie Yu, Jianjun Lei, Nam Ling, and Sam Kwong
- Subjects
Media Technology ,Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
15. An R‐R‐type MYB transcription factor promotes non‐climacteric pepper fruit carotenoid pigment biosynthesis
- Author
-
Jiali Song, Binmei Sun, Changming Chen, Zuoyang Ning, Shuanglin Zhang, Yutong Cai, Xiongjie Zheng, Bihao Cao, Guoju Chen, Dan Jin, Bosheng Li, Jianxin Bian, Jianjun Lei, Hang He, and Zhangsheng Zhu
- Subjects
Genetics ,Cell Biology ,Plant Science - Published
- 2023
- Full Text
- View/download PDF
16. Modeling Long-Range Dependencies and Epipolar Geometry for Multi-View Stereo
- Author
-
Jie Zhu, Bo Peng, Wanqing Li, Haifeng Shen, Qingming Huang, and Jianjun Lei
- Subjects
Computer Networks and Communications ,Hardware and Architecture - Abstract
This paper proposes a network, referred to as Multi-View Stereo TRansformer (MVSTR) for depth estimation from multi-view images. By modeling long-range dependencies and epipolar geometry, the proposed MVSTR is capable of extracting dense features with global context and 3D consistency, which are crucial for reliable matching in Multi-View Stereo (MVS). Specifically, to tackle the problem of the limited receptive field of existing CNN-based MVS methods, a global-context Transformer module is designed to establish intra-view long-range dependencies so that global contextual features of each view are obtained. In addition, to further enable features of each view to be 3D-consistent, a 3D-consistency Transformer module with an epipolar feature sampler is built, where epipolar geometry is modeled to effectively facilitate cross-view interaction. Experimental results show that the proposed MVSTR achieves the best overall performance on the DTU dataset and demonstrates strong generalization on the Tanks & Temples benchmark dataset.
- Published
- 2023
- Full Text
- View/download PDF
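The 3D-consistency module in the MVSTR abstract above hinges on sampling features along epipolar lines. As a rough geometric illustration of that sampling step (the fundamental matrix and coordinates are toy values; this is not the authors' code), candidate matching positions in a second view can be generated like this:

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l = F @ x_h in the second view for pixel x in the first."""
    x_h = np.array([x[0], x[1], 1.0])
    l = F @ x_h                          # line coefficients (a, b, c): a*u + b*v + c = 0
    return l / np.linalg.norm(l[:2])     # normalize so point-line distance is metric

def sample_along_line(l, u_coords):
    """For the given u coordinates, solve a*u + b*v + c = 0 for v."""
    a, b, c = l
    return np.stack([u_coords, -(a * u_coords + c) / b], axis=1)

# toy fundamental matrix of a rectified (pure horizontal-translation) stereo
# pair: corresponding points share the same row, so epipolar lines are v = const
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
l = epipolar_line(F, (10.0, 25.0))
pts = sample_along_line(l, np.linspace(0.0, 40.0, 5))
```

Features sampled at `pts` in the second view are the only candidates a pixel at (10, 25) needs to attend to, which is what makes the cross-view interaction tractable.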
17. Deep Affine Motion Compensation Network for Inter Prediction in VVC
- Author
-
Wanqing Li, Bo Peng, Qingming Huang, Dengchao Jin, Jianjun Lei, and Nam Ling
- Subjects
Media Technology ,Electrical and Electronic Engineering - Abstract
In video coding, it is a challenge to deal with scenes with complex motions, such as rotation and zooming. Although affine motion compensation (AMC) is employed in Versatile Video Coding (VVC), it is still difficult to handle non-translational motions due to the adopted hand-crafted block-based motion compensation. In this paper, we propose a deep affine motion compensation network (DAMC-Net) for inter prediction in video coding to effectively improve the prediction accuracy. To the best of our knowledge, our work is the first attempt at CNN-based deformable motion compensation in VVC. Specifically, a deformable motion-compensated prediction (DMCP) module is proposed to compensate the current coding block in a learnable way by estimating accurate motion fields. Meanwhile, the spatial neighboring information and the temporal reference block, as well as the initial motion field, are fully exploited. By effectively fusing the multi-channel feature maps from DMCP, an attention-based fusion and reconstruction (AFR) module is designed to reconstruct the output block. The proposed DAMC-Net is integrated into VVC, and the experimental results demonstrate that the proposed method considerably enhances the coding performance.
- Published
- 2022
- Full Text
- View/download PDF
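The deformable compensation in DMCP boils down to sampling the reference block at per-pixel displaced positions with bilinear interpolation. A minimal numpy sketch of that sampling primitive (the offsets here are hand-set toy values; in DAMC-Net the motion field is learned, and this is not the authors' code):

```python
import numpy as np

def bilinear_warp(ref, offsets):
    """Sample reference frame `ref` (H, W) at position (y+dy, x+dx) per pixel.

    offsets: (H, W, 2) array of (dy, dx) displacements, as a learned motion
    field would provide; bilinear interpolation handles fractional positions.
    """
    H, W = ref.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    sy = np.clip(ys + offsets[..., 0], 0, H - 1)
    sx = np.clip(xs + offsets[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0
    return ((1 - wy) * (1 - wx) * ref[y0, x0] + (1 - wy) * wx * ref[y0, x1]
            + wy * (1 - wx) * ref[y1, x0] + wy * wx * ref[y1, x1])

ref = np.arange(16, dtype=float).reshape(4, 4)
shift = np.zeros((4, 4, 2))
shift[..., 1] = 1.0          # uniform 1-pixel shift to the right
pred = bilinear_warp(ref, shift)
```

A uniform integer offset reduces to a plain shift; fractional offsets blend the four neighboring pixels, which is what makes the sampling differentiable and hence learnable.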
18. The Capsicum MYB31 regulates capsaicinoid biosynthesis in the pepper pericarp
- Author
-
Binmei Sun, Changming Chen, Jiali Song, Peng Zheng, Juntao Wang, Jianlang Wei, Wen Cai, Siping Chen, Yutong Cai, Yuan Yuan, Shuanglin Zhang, Shaoqun Liu, Jianjun Lei, Guoju Cheng, and Zhangsheng Zhu
- Subjects
Physiology ,Fruit ,Vegetables ,Genetics ,Plant Science ,Capsaicin ,Capsicum - Abstract
Peppers (Capsicum) are consumed worldwide as vegetables and food additives due to their pungent taste. Capsaicinoids are the bioactive compounds that confer the desired pungency to pepper fruits. Capsaicinoid biosynthesis was thought to occur exclusively in the fruit placenta. Recently, biosynthesis in the pericarp of extremely pungent varieties was discovered; however, the mechanism regulating capsaicinoid biosynthesis in the pericarp remains largely unknown. Here, the capsaicinoid contents of the placenta and pericarp were analyzed. The results indicated that the Capsicum chinense pericarp accumulates a vast amount of capsaicinoids. Expression of the master regulator MYB31 and of capsaicinoid biosynthesis genes (CBGs) was significantly upregulated in the pericarp of C. chinense accessions compared to accessions of the other tested species. Moreover, in fruit of the extremely pungent 'Trinidad Moruga Scorpion' (C. chinense) and the low-pungent '59' inbred line (C. annuum), capsaicinoid accumulation patterns in the pericarp were consistent with the expression levels of CBGs and MYB31. Silencing MYB31 in 'Trinidad Moruga Scorpion' pericarp led to significantly decreased CBG transcription levels and capsaicinoid content. Taken together, our results provide insights into the molecular mechanism by which expression of MYB31 in the pericarp results in exceedingly hot peppers.
- Published
- 2022
- Full Text
- View/download PDF
19. MIEGAN: Mobile Image Enhancement via a Multi-Module Cascade Neural Network
- Author
-
Sam Kwong, Zhaoqing Pan, Feng Yuan, Jianjun Lei, Wanqing Li, and Nam Ling
- Subjects
Signal Processing ,Media Technology ,Computer Science Applications ,Electrical and Electronic Engineering - Abstract
The visual quality of images captured by mobile devices is often inferior to that of images captured by a Digital Single Lens Reflex (DSLR) camera. This paper presents a novel generative adversarial network-based mobile image enhancement method, referred to as MIEGAN. It consists of a novel multi-module cascade generative network and a novel adaptive multi-scale discriminative network. The multi-module cascade generative network is built upon a two-stream encoder, a feature transformer, and a decoder. In the two-stream encoder, a luminance-regularizing stream is proposed to help the network focus on low-light areas. In the feature transformation module, two networks effectively capture both global and local information of an image. To further help the generative network produce images of high visual quality, a multi-scale discriminator is used instead of a regular single discriminator to distinguish whether an image is fake or real both globally and locally. To balance the global and local discriminators, an adaptive weight allocation scheme is proposed. In addition, a contrast loss is proposed, and a new mixed loss function is developed to improve the visual quality of the enhanced images. Extensive experiments on the popular DSLR photo enhancement dataset and the MIT-FiveK dataset have verified the effectiveness of the proposed MIEGAN.
- Published
- 2022
- Full Text
- View/download PDF
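MIEGAN's adaptive weight allocation between the global and local discriminators is only named in the abstract. One plausible, purely illustrative scheme is to give the branch with the larger current loss the larger weight, paired with an equally hypothetical mixed loss that combines adversarial, contrast, and pixel terms (function names and coefficients are assumptions, not the paper's formulation):

```python
import math

def adaptive_weights(loss_global, loss_local, temperature=1.0):
    """Softmax over branch losses: the branch that is currently harder
    (larger loss) receives the larger weight. Illustrative scheme only."""
    zg = math.exp(loss_global / temperature)
    zl = math.exp(loss_local / temperature)
    return zg / (zg + zl), zl / (zg + zl)

def mixed_loss(adv, contrast, pixel, lambdas=(1.0, 0.1, 10.0)):
    """Hypothetical mixed objective: weighted adversarial, contrast,
    and pixel-reconstruction terms."""
    return lambdas[0] * adv + lambdas[1] * contrast + lambdas[2] * pixel

wg, wl = adaptive_weights(2.0, 1.0)   # global branch lags, so wg > wl
```

The softmax keeps the two weights positive and summing to one, so the balance adapts smoothly as either discriminator improves.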
20. Multiscale Temporal Self-Attention and Dynamical Graph Convolution Hybrid Network for EEG-Based Stereogram Recognition
- Author
-
Lili Shen, Mingyang Sun, Qunxia Li, Beichen Li, Zhaoqing Pan, and Jianjun Lei
- Subjects
General Neuroscience ,Rehabilitation ,Biomedical Engineering ,Internal Medicine ,Humans ,Attention ,Electroencephalography ,Recognition, Psychology - Abstract
Stereopsis is the ability of human beings to perceive 3D depth in real scenes. Conventional stereopsis measurement is based on subjective judgments of stereograms, so it is easily affected by personal bias. To alleviate this issue, in this paper, the EEG signals evoked by dynamic random dot stereograms (DRDS) are collected for stereogram recognition, which can help ophthalmologists diagnose strabismus patients even without real-time communication. To classify the collected electroencephalography (EEG) signals, a novel multi-scale temporal self-attention and dynamical graph convolution hybrid network (MTS-DGCHN) is proposed, comprising a multi-scale temporal self-attention module, a dynamical graph convolution module, and a classification module. First, the multi-scale temporal self-attention module is employed to learn time-continuity information, where the temporal self-attention block is designed to highlight the global importance of each time segment in one EEG trial, and the multi-scale convolution block is developed to further extract advanced temporal features over multiple receptive fields. Meanwhile, the dynamical graph convolution module is utilized to capture spatial functional relationships between different EEG electrodes, in which the adjacency matrix of each GCN layer is adaptively tuned to explore the optimal intrinsic relationship. Finally, the temporal and spatial features are fed into the classification module to obtain prediction results. Extensive experiments are conducted on the collected datasets, i.e., SRDA and SRDB, and the results demonstrate that the proposed MTS-DGCHN achieves outstanding classification performance compared with other methods. The datasets are available at https://github.com/YANGeeg/TJU-SRD-datasets and the code is at https://github.com/YANGeeg/MTS-DGCHN.
- Published
- 2022
- Full Text
- View/download PDF
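The dynamical graph convolution over EEG electrodes described above is, at its core, a standard GCN layer whose adjacency matrix is learned rather than fixed. A single layer can be sketched as follows (electrode count, feature sizes, and the random adjacency stand in for the learned quantities; this is not the authors' code):

```python
import numpy as np

def graph_conv(H, A, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 @ H @ W).

    A: learnable adjacency over EEG electrodes (adaptively tuned per layer
       in the paper); H: (nodes, in_feats); W: (in_feats, out_feats).
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
n_electrodes, in_f, out_f = 8, 4, 3
A = rng.random((n_electrodes, n_electrodes))
A = (A + A.T) / 2                            # keep the adjacency symmetric
H = rng.standard_normal((n_electrodes, in_f))
W = rng.standard_normal((in_f, out_f))
out = graph_conv(H, A, W)
```

Making `A` a trainable parameter (updated by backpropagation) is what turns this fixed-graph layer into the "dynamical" variant the abstract describes.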
21. C2FNet: A Coarse-to-Fine Network for Multi-View 3D Point Cloud Generation
- Author
-
Jianjun Lei, Jiahui Song, Bo Peng, Wanqing Li, Zhaoqing Pan, and Qingming Huang
- Subjects
Computer Graphics and Computer-Aided Design ,Software - Abstract
Generation of a 3D model of an object from multiple views has a wide range of applications. With multiple views, different parts of an object are each captured most accurately by a particular view or a subset of the views. In this paper, a novel coarse-to-fine network (C2FNet) is proposed for 3D point cloud generation from multiple views. C2FNet generates the subsets of 3D points that are best captured by individual views, with the support of the other views, in a coarse-to-fine way, and then fuses these subsets into a whole point cloud. It consists of a coarse generation module, where coarse point clouds are constructed from multiple views by exploring cross-view spatial relations, and a fine generation module, where the coarse point cloud features are refined under the guidance of global consistency in appearance and context. Extensive experiments on the benchmark datasets have demonstrated that the proposed method outperforms the state-of-the-art methods.
- Published
- 2022
- Full Text
- View/download PDF
22. SIEV-Net: A Structure-Information Enhanced Voxel Network for 3D Object Detection From LiDAR Point Clouds
- Author
-
Chuanbo Yu, Jianjun Lei, Bo Peng, Haifeng Shen, and Qingming Huang
- Subjects
General Earth and Planetary Sciences ,Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
23. Channel Recombination and Projection Network for Blind Image Quality Measurement
- Author
-
Lili Shen, Bo Zhao, Zhaoqing Pan, Bo Peng, Sam Kwong, and Jianjun Lei
- Subjects
Electrical and Electronic Engineering ,Instrumentation - Published
- 2022
- Full Text
- View/download PDF
24. Multi-Modality MR Image Synthesis via Confidence-Guided Aggregation and Cross-Modality Refinement
- Author
-
Jianjun Lei, Bingzheng Liu, Lili Shen, Yi Bin, and Bo Peng
- Subjects
Health Information Management ,Computer Science Applications ,Electrical and Electronic Engineering ,Biotechnology - Abstract
Magnetic resonance imaging (MRI) can provide multi-modality MR images by setting task-specific scan parameters, and has been widely used in various disease diagnoses and planned treatments. However, in practical clinical applications, it is often difficult to obtain multi-modality MR images simultaneously due to patient discomfort, scanning costs, etc. Therefore, how to effectively utilize the existing modality images to synthesize a missing modality image has become a hot research topic. In this paper, we propose a novel confidence-guided aggregation and cross-modality refinement network (CACR-Net) for multi-modality MR image synthesis, which effectively utilizes complementary and correlative information of multiple modalities to synthesize high-quality target-modality images. Specifically, to effectively utilize the complementary modality-specific characteristics, a confidence-guided aggregation module is proposed to adaptively aggregate the multiple target-modality images generated from multiple source-modality images by using the corresponding confidence maps. Based on the aggregated target-modality image, a cross-modality refinement module is presented to further refine the target-modality image by mining correlative information among the multiple source-modality images and the aggregated target-modality image. By training the proposed CACR-Net in an end-to-end manner, high-quality and sharp target-modality MR images are effectively synthesized. Experimental results on a widely used benchmark demonstrate that the proposed method outperforms state-of-the-art methods.
- Published
- 2022
- Full Text
- View/download PDF
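The confidence-guided aggregation in CACR-Net amounts to a per-pixel weighted average of the candidate target-modality images, with the weights derived from their confidence maps. A minimal numpy sketch of that step (the softmax normalization and shapes are assumptions, not the authors' code):

```python
import numpy as np

def confidence_aggregate(candidates, confidences):
    """candidates:  (S, H, W) target-modality images, one per source modality.
    confidences: (S, H, W) raw confidence maps; a softmax over the S axis
    turns them into per-pixel mixing weights that sum to one."""
    c = confidences - confidences.max(axis=0, keepdims=True)  # stable softmax
    w = np.exp(c)
    w /= w.sum(axis=0, keepdims=True)
    return (w * candidates).sum(axis=0)

# toy example: two candidates; the second is far more confident everywhere
cand = np.stack([np.zeros((2, 2)), np.ones((2, 2))])
conf = np.stack([np.full((2, 2), -10.0), np.full((2, 2), 10.0)])
agg = confidence_aggregate(cand, conf)
```

With equal confidences the result is a plain average; as one map dominates, the aggregate converges to that candidate, which is the adaptive behavior the abstract describes.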
25. TSAN: Synthesized View Quality Enhancement via Two-Stream Attention Network for 3D-HEVC
- Author
-
Nam Ling, Jianjun Lei, Zhaoqing Pan, Wei-Jie Yu, and Sam Kwong
- Subjects
Media Technology ,Electrical and Electronic Engineering - Abstract
In a three-dimensional video system, the texture and depth videos are jointly encoded, and Depth Image Based Rendering (DIBR) is then utilized to realize view synthesis. However, the compression distortion of the texture and depth videos, as well as the disocclusion problem in DIBR, degrades the visual quality of the synthesized view. To address this problem, a Two-stream Attention Network (TSAN)-based synthesized view quality enhancement method is proposed for 3D High Efficiency Video Coding (3D-HEVC) in this paper. First, the shortcomings of the view synthesis technique and of traditional convolutional neural networks are analyzed. Then, based on these analyses, a TSAN with two information extraction streams is proposed for enhancing the quality of the synthesized view, in which the global information extraction stream learns contextual information, and the local information extraction stream extracts texture information from the rendered image. Third, a Multi-Scale Residual Attention Block (MSRAB) is proposed, which can efficiently detect features at different scales, and adaptively refine features by considering interdependencies among spatial dimensions. Extensive experimental results show that the proposed synthesized view quality enhancement method achieves significantly better performance than the state-of-the-art methods.
- Published
- 2022
- Full Text
- View/download PDF
26. ZS-SBPRnet: A Zero-Shot Sketch-Based Point Cloud Retrieval Network Based on Feature Projection and Cross-Reconstruction
- Author
-
Bo Peng, Lin Chen, Jiahui Song, Haifeng Shen, Qingming Huang, and Jianjun Lei
- Subjects
Control and Systems Engineering ,Electrical and Electronic Engineering ,Computer Science Applications ,Information Systems - Published
- 2022
- Full Text
- View/download PDF
27. Data from Sonic Hedgehog Paracrine Signaling Activates Stromal Cells to Promote Perineural Invasion in Pancreatic Cancer
- Author
-
Keping Xie, Erxi Wu, Dong Zhang, Kun Guo, Jian Guo, Wei Li, Liang Han, Shifang Lv, Xiu Wang, Jiguang Ma, Jianjun Lei, Wanxing Duan, Han Liu, Qinhong Xu, Qingyong Ma, Zheng Wang, and Xuqi Li
- Abstract
Purpose: Pancreatic cancer is characterized by stromal desmoplasia and perineural invasion (PNI). We sought to explore the contribution of pancreatic stellate cells (PSC) activated by paracrine Sonic Hedgehog (SHH) in pancreatic cancer PNI and progression. Experimental Design: In this study, the expression dynamics of SHH were examined via immunohistochemistry, real-time PCR, and Western blot analysis in a cohort of carcinomatous and nonneoplastic pancreatic tissues and cells. A series of in vivo and in vitro assays was performed to elucidate the contribution of PSCs activated by paracrine SHH signaling in pancreatic cancer PNI and progression. Results: We show that SHH overexpression in tumor cells is involved in PNI in pancreatic cancer and is an important marker of the biologic activity of pancreatic cancer. Moreover, the overexpression of SHH in tumor cells activates the hedgehog pathway in PSCs in the stroma instead of activating tumor cells. These activated PSCs are essential for the promotion of pancreatic cancer cell migration along nerve axons and nerve outgrowth to pancreatic cancer cell colonies in an in vitro three-dimensional model of nerve invasion in cancer. Furthermore, the coimplantation of PSCs activated by paracrine SHH induced tumor cell invasion of the trunk and nerve dysfunction along sciatic nerves and also promoted orthotopic xenograft tumor growth, metastasis, and PNI in in vivo models. Conclusions: These results establish that stromal PSCs activated by SHH paracrine signaling in pancreatic cancer cells secrete high levels of PNI-associated molecules to promote PNI in pancreatic cancer. Clin Cancer Res; 20(16); 4326–38. ©2014 AACR.
- Published
- 2023
- Full Text
- View/download PDF
28. Identification and characterization analysis of candidate genes controlling mushroom leaf development in Chinese kale by BSA-seq
- Author
-
Shuo Feng, Jianbing Wu, Kunhao Chen, Muxi Chen, Zhangsheng Zhu, Juntao Wang, Guoju Chen, Bihao Cao, Jianjun Lei, and Changming Chen
- Subjects
Genetics ,Plant Science ,Agronomy and Crop Science ,Molecular Biology ,Biotechnology - Published
- 2023
- Full Text
- View/download PDF
29. Hydropower dam alters the microbial structure of fish gut in different habitats in upstream and downstream rivers
- Author
-
Yusen Li, Kangqi Zhou, Huihong Zhao, Jun Shi, Weijun Wu, Anyou He, Yaoquan Han, Jianjun Lei, Yong Lin, Xianhui Pan, and Dapeng Wang
- Abstract
Hydropower dams are an important green renewable energy technology, but their effect on the gut microbes of fish in different habitats surrounding the dams is unclear. We collected the guts of seven fish species (n = 109 fish) both upstream and downstream of a dam in the Xijiang River basin, China, and identified the microbes present by 16S rRNA pyrosequencing. A total of 9,071 OTUs were identified from 1,576,253 high-quality tags at 97% sequence similarity. Our results indicated that the gut microbial diversity of upstream fish was significantly higher than that of downstream fish, though the dominant microbial species were similar and mainly comprised Proteobacteria (mean 35.0%), Firmicutes (20.4%), and Actinobacteria (15.6%). The presence of the dam markedly altered the gut microbial composition in Squaliobarbus curriculus and Hypostomus plecostomus. Moreover, we found specificity in the composition of gut microorganisms in fishes of different diets and pelagic levels, whereas the omnivorous Pseudohemiculter dispar had a higher level of species richness and diversity of gut bacteria compared with the other species. The results of the functional analysis showed that the abundance of microorganisms related to energy metabolism (e.g., amino acid metabolism, carbohydrate metabolism, biosynthesis metabolism) was significantly higher in the gut of upstream fish than in downstream fish. Our results showed that the hydropower station affected downstream levels of chlorophyll-a, total nitrogen, and total organic carbon. Canonical correspondence analysis showed that water temperature, Hg, and chlorophyll-a significantly affected gut microbial composition. These results are important for assessing the impact of hydropower plants on fish gut microbes and their potential environmental risks.
- Published
- 2023
- Full Text
- View/download PDF
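The upstream-versus-downstream diversity comparison above relies on standard alpha-diversity indices computed from the OTU table. A toy Shannon-diversity computation (the counts are made up for illustration, not the study's data):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over OTU relative abundances."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]   # zero-count OTUs contribute nothing
    return -sum(p * math.log(p) for p in ps)

upstream = [30, 25, 20, 15, 10]    # more even community -> higher H'
downstream = [80, 10, 5, 3, 2]     # dominated by one taxon -> lower H'
h_up, h_down = shannon(upstream), shannon(downstream)
```

A perfectly even community of k OTUs attains the maximum H' = ln(k), so evenness, not just richness, drives the index.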
30. Reinforcement Learning-Based Load Balancing for Heavy Traffic Internet of Things
- Author
-
Jianjun Lei and Jie Liu
- Published
- 2023
- Full Text
- View/download PDF
31. Stereoscopic Image Retargeting Based on Deep Convolutional Neural Network
- Author
-
Jianjun Lei, Yuming Fang, Nam Ling, Jie Liang, Qingming Huang, and Xiaoting Fan
- Subjects
Media Technology ,Electrical and Electronic Engineering - Abstract
Stereoscopic image retargeting aims at converting stereoscopic images to the target resolution adaptively. Different from 2D image retargeting, stereoscopic image retargeting needs to preserve both the shape structure of salient objects and depth consistency of 3D scenes. In this paper, we present a stereoscopic image retargeting method based on deep convolutional neural network to obtain high-quality retargeted images with both object shape preservation and scene depth preservation. First, a cross-attention extraction mechanism is constructed to generate attention map, which contains the valuable attention features of the left and right images and the common attention features between them. Second, since the disparity map can provide accurate depth information of objects in 3D scenes, a disparity-assisted 3D significance map generation module is utilized to further preserve the valuable depth information of stereoscopic images. Finally, in order to predict the retargeted stereoscopic images accurately, an image consistency loss is developed to preserve the geometric structure of salient objects, and a disparity consistency loss is introduced to eliminate depth distortions. Experimental results demonstrate that the proposed deep convolutional neural network can provide favorable stereoscopic image retargeting results.
- Published
- 2021
- Full Text
- View/download PDF
32. Deep Multi-Domain Prediction for 3D Video Coding
- Author
-
Jianjun Lei, Dong Liu, Yanan Shi, Nam Ling, Dengchao Jin, Ying Chen, and Zhaoqing Pan
- Subjects
Media Technology ,Electrical and Electronic Engineering - Abstract
Three-dimensional (3D) video contains plentiful multi-domain correlations, including spatial, temporal, and inter-view correlations. In this paper, a deep multi-domain prediction method is proposed for 3D video coding. Different from previous methods, our proposed method utilizes not only spatial and temporal correlations but also inter-view correlation to obtain a more accurate prediction, and adopts deep convolutional neural networks to effectively fuse multi-domain references. More specifically, a hierarchical prediction mechanism, which includes a spatial-temporal prediction network and a multi-domain prediction network, is designed to overcome the fusion difficulty of multi-domain reference information. Furthermore, a progressive spatial-temporal prediction network and a multi-scale multi-domain prediction network are designed to obtain the spatial-temporal prediction result and multi-domain prediction result respectively. Experimental results show that the proposed method achieves considerable bitrate saving compared with 3D-HEVC.
- Published
- 2021
- Full Text
- View/download PDF
33. Enabling Device-to-Device (D2D) Communication for the Next Generation WLAN
- Author
-
Fengjun Shang, Jianjun Lei, and Ying Wang
- Subjects
Computer Networks and Communications ,Electrical and Electronic Engineering ,Information Systems - Abstract
Device-to-device (D2D) communication technology is widely acknowledged as an emerging candidate to alleviate the wireless traffic explosion problem for the next generation wireless local area network (WLAN), IEEE 802.11ax. In this paper, we integrate D2D communication into IEEE 802.11ax to maximize spectrum efficiency. Due to spectrum scarcity, the number of available resource units (RUs) is typically smaller than the number of D2D pairs and stations (STAs), which makes the management of spectrum resources more complex. To tackle this issue, we design an efficient resource management algorithm for uplink random access and resource allocation in D2D-enabled 802.11ax, which provides more channel access and resource reuse opportunities for STAs and D2D pairs. Specifically, we propose an enhanced back-off mechanism and derive the optimal contention window (CW) from a theoretical model, which improves the uplink random access efficiency. To tackle the complex interference problem in the RU scheduling phase, we develop an efficient and low-complexity resource allocation algorithm based on the maximal independent set (MIS), which can schedule multiple D2D pairs to share the same RU with a specific STA. Simulation results demonstrate that the proposed algorithm significantly improves network performance in terms of system throughput, collision rate, completion time, and channel utilization.
- Published
- 2021
- Full Text
- View/download PDF
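The MIS-based RU sharing described in the abstract above can be illustrated with a toy conflict-graph model. The greedy routine, node ids, and interference set below are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
# Hypothetical sketch: schedule D2D pairs onto one RU via a greedy
# maximal independent set (MIS) on an interference conflict graph.
# The graph model and node ids are illustrative, not from the paper.

def greedy_mis(nodes, conflicts):
    """Pick a maximal set of mutually non-conflicting nodes.

    nodes: iterable of node ids, scanned in order (a real scheduler
           would order them by a utility metric)
    conflicts: set of frozensets {a, b} meaning a and b interfere
    """
    selected = []
    for n in nodes:
        # n joins the set only if it conflicts with no already-selected node
        if all(frozenset((n, s)) not in conflicts for s in selected):
            selected.append(n)
    return selected

# Three D2D pairs; pairs 0 and 1 interfere with each other.
pairs = [0, 1, 2]
interference = {frozenset((0, 1))}
print(greedy_mis(pairs, interference))  # [0, 2]: pairs 0 and 2 share the RU
```

A real scheduler would build the conflict graph from measured interference and repeat this per RU; the sketch only shows the independent-set selection step.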
34. Deep video action clustering via spatio-temporal feature learning
- Author
-
Jianjun Lei, Huazhu Fu, Bo Peng, Yalong Jia, Yi Li, and Zongqian Zhang
- Subjects
0209 industrial biotechnology ,business.industry ,Computer science ,Cognitive Neuroscience ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,02 engineering and technology ,Computer Science Applications ,ComputingMethodologies_PATTERNRECOGNITION ,020901 industrial engineering & automation ,Discriminative model ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Artificial intelligence ,Representation (mathematics) ,business ,Cluster analysis ,Feature learning ,Subspace topology - Abstract
Recent years have witnessed significant advances in deep video action recognition. However, the performance of deep learning-based video action recognition methods is limited when only a relatively small number of samples, or no labeled samples, are available. Using unlabeled video data to generate clustering labels is therefore essential for small-sample and zero-sample learning. In this paper, we propose a novel deep video action clustering network, which aims to learn the similarity relationships among unlabeled video samples and generate a clustering label for each video sample. Specifically, the proposed method simultaneously learns spatio-temporal features and subspace representations under a jointly optimized framework. It consists of a 3D U-Net self-representation generator, a video-clip reconstruction discriminator, and a confidence-based feedback mechanism. The 3D U-Net self-representation generator learns the spatio-temporal features of the video clips and produces a subspace representation matrix. A similarity graph is then constructed from this subspace representation matrix, and the clustering result is obtained. In the learning procedure, the confidence-based feedback mechanism feeds the high-confidence labels of partial samples back to further guide the subspace structure learning, so that the optimal result can be obtained. During training, the video-clip reconstruction discriminator is introduced to evaluate the reconstructed video clips, which helps capture discriminative spatio-temporal features. Experimental results on a video benchmark dataset demonstrate the effectiveness of the proposed method.
- Published
- 2021
- Full Text
- View/download PDF
35. BocODD1 and BocODD2 Regulate the Biosynthesis of Progoitrin Glucosinolate in Chinese Kale
- Author
-
Shuanghua Wu, Ting Zhang, Yudan Wang, Muxi Chen, Jianguo Yang, Fei Li, Ying Deng, Zhangsheng Zhu, Jianjun Lei, Guoju Chen, Bihao Cao, and Changming Chen
- Subjects
Inorganic Chemistry ,progoitrin glucosinolate ,ODD ,biosynthesis of glucosinolates ,gene function ,Chinese kale ,Organic Chemistry ,General Medicine ,Physical and Theoretical Chemistry ,Molecular Biology ,Spectroscopy ,Catalysis ,Computer Science Applications - Abstract
Progoitrin (2-hydroxy-3-butenyl glucosinolate, PRO) is the main source of bitterness in Brassica plants. Research on PRO glucosinolate biosynthesis can aid the understanding of the nutritional value of Brassica plants. In this study, four ODD genes likely involved in PRO biosynthesis were cloned from Chinese kale. These four genes, designated BocODD1–4, shared 75–82% sequence similarity with the ODD sequence of Arabidopsis. The sequences of the four BocODDs were analyzed, and BocODD1 and BocODD2 were chosen for further study. BocODD1 and BocODD2 showed the highest expression levels in the roots, followed by the leaves, flowers, and stems, in accordance with the trend of PRO content in the same tissues. Both the expression levels of BocODD1 and BocODD2 and the PRO content were significantly induced by high- and low-temperature treatments. The function of the BocODDs in PRO biosynthesis was identified: compared with the wild type, the PRO content was increased twofold in plants over-expressing BocODD1 or BocODD2, and was decreased more than twofold in the BocODD1 or BocODD2 RNAi lines. These results suggest that BocODD1 and BocODD2 may play important roles in the biosynthesis of PRO glucosinolate in Chinese kale.
- Published
- 2022
- Full Text
- View/download PDF
36. Deep reinforcement learning based sensing bidirectional nodes congestion control mechanism in wireless sensor networks
- Author
-
Jianjun Lei and Ying Zhou
- Published
- 2022
- Full Text
- View/download PDF
37. Texture-Guided End-to-End Depth Map Compression
- Author
-
Bo Peng, Yuying Jing, Dengchao Jin, Xiangrui Liu, Zhaoqing Pan, and Jianjun Lei
- Published
- 2022
- Full Text
- View/download PDF
38. RGB-D salient object detection via cross-modal joint feature extraction and low-bound fusion loss
- Author
-
Xiaoting Fan, Yanan Shi, Xinxin Zhu, Huazhu Fu, Jianjun Lei, and Yi Li
- Subjects
0209 industrial biotechnology ,Fusion ,Color image ,Computer science ,business.industry ,Cognitive Neuroscience ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,02 engineering and technology ,Computer Science Applications ,020901 industrial engineering & automation ,Artificial Intelligence ,Depth map ,0202 electrical engineering, electronic engineering, information engineering ,RGB color model ,020201 artificial intelligence & image processing ,Saliency map ,Artificial intelligence ,Joint (audio engineering) ,business ,Block (data storage) - Abstract
RGB-D salient object detection aims to identify attractive objects in a scene by combining the color image and the depth map. However, due to the differences between RGB-D image pairs, effectively utilizing cross-modal data is a key issue. In this paper, we propose a novel RGB-D salient object detection method via cross-modal joint feature extraction and a low-bound fusion loss. A two-stream framework is designed to generate the saliency maps for the RGB image and the depth map. During feature extraction, a cross-modal joint feature extraction module (CFM) is proposed to capture valuable joint features from the two streams. The CFM explores complementary information during feature extraction and feeds the joint features to the aggregation stage of the network. The fusion block (FB) is then utilized to aggregate the multi-scale features of each stream and the joint features to generate the updated features. In addition, a low-bound fusion loss is designed to constrain the predictions of the two streams, improving the lower bound of saliency values and generating a distinct saliency map. Experimental results on five datasets demonstrate that the proposed method achieves superior performance.
- Published
- 2021
- Full Text
- View/download PDF
39. Perceptual Quality Assessment for Asymmetrically Distorted Stereoscopic Video by Temporal Binocular Rivalry
- Author
-
Patrick Le Callet, Yuming Fang, Jiheng Wang, Jiebin Yan, Jianjun Lei, and Xiangjie Sui
- Subjects
Binocular rivalry ,Computer science ,business.industry ,Image quality ,media_common.quotation_subject ,Pattern recognition ,Weighting ,Visualization ,Quality (physics) ,Perception ,Distortion ,Media Technology ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Energy (signal processing) ,media_common - Abstract
In this paper, we propose a two-stage weighting-based perceptual quality assessment framework for asymmetrically distorted stereoscopic video (SV) sequences based on temporal binocular rivalry. First, a traditional 2D image quality assessment (IQA) method is employed to measure spatial distortion, and temporal distortion is evaluated by the magnitude differences between the motion vectors of distorted and reference video frames. Second, the structural strength (SS), computed from the gradient map, and the motion energy (ME), computed from the frame difference map, are used to estimate the intensity of the visual stimulus in the spatial and temporal domains, respectively. SS and ME are then used as importance indexes to combine the quality scores of spatial and temporal distortion into an estimate of the perceived distortion of single-view video sequences, which is denoted as the first-stage weighting. Finally, considering that a difference in the intensity of the visual stimulus between the two eyes results in binocular rivalry, a novel temporal-binocular-rivalry-inspired weighting method is designed to integrate the quality scores of the left and right views for the final visual quality prediction of SV sequences, which is denoted as the second-stage weighting. Experimental results on the Waterloo-IVC SV quality databases show that several specific 2D-IQA methods within the proposed framework obtain highly competitive performance over existing ones.
- Published
- 2021
- Full Text
- View/download PDF
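The two-stage weighting described in the abstract above lends itself to a small numeric sketch. The combination formulas, function names, and all numbers below are hypothetical stand-ins for exposition, not the paper's actual model.

```python
# Illustrative sketch of a two-stage weighted quality combination in the
# spirit of the abstract: structural strength (SS) and motion energy (ME)
# weight spatial vs. temporal distortion per view (stage 1), then
# stimulus-intensity weights combine the left and right views (stage 2).
# All formulas and numbers here are hypothetical, not the paper's model.

def single_view_quality(q_spatial, q_temporal, ss, me):
    """Stage 1: combine spatial/temporal scores, weighted by SS and ME."""
    w = ss / (ss + me)
    return w * q_spatial + (1 - w) * q_temporal

def stereo_quality(q_left, q_right, stim_left, stim_right):
    """Stage 2: binocular-rivalry-style weighting of the two views."""
    w_left = stim_left / (stim_left + stim_right)
    return w_left * q_left + (1 - w_left) * q_right

# Toy numbers: a left view with strong structure, a weaker right view.
q_l = single_view_quality(0.8, 0.6, ss=3.0, me=1.0)  # 0.75
q_r = single_view_quality(0.4, 0.6, ss=1.0, me=1.0)  # 0.5
print(stereo_quality(q_l, q_r, stim_left=2.0, stim_right=1.0))
```

The design point the sketch captures is that the stronger stimulus dominates the fused score, which is the intuition behind rivalry-based weighting for asymmetric distortion.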
40. Deep Stereoscopic Image Super-Resolution via Interaction Module
- Author
-
Xiaoting Fan, Xinxin Li, Qingming Huang, Jianjun Lei, Bolan Yang, Zhe Zhang, and Ying Chen
- Subjects
Computer science ,business.industry ,Deep learning ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Stereoscopy ,02 engineering and technology ,Iterative reconstruction ,Superresolution ,Image (mathematics) ,law.invention ,law ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Key (cryptography) ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Image resolution - Abstract
Deep learning-based methods have achieved remarkable performance in single image super-resolution. However, these methods cannot be effectively applied to stereoscopic image super-resolution without considering the characteristics of stereoscopic images. In this article, an interaction module-based stereoscopic image super-resolution network (IMSSRnet) is proposed to effectively utilize the correlation information in stereoscopic images. The key insight of the network lies in how to exploit the complementary information of one view to help the reconstruction of the other view. Thus, an interaction module is designed to acquire enhanced features by utilizing the complementary information between the views. Specifically, the interaction module is composed of a series of interaction units with a residual structure. In addition, the single-image features of the left and right views are obtained by a spatial feature extraction module, which can be realized by any existing single image super-resolution model. To obtain high-quality stereoscopic images, a gradient loss is introduced to preserve the texture details within a view, and a disparity loss is developed to constrain the disparity relationship between the views. Experimental results demonstrate that the proposed method achieves promising performance and outperforms the state-of-the-art methods.
- Published
- 2021
- Full Text
- View/download PDF
41. Unsupervised stereoscopic image retargeting via view synthesis and stereo cycle consistency losses
- Author
-
Yuming Fang, Xiaoting Fan, Jianjun Lei, Nam Ling, Jie Liang, and Xiaochun Cao
- Subjects
0209 industrial biotechnology ,business.industry ,Computer science ,Cognitive Neuroscience ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Stereoscopy ,02 engineering and technology ,Field (computer science) ,Computer Science Applications ,law.invention ,View synthesis ,Computer graphics ,Consistency (database systems) ,020901 industrial engineering & automation ,Seam carving ,Artificial Intelligence ,law ,Retargeting ,0202 electrical engineering, electronic engineering, information engineering ,Binocular disparity ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Stereoscopic image retargeting aims to manipulate the stereoscopic images to fit various devices with different resolutions and prescribed aspect ratios. With the development of various types of three-dimensional (3D) displays, stereoscopic image retargeting becomes increasingly popular in the field of computer graphics. In this paper, we propose an unsupervised stereoscopic image retargeting network (USIR-Net) to address the problem of stereoscopic image retargeting without label information. By exploring the inter-view correlation and disparity relationship of stereoscopic images, two unsupervised losses are developed to guide the learning of stereoscopic image retargeting model. First, in view of the inter-view correlation, a view synthesis loss is proposed to guarantee the generation of high-quality stereoscopic images with accurate inter-view relationship. Second, by exploiting the consistency of stereoscopic images before and after the retargeting, a stereo cycle consistency loss, which consists of a content consistency term and a disparity consistency term, is developed to preserve the structure information and prevent binocular disparity inconsistency. Quantitative and qualitative experimental results demonstrate that the proposed method achieves superior performance compared with state-of-the-art methods.
- Published
- 2021
- Full Text
- View/download PDF
42. Deep Spatial-Spectral Subspace Clustering for Hyperspectral Image
- Author
-
Nam Ling, Leyuan Fang, Xinyu Li, Jianjun Lei, Bo Peng, and Qingming Huang
- Subjects
Pixel ,business.industry ,Computer science ,Feature extraction ,Hyperspectral imaging ,Pattern recognition ,02 engineering and technology ,Kernel (linear algebra) ,Similarity (network science) ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,Cluster analysis ,business ,Subspace topology - Abstract
Hyperspectral image (HSI) clustering is a challenging task due to the complex characteristics of HSI data, such as spatial-spectral structure, high dimensionality, and large spectral variability. In this paper, we propose a novel deep spatial-spectral subspace clustering network (DS3C-Net), which explores spatial-spectral information via a multi-scale auto-encoder and a collaborative constraint. Considering the structure correlations of HSIs, the multi-scale auto-encoder is first designed to extract spatial-spectral features, taking pixel blocks of different scales as inputs. Collaboratively constrained self-expressive layers are then introduced between the encoder and decoder to capture the self-expressive subspace structures. By designing a self-expressiveness similarity constraint, the proposed network is trained collaboratively, and the affinity matrices of the feature representation are learned in an end-to-end manner. Based on the affinity matrices, the spectral clustering algorithm is utilized to obtain the final HSI clustering result. Experimental results on three widely used hyperspectral image datasets demonstrate that the proposed method outperforms state-of-the-art methods.
- Published
- 2021
- Full Text
- View/download PDF
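The final step in the abstract above, turning a learned affinity matrix into cluster labels, is typically done with spectral clustering (e.g. scikit-learn's `SpectralClustering` with `affinity='precomputed'`). As a dependency-free stand-in, the sketch below thresholds the affinity matrix into an adjacency graph and labels connected components; the matrix values and threshold are illustrative, not learned by any network.

```python
# Simplified stand-in for the clustering step: threshold the learned
# affinity matrix into an adjacency graph, then label connected
# components. Real pipelines use spectral clustering instead; the
# matrix values and the 0.5 threshold here are illustrative.

def cluster_from_affinity(A, thresh=0.5):
    """Assign a component label to each sample from affinity matrix A."""
    n = len(A)
    labels = [-1] * n
    cur = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue  # already reached from an earlier seed
        stack = [seed]
        labels[seed] = cur
        while stack:  # depth-first flood fill over strong affinities
            i = stack.pop()
            for j in range(n):
                if labels[j] == -1 and A[i][j] >= thresh:
                    labels[j] = cur
                    stack.append(j)
        cur += 1
    return labels

# Two obvious groups: samples {0, 1} and {2, 3}.
A = [[1.0, 0.9, 0.1, 0.0],
     [0.9, 1.0, 0.0, 0.1],
     [0.1, 0.0, 1.0, 0.8],
     [0.0, 0.1, 0.8, 1.0]]
print(cluster_from_affinity(A))  # [0, 0, 1, 1]
```

Spectral clustering would instead build a graph Laplacian from `A` and cluster its leading eigenvectors; the thresholded-components version only mirrors the "affinity matrix in, labels out" interface.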
43. Dihydromyricetin Imbues Antiadipogenic Effects on 3T3-L1 Cells via Direct Interactions with 78-kDa Glucose-Regulated Protein
- Author
-
Binmei Sun, Shaoqun Liu, Jianjun Lei, Qing X. Li, Zhibin Liang, Margaret R. Baker, Dongjin Pan, Zhizheng Wang, Deguan Tan, and Ching Yuan Hu
- Subjects
Nutrition and Dietetics ,Flavonols ,Chemistry ,ved/biology ,ved/biology.organism_classification_rank.species ,Medicine (miscellaneous) ,Proteomics ,In vitro ,Molecular Docking Simulation ,Mice ,Glucose ,78 kDa Glucose-Regulated Protein ,Biochemistry ,Adipogenesis ,3T3-L1 Cells ,Lipid droplet ,Adipocytes ,Animals ,Binding site ,Receptor ,Ampelopsis grossedentata ,Endoplasmic Reticulum Chaperone BiP - Abstract
Background: Obesity is among the most serious public health problems worldwide, with few safe pharmaceutical interventions. Natural products have become an important source of potential anti-obesity therapeutics. Dihydromyricetin (DHM) exerts antidiabetic effects. The biochemical target of DHM, however, has been unknown. It is crucial to identify the biochemical target of DHM for elucidating its physiological function and therapeutic value. Objectives: The objective of this study was to identify the biochemical target of DHM. Methods: An abundant antiadipogenic flavanonol was extracted from the herbal plant Ampelopsis grossedentata through bioassay-guided fractionation and characterized with high-resolution LC-MS and 1H and 13C nuclear magnetic resonance. Antiadipogenic experiments were done with mouse 3T3-L1 preadipocytes. A biochemical target of the chemical of interest was identified with a drug affinity responsive target stability assay. Direct interactions between the chemical of interest and the protein target in vitro were predicted with molecular docking and subsequently confirmed with surface plasmon resonance. Expression levels of peroxisome proliferator-activated receptor γ (PPARγ), which is associated with the 78-kDa glucose-regulated protein (GRP78), were measured with real-time qPCR. Results: DHM was isolated, purified, and structurally characterized. Cellular studies showed that DHM notably reduced intracellular oil droplet formation in 3T3-L1 cells with a median effective concentration of 294 μM (i.e., 94 μg/mL). DHM targeted the ATP binding site of GRP78, which is associated with adipogenesis. The equilibrium dissociation constant between DHM and GRP78 was 21.8 μM. In 3T3-L1 cells treated with DHM at 50 μM (i.e., 16 μg/mL), the expression level of PPARγ was downregulated to 53.9% of the solvent vehicle control's level. Conclusions: DHM targets GRP78 in vitro. DHM is able to reduce lipid droplet formation in 3T3-L1 cells through a mode of action that is plausibly associated with direct interactions between GRP78 and DHM, which is a step forward in determining potential applications of DHM as an anti-obesity agent.
- Published
- 2021
- Full Text
- View/download PDF
44. An R-R-type MYB transcription factor promotes nonclimacteric pepper fruit ripening pigmentation
- Author
-
Ningzuo Yang, Jiali Song, Changming Chen, Binmei Sun, Shuanglin Zhang, Yutong Cai, Xiongjie Zheng, Bihao Cao, Guoju Chen, Dan Jin, Bosheng Li, Jianxin Bian, Jianjun Lei, Hang He, and Zhangsheng Zhu
- Abstract
Summary: Carotenoids act as phytohormones and volatile compound precursors that influence plant development and confer characteristic colours, affecting both the aesthetic and nutritional value of fruits. Carotenoid pigmentation in ripening fruits is highly dependent on developmental trajectories. Transcription factors incorporate developmental and phytohormone signalling to regulate the biosynthesis process. In contrast to the well-established pathways regulating ripening-related carotenoid biosynthesis in climacteric fruit, carotenoid regulation in nonclimacteric fruit is poorly understood. Capsanthin is the primary carotenoid of nonclimacteric pepper (Capsicum) fruit; its biosynthesis is tightly associated with fruit ripening, and it confers red pigment to the ripening fruit. In this study, using a weighted gene coexpression network and expression analysis, we identified an R-R-type MYB transcription factor, DIVARICATA1, and demonstrated that it is tightly associated with the levels of carotenoid biosynthetic genes (CBGs) and capsanthin accumulation. DIVARICATA1 encodes a nucleus-localized protein that functions primarily as a transcriptional activator. Functional analyses demonstrated that DIVARICATA1 positively regulates CBG transcript levels and capsanthin content by directly binding to CBG promoters and activating their transcription. Furthermore, association analysis revealed a significant positive association between the DIVARICATA1 transcription level and capsanthin content. Abscisic acid (ABA) promotes capsanthin biosynthesis in a DIVARICATA1-dependent manner. Comparative transcriptomic analysis of DIVARICATA1 in pepper and its orthologue in a climacteric fruit, tomato, suggests that its function might be subject to divergent evolution between the two species. This study illustrates the transcriptional regulation of capsanthin biosynthesis and offers a novel target for breeding peppers with high red colour intensity.
- Published
- 2022
- Full Text
- View/download PDF
45. No-reference stereoscopic image quality assessment based on global and local content characteristics
- Author
-
Lili Shen, Jianjun Lei, Xiongfei Chen, Zhaoqing Pan, Fei Li, and Kefeng Fan
- Subjects
Computer science ,business.industry ,Image quality ,Cognitive Neuroscience ,Deep learning ,Feature extraction ,Context (language use) ,Stereoscopy ,Computer Science Applications ,law.invention ,Artificial Intelligence ,law ,Feature (computer vision) ,Human visual system model ,Computer vision ,Artificial intelligence ,business ,Block (data storage) - Abstract
No-reference stereoscopic image quality assessment (NR-SIQA) via deep learning has gained increasing attention. In this paper, we propose a no-reference stereoscopic image quality assessment method based on global and local content characteristics. The proposed method simulates the perception route of the human visual system, deriving features from the fused view and the single views through a global feature fusion sub-network and a local feature enhancement sub-network. For the fused view, a cross-fusion strategy is applied to model the process in the V1 visual cortex, and multi-scale pooling (MSP) is utilized to integrate context information from different sub-regions for effective global feature extraction. For the single views, an asymmetric convolution block (ACB) is introduced to strengthen the local information description. By jointly considering the fused view and the single views, the proposed network can efficiently extract features for quality assessment. Finally, a weighted average strategy is applied to estimate the visual quality of a stereoscopic image. Experimental results on 3D quality databases demonstrate that the proposed network is superior to state-of-the-art metrics and achieves excellent performance.
- Published
- 2021
- Full Text
- View/download PDF
46. A CNN-Based Fast Inter Coding Method for VVC
- Author
-
Jianjun Lei, Bo Peng, Peihan Zhang, Nam Ling, and Zhaoqing Pan
- Subjects
Kernel (linear algebra) ,Tree (data structure) ,Computational complexity theory ,Computer science ,Applied Mathematics ,Algorithmic efficiency ,Encoding (memory) ,Signal Processing ,Feature extraction ,Electrical and Electronic Engineering ,Algorithm ,Convolution ,Coding (social sciences) - Abstract
Versatile Video Coding (VVC) achieves superior coding efficiency compared with High Efficiency Video Coding (HEVC), but its excellent coding performance comes at the cost of several highly complex coding tools, such as Quad-Tree plus Multi-type Tree (QTMT)-based Coding Units (CUs) and multiple inter prediction modes. To reduce the computational complexity of VVC, a CNN-based fast inter coding method is proposed in this paper. First, a multi-information fusion CNN (MF-CNN) model is proposed to early-terminate the QTMT-based CU partition process by jointly using multi-domain information. Then, a content-complexity-based early Merge mode decision is proposed to skip time-consuming inter prediction modes by considering the CU prediction residuals and the confidence of the MF-CNN. Experimental results show that the proposed method reduces VVC encoding time by an average of 30.63%, while the Bjøntegaard Delta Bit Rate (BDBR) increases by about 3%.
- Published
- 2021
- Full Text
- View/download PDF
47. Disparity-Aware Reference Frame Generation Network for Multiview Video Coding
- Author
-
Jianjun Lei, Zongqian Zhang, Zhaoqing Pan, Dong Liu, Xiangrui Liu, Ying Chen, and Nam Ling
- Subjects
Computer Graphics and Computer-Aided Design ,Software - Abstract
Multiview video coding (MVC) aims to compress the multiview video through the elimination of video redundancies, where the quality of the reference frame directly affects the compression efficiency. In this paper, we propose a deep virtual reference frame generation method based on a disparity-aware reference frame generation network (DAG-Net) to transform the disparity relationship between different viewpoints and generate a more reliable reference frame. The proposed DAG-Net consists of a multi-level receptive field module, a disparity-aware alignment module, and a fusion reconstruction module. First, a multi-level receptive field module is designed to enlarge the receptive field, and extract the multi-scale deep features of the temporal and inter-view reference frames. Then, a disparity-aware alignment module is proposed to learn the disparity relationship, and perform disparity shift on the inter-view reference frame to align it with the temporal reference frame. Finally, a fusion reconstruction module is utilized to fuse the complementary information and generate a more reliable virtual reference frame. Experiments demonstrate that the proposed reference frame generation method achieves superior performance for multiview video coding.
- Published
- 2022
48. Genome-wide analysis of histone acetyltransferase and histone deacetylase families and their expression in fruit development and ripening stage of pepper (
- Author
-
Yutong Cai, Mengwei Xu, Jiarong Liu, Haiyue Zeng, Jiali Song, Binmei Sun, Siqi Chen, Qihui Deng, Jianjun Lei, Bihao Cao, Changming Chen, Muxi Chen, Kunhao Chen, Guoju Chen, and Zhangsheng Zhu
- Subjects
Plant Science - Abstract
Fruit development and ripening involve a series of changes regulated by fine-tuned gene expression at the transcriptional level. The acetylation levels of histones on lysine residues are dynamically regulated by histone acetyltransferases (HATs) and histone deacetylases (HDACs), which play an essential role in the control of gene expression. However, their role in regulating fruit development and ripening, especially in pepper (Capsicum annuum), a typical non-climacteric fruit, remains poorly understood. Herein, we performed genome-wide analyses of the HDAC and HAT families in pepper, including phylogenetic analysis, gene structure, conserved domains of the encoded proteins, and expression assays. A total of 30 HAT and 15 HDAC genes were identified from the pepper genome, and the number of genes differed among species. Sequence and phylogenetic analysis of CaHDACs and CaHATs compared with other plant HDAC and HAT proteins revealed conserved genes and potential genus-specialized genes. Furthermore, fruit developmental trajectory expression profiles showed that CaHDAC and CaHAT genes were differentially expressed, suggesting that some are functionally divergent. The integrative analysis allowed us to propose CaHDAC and CaHAT candidates that regulate fruit development and ripening-related phytohormone metabolism and signaling, which also accompany capsaicinoid and carotenoid biosynthesis. This study provides new insights into the role of histone modification-mediated development and ripening in non-climacteric fruits.
- Published
- 2022
49. A yes-associated protein 1-Notch1 positive feedback loop promotes breast cancer lung metastasis by attenuating the Bone morphogenetic protein 4-SMAD family member 1/5 signaling
- Author
-
Lin Zhao, Jianjun Lei, Shanzhi Gu, Yujiao Zhang, Xin Jing, Lu Wang, Lifen Zhang, Qian Ning, Minna Luo, Yifan Qi, Xinhan Zhao, and Shan Shao
- Subjects
Cancer Research ,General Medicine - Abstract
Notch1 (Notch1 receptor) and yes-associated protein 1 (YAP1) signaling can regulate breast cancer metastasis. This study investigated whether and how these two signaling pathways crosstalk to promote breast cancer lung metastasis. Here, we show that YAP1 expression was positively correlated with Notch1 in breast cancer according to bioinformatics and experimental validation. Mechanistically, YAP1 with TEA domain transcription factors (TEADs) enhanced Jagged1 (JAG1)-Notch1 signaling. Meanwhile, Notch1 promoted YAP1 stability in breast cancer cells by inhibiting β-TrCP-mediated degradation, thereby forming a YAP1-JAG1/Notch1 positive feedback loop in breast cancer. Furthermore, YAP1 enhanced the mammosphere formation and stemness of MDA-MB-231 cells by attenuating the inhibition of BMP4-SMAD1/5 signaling. In vivo, the YAP1-JAG1/Notch1 positive feedback loop promoted the lung colonization of MDA-MB-231 cells. Our data for the first time indicate that the YAP1-Notch1 positive feedback loop promotes lung metastasis of breast cancer by modulating self-renewal and inhibiting BMP4-SMAD1/5 signaling.
- Published
- 2022
50. Region-Enhanced Convolutional Neural Network for Object Detection in Remote Sensing Images
- Author
-
Jianjun Lei, Yanfeng Gu, Leyuan Fang, Mengyuan Wang, and Xiaowei Luo
- Subjects
Computer science ,business.industry ,Deep learning ,Feature extraction ,0211 other engineering and technologies ,Context (language use) ,02 engineering and technology ,Iterative reconstruction ,Object (computer science) ,Convolutional neural network ,Object detection ,Feature (computer vision) ,General Earth and Planetary Sciences ,Saliency map ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,021101 geological & geomatics engineering ,Remote sensing - Abstract
Convolutional neural networks (CNNs) have recently been demonstrated to be a powerful tool for object detection. However, in the complex scenes of remote sensing images, feature extraction of the object in a CNN is seriously affected by background information. To address this issue, in this article, a region-enhanced CNN (RECNN) is proposed for object detection in remote sensing images. The RECNN introduces a saliency constraint and a multilayer fusion strategy into the CNN model, which can effectively enhance the object regions for better detection. Specifically, the saliency map is extracted and utilized to guide the training of the proposed model, strengthening the saliency regions in the feature maps. In addition, since different layers reflect the object regions at varied resolutions, a multilayer fusion strategy is introduced to connect different convolutional layers and explore the context, further enhancing the feature maps of object regions. Experimental results on a publicly available ten-class object detection dataset demonstrate the superiority of the RECNN over several competitive object detection methods.
- Published
- 2020
- Full Text
- View/download PDF