4 results for "Chi, Jianning"
Search Results
2. Cross-view information interaction and feedback network for face hallucination.
- Author
- Wang, Huan, Chi, Jianning, Wu, Chengdong, Yu, Xiaosheng, and Wu, Hao
- Subjects
- OPTICAL resolution; HIGH resolution imaging; IMAGE; HUMAN facial recognition software
- Abstract
Hallucinating a photo-realistic frontal face image from a low-resolution (LR) non-frontal face image is beneficial for a range of face-related applications. However, previous efforts either focus on super-resolving high-resolution (HR) face images from nearly frontal LR counterparts or on frontalizing non-frontal HR faces. For real-world face images captured in unconstrained environments, it is necessary to address both challenges jointly. In this paper, we develop a novel Cross-view Information Interaction and Feedback Network (CVIFNet), which handles non-frontal LR face image super-resolution (SR) and frontalization simultaneously in a unified framework and lets the two tasks interact with each other to further improve their performance. Specifically, CVIFNet is composed of two feedback sub-networks for frontal and profile face images. Since a reliable correspondence between frontal and non-frontal face images can contribute to face hallucination in different ways, we design a cross-view information interaction module (CVIM) that aggregates the HR representations of the different views produced by the SR and frontalization processes to generate finer face hallucination results. In addition, because 3D rendered facial priors contain rich hierarchical features, from low-level information (e.g., sharp edges and illumination) to perception-level information (e.g., identity), we design an identity-preserving consistency loss based on 3D rendered facial priors, which ensures that the high-frequency details of the frontal face hallucination result are consistent with the profile. Extensive experiments demonstrate the effectiveness and superiority of CVIFNet.
• A novel unified framework, the Cross-view Information Interaction and Feedback Network, is proposed to jointly tackle the face SR and face frontalization problems.
• A novel feedback mechanism is proposed to provide high-level reconstruction information in top-down feedback flows through feedback connections.
• A novel identity-preserving consistency loss based on 3D rendered facial priors is proposed to supervise the reconstruction of the spatial information of facial components. [ABSTRACT FROM AUTHOR]
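The feedback-with-interaction idea from this abstract can be illustrated with a deliberately toy numeric sketch. This is a hypothetical simplification, not the authors' CVIFNet: the real model operates on image tensors with learned sub-networks, whereas here each "view" is a single number, `refine`, `cross_view_interact`, and all coefficients are invented for illustration only.

```python
# Toy sketch of a two-view feedback refinement loop (hypothetical
# simplification of the paper's feedback sub-networks and CVIM).

def refine(estimate, feedback, target_hint):
    """One feedback step: move the estimate toward the target hint,
    nudged by the high-level feedback from the previous iteration."""
    return 0.5 * estimate + 0.3 * target_hint + 0.2 * feedback

def cross_view_interact(frontal, profile):
    """Toy cross-view interaction: blend the two views' current states
    so each benefits from the other's reconstruction."""
    shared = 0.5 * (frontal + profile)
    return 0.8 * frontal + 0.2 * shared, 0.8 * profile + 0.2 * shared

def hallucinate(lr_frontal, lr_profile, hr_target, steps=8):
    frontal, profile = lr_frontal, lr_profile
    fb_f = fb_p = 0.0  # no feedback available at the first step
    for _ in range(steps):
        frontal = refine(frontal, fb_f, hr_target)
        profile = refine(profile, fb_p, hr_target)
        frontal, profile = cross_view_interact(frontal, profile)
        fb_f, fb_p = frontal, profile  # feed outputs back to the next step
    return frontal, profile

f, p = hallucinate(0.1, 0.9, 1.0)
```

Running the loop, both views move toward the target and toward each other, which is the qualitative behavior the feedback connections and CVIM are described as providing.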
- Published
- 2023
3. Image super-resolution using multi-granularity perception and pyramid attention networks.
- Author
- Wang, Huan, Wu, Chengdong, Chi, Jianning, Yu, Xiaosheng, Hu, Qian, and Wu, Hao
- Subjects
- HIGH resolution imaging; PYRAMIDS; COMPUTER vision; NETWORK performance; CONVOLUTIONAL neural networks
- Abstract
Recently, single image super-resolution (SISR) has been widely applied in the multimedia and computer vision communities and has achieved remarkable performance. However, most current methods fail to exploit the multi-granularity features of the low-resolution (LR) image to further improve SISR performance. Moreover, the channel and spatial features obtained from the original LR images are treated equally, resulting in unnecessary computation on abundant uninformative features and thereby limiting the representational ability of super-resolution (SR) models. In this paper, we present a novel Multi-Granularity Pyramid Attention Network (MGPAN), which fully exploits multi-granularity perception and attention mechanisms to improve the quality of reconstructed images. We design a multi-branch dilated convolution layer whose varied kernels correspond to receptive fields of different sizes, modulating multi-granularity features to adaptively capture the most important information. Moreover, a novel spatial pyramid pooling attention (SPPA) module is constructed to integrate channel-wise and multi-scale spatial information, which helps compute the response values of the multi-scale regions around each neuron and thus establish an accurate mapping from the low- to the high-dimensional solution space. In addition, for long-short-term information preservation and information flow enhancement, we adopt short, long, and global skip connections to concatenate and fuse the states of each module, which effectively improves SR network performance. Extensive experiments on several standard benchmark datasets show that the proposed MGPAN provides state-of-the-art or even better performance in both quantitative and qualitative measurements. [ABSTRACT FROM AUTHOR]
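The pyramid-pooling part of the SPPA module described above can be sketched in miniature. This is a hypothetical toy, not the paper's implementation: MGPAN pools CNN feature maps and learns the attention weighting, while here the "feature map" is a plain 2D grid, and `pyramid_pool` and `attention_weight` (a fixed sigmoid gate) are invented stand-ins.

```python
import math

# Toy sketch of spatial pyramid pooling feeding an attention gate
# (hypothetical simplification of the SPPA idea).

def avg(vals):
    return sum(vals) / len(vals)

def pyramid_pool(feature_map, levels=(1, 2)):
    """Pool the map into 1x1, 2x2, ... grids of cells and collect the
    per-cell averages as a multi-scale spatial descriptor."""
    h, w = len(feature_map), len(feature_map[0])
    descriptor = []
    for g in levels:
        for gy in range(g):
            for gx in range(g):
                cell = [feature_map[y][x]
                        for y in range(gy * h // g, (gy + 1) * h // g)
                        for x in range(gx * w // g, (gx + 1) * w // g)]
                descriptor.append(avg(cell))
    return descriptor

def attention_weight(descriptor):
    """Squash the pooled descriptor into a single gating weight in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-avg(descriptor)))

fmap = [[0.0, 1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0, 7.0],
        [8.0, 9.0, 10.0, 11.0],
        [12.0, 13.0, 14.0, 15.0]]
desc = pyramid_pool(fmap)  # global average plus four quadrant averages
gate = attention_weight(desc)
```

The descriptor captures responses of regions at multiple scales (the whole map plus each quadrant), which is the "multi-scale regions of each neuron" intuition from the abstract.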
- Published
- 2021
4. Underwater image super-resolution using multi-stage information distillation networks.
- Author
- Wang, Huan, Wu, Hao, Hu, Qian, Chi, Jianning, Yu, Xiaosheng, and Wu, Chengdong
- Subjects
- HIGH resolution imaging; REMOTE submersibles; ROBOT vision; RECURRENT neural networks; FEATURE extraction
- Abstract
Recently, single image super-resolution (SISR) has been widely applied in underwater robot vision and has achieved remarkable performance. However, most current methods place a heavy burden on computational resources due to their large model sizes, which limits their use in real-world underwater robotic applications. In this paper, we introduce and tackle the super-resolution (SR) problem for underwater robot vision and provide an efficient solution for near real-time applications. We present MSIDN, a novel lightweight multi-stage information distillation network that better balances performance against applicability by aggregating the locally distilled features from different stages into a more powerful feature representation. Moreover, a novel recursive residual feature distillation (RRFD) module is constructed to progressively extract useful features with a modest number of parameters at each stage. We also propose a channel interaction & distillation (CI&D) module, which applies a channel split to the preceding features to produce two feature groups and exploits the channel-wise interaction information between them to generate the distilled features, effectively extracting the useful information of the current stage without extra parameters. In addition, we present USR-2K, a dataset of over 1.6K samples for large-scale underwater image SR training, together with a test set of an additional 400 samples for benchmark evaluation. Extensive experiments on several standard benchmark datasets show that the proposed MSIDN provides state-of-the-art or even better performance in both quantitative and qualitative measurements.
• An information distillation network is proposed for underwater image super-resolution.
• A recursive residual module is constructed for informative feature distillation.
• A channel-wise interaction mechanism is proposed to generate distilled features.
• A novel underwater image super-resolution dataset is presented. [ABSTRACT FROM AUTHOR]
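The channel-split-and-interact idea behind the CI&D module can be shown with a toy sketch. This is a hypothetical simplification, not MSIDN's module: real channels are feature maps and the interaction operates on tensors, whereas here each channel is a short list of numbers and the parameter-free `interact` gating rule is invented for illustration.

```python
# Toy sketch of channel split followed by parameter-free channel-wise
# interaction (hypothetical simplification of the CI&D idea).

def channel_split(channels):
    """Split the channel list into a retained half and a distilled half."""
    mid = len(channels) // 2
    return channels[:mid], channels[mid:]

def interact(retained, distilled):
    """Reweight each distilled channel by how strongly the retained half
    responds on average -- no learned parameters involved."""
    gate = sum(sum(c) / len(c) for c in retained) / len(retained)
    return [[v * gate for v in c] for c in distilled]

channels = [[1.0, 1.0], [3.0, 3.0], [2.0, 4.0], [0.0, 2.0]]
kept, dist = channel_split(channels)
refined = interact(kept, dist)  # distilled features, gated by `kept`
```

The point of the sketch is the "without extra parameters" claim: the distilled features are modulated purely by statistics of the other channel group, not by additional learned weights.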
- Published
- 2021
Discovery Service for Jio Institute Digital Library