4 results for "Song, Xiaogang"
Search Results
2. Image super-resolution with multi-scale fractal residual attention network.
- Author: Song, Xiaogang; Liu, Wanbo; Liang, Li; Shi, Weiwei; Xie, Guo; Lu, Xiaofeng; Hei, Xinhong
- Subjects: ARTIFICIAL neural networks; HIGH resolution imaging; MATHEMATICAL convolutions; FEATURE extraction; CONVOLUTIONAL neural networks
- Abstract
Deep neural networks can significantly improve super-resolution quality. However, previous work has made insufficient use of low-resolution scale features and channel-wise information, hindering the representational ability of CNNs. To address these issues, a multi-scale fractal residual attention network (MFRAN) is proposed. Specifically, MFRAN consists of fractal residual blocks (FRBs), dual-enhanced channel attention (DECA), and dilated residual attention blocks (DRABs). Among them, the FRB applies a multi-scale extension rule to expand recursively into a fractal structure that detects multi-scale features; the DRAB constructs a combined dilated convolution to learn a generalizable and expressive feature space with a larger receptive field; DECA employs one-dimensional convolution to achieve cross-channel information interaction and enhances the flow of information between groups by channel shuffling. We then integrate horizontal feature representations via local residuals and feature fusion. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed approach outperforms state-of-the-art methods in both quantitative metrics and visual results.
• We propose a Multi-scale Fractal Residual Attention Network (MFRAN) for SISR. Its Dual-Enhanced Channel Attention captures inter-channel dependencies efficiently and with low overhead while enabling cross-channel information interaction for enhanced channel modelling, so the model can reconstruct SR images with richer details at large scale factors.
• The Fractal Residual Block provides multiple local feature extraction paths of different lengths; each path consists of dilated residual attention blocks with different receptive field sizes, so LR features of different scales can be extracted efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2023
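The DECA mechanism described in this abstract (a 1-D convolution over per-channel statistics followed by channel shuffling) can be sketched in a few lines. This is a rough pure-Python illustration, not the paper's implementation: the fixed kernel values, function names, and group layout are all our own assumptions (in MFRAN the convolution kernel would be learned and applied to tensors).

```python
import math

def eca_weights(channel_means, k=3):
    """ECA-style channel attention: a 1-D convolution over per-channel
    statistics followed by a sigmoid, yielding one weight per channel."""
    kernel = [0.25, 0.5, 0.25]  # illustrative fixed kernel; learned in practice
    pad = k // 2
    padded = [channel_means[0]] * pad + channel_means + [channel_means[-1]] * pad
    conv = [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(channel_means))]
    return [1.0 / (1.0 + math.exp(-c)) for c in conv]

def channel_shuffle(channels, groups):
    """Interleave channels across groups so information flows between groups."""
    per = len(channels) // groups
    # reshape (groups, per) -> transpose -> flatten
    return [channels[g * per + p] for p in range(per) for g in range(groups)]
```

For example, shuffling six channels split into two groups interleaves the two halves, which is what lets group-wise features mix in the next layer.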
3. TransBoNet: Learning camera localization with Transformer Bottleneck and Attention.
- Author: Song, Xiaogang; Li, Hongjuan; Liang, Li; Shi, Weiwei; Xie, Guo; Lu, Xiaofeng; Hei, Xinhong
- Subjects: TRANSFORMER models; DEEP learning; CAMERAS; FEATURE extraction; LOCALIZATION (Mathematics); SINGLE-degree-of-freedom systems; AUTONOMOUS vehicles
- Abstract
6DoF camera localization is an important component of autonomous driving and navigation. Deep learning has achieved impressive results in localization, but its robustness in dynamic environments has not been adequately addressed. In this paper, we propose a framework based on a hybrid attention mechanism that can be generally applied to existing CNN-based pose regressors to improve their robustness in dynamic environments. Specifically, we propose a novel Transformer Bottleneck (TBo) block comprising convolution, channel attention, and a position-aware self-attention mechanism, which extracts more geometrically robust features by capturing long-range dependencies between pixels. Furthermore, we introduce shuffle attention (SA) before the pose regressor; it integrates feature information in both spatial and channel dimensions, forcing the network to learn geometrically robust features and reducing the effects of dynamic objects and illumination conditions to improve camera localization accuracy. We evaluate our method on commonly benchmarked indoor and outdoor datasets, and the experimental results show that our proposed method significantly improves localization performance and compares favorably to contemporary pose regression schemes. In addition, extensive ablation evaluations are conducted to prove the effectiveness of the proposed hybrid attention bottleneck block for pose regression networks.
• We propose a novel Transformer Bottleneck block with self-attention and channel attention to overcome the limitations of convolution. This coupling allows them to be optimized in a mutually reinforcing manner, significantly improving fine-grained feature extraction for accurate localization.
• We propose a novel end-to-end hybrid attention network for single-image localization, which improves the accuracy and robustness of camera localization, especially in dynamic scenes.
• We conduct extensive experiments on both indoor and outdoor datasets, showing that our model performs better than existing competitive methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
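The position-aware self-attention inside the TBo block builds on standard scaled dot-product attention, whose core is: score every query against every key, softmax the scores, and take the weighted average of the values. A minimal list-based sketch of that core operation follows; the function names and tiny vector representation are our own assumptions, not TransBoNet's actual code (which would also include positional encodings and learned projections).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors (one per pixel).
    Each output row is a score-weighted average of the value vectors."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # weights sum to 1 per query
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because every query attends to every key, the output at one pixel can depend on any other pixel, which is the "long-range dependency" property the abstract relies on.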
4. Mesoscale modelling of the FRP-concrete debonding mechanism in the pull-off test.
- Author: Wang, Xuan; Zhao, Tianlin; Guo, Jialong; Zhang, Zihua; Song, Xiaogang
- Subjects: DEBONDING; MORTAR; BOND strengths; FINITE element method; CONCRETE testing; FAILURE mode & effects analysis; CRACK propagation (Fracture mechanics)
- Abstract
This study comprehensively investigated the FRP-concrete debonding mechanism in the pull-off test using a mesoscale cohesive zone modelling approach. Pull-off tests were performed on FRP-strengthened concrete elements, and corresponding 2-D mesoscale finite element models were established. The numerical stress-separation responses and strain/crack initiation and propagation in the pull-off test were examined. A parametric study then numerically investigated the effect of adhesive and FRP properties, concrete heterogeneity, loading fixture stiffness, and sample scale on the debonding mechanism. Finally, the 2-D mesoscale model was extended to 3-D. The main conclusions are: (1) global separation between FRP and concrete in the pull-off test is minimal; (2) the normal bond strength of the epoxy resin-concrete interface controls the failure mode, whereas FRP stiffness does not affect the result; (3) aggregate content and mortar porosity significantly influence the bond strength, while the effect of aggregate shape and gradation is slight; (4) the loading fixture stiffness should increase with sample size; (5) the pull-off bond strength aligns with size effect theory; (6) the bond stiffness and strength of the 3-D model are significantly greater than those of the 2-D model, which is attributed to the out-of-plane constraint of the concrete. [ABSTRACT FROM AUTHOR]
- Published
- 2023
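Cohesive zone models like the one in this abstract are driven by a traction-separation law relating interface stress to the opening between FRP and concrete. As a rough illustration only, here is a bilinear law in pure Python; the function names and every parameter value are hypothetical, since the paper's actual law and material parameters are not given in the abstract.

```python
def bilinear_traction(delta, sigma_max, delta0, delta_f):
    """Bilinear cohesive law: traction rises linearly to the peak strength
    sigma_max at separation delta0, then softens linearly to zero at delta_f."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return sigma_max * delta / delta0                          # elastic branch
    if delta < delta_f:
        return sigma_max * (delta_f - delta) / (delta_f - delta0)  # softening branch
    return 0.0                                                     # fully debonded

def fracture_energy(sigma_max, delta_f):
    """Area under the bilinear law: G_f = 0.5 * sigma_max * delta_f."""
    return 0.5 * sigma_max * delta_f
```

In a finite element setting each interface integration point evaluates such a law; once the local separation exceeds delta_f the point carries no traction, which is how debonding propagates in the model.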
Discovery Service for Jio Institute Digital Library