OneDConv: Generalized Convolution For Transform-Invariant Representation

Authors:
Zhang, Tong
Weng, Haohan
Yi, Ke
Chen, C. L. Philip
Publication Year:
2022

Abstract

Convolutional Neural Networks (CNNs) have exhibited great power in a variety of vision tasks. However, their lack of transform-invariance limits their further application in complicated real-world scenarios. In this work, we propose a novel generalized one-dimensional convolutional operator (OneDConv), which dynamically transforms the convolution kernels based on the input features in a computationally and parametrically efficient manner. The proposed operator extracts transform-invariant features naturally, improving the robustness and generalization of convolution without sacrificing performance on common images. OneDConv can substitute for vanilla convolution, so it can be incorporated into current popular convolutional architectures and trained end-to-end readily. On several popular benchmarks, OneDConv outperforms the original convolution operation and other proposed models on both canonical and distorted images.
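To make the drop-in-replacement claim concrete, the sketch below shows how a convolution whose kernels are modulated by the input could be slotted in place of a standard nn.Conv2d. This is a minimal, hypothetical illustration of dynamic kernel conditioning under assumed design choices (global pooling, per-output-channel scaling); it is not the paper's actual OneDConv implementation, and the module name and internals are assumptions for illustration only.

```python
# Hypothetical sketch: a dynamically modulated convolution used as a
# drop-in replacement for nn.Conv2d. NOT the paper's OneDConv; it only
# illustrates conditioning convolution kernels on the input features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.stride, self.padding = stride, padding
        # Base kernel shared across all inputs.
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.02
        )
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # Small network that predicts a per-sample, per-output-channel
        # scaling of the base kernel from globally pooled input features.
        self.modulator = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, out_ch),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b = x.size(0)
        scale = self.modulator(x)              # (B, out_ch)
        # Modulate the shared kernel per sample: (B, out_ch, in_ch, k, k)
        w = self.weight.unsqueeze(0) * scale.view(b, -1, 1, 1, 1)
        # Grouped-convolution trick to apply a different kernel per sample.
        x = x.reshape(1, -1, *x.shape[2:])     # (1, B*in_ch, H, W)
        w = w.reshape(-1, *self.weight.shape[1:])
        out = F.conv2d(x, w, stride=self.stride,
                       padding=self.padding, groups=b)
        out = out.reshape(b, -1, *out.shape[2:])
        return out + self.bias.view(1, -1, 1, 1)
```

Used this way, an existing architecture could swap `nn.Conv2d(64, 128, 3, padding=1)` for `DynamicConv2d(64, 128, 3, padding=1)` and still train end-to-end, which is the integration property the abstract attributes to OneDConv.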

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2201.05781
Document Type:
Working Paper