Video super-resolution via dense non-local spatial-temporal convolutional network.
- Source :
- Neurocomputing. Aug 2020, Vol. 403, p1-12. 12p.
- Publication Year :
- 2020
Abstract
- In this paper, we present a novel end-to-end deep neural network for video super-resolution. In contrast to most previous methods, in which frames must be warped for temporal alignment based on estimated optical flow, we propose short-temporal and bidirectional long-temporal blocks to exploit the spatial-temporal dependencies between frames. This design effectively models both sudden and smoothly varying motion in videos and overcomes the limitations of explicit motion estimation. In addition, dense feature concatenation provides an effective way to combine low-level and high-level features, boosting the reconstruction of mid/high-frequency information, as shown in our analysis and experiments. Furthermore, we present a region-level non-local feature enhancement structure, which captures the spatial-temporal correlations between any two positions and exploits long-distance relevant information. Extensive evaluations and comparisons with current state-of-the-art approaches demonstrate the effectiveness of the proposed framework. [ABSTRACT FROM AUTHOR]
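The abstract names two architectural ideas: a region-level non-local structure that captures correlations between any two positions, and dense concatenation of low- and high-level features. The sketch below, assuming a PyTorch implementation, shows how a generic region-level non-local block could be combined with feature concatenation; the class name, channel sizes, region size, and pooling-based region summarization are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionNonLocalBlock(nn.Module):
    """Illustrative non-local block: correlates region-level (patch) summaries
    of a feature map and aggregates long-distance information."""
    def __init__(self, channels, region=4):
        super().__init__()
        self.region = region                        # patch size used as a "region" (assumption)
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)  # query embedding
        self.phi = nn.Conv2d(channels, inter, 1)    # key embedding
        self.g = nn.Conv2d(channels, inter, 1)      # value embedding
        self.out = nn.Conv2d(inter, channels, 1)    # restore channel count

    def forward(self, x):
        b, c, h, w = x.shape
        r = self.region
        q = F.avg_pool2d(self.theta(x), r)          # region-level queries
        k = F.avg_pool2d(self.phi(x), r)            # region-level keys
        v = F.avg_pool2d(self.g(x), r)              # region-level values
        bq, cq, hq, wq = q.shape
        q = q.flatten(2).transpose(1, 2)            # (b, N, c')
        k = k.flatten(2)                            # (b, c', N)
        v = v.flatten(2).transpose(1, 2)            # (b, N, c')
        attn = torch.softmax(q @ k, dim=-1)         # correlations between any two regions
        y = (attn @ v).transpose(1, 2).reshape(bq, cq, hq, wq)
        y = F.interpolate(y, size=(h, w), mode="nearest")  # back to full resolution
        return x + self.out(y)                      # residual enhancement

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)              # features from stacked frames (illustrative)
    enhanced = RegionNonLocalBlock(64)(feats)
    # Dense feature concatenation: fuse earlier (low-level) and later (high-level) features.
    fused = torch.cat([feats, enhanced], dim=1)     # (1, 128, 32, 32)
    print(fused.shape)
```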
- Subjects :
- *OPTICAL flow
*VIDEOS
*STREAMING video & television
Details
- Language :
- English
- ISSN :
- 0925-2312
- Volume :
- 403
- Database :
- Academic Search Index
- Journal :
- Neurocomputing
- Publication Type :
- Academic Journal
- Accession number :
- 143799863
- Full Text :
- https://doi.org/10.1016/j.neucom.2020.04.039