8 results for "Javier Vazquez-Corral"
Search Results
2. Perceptual Image Enhancement for Smartphone Real-Time Applications
- Author
-
Marcos V. Conde, Florin Vasluianu, Javier Vazquez-Corral, and Radu Timofte
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV)
- Abstract
Recent advances in camera designs and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small sensor size and lens limitations of smartphone cameras, we commonly find artifacts or degradation in the processed images. The most common unpleasant effects are noise artifacts, diffraction artifacts, blur, and HDR overexposure. Deep learning methods for image restoration can successfully remove these artifacts. However, most approaches are not suitable for real-time applications on mobile devices due to their heavy computation and memory requirements. In this paper, we propose LPIENet, a lightweight network for perceptual image enhancement, with a focus on deploying it on smartphones. Our experiments show that, with far fewer parameters and operations, our model can deal with the mentioned artifacts and achieve competitive performance compared with state-of-the-art methods on standard benchmarks. Moreover, to prove the efficiency and reliability of our approach, we deployed the model directly on commercial smartphones and evaluated its performance. Our model can process 2K-resolution images in under 1 second on mid-range commercial smartphones. Accepted at IEEE/CVF WACV 2023.
- Published
- 2023
3. A Benchmark of Objective Quality Metrics for HLG-Based HDR/WCG Image Coding
- Author
-
Marcelo Bertalmío, Javier Vazquez-Corral, Trevor Canham, and Yasuko Sugito
- Subjects
Image coding, Computer science, Benchmark (computing), Data mining, Objective quality
- Published
- 2020
4. Convolutional Neural Networks Can Be Deceived by Visual Illusions
- Author
-
Alexander Gomez-Villa, Javier Vazquez-Corral, Adrián Martín, and Marcelo Bertalmío
- Subjects
Deblurring, Color constancy, Optical illusion, Computer science, Deep learning, Convolutional neural network, Similarity (psychology), Artificial intelligence, Feature learning
- Abstract
Visual illusions teach us that what we see is not always what is represented in the physical world. Their special nature makes them a fascinating tool to test and validate any new vision model proposed. In general, current vision models are based on the concatenation of linear and non-linear operations. The similarity of this structure to the operations present in Convolutional Neural Networks (CNNs) has motivated us to study whether CNNs trained for low-level visual tasks are deceived by visual illusions. In particular, we show that CNNs trained for image denoising, image deblurring, and computational color constancy are able to replicate the human response to visual illusions, and that the extent of this replication varies with architecture and spatial pattern size. These results suggest that, in order to obtain CNNs that better replicate human behaviour, we may need to start aiming for them to better replicate visual illusions.
- Published
- 2019
5. Weakly Supervised Fog Detection
- Author
-
Aurélio Campilho, Javier Vazquez-Corral, Pedro Alves Costa, and Adrian Galdran
- Subjects
Training set, Haze, Computer science, Image (mathematics), Data modeling, Computer vision, Artificial intelligence, Visibility
- Abstract
Image dehazing aims to correct an undesired loss of visibility in outdoor images caused by the presence of fog. Recently, machine-learning techniques have shown great dehazing ability. However, in order to be trained, they require training sets with pairs of foggy images and their clean counterparts, or a depth map. In this paper, we propose to learn the appearance of fog from weakly labeled data. Specifically, we only require a single label per image stating whether it contains fog or not. Based on the Multiple-Instance Learning framework, we propose a model that can learn from image-level labels to predict whether an image contains haze by reasoning at a local level. The fog detection performance of the proposed method compares favorably with two popular techniques, and the attention maps generated by the model demonstrate that it effectively learns to disregard sky regions as indicative of the presence of fog, a common pitfall of current image dehazing techniques.
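The Multiple-Instance Learning idea in the abstract can be illustrated with a minimal sketch (a generic MIL "max" aggregation, not the paper's exact model; the function names are invented): each local region of an image receives a fog score, and the image-level prediction is the maximum over regions, so an image is flagged as foggy if any patch looks foggy.

```python
import numpy as np

def mil_image_score(local_scores):
    # Generic MIL "max" aggregation (an illustrative sketch, not the
    # paper's model): local_scores holds a per-patch fog probability;
    # an image counts as foggy if at least one patch looks foggy,
    # so the image-level score is the maximum over patches.
    return float(np.max(local_scores))

def mil_image_label(local_scores, threshold=0.5):
    # Image-level decision from patch-level scores only: this is what
    # allows training from a single weak label per image.
    return mil_image_score(local_scores) >= threshold
```

Under this aggregation, only the image-level label needs supervision; the patches responsible for the maximum receive the learning signal, which is how local fog evidence can be learned from weak, image-level labels.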
- Published
- 2018
6. NTIRE 2018 Challenge on Image Dehazing: Methods and Results
- Author
-
Cosmin Ancuti, Ruhao Zhao, Xiaoping Ma, Yong Qin, Limin Jia, Klaus Friedel, Sehwan Ki, Hyeonjun Sim, Jae-Seok Choi, Sooye Kim, Soomin Seo, Codruta O. Ancuti, Saehun Kim, Munchurl Kim, Ranjan Mondal, Sanchayan Santra, Bhabatosh Chanda, Jinlin Liu, Kangfu Mei, Juncheng Li, null Luyao, Faming Fang, Radu Timofte, Aiwen Jiang, Xiaochao Qu, Ting Liu, Pengfei Wang, Biao Sun, Jiangfan Deng, Yuhang Zhao, Ming Hong, Jingying Huang, Yizhi Chen, Luc Van Gool, Erin Chen, Xiaoli Yu, Tingting Wu, Anil Genc, Deniz Engin, Hazim Kemal Ekenel, Wenzhe Liu, Tong Tong, Gen Li, Qinquan Gao, Lei Zhang, Zhan Li, Daofa Tang, Yuling Chen, Ziying Huo, Aitor Alvarez-Gila, Adrian Galdran, Alessandro Bria, Javier Vazquez-Corral, Marcelo Bertalmío, H. Seckin Demir, Ming-Hsuan Yang, Omer Faruk Adil, Huynh Xuan Phung, Xin Jin, Jiale Chen, Chaowei Shan, Zhibo Chen, Vishal M. Patel, He Zhang, and Vishwanath A. Sindagi
- Subjects
Haze, Computer science, Image (mathematics), Computer vision, Artificial intelligence, Focus (optics), Image restoration
- Abstract
This paper reviews the first challenge on image dehazing (restoration of rich details in hazy images) with a focus on the proposed solutions and results. The challenge had two tracks: Track 1 employed indoor images (the I-HAZE dataset), while Track 2 employed outdoor images (the O-HAZE dataset). The hazy images were captured in the presence of real haze, generated by professional haze machines. The I-HAZE dataset contains 35 scenes corresponding to indoor domestic environments, with objects of different colors and specularities. O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. The dehazing process was learnable through provided pairs of haze-free and hazy training images. Each track had ~120 registered participants, and 21 teams competed in the final testing phase. The proposed methods gauge the state of the art in image dehazing.
- Published
- 2018
7. Color-matching Shots from Different Cameras Having Unknown Gamma or Logarithmic Encoding Curves
- Author
-
Marcelo Bertalmío, Javier Vazquez-Corral, and Raquel Gil Rodríguez
- Subjects
Logarithm, Computer science, Non-linearity estimation, Color matching, Color stabilization, Encoding (memory), Computer graphics (images), Gamma corrected images, Logarithmic encoded images
- Abstract
Paper presented at the SMPTE 2017 Annual Technical Conference and Exhibition, held October 23–26, 2017, in Los Angeles, United States. In cinema and TV it is quite usual to have to work with footage coming from several cameras, which show noticeable color differences among them even if they are all the same model. In TV broadcasts, technicians work in camera control units so as to ensure color consistency when cutting from one camera to another. In cinema post-production, colorists need to manually color-match images coming from different sources. Aiming to help perform this task automatically, the Academy Color Encoding System (ACES) introduced a color management framework to work within the same color space and be able to use different cameras and displays; however, the ACES pipeline requires the cameras to be characterized beforehand, and therefore does not allow working ‘in the wild’, a situation which is very common. We present a color stabilization method that, given two images of the same scene taken by two cameras with unknown settings and unknown internal parameter values, and encoded with unknown non-linear curves (logarithmic or gamma), is able to correct the colors of one of the images, making it look as if it had been captured with the other camera. Our method is based on treating the in-camera color processing pipeline as a combination of a 3x3 matrix followed by a non-linearity, which allows us to model a color stabilization transformation between two shots as a linear-nonlinear function with several parameters. We find corresponding points between the two images, compute the error (color difference) over them, and determine the transformation parameters that minimize this error, all automatically and without any user input. The method is fast, and the results have no spurious colors or spatio-temporal artifacts of any kind.
It outperforms the state of the art both visually and according to several metrics, and can handle very challenging real-life examples. This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government and FEDER Fund, grant ref. TIN2015-71537-P(MINECO/FEDER,UE), and by the Icrea Academia Award. The work of J. Vazquez-Corral was supported by the Spanish government under Grant IJCI-2014-19516.
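The linear-nonlinear model described in the abstract can be sketched as a small fitting problem: assume one shot's colors map to the other's through a 3x3 matrix followed by a power-law non-linearity, and estimate the ten parameters by least squares over corresponding points. This is an illustrative reconstruction under those stated assumptions, not the authors' implementation; the function names are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def apply_model(params, rgb):
    # params: 9 entries of a 3x3 color matrix followed by one gamma
    # exponent, modeling the pipeline as linear mixing + non-linearity.
    M = params[:9].reshape(3, 3)
    gamma = params[9]
    linear = rgb @ M.T
    return np.clip(linear, 1e-6, None) ** gamma

def fit_color_stabilization(src, dst):
    # src, dst: (N, 3) arrays of corresponding colors from the two
    # shots. Start from the identity transform and minimize the color
    # difference over the correspondences.
    x0 = np.concatenate([np.eye(3).ravel(), [1.0]])
    residual = lambda p: (apply_model(p, src) - dst).ravel()
    return least_squares(residual, x0).x
```

With the fitted parameters, `apply_model(params, pixels)` re-renders one shot in the other camera's color response; the paper additionally handles logarithmic encoding curves, which this power-law sketch does not cover.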
- Published
- 2017
8. Automatic, Fast and Perceptually Accurate Gamut Mapping Based on Vision Science Models
- Author
-
Marcelo Bertalmío, Syed Waqas Zamir, and Javier Vazquez-Corral
- Subjects
Wide color gamut, Computer science, Color reproduction, Gamut mapping algorithms, Vision science, Gamut, Computer graphics (images), Gamut mapping
- Abstract
Paper presented at the SMPTE 2017 Annual Technical Conference and Exhibition, held October 23–26, 2017, in Los Angeles, United States. Gamut mapping transforms colors of the original (image or video) content to the color palette of the display device with the simultaneous goals of (a) reproducing the content accurately while preserving the artistic intent of the original content’s creator and (b) exploiting the full color rendering potential of the target display device. The rapid advancement in display technologies has created a pressing need for automatic and fast gamut mapping algorithms that can deal with imagery intended for both conventional and emerging displays. In this paper, we propose a novel framework based on retinal and color perception models from vision science that can perform both gamut reduction and gamut extension, while preserving hue and taking into account the analysis of the colors of the input image. We evaluate the performance of the proposed framework visually and by using a perceptually-based error metric, according to which the gamut-mapped results of our framework outperform those of the state-of-the-art methods. This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government and FEDER Fund, grant ref. TIN2015-71537-P (MINECO/FEDER,UE), and by the Icrea Academia Award. The work of Javier Vazquez-Corral was supported by the Spanish government grant IJCI-2014-19516.
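As a concrete, deliberately simple illustration of what gamut reduction does, the toy sketch below desaturates each out-of-gamut color toward its own luminance just enough to fit the target RGB cube, preserving the hue direction. This is a baseline-style sketch for intuition only, not the vision-science framework the paper proposes, and the function name is invented.

```python
import numpy as np

def desaturate_into_gamut(rgb):
    # Toy gamut reduction: blend each out-of-range color toward its own
    # luminance until every channel fits in [0, 1]. In-gamut colors are
    # left untouched; out-of-gamut colors keep their hue direction but
    # lose saturation.
    lum = np.clip(rgb @ np.array([0.2126, 0.7152, 0.0722]), 0.0, 1.0)[..., None]
    d = rgb - lum
    with np.errstate(divide="ignore", invalid="ignore"):
        # Largest step t in [0, 1] such that lum + t*d stays in the cube.
        t_hi = np.where(d > 0, (1.0 - lum) / d, np.inf)
        t_lo = np.where(d < 0, -lum / d, np.inf)
    t = np.clip(np.min(np.minimum(t_hi, t_lo), axis=-1, keepdims=True), 0.0, 1.0)
    return lum + t * d
```

Real gamut mapping algorithms, including the perceptual framework described above, make this compression spatially adaptive and hue-preserving in a perceptual color space rather than operating per pixel in RGB.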
- Published
- 2017
Discovery Service for Jio Institute Digital Library