
Text in the Dark: Extremely Low-Light Text Image Enhancement

Authors :
Lin, Che-Tsung
Ng, Chun Chet
Tan, Zhi Qin
Nah, Wan Jun
Wang, Xinyu
Kew, Jie Long
Hsu, Pohao
Lai, Shang Hong
Chan, Chee Seng
Zach, Christopher
Publication Year :
2024

Abstract

Extremely low-light text images are common in natural scenes, making scene text detection and recognition challenging. One solution is to enhance these images using low-light image enhancement methods before text extraction. However, previous methods often fail to address the significance of low-level features, which are crucial for optimal performance on downstream scene text tasks. Further research is also hindered by the lack of extremely low-light text datasets. To address these limitations, we propose a novel encoder-decoder framework with an edge-aware attention module that focuses on scene text regions during enhancement. Our proposed method uses novel text detection and edge reconstruction losses to emphasize low-level scene text features, leading to successful text extraction. Additionally, we present a Supervised Deep Curve Estimation (Supervised-DCE) model to synthesize extremely low-light images based on publicly available scene text datasets such as ICDAR15 (IC15). We also labeled texts in the extremely low-light See In the Dark (SID) and ordinary LOw-Light (LOL) datasets to allow for objective assessment of extremely low-light image enhancement through scene text tasks. Extensive experiments show that our model outperforms state-of-the-art methods in terms of both image quality and scene text metrics on the widely-used LOL, SID, and synthetic IC15 datasets. Code and dataset will be released publicly at https://github.com/chunchet-ng/Text-in-the-Dark.

Comment: The first two authors contributed equally to this work

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.14135
Document Type :
Working Paper