127 results for "Texture enhancement"
Search Results
2. Joint Optimization-Based Texture and Geometry Enhancement Method for Single-Image-Based 3D Content Creation.
- Author
-
Park, Jisun, Kim, Moonhyeon, Kim, Jaesung, Kim, Wongyeom, and Cho, Kyungeun
- Subjects
*GENERALIZATION, *GEOMETRY
- Abstract
Recent studies have explored the generation of three-dimensional (3D) meshes from single images. A key challenge in this area is the difficulty of simultaneously improving both generalization and detail in 3D mesh generation. To address this issue, existing methods utilize fixed-resolution mesh features to train networks for generalization. This approach is capable of generating the overall 3D shape without limitations on object categories. However, the generated shape often exhibits a blurred surface and suffers from suboptimal texture resolution due to the fixed-resolution mesh features. In this study, we propose a joint optimization method that enhances geometry and texture by integrating generalized 3D mesh generation with adjustable mesh resolution. Specifically, we apply an inverse-rendering-based remeshing technique that enables the estimation of complex-shaped meshes without relying on fixed-resolution structures. After remeshing, we enhance the texture to improve the detailed quality of the remeshed mesh via a texture enhancement diffusion model. By separating the tasks of generalization, detailed geometry estimation, and texture enhancement and adapting different target features for each specific network, the proposed joint optimization method effectively addresses the characteristics of individual objects, resulting in increased surface detail and the generation of high-quality textures. Experimental results on the Google Scanned Objects and ShapeNet datasets demonstrate that the proposed method significantly improves the accuracy of 3D geometry and texture estimation, as evaluated by the PSNR, SSIM, LPIPS, and CD metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Scattered Light Compensation Combined with Color Preservation and Contrast Balance for Underwater Image Enhancement
- Author
-
Zemeng NING, Sen LIN, and Xingran LI
- Subjects
unmanned undersea system, color correction, underwater image enhancement, texture enhancement, scattering light compensation, Naval architecture. Shipbuilding. Marine engineering, VM1-989
- Abstract
To address the color deviation, low contrast, and blurring in underwater images, an underwater image enhancement method based on scattered light compensation combined with color preservation and contrast balance was proposed. Firstly, the relative total variation model was used to separate the structure and texture layers of the image. Specifically, the color deviation of the structural layer was corrected by defining a compensation coefficient error matrix based on the RGB spatial mapping, and the texture layer was enhanced by filtering separation and fusion to prevent the initial feature loss of the image. The enhanced texture layer was superimposed on the structural layer to obtain the output of the first layer. In addition, in the contrast balance module, color-preserving contrast-limited adaptive histogram equalization based on spatial transformation was performed to further improve the contrast and brightness. Finally, the enhanced results of the two layers were fused to output the image. Comparisons conducted on different datasets verify that the proposed method performs better in balancing color deviation, enhancing details, and deblurring, and has practical application value in unmanned undersea system-based vision tasks.
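As a rough illustration of the two ideas combined in this abstract, structure/texture separation followed by contrast-limited adaptive histogram equalization, here is a minimal sketch; it uses a bilateral filter as a stand-in for the relative total variation model, and the texture gain and CLAHE settings are assumed values, not the authors':

```python
import cv2
import numpy as np

def enhance_underwater(bgr, texture_gain=1.5):
    """Toy structure/texture split + CLAHE; not the paper's RTV-based pipeline."""
    img = bgr.astype(np.float32)
    # Edge-preserving smoothing approximates the "structure layer";
    # the residual is treated as the "texture layer".
    structure = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    texture = img - structure
    recombined = np.clip(structure + texture_gain * texture, 0, 255).astype(np.uint8)
    # Contrast balance: CLAHE applied to the lightness channel only.
    lab = cv2.cvtColor(recombined, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```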
- Published
- 2024
- Full Text
- View/download PDF
4. HA-Net: A Hybrid Algorithm Model for Underwater Image Color Restoration and Texture Enhancement.
- Author
-
Qian, Jin, Li, Hui, and Zhang, Bin
- Subjects
IMAGE reconstruction, PETRI nets, FEATURE extraction, SUBMERGED structures, IMAGE intensifiers, ALGORITHMS, LEARNING ability
- Abstract
Due to the extremely irregular nonlinear degradation of images obtained in real underwater environments, it is difficult for existing underwater image enhancement methods to stably restore degraded underwater images, thus making it challenging to improve the efficiency of marine work. We propose a hybrid algorithm model for underwater image color restoration and texture enhancement, termed HA-Net. First, we introduce a dynamic color correction algorithm based on depth estimation to restore degraded images and mitigate color attenuation in underwater images by calculating the depth of targets and backgrounds. Then, we propose a multi-scale U-Net to enhance the network's feature extraction capability and introduce a parallel attention module to capture image spatial information, thereby improving the model's accuracy in recognizing deep semantics such as fine texture. Finally, we propose a global information compensation algorithm to enhance the output image's integrity and boost the network's learning ability. Experimental results on synthetic standard data sets and real data demonstrate that our method produces images with clear texture and bright colors, outperforming other algorithms in both subjective and objective evaluations, making it more suitable for real marine environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Research on video face forgery detection model based on multiple feature fusion network.
- Author
-
Hou, Wenyan, Sun, Jingtao, Liu, Huanqi, and Zhang, Fengling
- Abstract
In recent years, the nefarious exploitation of video face forgery technology has emerged as a grave threat, not only to personal property security but also to the broader stability of states and societies. Although numerous models and methods have emerged for video face forgery detection, these methods fall short in recognizing subtle traces of forgery in local regions, and the performance of the detection models is often affected to some extent when dealing with specific forgery strategies. To solve this problem, we propose a multiple feature fusion network (MFF-Net) for video face forgery detection. The model employs Res2Net50 to extract texture features of the video, which realizes deeper texture feature extraction. By integrating the extracted texture and frequency features into a temporal feature extraction module, which includes a three-layer LSTM network, the detection model fully incorporates the diverse features of the video information, thus identifying subtle artifacts more effectively. To further enhance the discrimination ability of the model, we also introduce a texture activation module (TAM) in the texture feature extraction section. It helps to enhance the saliency of subtle forgery traces, thus improving the detection of specific forgery strategies. To verify the effectiveness of the proposed method, we conduct experiments on several widely used datasets such as FaceForensics++ and DFD. The experimental results demonstrate that the MFF-Net model can recognize subtle forgery traces more effectively, especially in the case of a particular forgery strategy, and the model exhibits excellent performance and high detection accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Face Forgery Detection via Texture and Saliency Enhancement
- Author
-
Guo, Sizheng, Yang, Haozhe, Lin, Xianming, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Rudinac, Stevan, editor, Hanjalic, Alan, editor, Liem, Cynthia, editor, Worring, Marcel, editor, Jónsson, Björn Þór, editor, Liu, Bei, editor, and Yamakata, Yoko, editor
- Published
- 2024
- Full Text
- View/download PDF
7. Evaluating the effect of Spirulina platensis and whey powder in baguette texture using response surface methodology (RSM)
- Author
-
Vosough, Morvarid, Emamshoushtari, Mir Mehrshad, Helchi, Salar, Sohani, Elnaz, Esmaeili, Fatemeh, Shayanfar, Shima, and PajoumShariati, Farshid
- Published
- 2024
- Full Text
- View/download PDF
8. Development of kiwi fruit leather incorporated with hydrocolloids and betacyanin microcapsules: Rheological behaviour and release kinetics of betacyanin
- Author
-
G.V.S. Bhagya Raj, Kshirod Kumar Dash, Rafeeya Shams, Shaikh Ayaz Mukarram, and Bela Kovács
- Subjects
Controlled release, Texture enhancement, Cross model, Shelf life, Elasticity and firmness, Food processing and manufacture, TP368-456
- Abstract
The kiwi fruit leather incorporated with betacyanin microcapsules was prepared using three hydrocolloids, namely xanthan gum, gellan gum, and guar gum. The four kiwi fruit purees prepared for the study were kiwi fruit puree without hydrocolloid (CP), with xanthan gum (XE), with gellan gum (EP), and with guar gum (UP). The influence of various hydrocolloids and temperatures on the rheological properties of kiwi fruit puree was investigated using the Cross, Carreau, and modified Powell models. The apparent viscosity decreased with increasing shear rate and rising temperature. The incorporation of hydrocolloid also increased the viscosity of the prepared puree. The Cross model (R² > 0.989) was found to fit the apparent viscosity with respect to shear rate. The flow behaviour index of the four kiwi fruit puree samples was found to be less than one, depicting a pseudoplastic fluid with shear-thinning behaviour. In the dynamic rheology study over the frequency range of 1–60 rad/s, the storage modulus was higher than the loss modulus at a given frequency, implying that the elastic properties of the puree were dominant over the viscous ones and suggesting a weak gel-like network in the kiwi fruit puree. The glass transition temperature of the prepared kiwi fruit leathers ranged from 59 to 64 °C. The in vitro betacyanin release from the leather sample was studied, and the Peppas-Sahlin model was observed to be one of the best-fitting models when compared with the other kinetic models. The development of kiwi fruit leather with hydrocolloids and betacyanin microcapsules represented a successful strategy to create a functional snack that combines health benefits with consumer-desirable sensory attributes.
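For reference, the Cross model cited here expresses apparent viscosity as a function of shear rate, η(γ̇) = η∞ + (η0 − η∞)/(1 + (λγ̇)^m). A minimal fitting sketch with SciPy is shown below; the synthetic data and initial guesses are placeholders, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def cross_model(shear_rate, eta0, eta_inf, lam, m):
    """Cross model: apparent viscosity as a function of shear rate."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * shear_rate) ** m)

# Placeholder data; in the study these would be measured viscosities of each puree.
shear = np.logspace(-1, 2, 30)
eta_measured = cross_model(shear, 12.0, 0.05, 2.0, 0.8) * (1 + 0.02 * np.random.randn(30))

params, _ = curve_fit(cross_model, shear, eta_measured, p0=[10.0, 0.1, 1.0, 1.0], maxfev=10000)
eta0, eta_inf, lam, m = params
print(f"eta0={eta0:.3f}, eta_inf={eta_inf:.3f}, lambda={lam:.3f}, m={m:.3f}")
```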
- Published
- 2024
- Full Text
- View/download PDF
9. A Texture Enhancement Method for Oceanic Internal Wave Synthetic Aperture Radar Images Based on Non-Local Mean Filtering and Texture Layer Enhancement.
- Author
-
Chen, Zhenghua, Zeng, Hongcheng, Wang, Yamin, Yang, Wei, Guan, Yanan, and Liu, Wei
- Subjects
*INTERNAL waves, *SYNTHETIC apertures, *SYNTHETIC aperture radar, *IMAGE denoising, *SPECKLE interference
- Abstract
Synthetic aperture radar (SAR) is an important tool for observing the oceanic internal wave phenomenon. However, owing to the unstable imaging quality of SAR on oceanic internal waves, the texture details of internal wave images are usually unclear, which is not conducive to the subsequent applications of the images. To cope with this problem, a texture enhancement method for oceanic internal wave SAR images is proposed in this paper, which is based on non-local mean (NLM) filtering and texture layer enhancement (TLE). Since the strong speckle noise commonly present in internal wave images is simultaneously enhanced during texture enhancement, resulting in degraded image quality, NLM filtering is first performed to suppress speckle noise. Then, the denoised image is decomposed into the structure layer and the texture layer, and a texture layer enhancement method oriented to the texture characteristics of oceanic internal waves is proposed and applied. Finally, the enhanced texture layer and the structure layer are combined to reconstruct the final enhanced image. Experiments are conducted based on the Gaofen-3 real SAR data, and the results demonstrate that the proposed method performs well in suppressing speckle noise, maintaining overall image brightness, and enhancing internal wave texture details. [ABSTRACT FROM AUTHOR]
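A simplified stand-in for the NLM-plus-texture-layer-enhancement pipeline described above; the filter strengths and texture gain are illustrative assumptions, and the actual method uses a texture layer enhancement tailored to internal wave characteristics:

```python
import cv2
import numpy as np

def enhance_internal_wave_texture(sar_u8, h=12, texture_gain=2.0, sigma=5):
    """Denoise, split into structure/texture layers, and boost the texture layer."""
    # 1) Non-local means suppresses speckle before any enhancement.
    denoised = cv2.fastNlMeansDenoising(sar_u8, None, h, 7, 21)
    den = denoised.astype(np.float32)
    # 2) Structure layer from low-pass filtering; texture layer is the residual.
    structure = cv2.GaussianBlur(den, (0, 0), sigma)
    texture = den - structure
    # 3) Recombine with an amplified texture layer, keeping overall brightness.
    out = np.clip(structure + texture_gain * texture, 0, 255)
    return out.astype(np.uint8)
```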
- Published
- 2024
- Full Text
- View/download PDF
10. Improved detector in orchard via top-to-down texture enhancement and adaptive region-aware feature fusion.
- Author
-
Sun, Wei, Tian, Yulong, Wang, Qianzhou, Lu, Jin, Kong, Xianguang, and Zhang, Yanning
- Subjects
ORCHARDS, DETECTORS, POLLINATION
- Abstract
Accurate target detection in complex orchard environments is the basis for automatic picking and pollination. Small target size, clustering, and complex interference greatly increase the difficulty of detection. To this end, we explore a detector for the orchard and improve the detection of complex targets. Our model includes two core designs to make it suitable for reducing the risk of erroneous detection due to small and camouflaged object features. The multi-scale texture enhancement design focuses on extracting and enhancing more distinguishable features at each level with multiple parallel branches. Our adaptive region-aware feature fusion module extracts the dependencies between locations and channels, potential cross-relations among different levels, and multi-type information to build distinctive representations. By combining enhancement and fusion, experiments on various real-world datasets show that the proposed network can outperform previous state-of-the-art methods, especially for detection in complex conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. TENet: A Texture-Enhanced Network for Intertidal Sediment and Habitat Classification in Multiband PolSAR Images.
- Author
-
Zhang, Di, Wang, Wensheng, Gade, Martin, and Zhou, Huihui
- Subjects
*BEACHES, *SYNTHETIC aperture radar, *INTERTIDAL zonation, *SEDIMENTS, *HABITATS, *SEAGRASSES
- Abstract
This paper proposes a texture-enhanced network (TENet) for intertidal sediment and habitat classification using multiband multipolarization synthetic aperture radar (SAR) images. The architecture introduces the texture enhancement module (TEM) into the UNet framework to explicitly learn global texture information from SAR images. The study sites are chosen from the northern part of the intertidal zones in the German Wadden Sea. Results show that the presented TENet model is able to detail the intertidal surface types, including land, seagrass, bivalves, bright sands/beach, water, sediments, and thin coverage of vegetation or bivalves. To further assess its performance, we quantitatively compared our results from the TENet model with different instance segmentation models for the same areas of interest. The TENet model gives finer classification accuracies and shows great potential in providing more precise locations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. TEMCA-Net: A Texture-Enhanced Deep Learning Network for Automatic Solar Panel Extraction in High Groundwater Table Mining Areas
- Author
-
Min Tan, Weiqiang Luo, Jingjing Li, and Ming Hao
- Subjects
Semantic segmentation, solar panel extraction, texture enhancement, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
Long-term coal mining has led to a series of ecological problems, constraining society's sustainable development. Ecological restoration is a crucial component of achieving sustainability, and with the continuous advancement of photovoltaic technology, the comprehensive utilization of photovoltaics has become one of the important restoration methods in mining areas. The area and location of solar panels, as key indicators for assessing the ecological restoration approach, require precise extraction and positioning. This article proposes a texture-enhanced multicontext attention network (TEMCA-Net). In the encoding part, the network utilizes residual connections in conjunction with the convolutional block attention module to preliminarily extract contextual information. Then, low-level features are input into the statistical texture learning (STL) texture enhancement module and high-level features into the horizontal atrous spatial pyramid pooling (H-ASPP) module. In the decoding part, the high-level features processed by the H-ASPP module are combined with the texture-enhanced features from the STL module. Experiments were conducted in the Peibei mining region located in Xuzhou City, Jiangsu Province. We established the solar panels of Peibei mining region (SPPMR) dataset. The experimental results on the SPPMR dataset demonstrate TEMCA-Net's outstanding performance in solar panel extraction, with precision at 90.24%, recall at 93.07%, an F1-score of 91.63%, and a mean intersection over union of 92.21%. It significantly outperforms three classic deep learning networks: Deeplabv3+, U-net, and PSPnet. In summary, this study provides an efficient and feasible solution for the extraction of solar panels in mining areas with high water tables.
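As a quick arithmetic check of the reported metrics, the F1-score is the harmonic mean of precision and recall, F1 = 2PR/(P + R), and the stated precision and recall do reproduce the reported value:

```python
precision, recall = 0.9024, 0.9307
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")  # about 0.9163, matching the 91.63% reported above
```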
- Published
- 2024
- Full Text
- View/download PDF
13. Improved detector in orchard via top-to-down texture enhancement and adaptive region-aware feature fusion
- Author
-
Wei Sun, Yulong Tian, Qianzhou Wang, Jin Lu, Xianguang Kong, and Yanning Zhang
- Subjects
Texture enhancement, Adaptive fusion, Orchard detection, Top-to-down, Electronic computers. Computer science, QA75.5-76.95, Information technology, T58.5-58.64
- Abstract
Accurate target detection in complex orchard environments is the basis for automatic picking and pollination. Small target size, clustering, and complex interference greatly increase the difficulty of detection. To this end, we explore a detector for the orchard and improve the detection of complex targets. Our model includes two core designs to make it suitable for reducing the risk of erroneous detection due to small and camouflaged object features. The multi-scale texture enhancement design focuses on extracting and enhancing more distinguishable features at each level with multiple parallel branches. Our adaptive region-aware feature fusion module extracts the dependencies between locations and channels, potential cross-relations among different levels, and multi-type information to build distinctive representations. By combining enhancement and fusion, experiments on various real-world datasets show that the proposed network can outperform previous state-of-the-art methods, especially for detection in complex conditions.
- Published
- 2023
- Full Text
- View/download PDF
14. Joint Optimization-Based Texture and Geometry Enhancement Method for Single-Image-Based 3D Content Creation
- Author
-
Jisun Park, Moonhyeon Kim, Jaesung Kim, Wongyeom Kim, and Kyungeun Cho
- Subjects
3D content creation, image-based 3D generation, joint optimization, texture enhancement, geometry enhancement, Mathematics, QA1-939
- Abstract
Recent studies have explored the generation of three-dimensional (3D) meshes from single images. A key challenge in this area is the difficulty of simultaneously improving both generalization and detail in 3D mesh generation. To address this issue, existing methods utilize fixed-resolution mesh features to train networks for generalization. This approach is capable of generating the overall 3D shape without limitations on object categories. However, the generated shape often exhibits a blurred surface and suffers from suboptimal texture resolution due to the fixed-resolution mesh features. In this study, we propose a joint optimization method that enhances geometry and texture by integrating generalized 3D mesh generation with adjustable mesh resolution. Specifically, we apply an inverse-rendering-based remeshing technique that enables the estimation of complex-shaped meshes without relying on fixed-resolution structures. After remeshing, we enhance the texture to improve the detailed quality of the remeshed mesh via a texture enhancement diffusion model. By separating the tasks of generalization, detailed geometry estimation, and texture enhancement and adapting different target features for each specific network, the proposed joint optimization method effectively addresses the characteristics of individual objects, resulting in increased surface detail and the generation of high-quality textures. Experimental results on the Google Scanned Objects and ShapeNet datasets demonstrate that the proposed method significantly improves the accuracy of 3D geometry and texture estimation, as evaluated by the PSNR, SSIM, LPIPS, and CD metrics.
- Published
- 2024
- Full Text
- View/download PDF
15. MAREPVGG: Multi-attention RepPVGG to Facefake Detection
- Author
-
Huang, Zhuochao, Yang, Rui, Lan, Rushi, Pang, Cheng, Luo, Xiaoyan, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Lu, Huimin, editor, Blumenstein, Michael, editor, Cho, Sung-Bae, editor, Liu, Cheng-Lin, editor, Yagi, Yasushi, editor, and Kamiya, Tohru, editor
- Published
- 2023
- Full Text
- View/download PDF
16. Learning Texture Enhancement Prior with Deep Unfolding Network for Snapshot Compressive Imaging
- Author
-
Jin, Mengying, Wei, Zhihui, Xiao, Liang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wang, Lei, editor, Gall, Juergen, editor, Chin, Tat-Jun, editor, Sato, Imari, editor, and Chellappa, Rama, editor
- Published
- 2023
- Full Text
- View/download PDF
17. A Texture Enhancement Method for Oceanic Internal Wave Synthetic Aperture Radar Images Based on Non-Local Mean Filtering and Texture Layer Enhancement
- Author
-
Zhenghua Chen, Hongcheng Zeng, Yamin Wang, Wei Yang, Yanan Guan, and Wei Liu
- Subjects
synthetic aperture radar (SAR), oceanic internal wave, texture layer enhancement (TLE), non-local mean (NLM), texture enhancement, Science
- Abstract
Synthetic aperture radar (SAR) is an important tool for observing the oceanic internal wave phenomenon. However, owing to the unstable imaging quality of SAR on oceanic internal waves, the texture details of internal wave images are usually unclear, which is not conducive to the subsequent applications of the images. To cope with this problem, a texture enhancement method for oceanic internal wave SAR images is proposed in this paper, which is based on non-local mean (NLM) filtering and texture layer enhancement (TLE). Since the strong speckle noise commonly present in internal wave images is simultaneously enhanced during texture enhancement, resulting in degraded image quality, NLM filtering is first performed to suppress speckle noise. Then, the denoised image is decomposed into the structure layer and the texture layer, and a texture layer enhancement method oriented to the texture characteristics of oceanic internal waves is proposed and applied. Finally, the enhanced texture layer and the structure layer are combined to reconstruct the final enhanced image. Experiments are conducted based on the Gaofen-3 real SAR data, and the results demonstrate that the proposed method performs well in suppressing speckle noise, maintaining overall image brightness, and enhancing internal wave texture details.
- Published
- 2024
- Full Text
- View/download PDF
18. TENet: A Texture-Enhanced Network for Intertidal Sediment and Habitat Classification in Multiband PolSAR Images
- Author
-
Di Zhang, Wensheng Wang, Martin Gade, and Huihui Zhou
- Subjects
SAR, intertidal zones, texture enhancement, classification, multiband, multipolarization, Science
- Abstract
This paper proposes a texture-enhanced network (TENet) for intertidal sediment and habitat classification using multiband multipolarization synthetic aperture radar (SAR) images. The architecture introduces the texture enhancement module (TEM) into the UNet framework to explicitly learn global texture information from SAR images. The study sites are chosen from the northern part of the intertidal zones in the German Wadden Sea. Results show that the presented TENet model is able to detail the intertidal surface types, including land, seagrass, bivalves, bright sands/beach, water, sediments, and thin coverage of vegetation or bivalves. To further assess its performance, we quantitatively compared our results from the TENet model with different instance segmentation models for the same areas of interest. The TENet model gives finer classification accuracies and shows great potential in providing more precise locations.
- Published
- 2024
- Full Text
- View/download PDF
19. TED-Face : Texture-Enhanced Deep Face Reconstruction in the Wild.
- Author
-
Huang, Ying, Fang, Lin, and Hu, Shanfeng
- Subjects
*EYE, *RADIANCE, *DEEP learning
- Abstract
We present TED-Face, a new method for recovering high-fidelity 3D facial geometry and appearance with enhanced textures from single-view images. While vision-based face reconstruction has received intensive research in the past decades due to its broad applications, it remains a challenging problem because human eyes are particularly sensitive to numerically minute yet perceptually significant details. Previous methods that seek to minimize reconstruction errors within a low-dimensional face space can suffer from this issue and generate close yet low-fidelity approximations. The loss of high-frequency texture details is a key factor in their process, which we propose to address by learning to recover both dense radiance residuals and sparse facial texture features from a single image, in addition to the variables solved by previous work—shape, appearance, illumination, and camera. We integrate the estimation of all these factors in a single unified deep neural network and train it on several popular face reconstruction datasets. We also introduce two new metrics, visual fidelity (VIF) and structural similarity (SSIM), to compensate for the fact that reconstruction error is not a consistent perceptual metric of quality. On the popular FaceWarehouse facial reconstruction benchmark, our proposed system achieves a VIF score of 0.4802 and an SSIM score of 0.9622, improving over the state-of-the-art Deep3D method by 6.69% and 0.86%, respectively. On the widely used LS3D-300W dataset, we obtain a VIF score of 0.3922 and an SSIM score of 0.9079 for indoor images, and the scores for outdoor images are 0.4100 and 0.9160, respectively, which also represent an improvement over those of Deep3D. These results show that our method is able to recover visually more realistic facial appearance details compared with previous methods. [ABSTRACT FROM AUTHOR]
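SSIM (like VIF) is a standard full-reference quality metric rather than anything specific to TED-Face; a minimal SSIM computation with scikit-image is sketched below. The file names are placeholders, and VIF would require a separate implementation or library:

```python
import cv2
from skimage.metrics import structural_similarity as ssim

# Placeholder file names; in practice these would be the rendered reconstruction
# and the reference photograph, aligned to the same resolution.
reference = cv2.imread("reference_face.png", cv2.IMREAD_GRAYSCALE)
rendered = cv2.imread("reconstructed_face.png", cv2.IMREAD_GRAYSCALE)

score = ssim(reference, rendered, data_range=255)
print(f"SSIM = {score:.4f}")
```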
- Published
- 2023
- Full Text
- View/download PDF
20. Texture enhanced underwater image restoration via Laplacian regularization.
- Author
-
Hao, Yali, Hou, Guojia, Tan, Lu, Wang, Yongfang, Zhu, Haotian, and Pan, Zhenkuan
- Subjects
*IMAGE reconstruction, *IMAGE enhancement (Imaging systems), *INFORMATION modeling, *LAPLACIAN operator, *PROBLEM solving, *RADIANCE
- Abstract
• Incorporate the underwater image formation model into a newly constructed Laplacian variational model. • Introduce a simple and effective L2 norm on the Laplacian priors to preserve details and enhance texture. • Propose a brightness-aware scheme to estimate the transmission map. • Design an efficient optimization scheme to improve the convergence speed. Underwater images are usually degraded by color distortion, blur, and low contrast because light is inevitably absorbed and scattered when traveling through water. Such poor-quality images can greatly limit their applications. To address these problems, we propose a new Laplacian variational model based on the underwater image formation model and the information derived from the transmission map and background light. Technically, a novel fidelity term is designed to constrain the scene radiance, and a divergence-based regularization is applied to strengthen the structure and texture details. Moreover, the brightness-aware blending algorithm and quad-tree subdivision scheme are integrated into our variational framework to perform the transmission map and background light estimation. Accordingly, we provide a fast iterative algorithm based on the alternating direction method of multipliers to solve the optimization problem and accelerate its convergence. Experimental results demonstrate that the proposed method achieves outstanding performance on dehazing, detail preservation, and texture enhancement for improving underwater image quality. Extensive qualitative and quantitative comparisons with several state-of-the-art methods also validate the superiority of our proposed method. [ABSTRACT FROM AUTHOR]
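For context, the underwater image formation model referred to here is usually written as I(x) = J(x)·t(x) + B·(1 − t(x)), with scene radiance J, transmission t, and background light B. A minimal direct inversion is sketched below as an illustration only; it is not the authors' Laplacian variational scheme, and it assumes t and B have already been estimated:

```python
import numpy as np

def recover_radiance(I, t, B, t_min=0.1):
    """Invert I = J*t + B*(1 - t) for the scene radiance J.
    I: HxWx3 float image in [0,1]; t: HxW transmission; B: length-3 background light."""
    B = np.asarray(B, dtype=float)
    t = np.clip(t, t_min, 1.0)[..., None]          # avoid division by tiny transmission
    J = (I - B[None, None, :] * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```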
- Published
- 2023
- Full Text
- View/download PDF
21. Retinal vessel segmentation based on self-distillation and implicit neural representation.
- Author
-
Gu, Jia, Tian, Fangzheng, and Oh, Il-Seok
- Subjects
RETINAL blood vessels, BLOOD vessels, RETINAL imaging, IMAGE analysis
- Abstract
Segmenting retinal blood vessels from retinal images is a crucial step in ocular disease diagnosis. It is also one of the most important applications and research topics in ophthalmic image analysis. However, the contrast between the retinal vessels and background in fundus images is low. The size and shape of retinal vessels vary significantly, and the width of some small vessels is often below 10 pixels or even 1 pixel. Moreover, some blood vessels are discontinuous owing to illumination, which complicates the segmentation of retinal blood vessels. To address these problems, this paper proposes a novel retinal vessel segmentation network framework based on self-distillation and implicit neural representation, which predicts retinal vessels in two stages. First, the self-distillation method extracts the main features of retinal images using the properties of the Vision Transformer (ViT) to obtain preliminary images for blood vessel segmentation. Second, implicit neural representation improves the resolution of the original retinal image, and the details of blood vessels are enhanced through the texture enhancement module to obtain accurate blood vessel segmentation results. Furthermore, we adopted an improved centerline dice (clDice) loss function to constrain the topology of blood vessels. We experimented on two benchmark retinal datasets (i.e., Drive and Chase) to quantitatively and qualitatively analyze the proposed method. The results indicate that the proposed method outperformed mainstream baselines. The visual segmentation results also show that this method can segment thin blood vessels more accurately and ensure the continuity of blood vessels. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Soil Surface Texture Classification Using RGB Images Acquired Under Uncontrolled Field Conditions
- Author
-
Ekunayo-Oluwabami Babalola, Muhammad H. Asad, and Abdul Bais
- Subjects
Soil texture classification, image processing, convolutional neural network, uncontrolled field conditions, Gabor filter, texture enhancement, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Soil surface texture classification is a critical aspect of agriculture and soil science that affects various soil properties, such as water-holding capacity and soil nutrient retention. However, existing methods for soil texture classification rely on soil images taken under controlled conditions, which are not scalable for high spatiotemporal mapping of soil texture and fail to reflect real-world challenges and variations. To overcome these limitations, we propose a novel, scalable, and high spatial resolution soil surface texture classification process that employs image processing, texture-enhancing filters, and Convolutional Neural Network (CNN) to classify soil images captured under Uncontrolled Field Conditions (UFC). The proposed process involves a series of steps for improving soil image analysis. Initially, image segmentation is utilized to eliminate non-soil pixels and prepare the images for further processing. Next, the segmented output is divided into smaller tiles to isolate relevant soil pixels. Then, high-frequency filtering is introduced to enhance the texture of the images. Our research has shown that the Gabor filter is more effective than Local Binary Patterns (LBP) for this purpose. By creating four distinct Gabor filters, we can enhance specific, hidden patterns within the soil images. Finally, the split and enhanced images are used to train CNN classifiers for optimal analysis. We evaluate the performance of the proposed framework using different metrics and compare it to existing state-of-the-art soil texture classification frameworks. Our proposed soil texture classification process improves performance. We employed various CNN architectures in our proposed process for comparison purposes. Inception v3 produces the highest accuracy of 85.621%, an increase of 12% compared to other frameworks. With applications in precision agriculture, soil management, and environmental monitoring, the proposed novel methodology has the potential to offer a dependable and sustainable tool for classifying soil surface texture using low-cost ground imagery acquired under UFC.
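A minimal sketch of the Gabor-filter texture-enhancement step described above, using a four-orientation bank; the kernel size and frequency parameters are assumptions, not the paper's settings. The enhanced tiles would then be passed to the CNN classifier:

```python
import cv2
import numpy as np

def gabor_enhance(gray_u8, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Apply a 4-orientation Gabor bank and keep the per-pixel maximum response."""
    img = gray_u8.astype(np.float32) / 255.0
    responses = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0, ktype=cv2.CV_32F)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    enhanced = np.max(np.stack(responses), axis=0)
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```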
- Published
- 2023
- Full Text
- View/download PDF
23. Peanut Defect Identification Based on Multispectral Image and Deep Learning.
- Author
-
Wang, Yang, Ding, Zhao, Song, Jiayong, Ge, Zhizhu, Deng, Ziqing, Liu, Zijie, Wang, Jihong, Bian, Lifeng, and Yang, Chen
- Subjects
*DEEP learning, *MULTISPECTRAL imaging, *PEANUTS, *LIGHT sources, *NETWORK performance, *IMAGE intensifiers
- Abstract
To achieve the non-destructive detection of peanut defects, a multi-target identification method based on a multispectral system and improved Faster RCNN is proposed in this paper. In terms of the system, the root-mean-square contrast method was employed to select the characteristic wavelengths for defects such as mildew spots, mechanical damage, and the germ of peanuts. Then, a multispectral light source system based on a symmetric integrating sphere was designed with 2% illumination nonuniformity. In terms of the Faster RCNN improvement, a texture-based attention module and a feature enhancement module were designed to enhance the performance of its backbone. In the experiments, a peanut defect multispectral dataset with 1300 sets was collected to verify the detection performance. The results show that the evaluation metrics all improved compared with the original network, especially in the VGG16 backbone network, where the mean average precision (mAP) reached 99.97%. In addition, the ablation experiments also verify the effectiveness of the proposed texture module and texture enhancement module in peanut defect detection. In conclusion, texture imaging enhancement and efficient extraction are effective methods to improve the network performance for multi-target peanut defect detection. [ABSTRACT FROM AUTHOR]
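Root-mean-square contrast is commonly computed as the standard deviation of pixel intensities, optionally normalized by the mean; a small sketch of ranking spectral bands by such a score follows, with the normalization choice an assumption rather than the paper's exact definition:

```python
import numpy as np

def rms_contrast(band):
    """RMS contrast of one band: std of intensities normalized by the mean."""
    band = band.astype(np.float64)
    return band.std() / (band.mean() + 1e-8)

def rank_bands(cube):
    """cube: HxWxN multispectral stack; returns band indices sorted by contrast."""
    scores = [rms_contrast(cube[..., i]) for i in range(cube.shape[-1])]
    return np.argsort(scores)[::-1], scores
```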
- Published
- 2023
- Full Text
- View/download PDF
24. Extraction and analysis of surface lake flow direction in Poyang Lake based on texture enhancement and Hough transform
- Author
-
RunYuan Kuang, Yun Qiu, and WenJie Peng
- Subjects
gabor filter, hough transform, poyang lake, texture enhancement, Water supply for domestic and industrial purposes, TD201-500, River, lake, and water-supply engineering (General), TC401-506
- Abstract
Lake flow is one of the common hydrological phenomena in nature. The water exchange of lakes and the circulation of natural water resources are realized by water movement caused by various factors. Lake currents are of great significance to the hydrological and ecological environment and are one of the focuses of scholars' research. This paper takes Poyang Lake as the research area, uses three texture enhancement methods, the Laws, Gabor, and LBP operators, to enhance the texture features of images based on multi-source remote sensing data, and uses the Hough transform to extract the flow direction of the Poyang Lake area. The results show that all three texture enhancement methods combined with the Hough transform can extract the flow direction, but the Gabor filter gives the best flow-direction results and the highest extraction accuracy. The Gabor filter is also the most adaptable for texture enhancement of images with different resolutions. HIGHLIGHTS: This paper compared the adaptability of the Laws, Gabor, and LBP algorithms to images with different resolutions and regions; the Hough transform was used to extract the water flow direction of Poyang Lake; and the seasonal flow direction of Poyang Lake was analyzed.
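A minimal sketch of the Hough-transform step for estimating a dominant flow direction from a texture-enhanced image; the Canny thresholds and the vote threshold are assumed values, not the paper's settings:

```python
import cv2
import numpy as np

def dominant_orientation(texture_u8, canny_lo=50, canny_hi=150, votes=120):
    """Return the median line angle (degrees) found by the standard Hough transform."""
    edges = cv2.Canny(texture_u8, canny_lo, canny_hi)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, votes)
    if lines is None:
        return None
    # Each detected line is (rho, theta); theta is the line's normal direction in radians.
    angles = np.degrees(lines[:, 0, 1])
    return float(np.median(angles))
```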
- Published
- 2022
- Full Text
- View/download PDF
25. Unified feature learning network for few-shot fault diagnosis.
- Author
-
Xu, Yan, Ma, Xinyao, Wang, Xuan, Wang, Jinjia, Tang, Gang, and Ji, Zhong
- Subjects
*FAULT diagnosis, *FEATURE selection, *FEATURE extraction, *DIAGNOSIS
- Abstract
Few-shot fault diagnosis aims at diagnosing the state of mechanical signals with only a few training samples. Numerous contemporary approaches incorporate Time–Frequency Images (TFIs) derived from vibration signals to provide a thorough understanding of both the time and frequency domains. Current approaches often neglect the exploration of sample-level features, which hinders the enrichment and refinement of sample features. To this end, we propose a Unified Feature Learning Network (UFLN) designed to comprehensively model TFIs at two levels. We first present a Texture Enhancement Module (TEM) to amplify intricate details, aiding in the extraction of category-level features. Subsequently, we devise a dynamic Feature Selection Module (DFSM), tailored for the extraction of fault-related features. Finally, we develop an Intra-Diversity (ID) loss function to promote intra-class diversity, enriching the representation of each sample. Extensive experiments on three cases have demonstrated the effectiveness of the proposed UFLN approach. • A Unified Feature Learning Network is proposed for few-shot fault diagnosis. • Texture enhancement and dynamic feature selection extract category-level features. • Intra-diversity loss enhances the ability to capture subtle differences among samples. • The performance of UFLN is verified with extensive experiments on three cases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Deep Region Adaptive Denoising for Texture Enhancement
- Author
-
Seong-Eui Lee, Sung-Min Woo, Jong-Han Kim, Je-Ho Ryu, and Jong-Ok Kim
- Subjects
Region adaptive image denoising, convolution neural network, texture classification, cross attention, texture enhancement, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Image denoising is a challenging yet important task in image processing. Recently, many CNN-based denoising methods have achieved strong performance, but they commonly denoise texture and non-texture regions blindly, together. This frequently leads to excessive texture smoothing and detail loss. To address this issue, we propose a novel region adaptive denoising network that adjusts the denoising strength according to region textureness. The proposed network conducts denoising for texture and non-texture areas independently to improve the visual quality of the resulting image. To this end, we first generate a texture map that separates the image into texture and non-texture regions. Because the difference between texture and non-texture is more evident in the frequency domain than in the spatial domain, the classification is performed through the discrete cosine transform (DCT). Second, guided by the texture map, denoising is performed independently in two subnets, corresponding to the texture and non-texture regions. This allows the texture subnet to avoid excessive smoothing of high-frequency details, and the non-texture subnet to maximize noise reduction in flat regions. Finally, a cross fusion that takes into account the intra- and inter-relationships between the two resulting features is proposed. The cross fusion highlights the discriminant features from the two subnets without degradation when combining their outputs, and thus helps enhance the performance of region-adaptive denoising. The superiority of the proposed method is validated on both synthetic and real-world images. We demonstrate that our method outperforms existing methods in both objective scores and subjective image quality, in particular showing outstanding results in the restoration of visually sensitive textures. Furthermore, an ablation study shows that our network can adaptively control the noise removal strength by manually manipulating the texture map and that the details of the texture region can be further improved. This can also simplify the cumbersome noise tuning process when deploying deep neural network (DNN) architectures into products.
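The DCT-based textureness map described here can be sketched compactly: each 8×8 block is transformed and the share of energy outside the DC coefficient decides texture versus non-texture. The block size and threshold below are assumptions, not the paper's settings:

```python
import cv2
import numpy as np

def texture_map(gray_u8, block=8, thresh=0.02):
    """1 = texture block, 0 = flat block, decided by high-frequency DCT energy."""
    img = gray_u8.astype(np.float32) / 255.0
    h, w = img.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    mask = np.zeros((h // block, w // block), np.uint8)
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = cv2.dct(img[i:i + block, j:j + block])
            total = np.sum(coeffs ** 2) + 1e-8
            high_freq = total - coeffs[0, 0] ** 2   # everything except the DC term
            mask[i // block, j // block] = 1 if high_freq / total > thresh else 0
    return mask
```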
- Published
- 2022
- Full Text
- View/download PDF
27. MSCNet: A Framework With a Texture Enhancement Mechanism and Feature Aggregation for Crack Detection
- Author
-
Guanlin Lu, Xiaohui He, Qiang Wang, Faming Shao, Jinkang Wang, and Xiaokang Zhao
- Subjects
Crack detection, deep learning, feature aggregation, grouping attention mechanism, texture enhancement, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Bridge cracks are critical visual information for judging the health state of bridges. Bridge crack detection methods based on artificial intelligence are essential in this field, but the current approaches are not satisfactory in terms of speed and accuracy. This study proposes a novel multi-scale crack detection network, called MSCNet, comprising a texture enhancement mechanism and feature aggregation to enhance the visual saliency of the objects against the background for bridge crack detection. We use Res2Net as the backbone network to improve the ability to express the depth information of the cracks themselves. Because the edge property of bridge cracks is prominent, to make full use of this visual feature, we use a texture enhancement module based on group attention to capture the detailed information of cracks in low-level features. To further mine the depth information of the network, we use a cascade fusion module to capture crack location information in high-level features. Finally, to fully utilize the characteristic information of the deep network, we fuse the low- and high-level features to obtain the final crack prediction. We evaluate the proposed method against other state-of-the-art methods on a large-scale crack dataset. The experimental results demonstrate the effectiveness and superiority of the proposed method, which achieves a precision of 93.5%, a recall of 94.2%, and an inference speed of over 63 FPS.
- Published
- 2022
- Full Text
- View/download PDF
28. LIC color texture enhancement algorithm for ocean vector field data based on HSV color mapping and cumulative distribution function.
- Author
-
Zheng, Hongbo, Shao, Qin, Chen, Jie, Shan, Yangyang, Qin, Xujia, Ma, Ji, and Xu, Xiaogang
- Abstract
Texture-based visualization is a common approach for displaying vector field data. To add color mapping to ocean vector field textures and to resolve the ambiguity of vector direction in the texture image, a new color texture enhancement algorithm based on Line Integral Convolution (LIC) is proposed, which combines HSV color mapping with a cumulative distribution function calculation of the vector field data. The algorithm can be summarized as follows: firstly, the vector field data is convolved twice by line integral convolution to get the gray texture image. Secondly, a method for mapping the vector data to each component of the HSV color space is established. The vector field data is then mapped into HSV color space and converted from HSV to RGB values to get the color image. Thirdly, the cumulative distribution function of the RGB color components of the gray texture image and the color image is constructed to enhance the gray texture and RGB color values. Finally, the gray texture image and the color image are fused to get the color texture. The experimental results show that the proposed LIC color texture enhancement algorithm is capable of generating a better display of vector field data. Furthermore, the ambiguity of vector direction in the texture images is resolved and the direction information of the vector field is expressed more accurately. [ABSTRACT FROM AUTHOR]
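The HSV mapping and CDF steps can be illustrated in isolation (the LIC convolution itself is omitted): direction drives hue, magnitude drives value, and a histogram CDF remaps intensities. The specific ranges below are assumptions rather than the authors' exact design:

```python
import cv2
import numpy as np

def vector_field_to_color(u, v):
    """Map direction to hue and magnitude to value, then convert HSV -> BGR."""
    angle = (np.degrees(np.arctan2(v, u)) % 360.0) / 2.0      # OpenCV hue range is 0..179
    mag = np.sqrt(u ** 2 + v ** 2)
    val = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    hsv = np.dstack([angle, np.full_like(angle, 255.0), val]).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

def cdf_equalize(gray_u8):
    """Classic histogram equalization via the cumulative distribution function."""
    hist = np.bincount(gray_u8.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[gray_u8].astype(np.uint8)
```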
- Published
- 2022
- Full Text
- View/download PDF
29. Insulator defect detection: A detection method of target search and cascade recognition
- Author
-
Jinyun Yu, Kaipei Liu, Min He, and Liang Qin
- Subjects
Insulator broken string, Aerial image, Defect detection, Target search, Cascade recognition, Texture enhancement, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
To address the difficulties posed by the scarcity of real samples of defective insulators and the complex backgrounds of aerial images, this paper proposes a detection method based on target search and cascade recognition. Using the SINet framework, we apply fine-grained texture enhancement to receptive fields of different sizes. Through nearest-neighbor decoding and grouped reverse attention, the more recognizable features are guided to aggregate and generate a refined location area map by performing cascading purification operations. Additionally, we integrate a classification network to complete the solution. Experimental results show that the AUC value reaches 99.82%, which demonstrates the effectiveness and superiority of the proposed method for insulator defect detection.
- Published
- 2021
- Full Text
- View/download PDF
30. TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild
- Author
-
Ying Huang, Lin Fang, and Shanfeng Hu
- Subjects
face reconstruction, texture enhancement, 3D morphable model, high fidelity, deep learning, Chemical technology, TP1-1185
- Abstract
We present TED-Face, a new method for recovering high-fidelity 3D facial geometry and appearance with enhanced textures from single-view images. While vision-based face reconstruction has received intensive research in the past decades due to its broad applications, it remains a challenging problem because human eyes are particularly sensitive to numerically minute yet perceptually significant details. Previous methods that seek to minimize reconstruction errors within a low-dimensional face space can suffer from this issue and generate close yet low-fidelity approximations. The loss of high-frequency texture details is a key factor in their process, which we propose to address by learning to recover both dense radiance residuals and sparse facial texture features from a single image, in addition to the variables solved by previous work—shape, appearance, illumination, and camera. We integrate the estimation of all these factors in a single unified deep neural network and train it on several popular face reconstruction datasets. We also introduce two new metrics, visual fidelity (VIF) and structural similarity (SSIM), to compensate for the fact that reconstruction error is not a consistent perceptual metric of quality. On the popular FaceWarehouse facial reconstruction benchmark, our proposed system achieves a VIF score of 0.4802 and an SSIM score of 0.9622, improving over the state-of-the-art Deep3D method by 6.69% and 0.86%, respectively. On the widely used LS3D-300W dataset, we obtain a VIF score of 0.3922 and an SSIM score of 0.9079 for indoor images, and the scores for outdoor images are 0.4100 and 0.9160, respectively, which also represent an improvement over those of Deep3D. These results show that our method is able to recover visually more realistic facial appearance details compared with previous methods.
- Published
- 2023
- Full Text
- View/download PDF
31. MDTL-NET: Computer-generated image detection based on multi-scale deep texture learning.
- Author
-
Xu, Qiang, Jia, Shan, Jiang, Xinghao, Sun, Tanfeng, Wang, Zhe, and Yan, Hong
- Subjects
*DEEP learning, *AFFINE transformations, *DIGITAL images
- Abstract
Distinguishing between computer-generated (CG) and natural photographic (PG) images is of great importance for verifying the authenticity and originality of digital images. However, recent cutting-edge generation methods enable high-quality synthesis of CG images, which makes this challenging task even trickier. To address this issue, a novel multi-scale deep texture learning neural network coined MDTL-NET is proposed for CG image detection. We first utilize a global texture representation module incorporating the ResNet architecture to capture multi-scale texture patterns. Then, a deep texture enhancement module based on a semantic segmentation map guided affine transformation operation is designed for texture difference amplification. To enhance performance, we equip MDTL-NET with channel and spatial attention mechanisms, which refine intermediate features and facilitate trace exploration in different domains. Moreover, a Low-rank Tensor Representation (LTR) strategy is also used for feature fusion. Extensive experiments on three public datasets and a newly constructed dataset with more realistic and diverse images (the benchmark is available at https://github.com/191578010/DSGCG) show that the proposed approach outperforms existing methods in the field by a clear margin. In addition, the results also demonstrate the detection robustness and generalization ability of the proposed approach to postprocessing operations. • Generating an image will introduce different traces compared with a real one. • The attention mechanism is capable of capturing long-range feature interactions. • Multi-scale deep texture can provide reliable information for detecting fake images. • Different artifacts lead to performance degradation on other types of images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Peanut Defect Identification Based on Multispectral Image and Deep Learning
- Author
-
Yang Wang, Zhao Ding, Jiayong Song, Zhizhu Ge, Ziqing Deng, Zijie Liu, Jihong Wang, Lifeng Bian, and Chen Yang
- Subjects
peanut defects, target identification, multispectral, texture attention, texture enhancement, Agriculture
- Abstract
To achieve the non-destructive detection of peanut defects, a multi-target identification method based on a multispectral system and improved Faster RCNN is proposed in this paper. In terms of the system, the root-mean-square contrast method was employed to select the characteristic wavelengths for defects such as mildew spots, mechanical damage, and the germ of peanuts. Then, a multispectral light source system based on a symmetric integrating sphere was designed with 2% illumination nonuniformity. In terms of the Faster RCNN improvement, a texture-based attention module and a feature enhancement module were designed to enhance the performance of its backbone. In the experiments, a peanut defect multispectral dataset with 1300 sets was collected to verify the detection performance. The results show that the evaluation metrics all improved compared with the original network, especially in the VGG16 backbone network, where the mean average precision (mAP) reached 99.97%. In addition, the ablation experiments also verify the effectiveness of the proposed texture module and texture enhancement module in peanut defect detection. In conclusion, texture imaging enhancement and efficient extraction are effective methods to improve the network performance for multi-target peanut defect detection.
- Published
- 2023
- Full Text
- View/download PDF
33. Research on the Extraction of Hazard Sources along High-Speed Railways from High-Resolution Remote Sensing Images Based on TE-ResUNet.
- Author
-
Pan, Xuran, Yang, Lina, Sun, Xu, Yao, Jingchuan, and Guo, Jiliang
- Subjects
*REMOTE sensing, *IMAGE enhancement (Imaging systems), *RAILROADS, *HAZARDS, *RAILROAD safety measures
- Abstract
There are many potential hazard sources along high-speed railways that threaten the safety of railway operation. Traditional ground search methods are failing to meet the needs of safe and efficient investigation. In order to accurately and efficiently locate hazard sources along the high-speed railway, this paper proposes a texture-enhanced ResUNet (TE-ResUNet) model for railway hazard sources extraction from high-resolution remote sensing images. According to the characteristics of hazard sources in remote sensing images, TE-ResUNet adopts texture enhancement modules to enhance the texture details of low-level features, and thus improve the extraction accuracy of boundaries and small targets. In addition, a multi-scale Lovász loss function is proposed to deal with the class imbalance problem and force the texture enhancement modules to learn better parameters. The proposed method is compared with the existing methods, namely, FCN8s, PSPNet, DeepLabv3, and AEUNet. The experimental results on the GF-2 railway hazard source dataset show that the TE-ResUNet is superior in terms of overall accuracy, F1-score, and recall. This indicates that the proposed TE-ResUNet can achieve accurate and effective hazard sources extraction, while ensuring high recall for small-area targets. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. Single-Image Super-Resolution Using Gradient and Texture Similarity
- Author
-
Rajeena, L., Simon, Philomina, Kacprzyk, Janusz, Series Editor, Krishna, A.N., editor, Srikantaiah, K.C., editor, and Naveena, C, editor
- Published
- 2019
- Full Text
- View/download PDF
35. Analysis of texture enhancement methods for the detection of eco-friendly textile fabric defects.
- Author
-
Shu, Yufeng, Zhang, Liangchao, Zuo, Dali, Zhang, Junhua, Li, Junlong, and Gan, Haoquan
- Subjects
*TEXTILE defects, *TEXTURED woven textiles, *MATERIALS texture, *TEXTURE mapping, *IMAGE processing, *KALMAN filtering
- Abstract
If the appearance of an eco-friendly textile fabric is problematic, the product quality deteriorates substantially. Defect measurement is one of the most important quality control measures for eco-friendly textile fabrics. Compared with previously employed manual measurements, the application of image processing technology to the detection of eco-friendly textile defects offers high efficiency and high precision. In this study, the main objectives of texture-enhancement-based textile inspection are as follows: (1) summarize the methods for describing texture maps in the spatial and frequency domains and investigate the gray-level co-occurrence matrix of textile fabrics, focusing on the characteristics of the unique texture of the fabric, the background texture caused by noise, and the texture of the defect area; the error between them was analyzed; (2) apply a scheme based on principal component analysis and non-local means (PCA-NLM) to improve the eco-friendly textile image quality. The image information used to compute neighborhood similarity in the non-local means (NLM) filtering algorithm is excessive due to noise, so the NLM method is employed to estimate the parameters; removing the noise also makes the texture content of the fabric image more visible, which is more conducive to defect detection; and (3) apply a texture-based textile defect measurement method, that is, a class-separable characteristic between non-defective and defective textures, which increases the separability of the gray-matrix features that distinguish defect regions and improves the correctness of the detected texture. [ABSTRACT FROM AUTHOR]
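The gray-level co-occurrence matrix analysis mentioned in objective (1) can be sketched with scikit-image; the quantization level, offsets, and choice of Haralick properties are assumptions, not the paper's settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8, levels=32):
    """Contrast/homogeneity/energy/correlation from a GLCM over 4 directions."""
    quantized = (gray_u8.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```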
- Published
- 2021
- Full Text
- View/download PDF
36. An Improved Image Shadow Removal Method.
- Author
-
胡晨, 胡德敏, 胡钰媛, and 褚成伟
- Abstract
To address the problems of illumination unsaturation and texture inconsistency after shadow removal, a shadow removal method based on Retinex is proposed in this study. Based on the relationship between incident light and reflected light in Retinex theory, this method re-estimates the incident light map, assigns the average value of pixels in the three RGB channels to the target pixel point, and improves the gradient descent method. By adjusting the adaptive weight, the problems of illumination unsaturation and texture inconsistency after shadow removal are alleviated. Experiments on the UIUC dataset show that the root mean square error ratio of the experimental results is reduced to 0.4106, which verifies the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
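As generic background on the Retinex decomposition this method builds on (I = R·L, reflectance times illumination), here is a minimal single-scale Retinex sketch; it is not the authors' shadow-removal algorithm, and the Gaussian scale is an assumed value:

```python
import cv2
import numpy as np

def single_scale_retinex(bgr, sigma=60):
    """Estimate illumination with a Gaussian blur and return log-reflectance per channel."""
    img = bgr.astype(np.float32) + 1.0                 # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    reflectance = np.log(img) - np.log(illumination)
    # Stretch back to a displayable range.
    return cv2.normalize(reflectance, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```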
- Published
- 2021
- Full Text
- View/download PDF
37. Small-Size Face Detection Based on Multi-Scale and Texture Feature Enhancement.
- Author
-
张智, 王进, 王杰, and 郑锦
- Subjects
- *
PIXELS , *TEXTURES , *ALGORITHMS , *SIZE - Abstract
Existing face detection algorithms have difficulty handling multi-scale, multi-pose face detection, and in particular suffer from low accuracy on small faces. This paper therefore proposes a small-size face detection algorithm with multi-scale and texture feature enhancement. In this algorithm, a multi-scale enhancement module enriches multi-scale features to improve the detection accuracy of faces at different scales, and a texture feature enhancement module enhances high-level semantic information by fusing low-level texture information, thereby strengthening the detection of small faces. Furthermore, multi-stage weighted loss functions balance the outputs of the network and reinforce the role of each module. The experimental results show that the proposed algorithm not only runs in real time but also achieves 88.69% accuracy for faces less than 60 pixels in height on the MALF dataset; compared with the BBFCN algorithm on the FDDB dataset, the result is improved by nearly 4%. [ABSTRACT FROM AUTHOR]
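The network architecture itself is not reproduced in this record; the PyTorch fragment below only sketches the general idea of a texture feature enhancement block, upsampling high-level semantic features and fusing them with low-level texture features. The channel sizes and the 1x1/3x3 convolution layout are illustrative assumptions, not the paper's design.

```python
# Illustrative texture-feature-enhancement block (not the paper's network):
# fuse upsampled high-level semantics with low-level texture features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextureFeatureEnhancement(nn.Module):
    def __init__(self, low_channels, high_channels, out_channels):
        super().__init__()
        self.reduce_low = nn.Conv2d(low_channels, out_channels, kernel_size=1)
        self.reduce_high = nn.Conv2d(high_channels, out_channels, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, low_feat, high_feat):
        # Upsample high-level features to the low-level (texture-rich) resolution.
        high_up = F.interpolate(self.reduce_high(high_feat),
                                size=low_feat.shape[-2:], mode="bilinear",
                                align_corners=False)
        return self.fuse(torch.cat([self.reduce_low(low_feat), high_up], dim=1))

if __name__ == "__main__":
    low = torch.randn(1, 64, 80, 80)    # low-level, texture-rich feature map
    high = torch.randn(1, 256, 20, 20)  # high-level semantic feature map
    block = TextureFeatureEnhancement(64, 256, 128)
    print(block(low, high).shape)       # torch.Size([1, 128, 80, 80])
```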
- Published
- 2021
- Full Text
- View/download PDF
38. Dense multiview stereo based on image texture enhancement.
- Author
-
Liao, Jie, Wei, Mengqiang, Fu, Yanping, Yan, Qingan, and Xiao, Chunxia
- Subjects
STEREO image ,IMAGE intensifiers ,POINT cloud ,GEOMETRY - Abstract
In this paper, we propose a novel Multiview Stereo (MVS) method which can effectively estimate geometry in low-textured regions. Conventional MVS algorithms predict geometry by performing dense correspondence estimation across multiple views under the constraint of epipolar geometry. Because low-textured regions contain little feature information for reliable matching, estimating geometry there remains difficult for previous MVS methods. To address this issue, we propose an MVS method based on texture enhancement. By enhancing the texture information of each input image via our multiscale bilateral decomposition and reconstruction algorithm, our method can estimate reliable geometry for low-textured regions that are intractable for previous MVS methods. To densify the final output point cloud, we further propose a novel selective joint bilateral propagation filter, which effectively propagates reliable geometry estimates to neighboring unpredicted regions. We validate the effectiveness of our method on the ETH3D benchmark. Quantitative and qualitative comparisons demonstrate that our method significantly improves the quality of reconstruction in low-textured regions. [ABSTRACT FROM AUTHOR]
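The paper's multiscale bilateral decomposition is not reproduced here; the sketch below shows only a single-scale base/detail decomposition with OpenCV's bilateral filter, where the detail (texture) layer is amplified before recombination. The filter parameters and the boost factor are assumptions.

```python
# Single-scale sketch of texture enhancement by bilateral base/detail
# decomposition (the paper uses a multiscale variant; the boost factor here
# is an illustrative assumption).
import numpy as np
import cv2

def enhance_texture(gray_u8, d=9, sigma_color=40, sigma_space=9, boost=2.0):
    gray = gray_u8.astype(np.float32)
    base = cv2.bilateralFilter(gray, d, sigma_color, sigma_space)  # smooth base layer
    detail = gray - base                                           # texture/detail layer
    enhanced = base + boost * detail                               # amplify weak texture
    return np.clip(enhanced, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = (np.random.default_rng(2).random((128, 128)) * 255).astype(np.uint8)
    print(enhance_texture(img).shape)
```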
- Published
- 2021
- Full Text
- View/download PDF
39. Research on the Extraction of Hazard Sources along High-Speed Railways from High-Resolution Remote Sensing Images Based on TE-ResUNet
- Author
-
Xuran Pan, Lina Yang, Xu Sun, Jingchuan Yao, and Jiliang Guo
- Subjects
high resolution remote sensing image ,hazard source extraction ,semantic segmentation ,texture enhancement ,Lovász loss function ,Chemical technology ,TP1-1185 - Abstract
There are many potential hazard sources along high-speed railways that threaten the safety of railway operation, and traditional ground search methods fail to meet the need for safe and efficient investigation. In order to locate hazard sources along high-speed railways accurately and efficiently, this paper proposes a texture-enhanced ResUNet (TE-ResUNet) model for hazard source extraction from high-resolution remote sensing images. According to the characteristics of hazard sources in remote sensing images, TE-ResUNet adopts texture enhancement modules to enhance the texture details of low-level features and thus improve the extraction accuracy of boundaries and small targets. In addition, a multi-scale Lovász loss function is proposed to deal with the class imbalance problem and to force the texture enhancement modules to learn better parameters. The proposed method is compared with existing methods, namely FCN8s, PSPNet, DeepLabv3, and AEUNet. Experimental results on the GF-2 railway hazard source dataset show that TE-ResUNet is superior in terms of overall accuracy, F1-score, and recall, indicating that it achieves accurate and effective hazard source extraction while ensuring high recall for small-area targets.
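The exact multi-scale weighting of the Lovász loss used in TE-ResUNet is not given in this record; the sketch below shows only the standard binary Lovász hinge building block (after Berman et al., 2018) at a single scale. A multi-scale version would, presumably, sum such terms over the network's side outputs.

```python
# Binary Lovász hinge loss building block (after Berman et al., 2018); the
# multi-scale weighting used by TE-ResUNet is not reproduced here.
import torch

def lovasz_grad(gt_sorted):
    """Gradient of the Lovász extension of the Jaccard loss w.r.t. sorted errors."""
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1.0 - gt_sorted).cumsum(0)
    jaccard = 1.0 - intersection / union
    if gt_sorted.numel() > 1:
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge(logits, labels):
    """logits, labels: flattened 1-D tensors; labels in {0, 1}."""
    signs = 2.0 * labels.float() - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, descending=True)
    grad = lovasz_grad(labels.float()[perm])
    return torch.dot(torch.relu(errors_sorted), grad)

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(32 * 32, requires_grad=True)
    labels = (torch.rand(32 * 32) > 0.8).long()   # imbalanced foreground, as in small targets
    loss = lovasz_hinge(logits, labels)
    loss.backward()
    print(float(loss))
```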
- Published
- 2022
- Full Text
- View/download PDF
40. Direction-aware neural style transfer with texture enhancement.
- Author
-
Wu, Hao, Sun, Zhengxing, Zhang, Yan, and Li, Qian
- Subjects
- *
EMBRYO transfer , *ARTIFICIAL neural networks , *TEXTURES - Abstract
Neural learning methods have been shown to be effective in style transfer. These methods, known as neural style transfer (NST), aim to synthesize a new image that retains the high-level structure of a content image while keeping the low-level features of a style image. However, models using convolutional structures only extract local statistical features of style images and semantic features of content images. Because the low-level features of the content image are absent, these methods tend to synthesize images that look unnatural and bear obvious traces of the machine. In this paper, we find that direction, that is, the orientation of each painting stroke, captures the character of an image style particularly well and thus yields much more natural and vivid stylizations. Based on this observation, we propose a direction-aware neural style transfer with texture enhancement. There are four major innovations. First, we separate the style transfer method into two stages, namely an NST stage and a texture enhancement stage. Second, for the NST stage, a novel direction field loss is proposed to steer the direction of strokes in the synthesized image; to build this loss function, we propose direction field loss networks that generate and compare the direction fields of the content image and the synthesized image. By incorporating the direction field loss into neural style transfer, we obtain a new optimization objective whose minimization produces synthesized images that better follow the direction field of the content image. Third, our method provides a simple interaction mechanism to control the generated direction fields, and thereby the texture direction in synthesized images. Fourth, with a texture enhancement module, our method adds vivid texture details to the synthesized image. Experiments show that our method outperforms state-of-the-art style transfer methods. [ABSTRACT FROM AUTHOR]
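The direction field loss networks themselves are not reproduced here; the sketch below shows one common way to compute a per-pixel orientation (direction) field from an image, via the smoothed structure tensor, which is the kind of quantity such a loss could compare between the content image and the synthesized image. The Sobel and Gaussian parameters are assumptions.

```python
# Sketch of a per-pixel orientation (direction) field via the structure tensor;
# one plausible way to obtain the direction fields such a loss would compare,
# not the paper's loss networks.
import numpy as np
import cv2

def direction_field(gray_u8, sigma=3.0):
    g = gray_u8.astype(np.float32) / 255.0
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
    # Smoothed structure tensor components.
    jxx = cv2.GaussianBlur(gx * gx, (0, 0), sigma)
    jyy = cv2.GaussianBlur(gy * gy, (0, 0), sigma)
    jxy = cv2.GaussianBlur(gx * gy, (0, 0), sigma)
    # Orientation of the dominant local gradient; coherent flow is orthogonal to it.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return theta  # radians in (-pi/2, pi/2]

if __name__ == "__main__":
    img = (np.random.default_rng(3).random((64, 64)) * 255).astype(np.uint8)
    print(direction_field(img).shape)
```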
- Published
- 2019
- Full Text
- View/download PDF
41. An Improved Fractional Differential Method for Image Enhancement
- Author
-
Chen, Qingli, Huang, Guo, and Wong, W. Eric, editor
- Published
- 2015
- Full Text
- View/download PDF
42. Shock filter‐based morphological scheme for texture enhancement.
- Author
-
Chakraborty, Niladri, Subudhi, Priyambada, and Mukhopadhyay, Susanta
- Abstract
Coherence-enhancing shock filters combine shock filtering with orientation estimation from structure tensors, thereby enhancing coherent flow-like structures. The basic operations are dilation and erosion, which take place in zones of influence. To achieve texture enhancement, however, the authors extend this notion to define an opening- and closing-based shock filter. The open-close filtered image is then used to locate and highlight the bright and dark texture features over the entire image. Combining these feature images with the original image in a specific way produces an image with enhanced texture features. These operations are further performed at different scales to achieve better enhancement of the texture features. The method has been formulated, implemented, and tested on a number of synthetic and natural texture images, and the experimental results establish its efficacy in enhancing the prominent texture parts of the image proportionately more than the non-prominent parts. [ABSTRACT FROM AUTHOR]
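The authors' opening/closing-based variant is not reproduced in this record; the sketch below implements only the classical morphological shock filter it builds on, dilating where the Laplacian is negative and eroding where it is positive. The kernel size and iteration count are assumptions.

```python
# Classical morphological shock filter sketch (the paper replaces plain
# dilation/erosion with opening/closing-based operations; that variant is
# not reproduced here).
import numpy as np
import cv2

def shock_filter(gray_u8, iterations=10, ksize=3):
    img = gray_u8.copy()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    for _ in range(iterations):
        lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F, ksize=3)
        dil = cv2.dilate(img, kernel)
        ero = cv2.erode(img, kernel)
        # Dilate in "bright" zones of influence (Laplacian < 0), erode otherwise.
        img = np.where(lap < 0, dil, ero).astype(np.uint8)
    return img

if __name__ == "__main__":
    blurred = cv2.GaussianBlur(
        (np.random.default_rng(4).random((96, 96)) * 255).astype(np.uint8), (7, 7), 0)
    print(shock_filter(blurred).shape)
```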
- Published
- 2019
- Full Text
- View/download PDF
43. Intelligent vector field visualization based on line integral convolution.
- Author
-
Tang, Bin and Shi, Hongxia
- Subjects
- *
DATA visualization , *SOFTWARE visualization , *INFORMATION science , *INFORMATION design , *VISUAL analytics - Abstract
In order to reflect the internal motion characteristics of an entire vector field, texture visualization methods based on line integral convolution are usually adopted. However, the visualizations obtained this way have low image quality. To solve this problem, this paper uses line integral convolution to optimize two specific aspects, texture enhancement and color enhancement, to provide an enhanced vector field visualization model. Existing texture enhancement algorithms can create texture aliasing. Based on an analysis of the relationship between vector angles, sampling distance, and texture aliasing, the paper puts forward a texture enhancement algorithm that uses the vector angle to adjust the sampling distance of a high-pass filter, which greatly reduces texture aliasing. For color enhancement, a linear algorithm is usually used to add vector magnitude information to the vector field, but the resulting image suffers from color concentration. In view of this, the distribution characteristics of the vector field are analyzed using a histogram, and a dynamic nonlinear color enhancement algorithm is proposed. This noticeably improves the color distribution and the overall visual quality of the result. [ABSTRACT FROM AUTHOR]
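As a point of reference for the technique the paper enhances, the sketch below is a bare-bones line integral convolution over white noise with a fixed step size and kernel length; the paper's angle-adaptive high-pass filtering and dynamic nonlinear color mapping are not included.

```python
# Bare-bones line integral convolution (LIC) over white noise; the paper's
# angle-adaptive high-pass filtering and nonlinear color mapping are omitted.
import numpy as np

def lic(vx, vy, noise, length=15, step=0.5):
    h, w = noise.shape
    out = np.zeros_like(noise)
    mag = np.hypot(vx, vy) + 1e-12
    ux, uy = vx / mag, vy / mag                      # unit vector field
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):                 # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    ix, iy = int(round(px)), int(round(py))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    acc += noise[iy, ix]
                    cnt += 1
                    px += sign * step * ux[iy, ix]
                    py += sign * step * uy[iy, ix]
            out[y, x] = acc / max(cnt, 1)
    return out

if __name__ == "__main__":
    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    vx, vy = -(yy - h / 2), (xx - w / 2)             # circular demo vector field
    noise = np.random.default_rng(5).random((h, w))
    print(lic(vx, vy, noise).shape)
```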
- Published
- 2018
- Full Text
- View/download PDF
44. PCBSegClassNet — A light-weight network for segmentation and classification of PCB component.
- Author
-
Makwana, Dhruv, R., Sai Chandra Teja, and Mittal, Sparsh
- Subjects
- *
PRINTED circuit design , *WASTE recycling , *LEARNING modules , *CLASSIFICATION - Abstract
PCB component classification and segmentation can be helpful for PCB waste recycling. However, the variance in the shapes and sizes of PCB components presents crucial challenges. We propose PCBSegClassNet, a novel deep neural network for PCB component classification and segmentation. The network uses a two-branch design that captures the global context in one branch and spatial features in the other; the fusion of the two branches allows effective segmentation of components of various sizes and shapes. We reinterpret the skip connections as a learning module to learn features efficiently. We propose a texture enhancement module that utilizes texture information and spatial features to obtain precise component boundaries. We introduce a loss function that combines the DICE, IoU, and SSIM loss functions to guide the training process toward precise pixel-level, patch-level, and map-level segmentation. Our network outperforms all previous state-of-the-art networks on both segmentation and classification tasks; for example, it achieves a DICE score of 96.3% and an IoU score of 92.7% on the FPIC dataset. From the FPIC dataset, we crop the images of 25 component classes and term the resulting 19158 images the "FPIC-Component dataset" (we release scripts for obtaining this dataset from the FPIC dataset). On this dataset, our network achieves a classification accuracy of 95.2%. Our model is much more lightweight than previous networks and achieves a segmentation throughput of 122 frames per second on a single GPU. We also showcase its ability to count the number of each component on a PCB. The code is available at https://github.com/CandleLabAI/PCBSegClassNet. [ABSTRACT FROM AUTHOR]
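The exact weighting of the combined DICE/IoU/SSIM loss is not given in this record; the sketch below shows only soft Dice and soft IoU terms in PyTorch with equal (assumed) weights. An SSIM term could be added with a third-party package such as pytorch-msssim.

```python
# Soft Dice + soft IoU segmentation loss sketch (equal weights are an
# assumption; the paper additionally uses an SSIM term, omitted here).
import torch

def dice_iou_loss(probs, targets, eps=1e-6, w_dice=0.5, w_iou=0.5):
    """probs, targets: tensors of shape (N, C, H, W) with values in [0, 1]."""
    dims = (2, 3)
    inter = (probs * targets).sum(dims)
    p_sum, t_sum = probs.sum(dims), targets.sum(dims)
    dice = (2 * inter + eps) / (p_sum + t_sum + eps)
    union = p_sum + t_sum - inter
    iou = (inter + eps) / (union + eps)
    return w_dice * (1 - dice).mean() + w_iou * (1 - iou).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(2, 3, 64, 64, requires_grad=True)
    targets = torch.randint(0, 2, (2, 3, 64, 64)).float()
    loss = dice_iou_loss(torch.sigmoid(logits), targets)
    loss.backward()
    print(float(loss))
```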
- Published
- 2023
- Full Text
- View/download PDF
45. Towards Enhancing Geometry Textures of 3D City Elements
- Author
-
Alizadehashrafi, Behnam, Abdul Rahman, Alias, Abdul Rahman, Alias, editor, Boguslawski, Pawel, editor, Gold, Christopher, editor, and Said, Mohamad Nor, editor
- Published
- 2013
- Full Text
- View/download PDF
46. RETRACTED: Analysis of texture enhancement methods for the detection of eco-friendly textile fabric defects
- Author
-
Liangchao Zhang, Zhang Junhua, Shu Yufeng, Li Junlong, Zuo Dali, and Haoquan Gan
- Subjects
Statistics and Probability, Textile, Business, Computer science, General Engineering, Software engineering, Environmentally friendly, Texture enhancement, Artificial Intelligence, Electrical engineering, electronic engineering, information engineering, Artificial intelligence & image processing, Process engineering - Abstract
This article has been retracted. A retraction notice can be found at https://doi.org/10.3233/JIFS-219268.
- Published
- 2021
47. Aging Progression of Elderly People Using Image Morphing
- Author
-
Kumari, L. L. Gayani, Dharmaratne, Anuja T., Kim, Tai-hoon, editor, Adeli, Hojjat, editor, Ramos, Carlos, editor, and Kang, Byeong-Ho, editor
- Published
- 2011
- Full Text
- View/download PDF
48. Enhancing LBP by preprocessing via anisotropic diffusion.
- Author
-
Barros Neiva, Mariane, Guidotti, Patrick, and Bruno, Odemir Martinez
- Subjects
- *
ANISOTROPY , *GAUSSIAN processes , *MATHEMATICAL transformations , *KERNEL (Mathematics) , *IMAGE processing - Abstract
The main goal of this paper is to study the addition of a new preprocessing step in order to improve local feature descriptors and texture classification. The preprocessing is implemented using transformations that help highlight salient features which play a significant role in texture recognition. We evaluate and compare four competing methods: three anisotropic diffusion methods, namely the classical Perona–Malik anisotropic diffusion and two subsequent regularizations of it, and the application of a Gaussian kernel, the classical multiscale approach in texture analysis. Combinations of the transformed images and the original ones are analyzed. The results show that the preprocessing step does lead to an improvement in texture recognition. [ABSTRACT FROM AUTHOR]
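As a rough illustration of the preprocessing studied above, the sketch below runs a few iterations of classical Perona–Malik diffusion and then computes a uniform local binary pattern (LBP) histogram with scikit-image; the diffusion parameters (kappa, step size, iterations) and the LBP configuration are illustrative, not the paper's settings.

```python
# Minimal Perona–Malik anisotropic diffusion followed by an LBP histogram
# (parameters are illustrative; boundaries wrap via np.roll for simplicity).
import numpy as np
from skimage.feature import local_binary_pattern

def perona_malik(img, n_iter=20, kappa=30.0, gamma=0.2):
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential conduction coefficient (edge-stopping function).
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def lbp_histogram(gray, P=8, R=1.0):
    g = np.clip(gray, 0, 255).astype(np.uint8)
    codes = local_binary_pattern(g, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

if __name__ == "__main__":
    img = np.random.default_rng(6).random((128, 128)) * 255
    print(lbp_histogram(perona_malik(img)))
```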
- Published
- 2018
- Full Text
- View/download PDF
49. Lumen and media-adventitia border detection in IVUS images using texture enhanced deformable model.
- Author
-
Chen, Fang, Ma, Ruibin, Liu, Jia, Zhu, Mingyu, and Liao, Hongen
- Subjects
- *
INTRAVASCULAR ultrasonography , *ATHEROSCLEROTIC plaque , *SUPPORT vector machines , *IMAGE segmentation , *THREE-dimensional imaging - Abstract
Lumen and media-adventitia (MA) borders in intravascular ultrasound (IVUS) images are critical for assessing the dimensions of vascular structures and providing plaque information in the diagnosis and navigation of vascular interventions. However, manual delineation of the lumen and MA borders is an intricate and time-consuming process. In this paper, a texture-enhanced deformable model (TEDM) is proposed to accurately detect these borders by incorporating texture information with the morphological factors of the deformable model. An ensemble support vector machine classifier is used to classify IVUS pixels, represented by texture features, into different tissue types. The resulting image regionalization maps of the different tissue types are then used by the texture enhancement modules in the TEDM. The proposed TEDM method has been tested on 1500 images from 15 clinical IVUS datasets by comparison with manual delineations. Evaluation results demonstrate that our method can accurately detect lumen and MA surfaces, with small surface distance errors of 0.17 and 0.19 mm, respectively. The accurate segmentation results provide 2D measurements of MA/lumen areas and 3D vessel visualizations for vascular interventions. [ABSTRACT FROM AUTHOR]
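The paper's texture features and ensemble configuration are not reproduced here; the sketch below only illustrates the general pattern of an ensemble SVM (bagged SVC) trained on per-patch texture feature vectors, using toy statistics and synthetic patches as stand-ins (it assumes scikit-learn 1.2 or later for the `estimator` keyword).

```python
# Generic ensemble-SVM texture classification sketch (features, patch size, and
# ensemble settings are placeholders, not the paper's configuration).
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def patch_features(patch):
    """Toy texture descriptor: intensity statistics plus gradient-energy measures."""
    gy, gx = np.gradient(patch.astype(np.float64))
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean(), np.abs(gy).mean(),
                     (gx ** 2 + gy ** 2).mean()])

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Synthetic "tissue" patches: class 0 smooth, class 1 speckled.
    X, y = [], []
    for label, scale in ((0, 0.02), (1, 0.2)):
        for _ in range(200):
            patch = 0.5 + scale * rng.standard_normal((16, 16))
            X.append(patch_features(patch))
            y.append(label)
    X, y = np.array(X), np.array(y)
    clf = BaggingClassifier(
        estimator=make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
        n_estimators=10, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
```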
- Published
- 2018
- Full Text
- View/download PDF
50. Novel Fractional-Order Difference Schemes Reducible to Standard Integer-Order Formulas.
- Author
-
Paskas, Milorad P., Reljin, Irini S., and Reljin, Branimir D.
- Subjects
INTEGERS ,RATIONAL numbers ,SIGNAL processing ,SIGNAL theory ,BANDWIDTHS - Abstract
In this letter, we propose numerical schemes for the calculation of fractional derivatives of Grünwald–Letnikov type that reduce to standard integer-order derivative schemes. Since, in the literature, only forward differences have such a property, novel forms of backward differences and central differences, based on both integer and half-integer mesh points, are proposed here. This enables the proposed fractional differences to be used interchangeably with standard difference formulas. The proposed schemes are qualitatively and quantitatively tested on 2-D signals for texture enhancement. The obtained results show that the proposed fractional differences provide better performance than traditional schemes. [ABSTRACT FROM PUBLISHER]
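As a worked illustration of the Grünwald–Letnikov construction discussed above, the sketch below computes the standard backward-difference coefficients via the usual recursion and checks that they collapse to the familiar integer-order stencil for integer orders; the letter's half-integer-mesh central and backward schemes are not reproduced.

```python
# Grünwald–Letnikov backward-difference coefficients via the standard
# recursion w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k); for integer alpha
# they reduce to the signed binomial coefficients of ordinary differences.
import numpy as np

def gl_coefficients(alpha, n_terms):
    w = np.empty(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_fractional_difference(signal, alpha, n_terms=None):
    """Apply the truncated GL fractional difference along a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    n_terms = len(x) if n_terms is None else n_terms
    w = gl_coefficients(alpha, n_terms)
    out = np.zeros_like(x)
    for i in range(len(x)):
        kmax = min(i + 1, n_terms)
        out[i] = np.dot(w[:kmax], x[i::-1][:kmax])   # sum_k w_k * x[i-k]
    return out

if __name__ == "__main__":
    print(gl_coefficients(2.0, 4))   # [ 1., -2.,  1.,  0.] -> ordinary second difference
    print(gl_coefficients(0.5, 4))   # fractional (alpha = 0.5) weights
```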
- Published
- 2017
- Full Text
- View/download PDF