8 results for "Vallez N"
Search Results
2. MicroHikari3D: an automated DIY digital microscopy platform with deep learning capabilities
- Author
- Salido, J., Toledano, P. T., Vallez, N., Deniz, O., Ruiz-Santaquiteria, J., Cristóbal, G., and Bueno, G.
- Abstract
A microscope is an essential tool in biosciences and production quality laboratories for unveiling the secrets of microworlds. This paper describes the development of MicroHikari3D, an affordable DIY optical microscopy platform with automated sample positioning, autofocus, and several illumination modalities, designed to provide a high-quality, flexible microscopy tool for labs on a tight budget. The proposed optical microscope design aims for high customizability, allowing whole 2D slide imaging and observation of live 3D specimens. The MicroHikari3D motion control system is based on the entry-level 3D printer kit Tronxy X1, controlled from a server running on a Raspberry Pi 4. The server provides services to a client mobile app for video/image acquisition, processing, and high-level classification by applying deep learning models.
- Published
- 2021
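A minimal sketch of the client/server split described in the abstract: a server on the Raspberry Pi 4 accepts an image captured by the mobile app and returns a deep-learning classification. Flask, the /classify route, and the placeholder classify() function are illustrative assumptions, not the authors' implementation.

```python
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

def classify(image: np.ndarray) -> str:
    # Placeholder: a real deployment would load a trained model here
    # (e.g., a network small enough for the Pi's resources).
    return "unknown"

@app.route("/classify", methods=["POST"])
def classify_endpoint():
    # The mobile client uploads a captured frame as multipart form data.
    file = request.files["image"]
    image = np.asarray(Image.open(io.BytesIO(file.read())).convert("RGB"))
    return jsonify({"label": classify(image)})

if __name__ == "__main__":
    # Bind to all interfaces so the app on the local network can reach the Pi.
    app.run(host="0.0.0.0", port=5000)
```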
4. Data augmentation via warping transforms for modeling natural variability in the corneal endothelium enhances semi-supervised segmentation.
- Author
- Sanchez S, Vallez N, Bueno G, and Marrugo AG
- Subjects
- Humans; Algorithms; Endothelium, Corneal / diagnostic imaging; Neural Networks, Computer; Image Processing, Computer-Assisted / methods
- Abstract
Image segmentation of the corneal endothelium with deep convolutional neural networks (CNN) is challenging due to the scarcity of expert-annotated data. This work proposes a data augmentation technique via warping to enhance the performance of semi-supervised training of CNNs for accurate segmentation. We use a unique augmentation process for images and masks involving keypoint extraction, Delaunay triangulation, local affine transformations, and mask refinement. This approach accurately captures the natural variability of the corneal endothelium, enriching the dataset with realistic and diverse images. The proposed method achieved an increase in the mean intersection over union (mIoU) and Dice coefficient (DC) metrics of 17.2% and 4.8%, respectively, for the segmentation task in corneal endothelial images across multiple CNN architectures. Our data augmentation strategy successfully models the natural variability in corneal endothelial images, thereby enhancing the performance and generalization capabilities of semi-supervised CNNs in medical image cell segmentation tasks.
- Competing Interests
- The authors have declared that no competing interests exist. (Copyright © 2024 Sanchez et al.; open access under the Creative Commons Attribution License.)
- Published
- 2024
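The augmentation steps named in the abstract (keypoint extraction, Delaunay triangulation, local affine transformations, mask refinement) map naturally onto scikit-image's PiecewiseAffineTransform, which triangulates the control points (Delaunay) and fits one affine transform per triangle. The sketch below is an illustration under stated assumptions: a jittered regular grid stands in for the paper's keypoint extraction, and re-binarization stands in for its mask refinement.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_augment(image, mask, n=6, jitter=4.0, seed=None):
    """Warp an image/mask pair with random local affine deformations."""
    rng = np.random.default_rng(seed)
    rows, cols = image.shape[:2]
    # Control points on a regular grid (a stand-in for keypoint extraction).
    xx, yy = np.meshgrid(np.linspace(0, cols - 1, n), np.linspace(0, rows - 1, n))
    src = np.column_stack([xx.ravel(), yy.ravel()])
    # Random displacements mimic the endothelium's natural shape variability;
    # border points stay fixed so the field of view is preserved.
    dst = src + rng.normal(scale=jitter, size=src.shape)
    border = ((src[:, 0] == 0) | (src[:, 0] == cols - 1) |
              (src[:, 1] == 0) | (src[:, 1] == rows - 1))
    dst[border] = src[border]
    # PiecewiseAffineTransform Delaunay-triangulates the control points and
    # fits a local affine transform per triangle.
    tform = PiecewiseAffineTransform()
    tform.estimate(src, dst)
    warped_image = warp(image, tform)
    # Nearest-neighbor interpolation keeps labels crisp; thresholding is a
    # stand-in for the paper's mask-refinement step.
    warped_mask = warp(mask, tform, order=0, preserve_range=True) > 0.5
    return warped_image, warped_mask
```

Applying the same transform to image and mask keeps the pair consistent, which is what lets the augmented data feed a segmentation network directly.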
5. Comparison of deep learning models for digital H&E staining from unpaired label-free multispectral microscopy images.
- Author
- Salido J, Vallez N, González-López L, Deniz O, and Bueno G
- Subjects
- Staining and Labeling; Eosine Yellowish-(YS); Image Processing, Computer-Assisted; Microscopy; Deep Learning
- Abstract
Background and Objective: This paper presents a quantitative comparison of three generative models for digital staining (also known as virtual staining) in the H&E (Hematoxylin and Eosin) modality, applied to 5 types of breast tissue. In addition, a qualitative evaluation of the results achieved with the best model was carried out. The process is based on images of unstained samples captured by a multispectral microscope, with prior dimensional reduction to three channels in the RGB range.
Methods: The models compared are a conditional GAN (pix2pix), which requires aligned stained/unstained image pairs, and two models that do not require image alignment: Cycle GAN (cycleGAN) and a contrastive-learning-based model (CUT). The models are compared on the structural similarity and chromatic discrepancy between chemically stained samples and their digitally stained counterparts. The correspondence between images is obtained by digitally unstaining the chemically stained images with a model trained to guarantee the cyclic consistency of the generative models.
Results: The comparison of the three models corroborates the visual evaluation, showing the superiority of cycleGAN both in its higher structural similarity with respect to chemical staining (mean SSIM ∼ 0.95) and in its lower chromatic discrepancy (10%). For the chromatic comparison, color quantization and the Earth Mover's Distance (EMD) between color clusters are used. In addition, a quality evaluation through subjective psychophysical tests with three experts was carried out for the best model (cycleGAN).
Conclusions: The results can be satisfactorily evaluated by metrics that use as reference a chemically stained sample and the digitally stained version of that sample after prior digital unstaining. These metrics show that generative staining models that guarantee cyclic consistency provide the results closest to chemical H&E staining, which is also consistent with the experts' qualitative evaluation.
- Competing Interests
- The authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript. (Copyright © 2023 The Author(s). Published by Elsevier B.V.)
- Published
- 2023
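A hedged sketch of the two quantitative measures named in the abstract: SSIM between a chemically stained image and its digitally stained counterpart, and the Earth Mover's Distance between color-quantized cluster signatures. The cluster count k=8 and the use of KMeans are assumptions; the paper's exact quantization settings are not reproduced here.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity
from sklearn.cluster import KMeans

def color_signature(image, k=8, seed=0):
    # Quantize RGB pixels into k clusters; each signature row is
    # (cluster weight, R, G, B), the format expected by cv2.EMD.
    pixels = image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=4, random_state=seed).fit(pixels)
    weights = np.bincount(km.labels_, minlength=k) / len(pixels)
    return np.hstack([weights[:, None], km.cluster_centers_]).astype(np.float32)

def staining_metrics(chemical, digital):
    # Structural similarity over RGB uint8 images (higher is better).
    ssim = structural_similarity(chemical, digital, channel_axis=-1)
    # EMD between the two color-cluster signatures (lower is better).
    emd, _, _ = cv2.EMD(color_signature(chemical), color_signature(digital),
                        cv2.DIST_L2)
    return ssim, emd
```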
6. Diffeomorphic transforms for data augmentation of highly variable shape and texture objects.
- Author
- Vallez N, Bueno G, Deniz O, and Blanco S
- Subjects
- Databases, Factual; Humans; Data Management; Neural Networks, Computer
- Abstract
Background and Objective: Training a deep convolutional neural network (CNN) for automatic image classification requires a large database of labeled samples. However, in applications such as biology and medicine, only a few experts can correctly categorize each sample. Experts can identify small changes in shape and texture that go unnoticed by untrained people, and can also distinguish between objects of the same class that present drastically different shapes and textures. As a result, currently available databases are too small to train deep learning models from scratch. To deal with this problem, data augmentation techniques are commonly used to increase the dataset size. However, typical data augmentation methods introduce artifacts or apply distortions to the original images; instead of creating new realistic samples, they produce basic spatial variations of the originals.
Methods: We propose a novel data augmentation procedure that generates new realistic samples by combining two samples that belong to the same class. Although the idea behind the method is to mimic the variations that diatoms experience in different stages of their life cycle, it has also been demonstrated on glomeruli and pollen identification problems. The procedure is based on morphing and image registration methods that perform diffeomorphic transformations.
Results: The proposed technique achieves an increase in accuracy over existing techniques of 0.47%, 1.47%, and 0.23% for the diatom, glomeruli, and pollen problems, respectively.
Conclusions: For the diatom dataset, the method simulates the shape changes across diatom life cycle stages, so the generated images resemble newly acquired samples with intermediate shapes. In fact, the other methods compared obtained worse results than training without data augmentation. For the glomeruli dataset, the method adds new samples with different shapes and degrees of sclerosis (through different textures); this is where the proposed DA method is most beneficial, when objects differ strongly in both shape and texture. Finally, for the pollen dataset, where there are only small variations between samples in a few classes and features such as noise are likely to benefit other existing DA techniques, the method still improves the results.
- Competing Interests
- The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (Copyright © 2022 The Author(s). Published by Elsevier B.V.)
- Published
- 2022
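The core idea of combining two same-class samples can be sketched as a dense deformation applied only partially, producing an intermediate shape. The paper relies on diffeomorphic registration and morphing; TV-L1 optical flow below is a simplified stand-in, and alpha controls how far along the deformation the new sample lies.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from skimage.registration import optical_flow_tvl1

def intermediate_sample(img_a, img_b, alpha=0.5):
    """Generate a sample partway between two grayscale images of one class."""
    # Dense displacement field that aligns img_a to img_b (a stand-in for the
    # paper's diffeomorphic registration).
    v, u = optical_flow_tvl1(img_b, img_a)
    rows, cols = img_a.shape
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # Follow only a fraction alpha of the displacement so the result sits
    # between the two original shapes (e.g., an intermediate life-cycle stage).
    coords = np.array([rr + alpha * v, cc + alpha * u])
    return map_coordinates(img_a, coords, order=1, mode="nearest")
```

Sweeping alpha over (0, 1) yields a family of intermediate samples from a single image pair, which is how this style of augmentation multiplies a small expert-labeled dataset.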
7. Automatic Museum Audio Guide.
- Author
- Vallez N, Krauss S, Espinosa-Aranda JL, Pagani A, Seirafi K, and Deniz O
- Abstract
An automatic "museum audio guide" is presented as a new type of audio guide for museums. The device consists of a headset equipped with a camera that captures exhibit pictures and the eyes of things computer vision device (EoT). The EoT board is capable of recognizing artworks using features from accelerated segment test (FAST) keypoints and a random forest classifier, and is able to be used for an entire day without the need to recharge the batteries. In addition, an application logic has been implemented, which allows for a special highly-efficient behavior upon recognition of the painting. Two different use case scenarios have been implemented. The main testing was performed with a piloting phase in a real world museum. Results show that the system keeps its promises regarding its main benefit, which is simplicity of use and the user's preference of the proposed system over traditional audioguides.
- Published
- 2020
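A hedged sketch of the recognition pipeline named in the abstract: FAST keypoints feeding a random forest classifier. Pooling ORB descriptors computed at the FAST keypoints into one fixed-length vector per image is an illustrative simplification, not the EoT board's actual feature encoding.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

fast = cv2.FastFeatureDetector_create(threshold=20)
orb = cv2.ORB_create()

def image_vector(gray):
    # Detect FAST keypoints, describe them with ORB, and mean-pool the
    # 32-byte binary descriptors into one fixed-length vector per image.
    keypoints = fast.detect(gray, None)
    keypoints, desc = orb.compute(gray, keypoints)
    if desc is None:  # e.g., a textureless frame with no keypoints
        return np.zeros(32, dtype=np.float32)
    return desc.astype(np.float32).mean(axis=0)

def train_classifier(images, labels):
    # images: grayscale exhibit photos; labels: artwork identifiers.
    X = np.stack([image_vector(g) for g in images])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf

# At runtime a headset camera frame would be classified the same way:
# label = clf.predict([image_vector(frame)])[0]
```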
8. Eyes of Things.
- Author
- Deniz O, Vallez N, Espinosa-Aranda JL, Rico-Saavedra JM, Parra-Patino J, Bueno G, Moloney D, Dehghani A, Dunne A, Pagani A, Krauss S, Reiser R, Waeny M, Sorci M, Llewellynn T, Fedorczak C, Larmoire T, Herbst M, Seirafi A, and Seirafi K
- Abstract
Embedded systems control and monitor a great deal of our reality. While some "classic" features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response, and low cost, these systems have evolved in recent years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node on the Internet. Apart from the obvious need to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous "intelligence". Intelligent computing aims to solve problems for which no efficient exact algorithm exists or can be conceived. Central to such intelligence is computer vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places, and things. The possibilities of embedded CV are endless considering new applications and technologies such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, and wearable cameras. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform that tackles these challenges and opportunities.
- Competing Interests
- The authors declare no conflict of interest.
- Published
- 2017