15 results for "Fabijańska, Anna"
Search Results
2. Image processing pipeline for the detection of blood flow through retinal vessels with subpixel accuracy in fundus images
- Author: Czepita, Maciej and Fabijańska, Anna
- Published: 2021
3. Wood species automatic identification from wood core images with a residual convolutional neural network
- Author: Fabijańska, Anna, Danek, Małgorzata, and Barniak, Joanna
- Published: 2021
4. DeepDendro – A tree rings detector based on a deep convolutional neural network
- Author: Fabijańska, Anna and Danek, Małgorzata
- Published: 2018
5. Segmentation of corneal endothelium images using a U-Net-based convolutional neural network
- Author: Fabijańska, Anna
- Published: 2018
6. Towards automatic tree rings detection in images of scanned wood samples
- Author: Fabijańska, Anna, Danek, Małgorzata, Barniak, Joanna, and Piórkowski, Adam
- Published: 2017
7. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study
- Author: Rudyanto, Rina D., Kerkstra, Sjoerd, van Rikxoort, Eva M., Fetita, Catalin, Brillet, Pierre-Yves, Lefevre, Christophe, Xue, Wenzhe, Zhu, Xiangjun, Liang, Jianming, Öksüz, İlkay, Ünay, Devrim, Kadipaşaoğlu, Kamuran, Estépar, Raúl San José, Ross, James C., Washko, George R., Prieto, Juan-Carlos, Hoyos, Marcela Hernández, Orkisz, Maciej, Meine, Hans, Hüllebrand, Markus, Stöcker, Christina, Mir, Fernando Lopez, Naranjo, Valery, Villanueva, Eliseo, Staring, Marius, Xiao, Changyan, Stoel, Berend C., Fabijanska, Anna, Smistad, Erik, Elster, Anne C., Lindseth, Frank, Foruzan, Amir Hossein, Kiros, Ryan, Popuri, Karteek, Cobzas, Dana, Jimenez-Carretero, Daniel, Santos, Andres, Ledesma-Carbayo, Maria J., Helmberger, Michael, Urschler, Martin, Pienn, Michael, Bosboom, Dennis G.H., Campo, Arantza, Prokop, Mathias, de Jong, Pim A., Ortiz-de-Solorzano, Carlos, Muñoz-Barrutia, Arrate, and van Ginneken, Bram
- Published: 2014
8. Automatic segmentation of corneal endothelial cells from microscopy images.
- Author: Fabijańska, Anna
- Subjects: ENDOTHELIAL cells, CELL segmentation, NEURAL circuitry, CELL separation, GENETIC algorithms
- Abstract:
Highlights:
• A fully automatic approach to corneal endothelial cell segmentation is proposed.
• The method is dedicated to endothelium microscopy images.
• It combines a neural network, adaptive thresholding, and iterative morphological processing.
• The method was tested on three datasets of corneal endothelium microscopy images.
• The error of cell number determination is below 7%.
The structure of the corneal endothelial cells can provide important information about cornea health status. In particular, parameters describing cell size and shape are important. However, these parameters are not widely used, because computing them requires segmentation of the cells from corneal endothelium images. Although several dedicated approaches exist, none of them is faultless. Therefore, this paper proposes a new approach to fully automatic segmentation of corneal endothelium images. The proposed approach combines a neural network, taught to recognize pixels located at the cell boundaries, with postprocessing of the resulting boundary probability map. The postprocessing includes morphological reconstruction followed by coarse cell segmentation using local thresholding. The resulting cells are next separated from each other via iterative morphological opening. Finally, the region between cell bodies is skeletonized (a minimal code sketch of this postprocessing follows this record). The proposed method was tested on three publicly available corneal endothelium image datasets. The results were assessed against the ground truths and compared with the results of selected state-of-the-art methods. The resulting cell boundaries are well aligned with the ground truths. The mean absolute error of the determined cell number equals 6.78%, while the mean absolute error of cell size is at the level of 5.13%. Cell morphometric parameters were determined with an error of 5.69% for the coefficient of variation of cell side length and 11.64% for cell hexagonality.
- Published: 2019
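A minimal sketch of the postprocessing chain summarized in the abstract above, written with scikit-image. The array name boundary_prob, the local-threshold block size, and the number of opening iterations are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from skimage import morphology, filters

def postprocess_boundary_map(boundary_prob, opening_steps=3):
    # Morphological reconstruction to suppress spurious local maxima
    # in the boundary probability map.
    seed = np.clip(boundary_prob - 0.1, 0, 1)
    reconstructed = morphology.reconstruction(seed, boundary_prob)

    # Coarse cell segmentation: cells are the low-probability regions,
    # separated by an adaptive (local) threshold.
    local_thr = filters.threshold_local(reconstructed, block_size=35)
    cells = reconstructed < local_thr

    # Iterative morphological opening to separate touching cells.
    for _ in range(opening_steps):
        cells = morphology.binary_opening(cells, morphology.disk(1))

    # Skeletonize the inter-cell region to obtain one-pixel-wide boundaries.
    return morphology.skeletonize(~cells)
```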
9. Corneal endothelial image segmentation training data generation using GANs. Do experts need to annotate?
- Author: Kucharski, Adrian and Fabijańska, Anna
- Subjects: IMAGE segmentation, GENERATIVE adversarial networks, CORNEA, GRAPHICAL user interfaces, CONVOLUTIONAL neural networks, ENDOTHELIAL cells
- Abstract:
Current solutions for corneal endothelial image segmentation use convolutional neural networks. However, their potential is not exploited due to the scarcity of labeled corneal endothelial data, caused by an expensive cell delineation process. Therefore, this work proposes synthesizing cell edges and corresponding images using generative adversarial neural networks. To our knowledge, such work has not yet been reported in the context of corneal endothelial image segmentation. A two-step pipeline for synthesizing training patches is proposed. First, a custom mosaic generator creates a grid that mimics a binary map of endothelial cell edges. The synthetic cell edges are then input to the generative adversarial neural network, which is trained to generate corneal endothelial image patches from the corresponding edge labels. Finally, pairs of synthetic patches are used to train the patch-based U-Net model for corneal endothelial image segmentation (a sketch of the mosaic-generation step follows this record). Experiments performed on three datasets of corneal endothelial images show that using purely synthetic data for U-Net training allows image segmentation with accuracy comparable to that obtained when using original images annotated by experts. Depending on the dataset, the segmentation quality decreases only by 1% to 4%. Our solution provides a cost-effective source of diverse training data for corneal endothelial image segmentation. Thanks to the simple graphical user interface wrapping the proposed solution, many users can easily obtain unlimited training data for corneal endothelial image segmentation and use it in various scenarios.
Highlights:
• A method for synthesizing patches of corneal endothelial images is proposed.
• Synthetic patches are accompanied by endothelial cell edge annotations.
• The method comprises a mosaic generator and a generative adversarial neural network.
• The U-Net model trained on real and on synthetic data yields comparable results.
• The method provides unlimited training data for corneal endothelial image segmentation.
- Published: 2023
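A minimal sketch of the first pipeline step described above, i.e. a mosaic generator producing binary cell-edge maps that resemble an endothelial mosaic. The Voronoi-style construction, patch size, and cell count are illustrative assumptions; the paper's own generator and the GAN stage are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def synthetic_edge_patch(size=64, n_cells=40, rng=None):
    rng = np.random.default_rng(rng)
    seeds = rng.uniform(0, size, (n_cells, 2))          # random cell centers
    yy, xx = np.mgrid[0:size, 0:size]
    pixels = np.stack([yy.ravel(), xx.ravel()], axis=1)
    # Pixels whose two nearest seeds are nearly equidistant lie on a
    # cell border (Voronoi-like mosaic of cell edges).
    dist, _ = cKDTree(seeds).query(pixels, k=2)
    edges = (dist[:, 1] - dist[:, 0]) < 1.0
    return edges.reshape(size, size).astype(np.uint8)

# Step two (not shown): a conditional GAN generator G maps each edge patch to
# a realistic endothelial image patch, giving (G(edges), edges) training pairs
# for the patch-based U-Net.
edge_label = synthetic_edge_patch()
```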
10. The quantitative assessment of the pre- and postoperative craniosynostosis using the methods of image analysis.
- Author: Fabijańska, Anna and Węgliński, Tomasz
- Subjects: CRANIOSYNOSTOSES, IMAGE analysis, BRAIN tumors, OPTICAL processors, MACHINE theory
- Abstract:
This paper considers the problem of CT-based quantitative assessment of craniosynostosis before and after surgery. First, a fast and efficient brain segmentation approach is proposed. The algorithm is robust to skull discontinuity; as a result, it can be applied in both pre- and postoperative cases. Additionally, image processing and analysis algorithms are proposed for describing the disease based on CT scans. The proposed algorithms automate the determination of the standard linear indices used for the assessment of craniosynostosis (i.e. the cephalic index CI and the head circumference HC) and allow for planar and volumetric analysis, which has not been reported so far (a toy cephalic index computation follows this record). Results of applying the introduced methods to sample craniosynostotic cases before and after surgery are presented and discussed. The results show that the proposed brain segmentation algorithm is characterized by high accuracy when applied to both pre- and postoperative craniosynostosis, while the introduced planar and volumetric indices for disease description may help distinguish between the types of the disease.
- Published: 2015
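For reference, the cephalic index mentioned above is the standard ratio of maximum head width to maximum head length; the toy computation below uses made-up measurements, and how the paper extracts width and length from the segmented CT volume is not shown.

```python
def cephalic_index(max_width_mm: float, max_length_mm: float) -> float:
    # Standard definition: CI = 100 * head width / head length.
    return 100.0 * max_width_mm / max_length_mm

# e.g. a 150 mm wide, 180 mm long head gives CI ≈ 83.3.
print(round(cephalic_index(150.0, 180.0), 1))
```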
11. Segmentation of pulmonary vascular tree from 3D CT thorax scans.
- Author: Fabijańska, Anna
- Subjects: IMAGE segmentation, PULMONARY blood vessels, RANDOM walks, COMPUTED tomography, THREE-dimensional imaging
- Abstract:
This paper considers the problem of pulmonary vessel identification in thoracic 3D CT scans. In particular, a method for pulmonary vascular tree segmentation is introduced. The main idea behind the introduced method is to extract both thoracic trees together (i.e. the vascular tree and the airway tree) and then remove the airway walls. Therefore, segmentation of vessels and airway walls is first performed using 3D region growing, where the growth of the region is guided and constrained by the results of random-walk segmentation applied to consecutive CT slices. In particular, the random-walk segmentation result for one slice is used to determine seeds for the random-walk segmentation of the following slice (a sketch of this seed propagation follows this record). The next step is airway tree segmentation using a 3D region growing algorithm guided and constrained by the morphological gradient. Finally, morphological processing is applied in order to extend the airway lumen onto the airway walls and remove the overlapping regions. The main steps of the proposed approach are described in detail. Results of pulmonary vascular tree segmentation obtained by the introduced approach on example thoracic volumetric CT datasets are presented and discussed. Based on manually selected, radiologist-verified ground truth pixels and the resulting quality measures, it can be concluded that the average accuracy of the introduced approach is about 90%.
- Published: 2015
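A hypothetical sketch of the slice-wise seed propagation described above, using scikit-image's random_walker. The erosion-based shrinking of the previous result into seeds and the beta value are assumptions for illustration, not parameters taken from the paper.

```python
import numpy as np
from skimage.segmentation import random_walker
from skimage.morphology import binary_erosion, disk

def propagate_labels(volume, first_slice_seeds, beta=130):
    """volume: (Z, Y, X) CT array; first_slice_seeds: int array for slice 0
    (0 = unlabeled, 1 = background, 2 = vessel/airway wall)."""
    labels = np.zeros(volume.shape, dtype=np.int32)
    seeds = first_slice_seeds
    for z in range(volume.shape[0]):
        labels[z] = random_walker(volume[z], seeds, beta=beta)
        # Erode the current result so that only confident core pixels
        # become seeds for the next slice.
        fg = binary_erosion(labels[z] == 2, disk(2))
        bg = binary_erosion(labels[z] == 1, disk(2))
        seeds = np.zeros_like(seeds)
        seeds[bg], seeds[fg] = 1, 2
    return labels
```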
12. Two-pass region growing algorithm for segmenting airway tree from MDCT chest scans
- Author: Fabijańska, Anna
- Subjects: LUNG radiography, TOMOGRAPHY, MORPHOLOGY, BRONCHI, CHEST examination, IMAGING systems, ALGORITHMS
- Abstract:
The paper addresses the problem of pulmonary airway investigation based on high-resolution multidetector computed tomography (MDCT) chest scans. In particular, it presents a new, fully automated algorithm for airway tree segmentation. The algorithm uses two passes of 3D seeded region growing. The first pass obtains the initial (rough) airway tree. The second pass refines the tree based on the morphological gradient (a sketch of seeded region growing follows this record). Results of applying the proposed algorithm to scans of several randomly selected patients are introduced and discussed. Moreover, a comparison with results obtained by simple region growing with manual threshold selection is provided. The obtained results justify the method and show that it detects up to 10 generations of bronchi and diminishes leakages into the lung parenchyma, which are common when simple region growing is used for segmenting the airway tree.
- Published: 2009
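A minimal, hypothetical sketch of 3D seeded region growing to illustrate the first pass described above; the fixed intensity threshold and 6-connectivity are illustrative assumptions rather than the paper's adaptive criteria.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, threshold=-950):
    """Grow a region of voxels darker than `threshold` (HU), starting from
    `seed` = (z, y, x), using 6-connectivity."""
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                    and volume[nz, ny, nx] < threshold):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown
```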
13. CNN-watershed: A watershed transform with predicted markers for corneal endothelium image segmentation.
- Author: Kucharski, Adrian and Fabijańska, Anna
- Subjects: CORNEA, IMAGE segmentation, RETINAL blood vessels, CONVOLUTIONAL neural networks, ARTIFICIAL neural networks, ENDOTHELIUM
- Abstract:
Highlights:
• A pipeline for corneal endothelium image segmentation is proposed.
• It combines a marker-based watershed transform and a convolutional neural network.
• The CNN predicts markers for each cell, which are then used to segment the cell edge probability map.
• The method was tested on a heterogeneous dataset of corneal endothelium images.
• The method correctly detected 97.72% of corneal endothelial cells.
Quantitative information about corneal endothelium cell morphometry is vital for assessing cornea pathologies. Nevertheless, everyday clinical routine is dominated by qualitative assessment based on visual inspection of the microscopy images. Although several systems exist for automatic segmentation of corneal endothelial cells, they exhibit certain limitations. The main one is sensitivity to low contrast and uneven illumination, resulting in over-segmentation. Subsequently, image segmentation results often require manual editing of missing or false cell edges. Therefore, this paper further investigates the problem of corneal endothelium cell segmentation. A fully automatic pipeline is proposed that incorporates the watershed algorithm for marker-driven segmentation of corneal endothelial cells and an encoder-decoder convolutional neural network trained in a sliding-window setup to predict the probability of cell centers (markers) and cell borders. The predicted markers are used for watershed segmentation of the edge probability maps output by the neural network (a sketch of this step follows this record). The proposed method's performance on a heterogeneous dataset comprising four publicly available corneal endothelium image datasets is analyzed. The performance of three convolutional neural network models (i.e., U-Net, SegNet, and W-Net) incorporated in the proposed pipeline is examined. The results of the proposed pipeline are analyzed and compared to the state-of-the-art competitor. The obtained results are promising. Regardless of the convolutional neural model incorporated into the proposed pipeline, it notably outperforms the competitor. The proposed method scored 97.72% cell detection accuracy, compared to 87.38% achieved by the competitor. The advantage of the introduced method is also apparent for cell size, the DICE coefficient, and the Modified Hausdorff distance.
- Published: 2021
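A minimal sketch of the marker-driven watershed step described above, assuming the CNN has already produced edge_prob (cell-border probability) and marker_prob (cell-center probability) maps; the marker threshold is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_cells(edge_prob, marker_prob, marker_thr=0.5):
    # Label each connected blob of high center probability as one marker.
    markers, _ = ndi.label(marker_prob > marker_thr)
    # Flood the edge probability map from the markers; the resulting label
    # boundaries correspond to cell borders.
    return watershed(edge_prob, markers)
```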
14. DeepVarveNet: Automatic detection of glacial varves with deep neural networks.
- Author: Fabijańska, Anna, Feder, Andrew, and Ridge, John
- Subjects: VARVES, CONVOLUTIONAL neural networks, GLACIAL lakes, INSPECTION & review, DIGITAL images
- Abstract:
Varves – annual sediment layers, common in glacial lakes – are an important source of paleoclimate information. Manually recording their occurrence, typically by visual inspection, can be both time-consuming and prone to error, which has led to several attempts in recent years to at least partially computerize the process. However, existing computerized methods of varve detection still require moderate to large amounts of user interaction; they are semi-automated rather than fully automated. In light of that, this paper is a step towards fully automatic detection of varves. The presented program, DeepVarveNet, a glacial varve detector built on a convolutional neural network, is designed to automatically delineate annual layers in lacustrine sediment in digital images of photographed sediment cores. To the best of the authors' knowledge, this is the first approach that applies a convolutional neural network to this task. The performance of DeepVarveNet was assessed on a data set comprising images from seven sediment coring sites with varying sedimentological properties. They represent three northeast U.S. glacial paleolakes and glacial paleolake Ojibway. Our testing set contained 1415 identified varves, on which DeepVarveNet demonstrated sensitivity of 0.986 and precision of 0.834, exceeding that of BMPix and ANFIS, the existing semi-automated varve identifiers (the definitions of these measures are sketched after this record).
Highlights:
• A varve detector is proposed to delineate annual layers in lacustrine deposits.
• The method is dedicated to images of photographed sediment cores.
• The core of the method is a convolutional neural network.
• Tests on a dataset of 1415 varves from seven sediment cores were performed.
• The method demonstrated sensitivity of 0.986 and precision of 0.834.
- Published: 2020
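For reference, the sensitivity and precision figures quoted above follow the standard definitions; the sketch below uses arbitrary counts for illustration, not the paper's data.

```python
def sensitivity(tp: int, fn: int) -> float:
    # Fraction of true varves that were detected.
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    # Fraction of detections that are true varves.
    return tp / (tp + fp)

# Arbitrary example counts: 90 true positives, 10 misses, 30 false alarms.
print(sensitivity(tp=90, fn=10), precision(tp=90, fp=30))
```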
15. Deep cartoon colorizer: An automatic approach for colorization of vintage cartoons.
- Author: Chybicki, Mariusz, Kozakiewicz, Wiktor, Sielski, Dawid, and Fabijańska, Anna
- Subjects: PERCEPTION testing, ARTIFICIAL neural networks, MOVIE scenes
- Abstract:
Although several approaches exist for the automatic colorization of natural scene images and movies, the problem of cartoon colorization has barely been considered. Therefore, this paper proposes a fully automatic pipeline to create a plausible colorization of vintage cartoons that is visually appealing to a human observer. In particular, the Deep Cartoon Colorizer is proposed. The method incorporates an encoder–decoder convolutional neural network which is trained to map colors onto consecutive cartoon frames. The method was trained with 4944 images and tested on 34 vintage Disney cartoons, including both decolorized and originally monochrome movies. In total, 265388 cartoon frames were assessed, including 246591 decolorized frames and 18797 originally monochrome ones. The resulting automatic colorizations were evaluated both quantitatively (by means of popular perceptual image quality measures) and qualitatively (through a human perception test). The resulting median values of the image quality measures range from 0.66 for Visual Information Fidelity and 0.85 for HaarPSI, through 0.94 for the Structural Similarity Index, to 0.97 for the Universal Image Quality Index, which is a very good result (a sketch of the SSIM computation follows this record). The subjective scores assigned by the human raters on a ten-degree scale average 6.11 and 6.63 for decolorized and monochrome frames, respectively. This result also confirms that the desired effect of plausible colorization was obtained by the introduced approach.
Highlights:
• The Deep Cartoon Colorizer is proposed for automatic colorization of cartoons.
• The method is dedicated to old vintage cartoon episodes.
• The method uses an encoder–decoder convolutional neural network.
• The method was tested on a dataset containing 34 cartoon episodes.
• The colorization is plausible, with SSIM equal to 0.94 and high raters' scores.
- Published: 2019
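A minimal sketch of the SSIM-based part of the quantitative evaluation mentioned above, using scikit-image's structural_similarity; the file names are placeholders, not files from the paper.

```python
from skimage import io
from skimage.metrics import structural_similarity

reference = io.imread("reference_frame.png")   # original color frame (placeholder path)
colorized = io.imread("colorized_frame.png")   # network output (placeholder path)

# Compare the two frames channel-wise; 1.0 means identical frames.
score = structural_similarity(reference, colorized, channel_axis=-1)
print(f"SSIM = {score:.2f}")
```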