5,774 results for "image processing"
Search Results
2. Optimal Extraction Method of Feature Points in Key Frame Image of Mobile Network Animation
- Author
-
Yin, Tao and Lv, Zhihan
- Subjects
Optimization ,Feature points extraction ,Computer Networks and Communications ,Scale invariant feature transformations ,Extraction ,Keyframe image ,Image processing ,Datorseende och robotik (autonoma system) ,Key-frames ,Optimization method ,Constrained optimization ,Wireless networks ,Computer Vision and Robotics (Autonomous Systems) ,Computer Sciences ,Transformation algorithm ,Extraction method ,Animation ,Mobile network animation ,Local feature ,Datavetenskap (datalogi) ,Mobile telecommunication systems ,Hardware and Architecture ,Feature point extraction ,Optimisations ,Software ,Information Systems - Abstract
In order to effectively extract the feature points of mobile network animation images and accurately reflect the main content of the video, an optimization method for extracting the feature points of key frame images of mobile network animation is proposed. First, the key frames are selected according to the degree of content change in the animation video. The scale-invariant feature transformation algorithm is used to describe the feature points of each key frame image. The local feature points of the image are then estimated by a constrained optimization method to realize optimal extraction of the feature points of the key frame images. The efficiency of feature point extraction is analyzed in terms of the number and effectiveness of extracted feature points, time consumption, and similarity invariance. The experimental results show that the proposed method has excellent adaptability and can effectively extract feature points from mobile network animation images.
- Published
- 2022
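The key-frame selection step described in the abstract above — choosing frames by the degree of content change in the video — can be sketched with a simple mean-absolute-difference criterion (an illustrative sketch only; the threshold and the difference measure are assumptions, not the authors' exact method):

```python
import numpy as np

def select_key_frames(frames, threshold=10.0):
    """Select key frames where the mean absolute difference to the
    previously kept frame exceeds a content-change threshold."""
    keys = [0]  # always keep the first frame
    last = frames[0].astype(np.float64)
    for i in range(1, len(frames)):
        cur = frames[i].astype(np.float64)
        change = np.mean(np.abs(cur - last))  # content change degree
        if change > threshold:
            keys.append(i)
            last = cur
    return keys

# Synthetic clip: frames 0-4 are static, the content switches at frame 5
static = np.zeros((32, 32))
changed = np.full((32, 32), 200.0)
clip = [static] * 5 + [changed] * 3
print(select_key_frames(clip, threshold=10.0))  # → [0, 5]
```

A feature descriptor such as SIFT would then be applied only to the selected frames, which is what makes the key-frame step pay off computationally.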
3. Lung shrinking assessment on HRCT with elastic registration technique for monitoring idiopathic pulmonary fibrosis
- Author
-
Haishuang Sun, Xiaoyan Yang, Xuebiao Sun, Xiapei Meng, Han Kang, Rongguo Zhang, Haoyue Zhang, Min Liu, Huaping Dai, and Chen Wang
- Subjects
Male ,Vital Capacity ,Clinical Sciences ,Idiopathic pulmonary fibrosis ,Interstitial lung disease ,General Medicine ,Autoimmune Disease ,X-Ray Computed ,X-ray ,Nuclear Medicine & Medical Imaging ,Rare Diseases ,Image processing ,Clinical Research ,Respiratory ,Humans ,Computer-assisted ,Radiology, Nuclear Medicine and imaging ,Lung ,Tomography ,Computed tomography ,Retrospective Studies - Abstract
Objectives Evaluation and follow-up of idiopathic pulmonary fibrosis (IPF) mainly rely on high-resolution computed tomography (HRCT) and pulmonary function tests (PFTs). The elastic registration technique can quantitatively assess lung shrinkage. We aimed to investigate the correlation between lung shrinkage and morphological and functional deterioration in IPF. Methods Patients with IPF who underwent at least two HRCT scans and PFTs were retrospectively included. Elastic registration was performed on the baseline and follow-up HRCTs to obtain deformation maps of the whole lung. Jacobian determinants were calculated from the deformation fields and, after logarithm transformation, log_jac values were represented on color maps to describe morphological deterioration and to assess the correlation between log_jac values and PFTs. Results A total of 69 patients with IPF (66 male) were included. Jacobian maps demonstrated constriction of the lung parenchyma, most marked at the lung base, in patients who had deteriorated on visual and PFT assessment. The log_jac values were significantly reduced in the deteriorated patients compared to the stable patients. Mean log_jac values showed a positive correlation with the baseline percentage of predicted vital capacity (VC%) (r = 0.394, p < 0.05) and the percentage of predicted forced vital capacity (FVC%) (r = 0.395, p < 0.05). Additionally, the mean log_jac values were positively correlated with pulmonary vascular volume (r = 0.438, p < 0.01) and the number of pulmonary vascular branches (r = 0.326, p < 0.01). Conclusions Elastic registration between baseline and follow-up HRCT was helpful for quantitatively assessing the morphological deterioration of lung shrinkage in IPF, and the quantitative indicator log_jac was significantly correlated with PFTs. Key Points • The elastic registration on HRCT was helpful to quantitatively assess the deterioration of IPF. • The Jacobian logarithm was significantly reduced in deteriorated patients, and mean log_jac values were correlated with PFTs. • The mean log_jac values were related to changes in pulmonary vascular volume and the number of vascular branches.
- Published
- 2022
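The log-Jacobian computation described in this abstract can be sketched in 2D with NumPy: the Jacobian of the registration mapping x → x + u(x) is I + ∇u, and its log-determinant (log_jac) is negative where tissue locally shrinks and positive where it expands. This is a schematic sketch; the study's registration software operates on 3D deformation fields:

```python
import numpy as np

def log_jacobian_2d(u):
    """Log Jacobian determinant of the mapping x -> x + u(x).

    u: displacement field of shape (H, W, 2); u[..., 0] is the
    row (y) displacement, u[..., 1] the column (x) displacement.
    """
    duy_dy, duy_dx = np.gradient(u[..., 0])
    dux_dy, dux_dx = np.gradient(u[..., 1])
    det = (1 + duy_dy) * (1 + dux_dx) - duy_dx * dux_dy
    return np.log(det)

# Identity mapping: zero displacement -> log_jac == 0 everywhere
u0 = np.zeros((16, 16, 2))
assert np.allclose(log_jacobian_2d(u0), 0.0)

# Uniform 10% expansion: u = 0.1 * x -> det = 1.1**2, log_jac > 0
yy, xx = np.mgrid[0:16, 0:16].astype(float)
u1 = np.stack([0.1 * yy, 0.1 * xx], axis=-1)
print(np.allclose(log_jacobian_2d(u1), np.log(1.21)))  # → True
```

Averaging this map over the lung mask would give the mean log_jac value that the study correlates with VC% and FVC%.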
4. Detection and Spatiotemporal Analysis of In-vitro 3D Migratory Triple-Negative Breast Cancer Cells
- Author
-
Nikolaos M Dimitriou, Joseph M. Kinsella, Salvador Flores-Torres, and Georgios D. Mitsis
- Subjects
3D cell culture ,Cancer cell ,medicine ,Spatial ecology ,Biomedical Engineering ,Cancer ,Image processing ,Segmentation ,Computational biology ,Biology ,medicine.disease ,Spatial organization ,Triple-negative breast cancer - Abstract
The invasion of cancer cells into the surrounding tissues is one of the hallmarks of cancer. However, a precise quantitative understanding of the spatiotemporal patterns of cancer cell migration and invasion remains elusive. A promising approach to investigating these patterns is 3D cell cultures, which provide more realistic models of cancer growth than conventional 2D monolayers. Quantifying the spatial distribution of cells in these 3D cultures holds great promise for understanding the spatiotemporal progression of cancer. In the present study, we present an image processing and segmentation pipeline for the detection of 3D GFP-fluorescent triple-negative breast cancer cell nuclei, and we perform quantitative analysis of the formed spatial patterns and their temporal evolution. The performance of the proposed pipeline was evaluated using experimental 3D cell culture data and was found to be comparable to manual segmentation, outperforming four alternative automated methods. The spatiotemporal statistical analysis of the detected distributions of nuclei revealed transient, non-random spatial distributions that consisted of clustered patterns across a wide range of neighbourhood distances, as well as dispersion at larger distances. Overall, the implementation of the proposed framework revealed the spatial organization of cellular nuclei with improved accuracy, providing insights into the three-dimensional intercellular organization and its progression through time.
- Published
- 2022
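The clustered-versus-dispersed distinction drawn in this abstract can be illustrated with a classic nearest-neighbour statistic, the Clark–Evans ratio (shown here in 2D for brevity; the study analyses 3D nuclei positions, and this particular statistic is chosen only for illustration):

```python
import numpy as np

def clark_evans(points, area):
    """Clark-Evans ratio R for a 2D point pattern.

    R ~ 1 under complete spatial randomness (CSR), R < 1 for
    clustering, R > 1 for dispersion (regular spacing)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    observed = d.min(axis=1).mean()            # mean nearest-neighbour distance
    expected = 0.5 / np.sqrt(len(pts) / area)  # CSR expectation
    return observed / expected

# Regular 10x10 grid in the unit square: dispersed
grid = np.stack(np.meshgrid(np.linspace(0.05, 0.95, 10),
                            np.linspace(0.05, 0.95, 10)), -1).reshape(-1, 2)
# Tight clusters around 5 random centres: clustered
rng = np.random.default_rng(0)
centres = rng.random((5, 2))
clusters = (centres[:, None, :]
            + 0.002 * rng.standard_normal((5, 20, 2))).reshape(-1, 2)

print(round(clark_evans(grid, 1.0), 2), clark_evans(clusters, 1.0) < 0.5)  # → 2.0 True
```

A distance-dependent statistic (e.g. Ripley's K) would additionally capture the abstract's finding that clustering and dispersion coexist at different neighbourhood distances.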
5. Reducing false positive rate with the help of scene change indicator in deep learning based real-time face recognition systems
- Author
-
Mehmet Ali Kutlugün and Yahya Şirin
- Subjects
Image processing ,Real-time face recognition ,Computer Networks and Communications ,Hardware and Architecture ,Media Technology ,Illumination and pose changes ,Deep metric learning ,Classification ,Software - Abstract
In face recognition systems, light direction, reflection, and emotional and physical changes on the face are some of the main factors that make recognition difficult. Researchers continue to work on deep learning-based algorithms to overcome these difficulties. It is essential to develop models that work with high accuracy and reduce the computational cost, especially in real-time face recognition systems. Deep metric learning algorithms, a form of representation learning, are frequently preferred in this field. However, in addition to the extraction of outstanding representative features, the appropriate classification of these feature vectors is also an essential factor affecting performance. In this study, the Scene Change Indicator (SCI) is proposed to reduce or eliminate false recognition rates in sliding windows with a deep metric learning model. This model detects the blocks where the scene does not change and re-estimates the comparison threshold value used in the classifier stage more precisely. Increasing the sensitivity ratio across the unchanging scene blocks allows for fewer comparisons among the samples in the database. The model proposed in the experimental study reached 99.25% accuracy and a 99.28% F1 score compared to the original deep metric learning model. Experimental results show that even if there are differences in facial images of the same person in unchanging scenes, misrecognition can be minimized because the sample area being compared is narrowed.
- Published
- 2023
6. Large scale crowdsourced radiotherapy segmentations across a variety of cancer anatomic sites
- Author
-
Wahid, Kareem A, Lin, Diana, Sahin, Onur, Cislo, Michael, Nelms, Benjamin E, He, Renjie, Naser, Mohammed A, Duke, Simon, Sherer, Michael V, Christodouleas, John P, Mohamed, Abdallah, Murphy, James D, Fuller, Clifton D, Gillespie, Erin F, Wahid, Kareem A [0000-0002-0503-0175], Cislo, Michael [0000-0002-5880-2802], Mohamed, Abdallah SR [0000-0003-2064-7613], Fuller, Clifton D [0000-0002-5264-3994], and Apollo - University of Cambridge Repository
- Subjects
Statistics and Probability ,Image Processing ,Radiotherapy Planning, Computer-Assisted ,Radiotherapy Planning ,Bioengineering ,Library and Information Sciences ,Computer Science Applications ,Education ,X-Ray Computed ,Computer-Assisted ,Rare Diseases ,Neoplasms ,Breast Cancer ,Radiation Oncology ,Image Processing, Computer-Assisted ,Humans ,Crowdsourcing ,Female ,Statistics, Probability and Uncertainty ,Tomography, X-Ray Computed ,Tomography ,Information Systems ,Cancer - Abstract
Acknowledgements: This work was supported by the National Institutes of Health (NIH)/National Cancer Institute (NCI) through a Cancer Center Support Grant (CCSG; P30CA016672-44; P30CA008748). D.L. is supported by the Radiological Society of North America (RSNA) Research Medical Student Grant (RMS2116). K.A.W. is supported by the Dr. John J. Kopchick Fellowship through The University of Texas MD Anderson UTHealth Graduate School of Biomedical Sciences, the American Legion Auxiliary Fellowship in Cancer Research, and an NIH/National Institute for Dental and Craniofacial Research (NIDCR) F31 fellowship (1 F31DE031502-01). E.F.G. and J.D.M. received funding from the Agency for Health Research and Quality (AHRQ R18HS026881). C.D.F. received funding from the NIH/NIDCR (1R01DE025248-01/R56DE025248); an NIH/NIDCR Academic-Industrial Partnership Award (R01DE028290); the National Science Foundation (NSF), Division of Mathematical Sciences, Joint NIH/NSF Initiative on Quantitative Approaches to Biomedical Big Data (QuBBD) Grant (NSF 1557679); the NIH Big Data to Knowledge (BD2K) Program of the NCI Early Stage Development of Technologies in Biomedical Computing, Informatics, and Big Data Science Award (1R01CA214825); the NCI Early Phase Clinical Trials in Imaging and Image-Guided Interventions Program (1R01CA218148); an NIH/NCI Pilot Research Program Award from the UT MD Anderson CCSG Radiation Oncology and Cancer Imaging Program (P30CA016672); an NIH/NCI Head and Neck Specialized Programs of Research Excellence (SPORE) Developmental Research Program Award (P50CA097007); and the National Institute of Biomedical Imaging and Bioengineering (NIBIB) Research Education Program (R25EB025787). Clinician-generated segmentation of tumor and healthy tissue regions of interest (ROIs) on medical images is crucial for radiotherapy.
However, interobserver segmentation variability has long been considered a significant detriment to the implementation of high-quality and consistent radiotherapy dose delivery. This has prompted the increasing development of automated segmentation approaches. However, extant segmentation datasets typically only provide segmentations generated by a limited number of annotators with varying, and often unspecified, levels of expertise. In this data descriptor, numerous clinician annotators manually generated segmentations for ROIs on computed tomography images across a variety of cancer sites (breast, sarcoma, head and neck, gynecologic, gastrointestinal; one patient per cancer site) for the Contouring Collaborative for Consensus in Radiation Oncology challenge. In total, over 200 annotators (experts and non-experts) contributed using a standardized annotation platform (ProKnow). Subsequently, we converted Digital Imaging and Communications in Medicine data into Neuroimaging Informatics Technology Initiative format with standardized nomenclature for ease of use. In addition, we generated consensus segmentations for experts and non-experts using the Simultaneous Truth and Performance Level Estimation method. These standardized, structured, and easily accessible data are a valuable resource for systematically studying variability in segmentation applications.
- Published
- 2023
7. Wider urban zones: use of topology and nighttime satellite images for delimiting urban areas
- Author
-
Andrea Spinosa
- Subjects
metropolitan area ,Economics and Econometrics ,nighttime map ,urban area, metropolitan area, nighttime map, image processing ,Geography, Planning and Development ,Social Sciences (miscellaneous) ,urban area ,image processing - Abstract
In the literature on the definition of urban areas, the methodological approaches are divided into formalist (aggregation by density thresholds) and functionalist (aggregation by commuting quotas). This paper proposes a mixed approach, in which the territorial density threshold from the lower-level administrative unit is combined with the brightness of nighttime satellite imagery, intended as a proxy variable for functional links. The objective is to attain a method for delimiting urban areas as connected topological spaces, to be used by various States and Regions across the world in an iterative procedure. This represents an independent method, compared to the various standards adopted by national and regional statistics bureaus, which allows comparing the infrastructural, economic, and social data of different cities in the world. Such cities are hence described in terms of the "real" dimension of their urban areas, partially correcting the bias introduced when local authorities treat administrative perimeters as a "fact" in their decision-making.
- Published
- 2022
8. Consistent Quantification of Precipitate Shapes and Sizes in Two and Three Dimensions Using Central Moments
- Author
-
Felix Schleifer, Moritz Müller, Yueh-Yu Lin, Markus Holzinger, Uwe Glatzel, and Michael Fleck
- Subjects
Microstructure analysis ,Image processing ,Shape quantification ,General Materials Science ,Precipitate size ,Industrial and Manufacturing Engineering ,Data analysis technique - Abstract
Computational microstructure design aims to fully exploit the precipitate strengthening potential of an alloy system. The development of accurate models to describe the temporal evolution of precipitate shapes and sizes is of great technological relevance. The experimental investigation of the precipitate microstructure is mostly based on two-dimensional micrographic images. Quantitative modeling of the temporal evolution of these microstructures needs to be discussed in three-dimensional simulation setups. To consistently bridge the gap between 2D images and 3D simulation data, we employ the method of central moments. Based on this, the aspect ratio of plate-like particles is consistently defined in two and three dimensions. The accuracy and interoperability of the method are demonstrated through representative 2D and 3D pixel-based sample data containing particles with a predefined aspect ratio. The applicability of the presented approach in integrated computational materials engineering (ICME) is demonstrated by the example of γ″ microstructure coarsening in Ni-based superalloys at 730 °C. For the first time, γ″ precipitate shape information from experimental 2D images and 3D phase-field simulation data is directly compared. This coarsening data indicates deviations from the classical ripening behavior and reveals periods of increased precipitate coagulation.
- Published
- 2022
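The central-moment route to a consistent aspect ratio can be sketched for the 2D pixel case: the second central moments of a particle's pixel coordinates form a covariance matrix whose eigenvalue ratio yields the aspect ratio (a simplified illustration of the idea, not the paper's full 2D/3D formulation):

```python
import numpy as np

def aspect_ratio(mask):
    """Aspect ratio of a particle from second central moments (2D)."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys, xs], axis=0).astype(float)
    mu = coords.mean(axis=1, keepdims=True)
    centered = coords - mu
    # second central moments = covariance of the pixel positions
    cov = centered @ centered.T / coords.shape[1]
    evals = np.linalg.eigvalsh(cov)  # ascending order
    return np.sqrt(evals[-1] / evals[0])

# Plate-like particle: a 20 x 60 pixel rectangle -> aspect ratio ~ 3
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 20:80] = True
print(round(aspect_ratio(mask), 2))  # → 3.0
```

Because the same moment definitions exist for voxel data (a 3x3 covariance matrix), the measure carries over to 3D, which is the interoperability the paper exploits.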
9. A novel pattern recognition framework based on ensemble of handcrafted features on images
- Author
-
Erdal Tasci and Aybars Ugur
- Subjects
Image processing ,Computer Networks and Communications ,Hardware and Architecture ,Pattern recognition ,Feature selection ,Machine learning ,Media Technology ,Feature extraction ,Shape ,Classification ,Algorithms ,Software - Abstract
Nowadays, with advances in technology and the widespread use of digital devices, the number of digital images is increasing steadily. Computer-aided classification of image types is widely applied in fields such as medicine, security, and automation. As sub-stages of the pattern recognition process, the feature extraction and selection stages are of great importance for improving classification performance. Researchers apply different feature extraction methods depending on the requirements of their work. In this study, a novel pattern recognition framework is developed that combines diverse, large-scale handcrafted feature extraction methods (shape-based and texture-based) with a selection stage on images. Genetic algorithms are used for feature selection. In the experimental studies, the Flavia leaf recognition and Caltech101 object classification image datasets and five supervised classification models (random forest, ECOC-SVM, k-nearest neighbor, AdaBoost, classification tree) with different parameter values are used. The experimental results show that the proposed method achieves 98.39% and 82.77% accuracy rates on the Flavia and Caltech101 datasets with the ECOC-SVM model, respectively. The proposed framework is also competitive with existing state-of-the-art methods in the related literature. The author Erdal Tasci has been supported by the Scientific and Technological Research Council of Turkey (TUBITAK) 2211 National Graduate Scholarship Program.
- Published
- 2022
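Genetic-algorithm feature selection, as used in the framework's selection stage, can be sketched with one binary chromosome per candidate feature subset. This is a toy sketch: the synthetic fitness function stands in for the classifier accuracy the paper actually optimizes, and all GA parameters are assumed:

```python
import random

def ga_select(n_features, fitness, pop_size=30, generations=60, seed=1):
    """Toy GA: each chromosome is a binary feature mask."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                      # elitism: keep the two best
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)    # truncation selection
            cut = rng.randrange(1, n_features)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:              # bit-flip mutation
                i = rng.randrange(n_features)
                child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Synthetic fitness: features 0-2 are informative, the rest carry a cost
gains = [1.0, 1.0, 1.0] + [-0.5] * 7
fit = lambda mask: sum(g for g, m in zip(gains, mask) if m)
best = ga_select(10, fit)
print(best, fit(best))  # typically converges to the three informative features
```

In the paper's setting the fitness evaluation would instead train and score a classifier on the masked feature set, which is far more expensive but structurally identical.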
10. Enhancing vehicle re-identification via synthetic training datasets and re-ranking based on video-clips information
- Author
-
Moral De Eusebio, Paula, García Martín, Álvaro, Martínez, José M., Bescos Cano, Jesús, and UAM. Departamento de Tecnología Electrónica y de las Comunicaciones
- Subjects
Vehicle re-identification ,Telecomunicaciones ,Image processing ,Computer Networks and Communications ,Hardware and Architecture ,Media Technology ,Deep learning ,Surveillance videos ,Software - Abstract
Vehicle re-identification (ReID) aims to find a specific vehicle identity across multiple non-overlapping cameras. The main challenge of this task is the large intra-class and small inter-class variability of vehicle appearance, sometimes related to large viewpoint variations, illumination changes, or different camera resolutions. To tackle these problems, we previously proposed a vehicle ReID system based on ensembling deep learning features and adding different post-processing techniques. In this paper, we improve that proposal by: incorporating large-scale synthetic datasets in the training step; performing an exhaustive ablation study showing and analyzing the influence of synthetic content in ReID datasets, in particular CityFlow-ReID and VeRi-776; and extending post-processing by including different approaches to the use of gallery video-clips of the target vehicles in the re-ranking step. Additionally, we present an evaluation framework for CityFlow-ReID: as this dataset has no public ground truth annotations and the on-line evaluation service provided by the AI City Challenge is no longer available, our framework allows researchers to continue evaluating the performance of their systems on the CityFlow-ReID dataset. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is part of the preliminary tasks related to the Harvesting Visual Data (HVD) project (PID2021-125051OB-I00) funded by the Ministerio de Ciencia e Innovación of the Spanish Government.
- Published
- 2023
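The use of gallery video-clips in re-ranking can be illustrated by the simplest form of clip-level aggregation: averaging frame features per clip before computing query-gallery similarity (an assumed illustration, not the authors' exact re-ranking scheme):

```python
import numpy as np

def rank_with_clips(query, gallery_feats, clip_ids):
    """Rank gallery video-clips by cosine similarity between the query
    feature and each clip's mean (aggregated) frame features."""
    q = query / np.linalg.norm(query)
    ids = np.asarray(clip_ids)
    scores = {}
    for cid in sorted(set(clip_ids)):
        clip = gallery_feats[ids == cid].mean(axis=0)  # clip-level feature
        scores[cid] = float(q @ (clip / np.linalg.norm(clip)))
    return sorted(scores, key=scores.get, reverse=True)

# Two gallery clips of three frames each; clip 1 shows the query vehicle
rng = np.random.default_rng(0)
target = np.array([1.0, 0.0, 0.0, 0.0])
clip0 = rng.standard_normal((3, 4))                  # distractor vehicle
clip1 = target + 0.05 * rng.standard_normal((3, 4))  # same vehicle, noisy views
gallery = np.vstack([clip0, clip1])
ids = [0, 0, 0, 1, 1, 1]
print(rank_with_clips(target, gallery, ids))  # → [1, 0]
```

Averaging over a clip suppresses per-frame noise (motion blur, occlusion in single frames), which is the intuition behind exploiting track information at re-ranking time.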
11. Machine learning applied to emerald gemstone grading: framework proposal and creation of a public dataset
- Author
-
Sandro Carvalho Izidoro, G. Bernardes, F. B. Pena, D. Crabi, and É. O. Rodrigues
- Subjects
Computer science ,business.industry ,Deep learning ,Process (computing) ,Image processing ,engineering.material ,Emerald ,Machine learning ,computer.software_genre ,Categorization ,Artificial Intelligence ,Pattern recognition (psychology) ,engineering ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Grading (education) ,computer ,Publication - Abstract
The grading of gemstones is currently a manual procedure performed by gemologists. A popular approach uses reference stones, which are visually inspected by specialists who decide which of the available reference stones is most similar to the inspected stone. This procedure is very subjective, as different specialists may end up with different grading choices. This work proposes a complete framework that spans image acquisition through to the final stone categorization. The proposal is able to automate the entire process apart from placing the stone in the purpose-built chamber for image acquisition. It discards the subjective decisions made by specialists. This is the first work to propose a machine learning approach coupled with image processing techniques for emerald grading. The proposed framework achieves 98% accuracy (correctly categorized stones), outperforming a deep learning approach. Furthermore, we also create and publish the dataset used, which contains 192 images of emerald stones along with their extracted and pre-processed features.
- Published
- 2021
12. Deep Learning Image Processing Enables 40% Faster Spinal MR Scans Which Match or Exceed Quality of Standard of Care
- Author
-
B Johnson, W Gibbs, L.N. Tanenbaum, S. Bash, T Zhang, and A. Shankaranarayanan
- Subjects
Standard of care ,Pixel ,Image quality ,business.industry ,Deep learning ,Image processing ,Medicine ,Radiology, Nuclear Medicine and imaging ,Neurology (clinical) ,Artificial intelligence ,Mri scan ,business ,Nuclear medicine ,Spinal magnetic resonance imaging ,Neuroradiology - Abstract
Objective This prospective multicenter multireader study evaluated the performance of 40% scan-time-reduced spinal magnetic resonance imaging (MRI) reconstructed with deep learning (DL). Methods A total of 61 patients underwent standard of care (SOC) and accelerated (FAST) spine MRI. DL was used to enhance the accelerated set (FAST-DL). Three neuroradiologists were presented with paired side-by-side datasets (666 series). Datasets were blinded and randomized in sequence and left-right display order. Image features were preference-rated. The structural similarity index (SSIM) and per-pixel L1 were assessed for the image sets pre- and post-DL enhancement as a quantitative assessment of the impact on image integrity. Results FAST-DL was qualitatively better than SOC for perceived signal-to-noise ratio (SNR) and artifacts and equivalent for other features. Quantitative SSIM was high, supporting the absence of image corruption by DL processing. Conclusion DL enables a 40% spine MRI scan-time reduction while maintaining diagnostic integrity and image quality, with perceived benefits in SNR and artifact reduction, suggesting potential utility in clinical practice.
- Published
- 2021
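The SSIM integrity check used in this study can be sketched in its simplest, single-window (global) form. Libraries such as scikit-image compute a sliding-window variant, so this global version is only illustrative:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Global (single-window) SSIM between two grayscale images."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    return num / den

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
noisy = img + rng.normal(0, 20, img.shape)
print(round(global_ssim(img, img), 6))   # → 1.0
print(global_ssim(img, noisy) < 1.0)     # → True
```

An SSIM near 1 between SOC and FAST-DL reconstructions is what the authors use to argue the DL processing did not corrupt image content.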
13. Automated determination of interfacial tension and contact angle using computer vision for oil field applications
- Author
-
Amit Saxena, Shivanjali Sharma, Himanshu Kesarwani, Ankur Gupta, and Anurag Pandey
- Subjects
Contact angle ,Surface tension ,General Energy ,Computer program ,Drop (liquid) ,Mechanical engineering ,Image processing ,Standard solution ,Python (programming language) ,Geotechnical Engineering and Engineering Geology ,computer ,Standard deviation ,computer.programming_language - Abstract
Contact angle and surface tension are the two most widely used surface analysis approaches for reservoir fluid characterization in the petroleum industry. The pendant drop method is among the most widely used techniques for the estimation of surface tension. The present work utilizes a Python-based computer program to automatically determine interfacial tension (IFT) and contact angle from the pendant drop image acquired from a typical pendant drop apparatus. The proposed program uses Python-based image processing libraries for the analysis of the pendant drop image. The program was also tested on images acquired from standard solutions for IFT and contact angle calculation, showing promising results with a standard deviation of less than 1.7 mN/m.
- Published
- 2021
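As an illustrative sketch of contact-angle measurement from an image (not the paper's pendant-drop IFT fit), the contact angle of a sessile drop can be estimated from its binary silhouette under a spherical-cap assumption, θ = 2·arctan(2h/w), where h is the apex height and w the baseline contact width:

```python
import math
import numpy as np

def contact_angle_deg(mask):
    """Contact angle of a sessile drop from a binary silhouette,
    assuming a spherical-cap shape: theta = 2 * atan(2h / w).
    The lowest occupied row of the mask is taken as the baseline."""
    rows = np.nonzero(mask.any(axis=1))[0]
    h = rows[-1] - rows[0] + 1          # apex height above the baseline
    base = np.nonzero(mask[rows[-1]])[0]
    w = base[-1] - base[0] + 1          # contact width at the baseline
    return math.degrees(2 * math.atan2(2 * h, w))

# Synthetic hemispherical drop (radius 50 px) -> contact angle near 90 deg
yy, xx = np.mgrid[0:51, 0:121].astype(float)
mask = (xx - 60) ** 2 + (yy - 50) ** 2 <= 50 ** 2
print(round(contact_angle_deg(mask), 1))
```

Real drops deviate from spherical caps under gravity, which is why production tools fit the full Young-Laplace profile instead; the sketch only shows how geometric quantities are read off a thresholded image.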
14. Recent advancement in haze removal approaches
- Author
-
Weisheng Li, Hira Khan, Nazeer Muhammad, and Bin Xiao
- Subjects
Haze ,Computer Networks and Communications ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Image processing ,Iterative reconstruction ,Object detection ,Computer graphics ,Hardware and Architecture ,Media Technology ,Computer vision ,Artificial intelligence ,Noise level ,Visibility ,business ,Software ,ComputingMethodologies_COMPUTERGRAPHICS ,Information Systems - Abstract
Haze and fog are major causes of road accidents. Haze in the air lowers the quality of images captured by visible-camera sensors. Haze inconveniences numerous computer vision applications because it diminishes scene visibility. Haze removal techniques recuperate color and scene contrast and are extensively utilized in applications such as outdoor surveillance, object detection, and consumer electronics. Haze removal is commonly performed under the physical degradation model, which requires the solution of an ill-posed inverse problem. Various dehazing algorithms have recently been proposed to relieve this difficulty and have received a great deal of attention. Dehazing is basically accomplished through four major steps: the hazy image acquisition process; the estimation process (atmospheric light, transmission map, scattering phenomenon, and visibility or haze level); the enhancement process (improved visibility level, reduced haze or noise level); and the restoration process (restoring the enhanced image, image reconstruction). This four-step dehazing process makes it possible to provide a step-by-step approach to the complex solution of the ill-posed inverse problem. Our detailed survey and experimental analysis of different dehazing methods will help readers understand the effectiveness of each step of the dehazing process and will facilitate the development of advanced dehazing algorithms. The overall objective of this review paper is to explore the various methods for efficiently removing haze and the shortcomings of earlier techniques used in the revolutionary era of image processing applications.
- Published
- 2021
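The physical degradation model referred to in this abstract is usually written I(x) = J(x)·t(x) + A·(1 − t(x)). Given estimates of the atmospheric light A and the transmission map t, the restoration step inverts it (a schematic sketch; real dehazers must first estimate A and t from the hazy image, e.g. via the dark channel prior):

```python
import numpy as np

def add_haze(J, t, A):
    """Forward atmospheric scattering model: I = J*t + A*(1 - t)."""
    return J * t + A * (1 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the model: J = (I - A) / max(t, t_min) + A.
    Clamping t avoids amplifying noise where transmission is tiny."""
    return (I - A) / np.maximum(t, t_min) + A

J = np.linspace(0, 1, 100).reshape(10, 10)  # clear scene radiance
t = np.full_like(J, 0.6)                    # transmission map
A = 0.9                                     # global atmospheric light
I = add_haze(J, t, A)
print(np.allclose(dehaze(I, t, A), J))  # → True
```

The ill-posedness discussed in the review comes precisely from A and t being unknown in practice: many (J, t, A) triples explain the same hazy image I.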
15. Adaptive increasing-margin adversarial neural iterative system based on facial expression recognition feature models
- Author
-
Ramachandran Vedantham
- Subjects
Iterative and incremental development ,Facial expression ,Computer Networks and Communications ,business.industry ,Computer science ,Deep learning ,Pattern recognition ,Image processing ,Overfitting ,Feature model ,ComputingMethodologies_PATTERNRECOGNITION ,Hardware and Architecture ,Margin (machine learning) ,Media Technology ,Feature (machine learning) ,Artificial intelligence ,business ,Software - Abstract
Facial expression recognition has proved to be a challenging task in image processing. Many related works on facial expression recognition have been done, but they face several challenges when classifying data against the stored database. Various workflows have improved deep learning-based classifiers, but they lag in understanding facial expressions, mainly because of catastrophic forgetting, time management, data mixing, and data overfitting. Ignoring these challenges leads to inaccurate recognition of facial expressions. Hence, to overcome the above issues, this work proposes a model named the adaptive increasing-margin adversarial neural iterative model, which involves triple threat filtration techniques along with modified scaling density-based spatial clustering of applications with noise and a dual feature model for obtaining a better-quality featured image. An advanced backpropagation artificial neural network model is initiated to overcome catastrophic forgetting, underfitting of data, overfitting of data, etc. Thus, the proposed work achieves better efficiency as well as high accuracy in terms of facial expression recognition.
- Published
- 2021
16. Recent advances of deep learning algorithms for aquacultural machine vision systems with emphasis on fish
- Author
-
Ling Du and Daoliang Li
- Subjects
Linguistics and Language ,Data processing ,Machine vision ,Computer science ,business.industry ,Deep learning ,Feature extraction ,Image processing ,Language and Linguistics ,Field (computer science) ,Scientific management ,Artificial Intelligence ,Artificial intelligence ,Transfer of learning ,business ,Algorithm - Abstract
Monitoring the growth conditions and behavior of fish enables scientific management and reduces the threat of losses caused by disease and stress. Traditional monitoring methods are time-consuming and laborious, and untimely monitoring readily leads to aquaculture accidents. As a non-invasive, objective, and repeatable tool, machine vision systems have been widely used in various aspects of aquaculture monitoring. Nevertheless, the complex underwater environment makes it difficult to obtain ideal data processing results using traditional image processing methods alone. Due to their powerful feature extraction capabilities, deep learning (DL) algorithms have been widely used in underwater image processing. Hence, the combination of DL algorithms and machine vision for the automated monitoring of aquaculture is of great importance. As evidence of the multidisciplinary aspects of DL applications, attention is focused on the latest DL methods applied to five fields of research: classification, detection, counting, behavior recognition, and biomass estimation. Meanwhile, because insufficient datasets cause low training efficiency in DL models, transfer learning and GANs have also been put into the spotlight of this field in pursuit of high-performance DL models. We also present the challenges and benchmarks in terms of the advantages and disadvantages of the selected methods in each field. In addition, we review the sources of image acquisition and pre-processing methods in aquaculture. Finally, the challenges and prospects of DL in aquaculture machine vision systems are discussed. The literature review shows that deep neural networks such as AlexNet, LSTM, VGG, and GoogLeNet have been used in aquaculture machine vision systems.
- Published
- 2021
17. Automated Grain Counting for the Microstructure of Mg Alloys Using an Image Processing Method
- Author
-
Fatih Akkoyun and Ali Erçetin
- Subjects
Materials science ,Mg alloys ,Scanning electron microscope ,Mechanical Engineering ,Metallurgy ,Alloy ,Image processing ,engineering.material ,Microstructure ,Standard deviation ,Grain size ,Mechanics of Materials ,engineering ,General Materials Science - Abstract
In this study, a practical and swift approach for counting the grains in a microstructure and determining the ASTM grain size of Mg alloys was demonstrated using computer vision technology. In the experiments, Mg alloys were used as work materials. Microscopic images were taken by scanning electron microscopy (SEM) and subjected to the image processing method. The grains in the microstructure were counted both by the image processing method and manually. The experimental results were examined by comparing the manual and automated grain counting results. The deviation of the grain counts between the manual and automated methods was found to be 6%, corresponding to a success rate of approximately 94%. Moreover, ASTM grain sizes were calculated from the number of grains determined in the SEM images, and the ASTM grain sizes obtained for each alloy by the two methods were in close agreement.
- Published
- 2021
18. Multifractal based image processing for estimating the complexity of COVID-19 dynamics
- Author
-
Qiusheng Rong, D Easwaramoorthy, C Thangaraj, and Shaobo He
- Subjects
Computer science ,business.industry ,Noise reduction ,General Physics and Astronomy ,Regular Article ,Pattern recognition ,Context (language use) ,Image processing ,Multifractal system ,Grayscale ,Fractal dimension ,Dimension (vector space) ,Robustness (computer science) ,General Materials Science ,Artificial intelligence ,Physical and Theoretical Chemistry ,business - Abstract
The COVID-19 pandemic poses a worldwide threat to human health, medical practitioners, social structures, and financial sectors. The coronavirus epidemic has had a significant impact on people's health, survival, and employment and has triggered financial crises, while also producing noticeable harmful effects on the environment within a short span of time. In this context, the complexity of coronavirus transmission is estimated and analyzed with a measure of non-linearity, the Generalized Fractal Dimensions (GFD), computed on chest X-ray images. The grayscale image is the most suitable representation in medical image processing. In particular, COVID-19 attacks the human lungs vigorously within a few days, and it is very challenging to differentiate COVID-19 infections from the various respiratory diseases considered in this study. The multifractal dimension measure is calculated for the original, noisy, and denoised images to estimate the robustness of the measure for COVID-19 and other notable diseases. COVID-19 X-ray images are also compared graphically with images of healthy subjects and of other diseases to express the level of complexity of each disease in terms of GFD curves. In addition, the Mean Absolute Error (MAE) and the Peak Signal-to-Noise Ratio (PSNR) are used to evaluate the performance of the denoising process involved in the proposed comparative analysis of the representative grayscale images.
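For readers unfamiliar with GFD: the generalized (Rényi) dimensions D_q can be estimated by box counting. For each box size s, form the mass probabilities p_i of the occupied boxes and take the least-squares slope of (1/(q−1))·log Σᵢ p_iᵠ against log s. A minimal sketch for q ≠ 1 (the paper's exact computation on X-ray images is not reproduced here; the function name and scale choices are illustrative):

```python
import math

def generalized_dimension(points, q, scales=(2, 4, 8, 16)):
    """Estimate the generalized (Renyi) fractal dimension D_q of a set of
    integer (x, y) grid points by box counting, for q != 1.

    For each box size s, p_i is the fraction of points falling in box i;
    D_q is the slope of (1/(q-1)) * log(sum_i p_i**q) versus log(s).
    """
    n = len(points)
    xs, ys = [], []
    for s in scales:
        boxes = {}
        for (px, py) in points:
            key = (px // s, py // s)            # which box the point falls in
            boxes[key] = boxes.get(key, 0) + 1
        partition = sum((c / n) ** q for c in boxes.values())
        xs.append(math.log(s))
        ys.append(math.log(partition) / (q - 1))
    # least-squares slope of ys against xs
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Sanity checks: a filled 64×64 square gives D_q ≈ 2 and a straight line of points gives D_q ≈ 1, for any q ≠ 1.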
- Published
- 2021
19. A time series processing chain for geological disasters based on a GPU-assisted sentinel-1 InSAR processor
- Author
-
Yongsheng Li, Jingfa Zhang, and Wenliang Jiang
- Subjects
Atmospheric Science ,Interconnection ,business.industry ,Computer science ,Big data ,Real-time computing ,Image processing ,Field (computer science) ,Deformation monitoring ,Chain (algebraic topology) ,Interferometric synthetic aperture radar ,Earth and Planetary Sciences (miscellaneous) ,Graphics ,business ,Water Science and Technology - Abstract
Interferometric synthetic aperture radar (InSAR) technology has the potential to reveal ground surface deformation at high temporal and spatial resolutions, and the InSAR image processing field is progressing toward big data analytics. In this era of big data, as InSAR is usually employed for natural hazard research, processing large InSAR images under time constraints is a fundamental challenge. Graphics processing units (GPUs) have been widely adopted for high-performance computing (HPC) due to their parallel computing capability and low power consumption. Accordingly, we explore the interconnection between InSAR time series processing and GPU hardware for parallel Sentinel-1 InSAR processing. We build a rapid GPU-assisted InSAR processing chain and apply this chain to accelerate the time-consuming optimization steps in Sentinel-1 InSAR processing. This processing chain plays a pivotal role in rapidly calculating InSAR deformation maps using large quantities of Sentinel-1 SAR data and thus can be used in practical applications. We also perform some experiments to demonstrate that the processing chain is appropriate for the deformation monitoring of various natural disasters and is suitable for emergency monitoring and wide-area investigations of potential hazards involving geological disasters.
- Published
- 2021
20. Identification of apple diseases in digital images by using the Gaining-sharing knowledge-based algorithm for multilevel thresholding
- Author
-
Noé Ortega-Sánchez, Ali Wagdy Mohamed, Rosaura Hernández-Montelongo, Erick Rodríguez-Esparza, Marco Pérez-Cisneros, Gaurav Dhiman, and Diego Oliva
- Subjects
Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Computational intelligence ,Thresholding ,Theoretical Computer Science ,Set (abstract data type) ,Digital image ,Identification (information) ,Segmentation ,Geometry and Topology ,Metaheuristic ,Algorithm ,Software - Abstract
Identifying defects in apples is commonly done by visual examination, but this is a slow and laborious process. Image processing techniques have begun to be used to make the diagnosis of fruit diseases more efficient. In image processing systems, the segmentation of regions in a scene is a crucial step. For apple images specifically, disease segmentation is a complicated task due to the different factors that affect image acquisition, and the disease regions themselves have features that need to be segmented. In this work, an efficient approach using the Gaining-Sharing Knowledge-based (GSK) algorithm is proposed to optimize minimum cross-entropy thresholding (MCET) for the segmentation of apple images, highlighting disease defects. The proposed MCET-GSK has been tested experimentally on different images and compared with various metaheuristics. The experiments provide evidence of GSK's optimization capabilities through the Wilcoxon test and a set of metrics that verify the quality of the segmented images. The experimental results validate the performance of MCET-GSK in the segmentation of apple images, adequately separating the regions damaged by disease; the quality of the segmentation is superior to that of other similar approaches.
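For reference, the MCET objective (here stated for a single threshold, following Li and Lee's minimum cross-entropy criterion, with gray levels indexed from 1) can be searched exhaustively; a metaheuristic such as GSK becomes necessary only in the multilevel case, where exhaustive search over all threshold combinations is intractable. A hedged single-threshold sketch:

```python
import math

def mcet_threshold(hist):
    """Single-threshold minimum cross-entropy: choose t minimizing
    -(m1*log(mu1) + m2*log(mu2)), where m1, m2 are the first moments of the
    two classes and mu1, mu2 their mean gray levels (bin i counted as
    gray level i + 1 so that logs are well-defined)."""
    L = len(hist)
    best_t, best_val = None, float("inf")
    for t in range(1, L):
        n1, n2 = sum(hist[:t]), sum(hist[t:])
        if n1 == 0 or n2 == 0:          # skip degenerate splits
            continue
        m1 = sum((i + 1) * hist[i] for i in range(t))
        m2 = sum((i + 1) * hist[i] for i in range(t, L))
        val = -(m1 * math.log(m1 / n1) + m2 * math.log(m2 / n2))
        if val < best_val:
            best_val, best_t = val, t
    return best_t
```

On a clearly bimodal histogram, the chosen threshold falls between the two modes.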
- Published
- 2021
21. Real-time face detection using circular sliding of the Gabor energy and neural networks
- Author
-
Ali Shahidinejad, Reza Mohammadian Fini, and Mahmoud Mahlouji
- Subjects
Artificial neural network ,Computational complexity theory ,Computer science ,business.industry ,Feature vector ,Image processing ,Feature (computer vision) ,Face (geometry) ,Sliding window protocol ,Signal Processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Face detection ,business - Abstract
Face detection is one of the most important subjects in image processing. Over time, researchers have paid much attention to the subject and made tremendous progress in the quality of face detection. In addition to quality, the speed of face detection is of prime importance. In this paper, a real-time approach to face detection using Gabor filters and neural networks is presented that can be implemented on IoT devices. Gabor filters are among the most powerful tools in image processing, but they are rarely used in real-time applications due to their high computational complexity. To overcome this problem, a new algorithm for processing images and detecting faces, called the circular sliding window (CSW), is proposed. For frontal face images, which are symmetric, this algorithm reduces the number of generated sub-images by almost 98% relative to the standard sliding-window algorithm. A new Gabor feature called the compressed Gabor feature (CGF) is also employed, which improves classification speed by reducing the size of the neural network's feature vector. In the proposed method, the best and worst face detection times for faces of size 64 × 64 pixels are 0.0072 and 0.0092 s, respectively, and the sensitivity of face detection is approximately 95%.
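The real part of a 2-D Gabor filter (a Gaussian envelope modulating a cosine carrier) can be sampled as below; the Gabor energy then combines the even (real) and odd (imaginary) filter responses as sqrt(e² + o²). This is only the standard textbook kernel, not the paper's CSW/CGF machinery; parameter names follow the common convention and are illustrative.

```python
import math

def gabor_kernel(ksize, sigma, theta, lambd, psi=0.0, gamma=0.5):
    """Sample the real part of a 2-D Gabor kernel of odd size `ksize`.

    sigma: envelope width, theta: orientation, lambd: carrier wavelength,
    psi: carrier phase, gamma: spatial aspect ratio.
    """
    half = ksize // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                                / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xp / lambd + psi))
        kernel.append(row)
    return kernel
```

At the kernel center (x′ = y′ = 0, psi = 0) the envelope and carrier are both 1, so the sampled value is exactly 1.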
- Published
- 2021
22. A visual object segmentation algorithm with spatial and temporal coherence inspired by the architecture of the visual cortex
- Author
-
Raul Rangel-Gonzalez, Graciela Ramirez-Alonso, Mario I. Chacon-Murguia, and Juan A. Ramirez-Quintana
- Subjects
Computer science ,business.industry ,Cognitive Neuroscience ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Experimental and Cognitive Psychology ,Image processing ,General Medicine ,Coherence (statistics) ,Object (computer science) ,Motion (physics) ,Visual cortex ,medicine.anatomical_structure ,Artificial Intelligence ,medicine ,Benchmark (computing) ,Segmentation ,Computer vision ,Artificial intelligence ,business - Abstract
Scene analysis in video sequences is a complex task for a computer vision system. Several schemes have been applied to this analysis, such as deep learning networks and traditional image processing methods. However, these methods require thorough training or manual adjustment of parameters to achieve accurate results, so novel methods are needed to analyze scene information in video sequences. For this reason, this paper proposes a method for object segmentation in video sequences inspired by the structural layers of the visual cortex. The method, called Neuro-Inspired Object Segmentation (SegNI), has a hierarchical architecture that analyzes object features such as edges, color, and motion to generate regions that represent the objects in the scene. Results obtained on the Video Segmentation Benchmark (VSB100) dataset demonstrate that SegNI adapts automatically to videos whose scenes differ in nature, composition, and object types. SegNI also adapts its processing to new scene conditions without retraining, a significant advantage over deep learning networks.
- Published
- 2021
23. Research on the application of intelligent robots in explosive crime scenes
- Author
-
Junwei Guo
- Subjects
Data processing ,Machine vision ,business.industry ,Computer science ,Strategy and Management ,Coordinate system ,Perspective (graphical) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Motion control ,Digital image processing ,Robot ,Computer vision ,Artificial intelligence ,Safety, Risk, Reliability and Quality ,business - Abstract
To reduce the harm that explosive crimes cause to society, this paper improves the coordinate transformation algorithm from the perspective of bomb disposal and builds a motion control system suitable for explosive ordnance disposal (EOD) robots. It also combines image recognition algorithms to improve machine vision, yielding an intelligent recognition system suitable for EOD robots. In addition, an optical model of the monocular camera is established to calibrate the camera's intrinsic parameters, and a series of image processing algorithms is used to obtain the planar position of the target object. After the robot recognizes the position of the bomb, it gradually approaches it; its coordinates change continuously, and the collected data are transmitted to the data processing center to realize real-time control of the EOD robot. Finally, the reliability of the EOD robot is verified through simulation tests.
- Published
- 2021
24. Texture surface defect detection of plastic relays with an enhanced feature pyramid network
- Author
-
Feng Huang, Qi-peng Li, Ben-wu Wang, and Jun Zou
- Subjects
Computer science ,business.industry ,Deep learning ,Feature extraction ,Image processing ,Pattern recognition ,Industrial and Manufacturing Engineering ,Artificial Intelligence ,Feature (computer vision) ,Cascade ,Robustness (computer science) ,Pyramid (image processing) ,Artificial intelligence ,business ,Software ,Block (data storage) - Abstract
Deep learning has seen promising applications in manufacturing processes. In this study, a deep network named Cascade Tri-DFPN, based on a two-stage target detection algorithm, is proposed for detecting defects on the textured surface of plastic relays. The network adopts a derivative ResNet-101d backbone to obtain loss-reduced feature extraction. Meanwhile, an enhanced feature pyramid module is put forward to strengthen the network's feature representation by adding dense connections between feature layers through a self-attentive block. Moreover, the defect region proposals are optimized by introducing a cascade module to obtain high-quality defective proposal boxes. Experimental results on an augmented dataset of relay surface defects reveal an average accuracy of 88.57% and an average recall rate of 94.58%, much higher than those of traditional RCNN or FPN detectors, demonstrating the remarkable improvement offered by the proposed network. The robustness of the method is also verified by tests with deteriorative image processing, which indicate acceptable defect detection under relatively complex scenarios such as image blurring. The proposed deep network could be used for surface defect detection of plastic relays and in other related industrial defect detection fields.
- Published
- 2021
25. Hachimoji DNA-based reversible blind color images hiding using Julia set and SVD
- Author
-
Tiegang Gao, Hang Gao, Mengqi Liu, Kunshu Wang, and Xiangjun Wu
- Subjects
Computer science ,business.industry ,Data_MISCELLANEOUS ,Image processing ,Pattern recognition ,Watermark ,Encryption ,Julia set ,Image (mathematics) ,Artificial Intelligence ,Robustness (computer science) ,Singular value decomposition ,Artificial intelligence ,business ,Digital watermarking ,Software - Abstract
In this paper, a novel reversible blind dual-color image watermarking algorithm is proposed using singular value decomposition (SVD), Hachimoji deoxyribonucleic acid (HDNA) biogenetic encryption, a coupled-map-lattice-based Tent–Sine system (TSS-CML), and the mathematical Julia set. For watermark embedding, the watermark image is first encrypted using 8-base HDNA sequences, TSS-CML, and a Julia set image to obtain the encrypted HDNA watermark. Next, the host image is decomposed into equal non-overlapping blocks, and SVD is applied to randomly selected blocks. The HDNA watermark is then embedded by modifying the relation between the elements in the first column of the matrix U or V, and the watermarked image is finally obtained by carrying out the inverse SVD on all selected blocks. A reliable extraction algorithm is also designed to recover the watermark from possibly attacked watermarked images without resorting to the original image. Experimental and analysis results demonstrate that the proposed watermarking scheme has not only excellent imperceptibility but also strong robustness against common image processing attacks, geometric attacks, and some composite attacks. In addition, the running time for hiding and extraction is about 1 s, which is suitable for real-time network transmission and application. In conclusion, the proposed method outperforms related dual-image watermarking algorithms in terms of time performance, extraction effect, and robustness.
- Published
- 2021
26. Semi-automatic ultrasound curve angle measurement for adolescent idiopathic scoliosis
- Author
-
De Yang, Timothy Lee, René M. Castelein, Kelly Ka Lee Lai, Tsz Ping Lam, Yongping Zheng, Winnie C.W. Chu, and Jack C. Y. Cheng
- Subjects
Measurement method ,medicine.diagnostic_test ,Cobb angle ,business.industry ,Ultrasound ,Idiopathic scoliosis ,Image processing ,Lumbar ,Medicine ,Orthopedics and Sports Medicine ,3D ultrasound ,Semi automatic ,business ,Biomedical engineering - Abstract
Using X-rays to evaluate adolescent idiopathic scoliosis (AIS) is the clinical gold standard, but it carries potential radiation hazards. 3D ultrasound has demonstrated validity and reliability in estimating the X-ray Cobb angle (XCA) using the spinous process angle (SPA), which can be measured automatically. Although angle measurement with ultrasound using transverse-process-related landmarks (UCA) shows better agreement with the XCA, its automatic measurement is challenging and not yet available. This research aimed to analyze and measure scoliotic angles with a novel semi-automatic UCA method. 100 AIS subjects (age: 15.0 ± 1.9 years, gender: 19 M and 81 F, Cobb: 25.5 ± 9.6°) underwent both 3D ultrasound and X-ray scanning on the same day. Scoliotic angles were measured manually with the XCA and UCA methods, and transverse-process-related features were identified and drawn for the semi-automatic UCA method, which measures the spinal curvature using pairs of thoracic transverse processes and lumbar lumps in the respective regions. The semi-automatic UCA method showed excellent correlations with the manual XCA (R2 = 0.815; thoracic angles R2 = 0.857, lumbar angles R2 = 0.787) and with the manual UCA (R2 = 0.866; thoracic angles R2 = 0.921, lumbar angles R2 = 0.780). The Bland–Altman plot also showed good agreement with the manual UCA/XCA. The mean absolute differences (MADs) of the semi-automatic UCA against the XCA were less than 5°, which is clinically insignificant. The semi-automatic UCA method thus demonstrates the feasibility of estimating the manual XCA and UCA. Further advances in image processing to detect the vertebral landmarks in ultrasound images could help build a fully automated measurement method. Level III.
- Published
- 2021
27. Advanced Technique for Thermoelastic Stress Analysis and Dissipation Energy Evaluation Via Visible-Infrared Synchronous Measurement
- Author
-
M. Hori, Daiki Shiozawa, Takahide Sakagami, Y. Uchida, and Kazuki Kobayashi
- Subjects
Digital image correlation ,Materials science ,Infrared ,business.industry ,Mechanical Engineering ,Aerospace Engineering ,Image processing ,Dissipation ,Noise (electronics) ,Optics ,Thermoelastic damping ,Mechanics of Materials ,Thermography ,business ,Energy (signal processing) - Abstract
The false apparent temperature change caused by moving objects generates noise components in thermoelastic stress analysis (TSA) and rapid fatigue limit estimation based on energy dissipation. This paper proposes a motion compensation system using visible-infrared synchronous measurements to remove apparent temperature changes. A new dissipative energy evaluation method that combines visible and infrared measurements is proposed. The displacement information is obtained using digital image correlation (DIC) in visible images. Visible and infrared measurements are performed on the same surface simultaneously. The displacement information obtained from the visible image is reflected in the infrared image by applying image processing for spatial synchronization. A white speckle pattern required for DIC is applied to black paint, and this white paint does not affect the infrared measurement. In the new method, the time series of strain obtained from the visible image is used to calculate the thermoelastic temperature change, which is then compared with the actual temperature change obtained via infrared thermography to evaluate the temperature change due to energy dissipation. Motion compensation systems have been applied to TSA and dissipative energy measurements. It is confirmed that the edge effect and false apparent dissipated energy can be removed using the developed system. It is discovered that the energy dissipation behavior within one cycle of the load, which cannot be evaluated via conventional frequency analysis, can be observed comprehensively. This synchronous measurement system is useful for enhancing the accuracy of TSA and dissipated energy measurement.
- Published
- 2021
28. A novel image tamper detection approach by blending forensic tools and optimized CNN: Sealion customized firefly algorithm
- Author
-
Farida Khursheed and Mohassin Ahmad
- Subjects
Discrete wavelet transform ,Computer Networks and Communications ,business.industry ,Computer science ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Pattern recognition ,Convolutional neural network ,Digital image ,Histogram of oriented gradients ,Hardware and Architecture ,Feature (computer vision) ,Media Technology ,Firefly algorithm ,Artificial intelligence ,business ,Software - Abstract
Nowadays, digital images can be manipulated professionally and easily with common image processing tools. Such manipulation affects diverse applications, including surveillance systems, where highly reliable tamper detection is essential. A novel image tamper detection framework is designed with two major phases: fused feature extraction and tamper detection. The collected data are subjected to the fused feature extraction framework, where features such as adaptive Speeded-Up Robust Features (SURF), Discrete Wavelet Transform (DWT)-based patched Local Vector Pattern (LVP) features, the proposed Principal Component Analysis (PCA)-based Histogram of Oriented Gradients (HoG) feature, and the Mode-Based First Digit Feature (MBFDF) are extracted. Subsequently, the extracted features are fed to an optimized Convolutional Neural Network (CNN), which identifies the type of tampering in the image: copy-move, splicing, noise inconsistency, or double compression. To make the detection more accurate, the weights of the CNN are fine-tuned by a new hybrid optimization algorithm referred to as the Sealion Customized Firefly algorithm (SCFF), an amalgamation of the standard Sea Lion Optimization Algorithm (SLnO) and the Firefly Algorithm (FF). Finally, a comparative evaluation between the proposed and existing works is made in terms of several performance measures.
- Published
- 2021
29. Entropical Optimal Transport, Schrödinger’s System and Algorithms
- Author
-
Liming Wu
- Subjects
S system ,General Mathematics ,General Physics and Astronomy ,Image processing ,Regularization (mathematics) ,Dual (category theory) ,symbols.namesake ,Rate of convergence ,symbols ,Gradient descent ,Constant (mathematics) ,Algorithm ,Schrödinger's cat ,Mathematics - Abstract
In this expository paper we present the Monge–Ampère–Kantorovich (MAK) optimal transport problem and its approximate entropic regularization. In contrast to the MAK optimal transport problem, the solution of the entropic optimal transport problem is always unique and is characterized by the Schrödinger system. The relationship between the Schrödinger system, the associated Bernstein process, and optimal transport was developed by Léonard [32, 33] (and earlier by Mikami [39] via an h-process). We present Sinkhorn's algorithm for solving the Schrödinger system and recent results on its convergence rate. We study the gradient descent algorithm based on the dual optimization problem and prove its exponential convergence, whose rate may be independent of the regularization constant. This exposition is motivated by recent applications of optimal transport to different domains such as machine learning, image processing, econometrics, and astrophysics.
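Sinkhorn's algorithm mentioned in the abstract is short enough to sketch: starting from the Gibbs kernel K = exp(−c/ε), it alternately rescales rows and columns so that the transport plan matches the prescribed marginals, which is exactly the iterative proportional fitting of the Schrödinger system. A minimal pure-Python version (illustrative, with a fixed iteration count rather than a convergence test):

```python
import math

def sinkhorn(cost, a, b, eps=0.1, iters=500):
    """Entropic-regularized optimal transport between marginals a and b
    for the given cost matrix, via Sinkhorn iterations.

    Returns the plan P with P[i][j] = u[i] * K[i][j] * v[j].
    """
    n, m = len(a), len(b)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        # scale rows to match a, then columns to match b
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

As ε → 0 the plan concentrates on the cheapest assignments; for large ε it spreads toward the product measure a ⊗ b.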
- Published
- 2021
30. Scaffold-A549: A Benchmark 3D Fluorescence Image Dataset for Unsupervised Nuclei Segmentation
- Author
-
Dejian Huang, Jie Sun, Linzhi Jing, Curran Jude, Kaizhu Huang, and Kai Yao
- Subjects
Confocal laser scanning microscope ,business.industry ,Computer science ,Cognitive Neuroscience ,Deep learning ,Image processing ,Pattern recognition ,Computer Science Applications ,Image (mathematics) ,Benchmark (computing) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Non small cell ,business ,Nuclei segmentation - Abstract
A general trend in nuclei segmentation is the transition from two-dimensional to three-dimensional segmentation and from traditional image processing methods to data-driven, cognitively inspired methods. Existing nuclei segmentation datasets do not follow this trend: they either do not contain enough samples for training deep learning models or lack challenging 3D structures. Large-scale datasets are therefore critically needed for nuclei segmentation tasks. In this paper, we introduce a new benchmark nuclei segmentation dataset, termed Scaffold-A549, for 3D cell culture on a bio-scaffold. A549 human non-small cell lung cancer cells are seeded in the bio-scaffold for cell culture, and samples with different nuclei densities are captured with a confocal laser scanning microscope on the first, third, and eighth culture days. A total of 21 3D images containing more than 10,000 nuclei are collected, and each image containing more than 800 nuclei is annotated manually for evaluation. Scaffold-A549 is a large, diverse, challenging, and publicly available dataset that can be widely used for research on 3D unsupervised nuclei segmentation.
- Published
- 2021
31. Computer Vision in the Infrared Spectrum: Challenges and Approaches
- Author
-
Angel D. Sappa, Riad I. Hammoud, and Michael Teutsch
- Subjects
Infrared ,business.industry ,Computer science ,Machine vision ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Infrared spectroscopy ,Image processing ,Multi spectral ,ComputingMethodologies_PATTERNRECOGNITION ,Human visual perception ,Computer vision ,Artificial intelligence ,business ,Autonomous system (mathematics) - Abstract
Human visual perception is limited to the visual-optical spectrum. Machine vision is not. Cameras sensitive to the different infrared spectra can enhance the abilities of autonomous system...
- Published
- 2021
32. Applying Research-Based Teaching Strategies in a Biomedical Engineering Programming Course: Introduction to Computer Aided Diagnosis
- Author
-
S. E. Hopper, Aileen Huang-Saad, and R. Rosario
- Subjects
Concept maps ,Self-efficacy ,Innovation Article ,Conceptual knowledge ,Computer science ,business.industry ,Concept map ,media_common.quotation_subject ,Geography, Planning and Development ,Computer programming ,Research-based instructional strategies ,Image processing ,Management, Monitoring, Policy and Law ,Scaffolding ,Project-based learning ,Course (navigation) ,Perception ,Active learning ,ComputingMilieux_COMPUTERSANDEDUCATION ,business ,Biomedical engineering ,media_common - Abstract
There are increasing calls for the use of research-based teaching strategies to improve engagement and learning in engineering. In this innovation paper, we detail the application of research-based teaching strategies in a computer-programming-focused biomedical engineering module. This four-week, one-credit undergraduate biomedical engineering (BME) programming-based image processing module consisted of a blend of lectures, active learning exercises, guided labs, and a final project. Students completed surveys and generated concept maps at three time points in the module (pre, mid, and post) to document the impact of integrating research-based teaching strategies. Students demonstrated significant gains and reported high (4 out of 5) perceptions of gains in knowledge and attitudes toward instructor support. Overall, the novel design utilized multiple research-based pedagogies and increased students' conceptual knowledge, self-efficacy, and perceived usefulness of the material. The proposed design is an example of how multiple research-based instructional strategies can be integrated into an undergraduate biomedical engineering course. Supplementary Information The online version contains supplementary material available at 10.1007/s43683-021-00057-w.
- Published
- 2021
33. Hybrid marine predators algorithm for image segmentation: analysis and validations
- Author
-
Mohamed Abdel-Basset, Reda Mohamed, and Mohamed Abouhawwash
- Subjects
Linguistics and Language ,education.field_of_study ,Computer science ,Population ,Image processing ,Image segmentation ,Thresholding ,Language and Linguistics ,Image (mathematics) ,Local optimum ,Ranking ,Artificial Intelligence ,Segmentation ,education ,Algorithm - Abstract
To analyze an image accurately, the similar objects within it should be separated so that attention can be paid to the most important object, yielding more detail and hence better accuracy. Multilevel thresholding is therefore an indispensable image processing technique in the field of image segmentation and is widely employed to separate such similar objects. However, as the number of thresholds increases, existing image segmentation techniques may suffer from exponentially growing computational cost and low accuracy due to entrapment in local optima. Therefore, in this paper, a new image segmentation algorithm based on an improved marine predators algorithm (MPA) is proposed. MPA is improved using a strategy that identifies a number of the worst solutions within the population and then searches for better replacements, moving those solutions gradually toward the best solutions to avoid premature convergence to local optima, as well as randomly within the search space with a certain probability. The number of worst solutions treated in this way increases with the iteration count; this strategy is known as the linearly increased worst solutions improvement strategy (LIS). We also apply a ranking strategy based on a novel updating scheme, the ranking-based updating strategy (RUS), to solutions that could not find better solutions within the last perIter iterations, in the hope of finding better solutions near them: RUS replaces the particles/solutions that could not improve on the local best over a number of consecutive iterations with solutions generated by the novel updating strategy. Integrating LIS with MPA produces a new segmentation meta-heuristic algorithm abbreviated MPALS; combining MPALS with RUS yields a stronger variant, abbreviated HMPA, for tackling the image segmentation problem.
The two proposed algorithms are validated on 14 test images and compared with seven state-of-the-art meta-heuristic algorithms. The experimental results show that HMPA is increasingly effective as the number of threshold levels grows, while the algorithms perform roughly the same on images with a small number of threshold levels.
- Published
- 2021
34. Application of the anatomical fiducials framework to a clinical dataset of patients with Parkinson’s disease
- Author
-
Jonathan C. Lau, Greydon Gilmore, Terry M. Peters, Ryan Chevalier, Magdalena Jach, Mohamad Abbass, Ali R. Khan, and Alaa Taha
- Subjects
Histology ,Registration ,Computer science ,Image registration ,Image processing ,Brain mapping ,Imaging, Three-Dimensional ,Neuroimaging ,Image Processing, Computer-Assisted ,Deep brain stimulation ,medicine ,Humans ,Preprocessor ,Fiducials ,Accuracy ,Protocol (science) ,Brain Mapping ,medicine.diagnostic_test ,business.industry ,General Neuroscience ,Parkinson Disease ,Magnetic resonance imaging ,Pattern recognition ,Biomarker ,Magnetic Resonance Imaging ,Parkinson’s disease ,Original Article ,Artificial intelligence ,Anatomy ,Fiducial marker ,business - Abstract
Establishing spatial correspondence between subject and template images is necessary in neuroimaging research and clinical applications such as brain mapping and stereotactic neurosurgery. Our anatomical fiducial (AFID) framework has recently been validated to serve as a quantitative measure of image registration based on salient anatomical features. In this study, we sought to apply the AFIDs protocol to the clinic, focusing on structural magnetic resonance images obtained from patients with Parkinson’s disease (PD). We confirmed AFIDs could be placed to millimetric accuracy in the PD dataset with results comparable to those in normal control subjects. We evaluated subject-to-template registration using this framework by aligning the clinical scans to standard template space using a robust open preprocessing workflow. We found that registration errors measured using AFIDs were higher than previously reported, suggesting the need for optimization of image processing pipelines for clinical grade datasets. Finally, we examined the utility of using point-to-point distances between AFIDs as a morphometric biomarker of PD, finding evidence of reduced distances between AFIDs that circumscribe regions known to be affected in PD including the substantia nigra. Overall, we provide evidence that AFIDs can be successfully applied in a clinical setting and utilized to provide localized and quantitative measures of registration error. AFIDs provide clinicians and researchers with a common, open framework for quality control and validation of spatial correspondence and the location of anatomical structures, facilitating aggregation of imaging datasets and comparisons between various neurological conditions. Supplementary Information The online version contains supplementary material available at 10.1007/s00429-021-02408-3.
- Published
- 2021
35. Two-step non-local means method for image denoising
- Author
-
Xiaobo Zhang
- Subjects
Pixel ,Iterative method ,Computer science ,Quantitative Biology::Molecular Networks ,Applied Mathematics ,Noise reduction ,Wiener filter ,Image processing ,Filter (signal processing) ,Non-local means ,Computer Science Applications ,Quantitative Biology::Quantitative Methods ,Noise ,symbols.namesake ,Artificial Intelligence ,Hardware and Architecture ,Computer Science::Computer Vision and Pattern Recognition ,Signal Processing ,symbols ,Algorithm ,Software ,Information Systems - Abstract
The non-local means (NLM) method is a powerful technique in the field of image processing. The center weight (CW, the weight assigned to the pixel being denoised) plays an important role in the performance of NLM. In this paper, the influence of several center weights, such as Zero-CW and One-CW, on denoising performance is investigated. To avoid the excessive smoothing or insufficient denoising exhibited by these different NLM filters, a two-step non-local means (TSNLM) iterative scheme is proposed. In the first step, a local Wiener filter is introduced to extract image features from the method noise of NLM with Zero-CW, and this denoising process is integrated into NLM based on the local Wiener filter (LWF-NLM). In the second step, a carefully selected NLM (NLM with One-CW) operates on the output of the first step to remove the remaining noise. The denoising contributions of the two steps are combined through a decay parameter that depends on the noise variance. To the best of our knowledge, this is the first work to exploit the center weight in designing an iterative NLM filter. Experimental results show that the proposed TSNLM improves the denoising ability of NLM, giving satisfactory subjective and objective performance. Furthermore, the proposed TSNLM is very efficient compared with other related NLM-based iterative methods.
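To illustrate the role the center weight plays, here is a toy one-dimensional NLM filter (a generic textbook NLM, not the paper's TSNLM; there is no Wiener step) in which the center weight can be forced to 0 ("Zero-CW") or left at its natural value exp(0) = 1 ("One-CW"):

```python
import numpy as np

def nlm_1d(signal, patch=1, search=5, h=0.5, center_weight=None):
    """Toy non-local means on a 1-D signal, exposing the center weight (CW).
    center_weight=None -> keep the natural weight exp(0) = 1 ('One-CW');
    center_weight=0.0  -> ignore the pixel's own value ('Zero-CW')."""
    n = len(signal)
    pad = np.pad(signal, patch, mode='reflect')
    out = np.empty(n)
    for i in range(n):
        p_i = pad[i:i + 2 * patch + 1]          # patch centered at pixel i
        weights, values = [], []
        for j in range(max(0, i - search), min(n, i + search + 1)):
            p_j = pad[j:j + 2 * patch + 1]      # candidate patch at pixel j
            d2 = np.mean((p_i - p_j) ** 2)      # patch dissimilarity
            w = np.exp(-d2 / (h * h))
            if j == i and center_weight is not None:
                w = center_weight               # override the center weight
            weights.append(w)
            values.append(signal[j])
        weights = np.array(weights)
        out[i] = np.dot(weights, values) / weights.sum()
    return out
```

A constant signal passes through unchanged, and for a noisy signal the weighted averaging lowers the variance, which is the effect the center-weight choice then trades off against detail preservation.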
- Published
- 2021
36. Improved artificial bee colony algorithm and its application in image threshold segmentation
- Author
-
Yuanxiong Wang, Fengcai Huo, and Weijian Ren
- Subjects
Computer Networks and Communications ,Computer science ,Orientation (computer vision) ,Computer Science::Neural and Evolutionary Computation ,Image processing ,Image segmentation ,Tent map ,Artificial bee colony algorithm ,Range (mathematics) ,Hardware and Architecture ,Encoding (memory) ,Media Technology ,Segmentation ,Algorithm ,Software - Abstract
Image segmentation is a key problem in computer vision, especially in image processing, analysis, and understanding. The central difficulty is obtaining a reasonable threshold for different types of images. To this end, an improved artificial bee colony algorithm based on the Tent map from chaos theory is proposed and applied to image threshold segmentation. Firstly, a complementary encoding scheme for the artificial bee colony algorithm is constructed from the Tent map. Based on the relationship between the current solution and the optimal solution, a fixed-orientation updating method is introduced into the colony's updating strategy: since the difference between 1 and any value in [0, 1] still lies in [0, 1], this complementary property is used to adjust the local optimal solution. Together, these constitute the Improved Artificial Bee Colony based on Tent Mapping (IABCTM) algorithm. Secondly, it is shown from the characteristics of the algorithm that it converges with probability 1. Finally, the improved algorithm is applied to image threshold segmentation. Comparisons across multiple images, performance metrics, and competing algorithms show that the improved algorithm has a strong ability to find the optimal solution and good convergence performance.
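A minimal sketch of the Tent map and its use for chaotic population initialization (function names, the starting value, and the mapping to search bounds are illustrative; the paper's complementary encoding and update rules are not reproduced here):

```python
import numpy as np

def tent_map_sequence(x0, n, mu=2.0):
    """Iterate the Tent map: x_{k+1} = mu*x_k if x_k < 0.5, else mu*(1 - x_k).
    With mu = 2 the map is chaotic on [0, 1], which is why it is popular for
    diversifying swarm initialization. (In finite-precision arithmetic mu = 2
    degenerates after ~50 iterations, so long runs often use mu slightly < 2.)"""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs[k] = x
    return xs

def chaotic_init(pop_size, dim, lo, hi, x0=0.37):
    """Initialize a bee-colony population from a Tent-map sequence mapped to [lo, hi]."""
    seq = tent_map_sequence(x0, pop_size * dim).reshape(pop_size, dim)
    return lo + seq * (hi - lo)
```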
- Published
- 2021
37. Improved Camshift object tracking algorithm in occluded scenes based on AKAZE and Kalman
- Author
-
Lili Pei, Bo Yang, and He Zhang
- Subjects
Channel (digital image) ,Feature matching ,Computer Networks and Communications ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Kalman filter ,Video processing ,Object tracking ,Frame rate ,Tracking (particle physics) ,Article ,AKAZE algorithm ,Hardware and Architecture ,Feature (computer vision) ,Video tracking ,Media Technology ,Computer vision ,Camshift algorithm ,Artificial intelligence ,business ,Kalman filtering ,Software - Abstract
Camshift tracking is susceptible to interference when the tracked object is occluded or when its hue is similar to the background. An improved Camshift object-tracking algorithm combining AKAZE (Accelerated-KAZE) feature matching and Kalman filtering is proposed. First, the video channel is converted for processing. Second, AKAZE is used to match the object's feature points, and Kalman filtering is used to predict its next position. Different scenes are then distinguished by a threshold, and the Camshift and Kalman tracking algorithms are applied accordingly. Finally, the improved Camshift algorithm is tested on moving objects in a variety of situations and compared with the traditional Camshift algorithm and a Kalman-filter-improved Camshift algorithm. Experimental results show that the improved joint tracking algorithm can continue tracking under full occlusion. The effective recognition frame rate is increased by about 20%, and the single-frame image processing time is less than 35 ms, meeting real-time tracking requirements.
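The occlusion-handling idea, falling back on the Kalman prediction when matching fails, can be sketched with a plain constant-velocity Kalman filter (a generic textbook formulation in NumPy, not the paper's exact model or tuning):

```python
import numpy as np

# Constant-velocity Kalman filter for an object's (x, y) position.
# State: [x, y, vx, vy]. When the object is occluded, skip the update
# and rely on the predicted state instead.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we only measure position
Q = np.eye(4) * 1e-2                         # process noise covariance
R = np.eye(2) * 1e-1                         # measurement noise covariance

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a target moving +1 px/frame in x; drop the measurement on frame 5
# to mimic occlusion, so the filter coasts on its prediction there.
x = np.array([0.0, 0.0, 0.0, 0.0])
P = np.eye(4)
for t in range(1, 10):
    x, P = predict(x, P)
    if t != 5:                               # frame 5: occluded, no update
        x, P = update(x, P, np.array([float(t), 0.0]))
```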
- Published
- 2021
38. Outbreak COVID-19 in Medical Image Processing Using Deep Learning: A State-of-the-Art Review
- Author
-
Prabhpreet Kaur and Jaspreet Kaur
- Subjects
History ,Isolation (health care) ,business.industry ,Applied Mathematics ,Deep learning ,MEDLINE ,Outbreak ,Image processing ,Review Article ,medicine.disease ,Computer Science Applications ,Health care ,Pandemic ,Medical imaging ,medicine ,Artificial intelligence ,Medical emergency ,business - Abstract
Since December 2019, the outbreak of coronavirus disease (COVID-19) has caused many deaths and affected every aspect of individual health. COVID-19 has been designated a pandemic by the World Health Organization. The circumstances placed serious strain on every country worldwide, particularly on health systems and response times. The number of positive COVID-19 cases grows globally every day, while the quantity of accessible diagnostic kits is limited because of the difficulty of detecting the disease. Fast and correct diagnosis of COVID-19 is an urgent requirement for preventing and controlling the pandemic through suitable isolation and medical treatment. The aim of the present work is to outline deep learning techniques applied to medical imaging, covering outbreak prediction, indications of virus transmission, detection and treatment aspects, and vaccine availability and remedy research. Abundant medical imaging resources, such as X-rays, computed tomography scans, and magnetic resonance imaging, allow deep learning to yield high-quality methods to fight the COVID-19 pandemic. The review presents a comprehensive view of deep learning and its healthcare applications over the past decade. Finally, some issues and challenges in controlling the health crisis and outbreaks are introduced. Technological progress has helped improve individuals' lives. The problems faced by radiologists during medical imaging and deep learning approaches for diagnosing COVID-19 infections are also discussed.
- Published
- 2021
39. Enhancement and denoising method for low-quality MRI, CT images via the sequence decomposition Retinex model, and haze removal algorithm
- Author
-
Zhenkun Lei, Min Xu, Lei Chen, and Chen Tang
- Subjects
Haze ,Color constancy ,Channel (digital image) ,business.industry ,Computer science ,Noise reduction ,media_common.quotation_subject ,Visibility (geometry) ,Biomedical Engineering ,Reproducibility of Results ,Image processing ,Magnetic Resonance Imaging ,Computer Science Applications ,Contrast (vision) ,Computer vision ,Noise (video) ,Artificial intelligence ,Tomography, X-Ray Computed ,business ,Algorithms ,media_common - Abstract
The visibility and analyzability of MRI and CT images have a great impact on the diagnosis of medical diseases. For low-quality MRI and CT images, it is therefore necessary to effectively improve contrast while suppressing noise. In this paper, we propose an enhancement and denoising strategy for low-quality medical images based on the sequence-decomposition Retinex model and an inverse haze removal approach. Specifically, we first estimate the smoothed illumination and de-noised reflectance in a successive sequence. We then invert the estimated illumination over the 0–255 intensity range and introduce a haze removal approach based on the dark channel prior to adjust the inverted illumination. Finally, the enhanced image is generated by combining the adjusted illumination with the de-noised reflectance. As a result, the processed images gain improved visibility while insufficient or excessive enhancement is avoided. To verify the reliability of the proposed method, we perform qualitative and quantitative evaluations on five MRI datasets and one CT dataset. Experimental results demonstrate that the proposed method strikes a splendid balance between enhancement and denoising, providing performance superior to that of several state-of-the-art methods.
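The two operations named above, inverting the illumination and computing a dark channel, can be sketched as follows (a naive patch-minimum implementation on float images in [0, 1]; the patch size is illustrative, not the paper's):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: for each pixel, the minimum intensity over all
    color channels within a local patch. img: H x W x 3 float array in [0, 1]."""
    h, w, _ = img.shape
    per_pixel_min = img.min(axis=2)            # min over color channels
    pad = patch // 2
    padded = np.pad(per_pixel_min, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):                          # min over the local patch
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def invert(channel):
    """Map intensity v to 1 - v (the 0-255 inversion, in floating point)."""
    return 1.0 - channel
```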
- Published
- 2021
40. Application of an Unmanned Aerial Vehicle for Crack Measurement Using Image Calibration Supported by Laser Projectors
- Author
-
Lian-Gui He, Kevin Hsu, Kuang-Wu Chou, Chang-Wei Huang, and Wen-Cheng Liao
- Subjects
Computer science ,business.industry ,System of measurement ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Measure (physics) ,Point cloud ,Image processing ,Satellite system ,Laser ,law.invention ,Visual inspection ,law ,GNSS applications ,Computer vision ,ComputingMethodologies_GENERAL ,Artificial intelligence ,business - Abstract
Traditional bridge inspection methods mostly rely on the experience and expertise of inspectors for visual detection of cracks. Such visual inspection is not only time consuming but can also be affected by the subjective judgment of inspectors, and it lacks a unified standard. To overcome these shortcomings, this study investigates the feasibility of using an unmanned aerial vehicle (UAV) combined with laser projectors to measure surface cracks. All relevant parameters of the laser projectors fixed on the UAV are first calibrated in the laboratory. The UAV with the laser projectors is then used to capture images of surface cracks, which are transformed into orthoimages using the laser projectors' parameters and image processing technology. Next, surface cracks in the orthoimages are automatically identified by a crack identification algorithm. Finally, the characteristics of the surface cracks are evaluated by image-based measurement technologies. To verify the accuracy of the proposed image-based measurement system (UAV with laser projectors), seven artificial cracks of different widths were measured with both the proposed system and the point cloud method (UAV with the global navigation satellite system, GNSS) at different measurement distances. The test results demonstrate that when the distance between the UAV and the artificial crack is within 150 cm, the proposed image-based measurement system measures crack widths more accurately than the GNSS point cloud method. In addition, the proposed system can automatically identify surface cracks and measure crack widths from UAV-captured images without manual measurements.
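The role of the calibrated laser projectors is to give every captured image a physical scale reference. A sketch of the resulting pixel-to-millimetre conversion, with entirely hypothetical numbers (the constant and both measurements are illustrative, not values from the paper):

```python
# If two parallel laser projectors with a known physical spacing are fixed on
# the UAV, their dots appear at a measurable pixel distance in every image,
# so a crack width measured in pixels can be converted to millimetres.

LASER_SPACING_MM = 100.0   # hypothetical physical distance between laser dots

def crack_width_mm(width_px, laser_spacing_px):
    """Convert a pixel measurement to mm using the laser-dot scale reference."""
    mm_per_px = LASER_SPACING_MM / laser_spacing_px
    return width_px * mm_per_px

# A 6 px crack when the laser dots are 400 px apart -> 1.5 mm.
w = crack_width_mm(width_px=6.0, laser_spacing_px=400.0)
```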
- Published
- 2021
41. Non-intrusive Internal Corrosion Characterization using the Potential Drop Technique for Electrical Mapping and Machine Learning
- Author
-
Victor Gomes Silva, José Antônio da Cunha Ponciano Gomes, George Leandro dos Santos Pinto, Jorge Amaral, and Gil Roberto Vieira Pinheiro
- Subjects
Computer science ,business.industry ,Energy Engineering and Power Technology ,Image processing ,Edge enhancement ,Machine learning ,computer.software_genre ,Convolutional neural network ,Finite element method ,Computer Science Applications ,Corrosion ,Control and Systems Engineering ,Approximation error ,Pitting corrosion ,Artificial intelligence ,Gradient boosting ,Electrical and Electronic Engineering ,business ,computer - Abstract
This paper describes a non-intrusive method for collecting data on internal corrosion damage in AISI-304 stainless steel plates and classifying it by severity. A map of the electric potential gradient is derived using the potential drop technique and then analyzed with image processing techniques including edge enhancement and segmentation. Simulations were run using finite element modeling to produce examples of damaged plates with four types of defects that can be considered forms of pitting corrosion. The image processing stage acts as a feature extractor whose outputs, when used as inputs to machine learning algorithms, make it possible to determine damage severity. With the Gradient Boosting regressor, a maximum absolute error of 0.879 mm was obtained when estimating defect depth. Additionally, applying a Convolutional Neural Network achieved an accuracy of 94.84% in classifying damage severity.
- Published
- 2021
42. Computer-Aided Detection of COVID-19 from CT Images Based on Gaussian Mixture Model and Kernel Support Vector Machines Classifier
- Author
-
Saygılı, Ahmet
- Subjects
Multidisciplinary ,business.industry ,Computer science ,Decision tree ,COVID-19 ,Image processing ,Pattern recognition ,Classification ,Mixture model ,Ensemble learning ,Research Article-Computer Engineering and Computer Science ,Support vector machine ,Segmentation ,Kernel (image processing) ,Preprocessor ,GMM ,Artificial intelligence ,Expectation–Maximization ,business - Abstract
COVID-19 is a viral disease that has been declared a pandemic by the World Health Organization and has caused more than 2 million deaths worldwide. Rapid and accurate diagnosis is therefore essential, and computer-aided automatic diagnosis systems built on medical images can help provide it. In this study, an image processing and machine learning-based method is proposed that segments CT images taken from COVID-19 patients and automatically detects the virus from the segmented images. The main purpose of the study is to automatically diagnose the COVID-19 virus. The study consists of three basic steps: preprocessing, segmentation, and classification. The preprocessing phase includes image resizing, image sharpening, noise removal, and contrast stretching; the segmentation phase segments the images with an Expectation–Maximization-based Gaussian Mixture Model. In the classification stage, COVID-19 is classified as positive or negative using kNN, decision trees, and two different ensemble methods together with the kernel support vector machines method. Two publicly available CT datasets and a mixed dataset created by combining them were used. The best accuracy values for Dataset-1, Dataset-2, and the Mixed Dataset are 98.5%, 86.3%, and 94.5%, respectively. The achieved results advance state-of-the-art performance. Within the scope of the study, a GUI that can automatically detect COVID-19 has been created. © 2021, King Fahd University of Petroleum & Minerals. This work was supported by the Research Fund of the Tekirdag Namık Kemal University, Project Number: NKUBAP.06.GA.21.317.
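The Expectation–Maximization step for a Gaussian mixture can be sketched for one-dimensional pixel intensities (a generic textbook EM, not the authors' pipeline; the quantile initialization is an assumption made here for robustness):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """EM for a 1-D Gaussian mixture model, as used when softly clustering
    pixel intensities for segmentation. Returns (means, variances, weights)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        diff = x[:, None] - mu[None, :]
        dens = pi * np.exp(-0.5 * diff**2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft assignments
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk
    return mu, var, pi
```

For segmentation, each pixel is then assigned to the component with the highest responsibility, which induces an intensity threshold between the fitted Gaussians.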
- Published
- 2021
43. Whole-cell organelle segmentation in volume electron microscopy
- Author
-
Aubrey V. Weigel, Alyson Petruncio, Jan Funke, Wyatt Korff, Nils Eckstein, Jennifer Lippincott-Schwartz, Jody Clements, Woohyun Park, Davis Bennett, Larissa Heinrich, Song Pang, Stephan Saalfeld, Harald F. Hess, C. Shan Xu, John A. Bogovic, and David G. Ackerman
- Subjects
Source code ,Computer science ,media_common.quotation_subject ,Datasets as Topic ,Image processing ,Endoplasmic Reticulum ,computer.software_genre ,Microtubules ,Focused ion beam ,law.invention ,Deep Learning ,Voxel ,law ,Chlorocebus aethiops ,Organelle ,Animals ,Humans ,Segmentation ,Cell Size ,media_common ,Organelles ,Multidisciplinary ,Information Dissemination ,business.industry ,Resolution (electron density) ,Reproducibility of Results ,Pattern recognition ,Microscopy, Fluorescence ,COS Cells ,Microscopy, Electron, Scanning ,Artificial intelligence ,Electron microscope ,business ,Ribosomes ,computer ,Biomarkers ,HeLa Cells - Abstract
Cells contain hundreds of organelles and macromolecular assemblies. Obtaining a complete understanding of their intricate organization requires the nanometre-level, three-dimensional reconstruction of whole cells, which is only feasible with robust and scalable automatic methods. Here, to support the development of such methods, we annotated up to 35 different cellular organelle classes—ranging from endoplasmic reticulum to microtubules to ribosomes—in diverse sample volumes from multiple cell types imaged at a near-isotropic resolution of 4 nm per voxel with focused ion beam scanning electron microscopy (FIB-SEM)1. We trained deep learning architectures to segment these structures in 4 nm and 8 nm per voxel FIB-SEM volumes, validated their performance and showed that automatic reconstructions can be used to directly quantify previously inaccessible metrics including spatial interactions between cellular components. We also show that such reconstructions can be used to automatically register light and electron microscopy images for correlative studies. We have created an open data and open-source web repository, ‘OpenOrganelle’, to share the data, computer code and trained models, which will enable scientists everywhere to query and further improve automatic reconstruction of these datasets. Focused ion beam scanning electron microscopy (FIB-SEM) combined with deep-learning-based segmentation is used to produce three-dimensional reconstructions of complete cells and tissues, in which up to 35 different organelle classes are annotated.
- Published
- 2021
44. Humanoid robots play chess using visual control
- Author
-
Li-Hong Juang
- Subjects
Robot kinematics ,Computer Networks and Communications ,Machine vision ,Computer science ,business.industry ,ComputingMilieux_PERSONALCOMPUTING ,Ranging ,Image processing ,Visual control ,Hardware and Architecture ,Media Technology ,Robot ,Computer vision ,Point (geometry) ,Artificial intelligence ,business ,Software ,Humanoid robot - Abstract
This paper considers the humanoid robot's vision system and joint structure, and designs a set of effective planning strategies for the robot to grab chess pieces from the board, place them in designated positions, and thereby succeed in a chess game. The core of this paper is the design of a humanoid robot that plays chess. The main procedure is as follows: first, an image is acquired by the vision system mounted on the humanoid robot's camera; second, the pixel positions of the checkerboard corner points are obtained through image processing; a target position is then set for the robot, and the procedure is refined to improve efficiency. A monocular distance-ranging algorithm is used to obtain the actual positions of the checkerboard corner points. Finally, robot kinematics precisely controls the humanoid robot to grab a chess piece and place it in the specified position. The advantage of the proposed work is that it can handle various complex conditions. In the three-dimensional environment, the visual system of the NAO robot was first used to perceive its surroundings, and image processing was then used to identify the chess positions. Overall, this work can be regarded as a solid technological and engineering achievement.
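Monocular distance ranging typically rests on the pinhole camera model: distance = focal length (in pixels) × real object size / apparent size in pixels. A sketch with hypothetical numbers (neither the focal length nor the piece height is the NAO camera's actual calibration):

```python
# Pinhole-model monocular ranging: an object of known real size appears
# smaller in the image the farther away it is.

FOCAL_LENGTH_PX = 560.0   # hypothetical camera focal length, in pixels

def monocular_distance(real_size_mm, pixel_size):
    """Distance to an object of known real size from its apparent pixel size."""
    return FOCAL_LENGTH_PX * real_size_mm / pixel_size

# A 40 mm tall chess piece that spans 56 px -> 400 mm away.
d = monocular_distance(real_size_mm=40.0, pixel_size=56.0)
```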
- Published
- 2021
45. AI-based diagnosis of COVID-19 patients using X-ray scans with stochastic ensemble of CNNs
- Author
-
Balasubramanian Raman, Vinodh J Sahayasheela, Himanshu Buckchash, Vipul Bansal, Narayanan Narayanan, Rahul Kumar, Ridhi Arora, and Ganesh N. Pandian
- Subjects
Coronavirus disease 2019 (COVID-19) ,Computer science ,Gaussian ,Feature vector ,Feature extraction ,Biomedical Engineering ,Biophysics ,Image processing ,Scientific Paper ,World health ,X-ray ,symbols.namesake ,Machine learning ,Humans ,Radiology, Nuclear Medicine and imaging ,Instrumentation ,Radiological and Ultrasound Technology ,SARS-CoV-2 ,business.industry ,X-Rays ,Deep learning ,COVID-19 ,Pattern recognition ,Classification ,symbols ,Neural Networks, Computer ,Artificial intelligence ,business ,Algorithms ,Latent vector ,Biotechnology - Abstract
According to the World Health Organization (WHO), novel coronavirus (COVID-19) is an infectious disease with a significant social and economic impact. The main challenge in fighting this disease is its scale: due to the outbreak, medical facilities are under pressure from case numbers. A quick diagnosis system is required to address these challenges. To this end, a stochastic deep learning model is proposed. The main idea is to constrain the deep representations over a Gaussian prior to reinforce discriminability in feature space. The model can work on chest X-ray or CT-scan images, provides a fast diagnosis of COVID-19, and can scale seamlessly. The work presents a comprehensive evaluation of previously proposed approaches for X-ray-based disease diagnosis. The approach learns a latent space over the X-ray image distribution from an ensemble of state-of-the-art convolutional nets, and then linearly regresses the predictions from an ensemble of classifiers that take the latent vector as input. We experimented with publicly available datasets having three classes: COVID-19, normal, and pneumonia, yielding an overall accuracy and AUC of 0.91 and 0.97, respectively. Moreover, for robust evaluation, experiments were performed on a large chest X-ray dataset to classify among the Atelectasis, Effusion, Infiltration, Nodule, and Pneumonia classes. The results demonstrate that the proposed model has a better understanding of the X-ray images, which makes the network generic enough to be applied later to other domains of medical image analysis.
- Published
- 2021
46. Workflow Development to Scale up Petrophysical Properties from Digital Rock Physics Scale to Laboratory Scale
- Author
-
Marco Miarelli and Augusto Della Torre
- Subjects
Scale (ratio) ,Computer simulation ,business.industry ,General Chemical Engineering ,Petrophysics ,Reservoir modeling ,Image processing ,Context (language use) ,Computational fluid dynamics ,business ,Core plug ,Catalysis ,Computational science - Abstract
Petrophysical rock properties are the crucial point of any reservoir characterization project and represent fundamental input parameters for any simulation. To obtain reservoir characterization data such as porosity and absolute and relative permeabilities, core analysis tests are typically needed. Unfortunately, there are cases where these tests cannot be carried out. In these situations, digital rock physics (DRP) techniques are useful and may represent a powerful approach to obtain these parameters, since fluid flow at the pore scale can be simulated by DRP. To compare DRP results (micrometric scale) with laboratory tests (centimetric scale), an upscaling method must be implemented. In this context, this work proposes a novel methodology for the digital characterization of rock properties at the plug scale. In particular, the developed workflow exploits and combines different technologies: micro-CT scanning, advanced image processing, machine learning, and CFD numerical simulation. The first step of the methodology is to acquire a low-resolution micro-CT scan of the entire core plug; machine learning techniques are then applied to decompose the digital plug (derived by image processing of the micro-CT scan) into representative elementary volume (REV)-type equivalent blocks, determining the optimum number of REV types and their locations. One or several high-resolution 3D fine-scale images are used to derive the petrophysical properties of each REV type from individual fluid flow simulations at the pore scale. The resulting REV-type properties are then scaled up to the core plug scale. Finally, the scaled-up results are compared with the results of core analysis tests. The overall methodology is validated on a heterogeneous carbonate rock.
- Published
- 2021
47. Global structure-guided learning framework for underwater image enhancement
- Author
-
Jinyuan Liu, Risheng Liu, Runjia Lin, and Xin Fan
- Subjects
Matching (graph theory) ,Artificial neural network ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Computer Graphics and Computer-Aided Design ,Object detection ,Edge detection ,Computer graphics ,Distortion ,Path (graph theory) ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software - Abstract
Underwater image enhancement (UIE), as an image processing technique, plays a vital role in computer vision. However, existing approaches treat the restoration process as a whole and thus cannot adequately handle the color distortion and low contrast in the enhanced images. In this paper, we propose a global–local-guided model that performs UIE in a coarse-to-fine manner to alleviate these issues. The model is divided into two paths: the global path estimates basic structure and color information, while the local path removes undesirable artifacts, e.g., noise, over-exposed regions, and blurred edges. By integrating two neural networks into our model, we can recover underwater images with clear textural details and vivid color. Besides, a learning-based weight map is introduced to keep the global and local paths consistent with each other; it balances the pixel intensity distribution from both sides and removes redundant information to a certain degree. Qualitative and quantitative experiments on various benchmarks demonstrate that our method tackles color distortion and blurred edges more effectively than several state-of-the-art methods, by a large margin. Finally, we also show that our method can be applied to various computer vision tasks, e.g., object detection, matching, and edge detection.
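The weight-map fusion of the two paths amounts to a per-pixel convex combination of their outputs. A minimal sketch (in the paper the weight map is a learned network output; here it is just an array):

```python
import numpy as np

def blend(global_out, local_out, weight):
    """Fuse the global path (structure/color) and the local path (artifact
    removal) with a per-pixel weight map clipped to [0, 1]."""
    w = np.clip(weight, 0.0, 1.0)
    return w * global_out + (1.0 - w) * local_out
```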
- Published
- 2021
48. In-line monitoring of focus shift by kerf width detection with coaxial thermal imaging during laser cutting
- Author
-
Matteo Pacher, Barbara Previtali, and Ema Vasileska
- Subjects
Materials science ,Laser cutting ,Mechanical Engineering ,Acoustics ,Process (computing) ,Image processing ,Focus shifting ,Laser ,Industrial and Manufacturing Engineering ,Computer Science Applications ,law.invention ,Coaxial process monitoring ,Control and Systems Engineering ,Position (vector) ,law ,Line (geometry) ,Cutting kerf monitoring ,Coaxial ,Focus (optics) ,Software ,Thermal lensing - Abstract
Nowadays, industrial laser cutting systems employ a fixed set of process parameters throughout the cut of the same workpiece, chosen as a good compromise between maximum productivity and surface quality. The process parameters are commonly set by trial-and-error experiments carried out on different materials and thicknesses, or less frequently by physical modelling. However, the final cut quality is not constant even when the process parameters are kept fixed, because the initial state of the laser cutting system degrades. A common issue in laser cutting is local heating of the optical components, caused by contamination and/or the high powers commonly employed, which shifts the focus position. This can worsen cut-edge quality and even result in loss of cut. Therefore, online measurement of the focus position is a requirement for a consistent process. An empirical method used in industrial practice for initially setting, and subsequently examining and adjusting, the focus position is to measure the kerf width of a straight-line cut performed with constant process parameters. This paper proposes an algorithm to monitor the kerf width and yield the estimated focus position in real time during the cutting process. The kerf width is observed with a coaxial camera module mounted on the laser head, which monitors the thermal interaction between the laser beam and the material. An image processing algorithm was developed to extract the kerf width from the acquired images, and its parameters were experimentally calibrated so that the extracted kerf width matches its physical measure. To understand the influence of the focus position on the cutting kerf, an experimental campaign was conducted and a regression model was subsequently fitted.
The real-time monitoring and computation of the kerf width, and its correlation with the focus position, open the way to closed-loop control of the focus shift, which would eventually improve process stability and repeatability.
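A sketch of kerf-width extraction from a single binarized coaxial frame (a strong simplification of the paper's calibrated algorithm; the threshold and the pixel pitch are illustrative values):

```python
import numpy as np

def kerf_width_px(frame, threshold):
    """Estimate the kerf width from one coaxial thermal image: binarize the
    bright melt emission, then take the median count of bright pixels per row
    over rows that contain the kerf. 'frame' is a 2-D intensity array."""
    binary = frame > threshold
    widths = binary.sum(axis=1)        # bright-pixel count per row
    widths = widths[widths > 0]        # keep only rows crossing the kerf
    return float(np.median(widths)) if widths.size else 0.0

def kerf_width_mm(width_px, mm_per_px):
    """Apply an experimentally calibrated pixel pitch."""
    return width_px * mm_per_px
```

The estimated physical kerf width would then be fed into a regression model of focus position versus kerf width, as described in the abstract.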
- Published
- 2021
49. Image Analysis Algorithm-Based Platform for Determining Micron and Higher Aggregate Size Distribution of Therapeutic IgG Using Brightfield and Fluorescence Microscope Images
- Author
-
Deepak Sonawat, Shravan Sreenivasan, and Anurag S. Rathore
- Subjects
Materials science ,Microscope ,Pharmaceutical Science ,Image processing ,Grayscale ,law.invention ,Protein Aggregates ,law ,Digital image processing ,Image Processing, Computer-Assisted ,Fluorescence microscope ,Pharmacology (medical) ,Fluorescent Dyes ,Pharmacology ,Microscopy, Confocal ,Optical Imaging ,Organic Chemistry ,Aggregate (data warehouse) ,Temperature ,Hydrogen-Ion Concentration ,Thresholding ,Fluorescence ,Immunoglobulin G ,Molecular Medicine ,Stress, Mechanical ,Algorithm ,Algorithms ,Software ,Biotechnology - Abstract
A platform for determining the size distribution of micron (1–100 μm) and larger (> 100 μm) aggregates of therapeutic IgG has been established using image processing algorithms for brightfield and fluorescence microscope images. The algorithm for brightfield images involves conversion to grayscale followed by pixel-based and size-based thresholding; morphological operations are then applied and the size distribution of the aggregates is extracted. Fluorescence images of mAb aggregates tagged with a fluorescent dye were captured using a widefield fluorescence microscope, a confocal laser scanning microscope, and a Cytell Cell Imaging System, and were processed with a series of denoising steps followed by thresholding and morphological operations. The samples were subjected to different stresses; aggregates were visible under the microscope for samples subjected to bubbling, stirring, and temperature stress. The images of these aggregates were effectively denoised and their size distribution was analyzed with the algorithm. The overall aggregate size distribution obtained by image processing spanned the micron and larger size range. The sizes obtained from brightfield image processing were validated using images of liquid chromatography resins. Furthermore, the aggregate size distribution obtained by image processing was compared with experimental techniques such as the Mastersizer 2000 and Micro Flow Imaging. Analysis of IgG aggregates by image processing could thus serve as an orthogonal methodology to the existing approaches.
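The brightfield pipeline described above (grayscale, pixel-based threshold, then size-based threshold) can be sketched with a small 4-connected component labeller (a stand-in for the paper's morphological operations; converting pixel areas to physical sizes would additionally require the microscope's calibrated scale):

```python
import numpy as np
from collections import deque

def binarize(gray, pixel_thresh):
    """Pixel-based thresholding of a grayscale image (values in [0, 255])."""
    return gray > pixel_thresh

def aggregate_sizes(mask, min_size=1):
    """Label 4-connected components and return their pixel areas, discarding
    components smaller than min_size (size-based thresholding)."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    sizes = []
    next_label = 0
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and labels[si, sj] == 0:
                next_label += 1                 # new component: flood fill it
                q = deque([(si, sj)])
                labels[si, sj] = next_label
                area = 0
                while q:
                    i, j = q.popleft()
                    area += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < h and 0 <= nj < w
                                and mask[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = next_label
                            q.append((ni, nj))
                sizes.append(area)
    return [s for s in sizes if s >= min_size]
```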
- Published
- 2021
50. Morphological control enables nanometer-scale dissection of cell-cell signaling complexes
- Author
- Liam P. Dow, Guido Gaietta, Yair Kaufman, Mark F. Swift, Moara Lemos, Kerry Lane, Matthew Hopcroft, Armel Bezault, Cécile Sauvanet, Niels Volkmann, Beth L. Pruitt, Dorit Hanein, University of California [Santa Barbara] (UC Santa Barbara), University of California (UC), The Scintillon Institute, Études structurales de machines moléculaires in cellulo - Structural studies of macromolecular machines in cellula, Institut Pasteur [Paris] (IP)-Centre National de la Recherche Scientifique (CNRS), Imagerie structurale - Structural Image Analysis, Institut Pasteur [Paris] (IP)-Centre National de la Recherche Scientifique (CNRS)-Université Paris Cité (UPCité), This work was supported by National Institutes of Health grant R01 GM119948 (N.V., D.H., and B.L.P.), NSF CMMI-183476 (B.L.P.) and seed funding from UCSB CNSI. K.L. was supported by NSF GRFP and UCSB fellowship funding., L.P.D. acknowledges useful conversations with Dr. Leeya Engel (Stanford University). G.G. and D.H. thank the Nelson lab for their generous gift of the pEGFP-C1-ACAT plasmid and Dr. Kathleen A. Siemers for helping with the plasmid, characterization work, and for providing the anti-alpha catenin antibody used in the immunofluorescence experiments. The authors acknowledge the use of the Nanostructures Cleanroom Facility and Microfluidics Lab within the California NanoSystems Institute, supported by the University of California, Santa Barbara and the University of California, Office of the President. The authors acknowledge the use of the Titan Krios, Tecnai Spirit T12 and auxiliary equipment at the cryo-EM unit of the Sanford Burnham Prebys Medical Discovery Institute, which was created in part with the support of US National Institutes of Health Grant S10-OD012372 (D.H.) and Pew Charitable Trust 864K625 innovation award funds (D.H.). The authors acknowledge access to the Titan Krios, Glacios, and Aquilos-2 instruments at the NanoImaging Core of the Institut Pasteur.
The NanoImaging Core at Institut Pasteur was created with the help of a grant from the French Government’s Investissements d’Avenir program (EQUIPEX CACSICE - Centre d’analyse de systèmes complexes dans les environnements complexes, ANR-11-EQPX-0008)., and ANR-11-EQPX-0008,CACSICE,Centre d'analyse de systèmes complexes dans les environnements complexes(2011)
- Subjects
Multidisciplinary ,Image Processing ,1.1 Normal biological development and functioning ,[SDV]Life Sciences [q-bio] ,Cryoelectron Microscopy ,Proteins ,General Physics and Astronomy ,Bioengineering ,General Chemistry ,Carbon ,General Biochemistry, Genetics and Molecular Biology ,Computer-Assisted ,Underpinning research ,Nanotechnology ,Generic health relevance ,Signal Transduction - Abstract
Protein micropatterning enables robust control of cell positioning on electron-microscopy substrates for cryogenic electron tomography (cryo-ET). However, the combination of regulated cell boundaries and the underlying electron-microscopy substrate (EM-grids) provides a poorly understood microenvironment for cell biology. Because substrate stiffness and morphology affect cellular behavior, we devised protocols to characterize the nanometer-scale details of the protein micropatterns on EM-grids by combining cryo-ET, atomic force microscopy, and scanning electron microscopy. Measuring force displacement characteristics of holey carbon EM-grids, we found that their effective spring constant is similar to physiological values expected from skin tissues. Despite their apparent smoothness at light-microscopy resolution, spatial boundaries of the protein micropatterns are irregular at nanometer scale. Our protein micropatterning workflow provides the means to steer both positioning and morphology of cell doublets to determine nanometer details of punctate adherens junctions. Our workflow serves as the foundation for studying the fundamental structural changes governing cell-cell signaling.
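The effective spring constant mentioned above comes from force-displacement measurements (here, AFM-style indentation of the holey carbon film). Assuming a linear Hookean response F = k·x over the probed range, k can be recovered as a least-squares slope through the origin. This sketch is illustrative only; the data values are synthetic, not measurements from the paper.

```python
# Hedged sketch: estimating an effective spring constant from
# force-displacement pairs under the assumption F = k * x.
# With forces in nN and displacements in nm, k comes out in nN/nm = N/m.
def spring_constant(displacements_nm, forces_nN):
    """Least-squares slope through the origin: k = sum(F*x) / sum(x*x)."""
    num = sum(f * x for f, x in zip(forces_nN, displacements_nm))
    den = sum(x * x for x in displacements_nm)
    return num / den

# Synthetic example (illustrative values, not from the study):
k = spring_constant([0.0, 10.0, 20.0, 30.0], [0.0, 0.5, 1.0, 1.5])
```

A forced-through-origin fit is appropriate here because zero displacement must correspond to zero restoring force; a general linear regression would be used instead if the deflection baseline were uncertain.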
- Published
- 2022