96 results for "Hitoshi Iyatomi"
Search Results
2. LeafGAN: An Effective Data Augmentation Method for Practical Plant Disease Diagnosis
- Author
- Hitoshi Iyatomi, Quan Huu Cap, Hiroyuki Uga, and Satoshi Kagiwada
- Subjects
FOS: Computer and information sciences, Computer science, Computer Vision and Pattern Recognition (cs.CV), Deep learning, Computer Science - Computer Vision and Pattern Recognition, Disease classification, Overfitting, Machine learning, Plant disease, Control and Systems Engineering, Artificial intelligence, Electrical and Electronic Engineering, Test data - Abstract
Many applications for the automated diagnosis of plant disease have been developed based on the success of deep learning techniques. However, these applications often suffer from overfitting, and the diagnostic performance drops drastically when they are applied to test datasets from new environments. In this paper, we propose LeafGAN, a novel image-to-image translation system with its own attention mechanism. LeafGAN generates a wide variety of diseased images via transformation from healthy images, serving as a data augmentation tool for improving the performance of plant disease diagnosis. Thanks to its own attention mechanism, our model can transform only the relevant areas of images with a variety of backgrounds, thus enriching the versatility of the training images. Experiments with five-class cucumber disease classification show that data augmentation with vanilla CycleGAN does not help to improve generalization, i.e., disease diagnostic performance increased by only 0.7% from the baseline. In contrast, LeafGAN boosted the diagnostic performance by 7.4%. We also visually confirmed that the images generated by our LeafGAN were of much better quality and more convincing than those generated by vanilla CycleGAN. The code is available publicly at: https://github.com/IyatomiLab/LeafGAN. (A minimal sketch of the masked-translation idea follows this record.), Comment: Accepted as a regular paper in the IEEE Transactions on Automation Science and Engineering (T-ASE)
- Published
- 2022
- Full Text
- View/download PDF
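Below is a minimal, illustrative sketch (not the authors' released code; see the repository above for that) of the masked image-to-image translation idea the abstract describes: a generator transforms a healthy image, and a soft attention mask restricts the change to leaf regions so the background is left untouched. `generator` and `mask_net` are hypothetical stand-ins for LeafGAN's CycleGAN generator and its attention module.

```python
import torch
import torch.nn as nn

class MaskedTranslation(nn.Module):
    """Blend a translated image with its input using a soft attention mask."""
    def __init__(self, generator: nn.Module, mask_net: nn.Module):
        super().__init__()
        self.generator = generator  # healthy -> diseased translator (hypothetical)
        self.mask_net = mask_net    # predicts leaf-region logits (hypothetical)

    def forward(self, healthy: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.mask_net(healthy))   # (B, 1, H, W) soft leaf mask
        translated = self.generator(healthy)           # (B, 3, H, W)
        # transform only the masked (leaf) area; keep the background as-is
        return mask * translated + (1.0 - mask) * healthy
```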
3. DM2S2: Deep Multimodal Sequence Sets With Hierarchical Modality Attention
- Author
- Shunsuke Kitada, Yuki Iwazaki, Riku Togashi, and Hitoshi Iyatomi
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Computation and Language, General Computer Science, Computer Science - Artificial Intelligence, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, General Engineering, Multimedia (cs.MM), Machine Learning (cs.LG), Artificial Intelligence (cs.AI), General Materials Science, Electrical and Electronic Engineering, Computation and Language (cs.CL), Computer Science - Multimedia - Abstract
There is increasing interest in the use of multimodal data in various web applications, such as digital advertising and e-commerce. Typical methods for extracting important information from multimodal data rely on a mid-fusion architecture that combines the feature representations from multiple encoders. However, as the number of modalities increases, several potential problems with the mid-fusion model structure arise, such as an increase in the dimensionality of the concatenated multimodal features and missing modalities. To address these problems, we propose a new concept that considers multimodal inputs as a set of sequences, namely, deep multimodal sequence sets (DM$^2$S$^2$). Our set-aware concept consists of three components that capture the relationships among multiple modalities: (a) a BERT-based encoder to handle the inter- and intra-order of elements in the sequences, (b) intra-modality residual attention (IntraMRA) to capture the importance of the elements in a modality, and (c) inter-modality residual attention (InterMRA) to further enhance the importance of elements at modality-level granularity. Our concept exhibits performance that is comparable to or better than that of previous set-aware models. Furthermore, we demonstrate that visualization of the learned InterMRA and IntraMRA weights can provide an interpretation of the prediction results. (A minimal sketch of the two attention levels follows this record.), Comment: 12 pages, 3 figures. Accepted by IEEE Access on Nov. 3, 2022
- Published
- 2022
- Full Text
- View/download PDF
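A minimal sketch, under assumed shapes, of the two attention levels named in the abstract above: element-level residual attention within each modality (IntraMRA), then modality-level residual attention (InterMRA) over the pooled modality vectors. Only the concept names come from the record; the layer details here are guesses.

```python
import torch
import torch.nn as nn

class ResidualAttention(nn.Module):
    """Score each vector, then re-weight it with a residual connection."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, D)
        w = torch.softmax(self.score(x), dim=1)           # attention over the N items
        return x + w * x                                  # residual re-weighting

class SetFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.intra = ResidualAttention(dim)  # IntraMRA: elements within a modality
        self.inter = ResidualAttention(dim)  # InterMRA: across modalities

    def forward(self, modalities):            # list of (B, N_i, D) tensors
        pooled = torch.stack([self.intra(m).mean(dim=1) for m in modalities], dim=1)
        return self.inter(pooled).mean(dim=1)  # (B, D) fused representation
```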
4. Image analysis in advanced skin imaging technology
- Author
- Lei Bi, M. Emre Celebi, Hitoshi Iyatomi, Pablo Fernandez-Penas, and Jinman Kim
- Subjects
Health Informatics, Software, Computer Science Applications - Published
- 2023
- Full Text
- View/download PDF
5. Disease-Oriented Image Embedding With Pseudo-Scanner Standardization for Content-Based Image Retrieval on 3D Brain MRI
- Author
- Yusuke Chayama, Hayato Arai, Yuto Onga, Hitoshi Iyatomi, Kumpei Ikuta, and Kenichi Oishi
- Subjects
FOS: Computer and information sciences, convolutional autoencoders, Scanner, General Computer Science, Computer science, Computer Vision and Pattern Recognition (cs.CV), Big data, Computer Science - Computer Vision and Pattern Recognition, data harmonization, Content-based image retrieval, ADNI, General Materials Science, Image retrieval, CBIR, General Engineering, Pattern recognition, Spectral clustering, CycleGAN, Embedding, Artificial intelligence, data standardization - Abstract
To build a robust and practical content-based image retrieval (CBIR) system that is applicable to a clinical brain MRI database, we propose a new framework -- disease-oriented image embedding with pseudo-scanner standardization (DI-PSS) -- that consists of two core techniques: data harmonization and a dimension reduction algorithm. DI-PSS uses skull stripping and CycleGAN-based image transformations that first map each scan to a standard brain and then transform it into a brain image as if acquired with a given reference scanner. A 3D convolutional autoencoder (3D-CAE) with deep metric learning then acquires a low-dimensional embedding that better reflects the characteristics of the disease. The effectiveness of our proposed framework was tested on T1-weighted MRIs selected from the Alzheimer's Disease Neuroimaging Initiative and the Parkinson's Progression Markers Initiative. We confirmed that our PSS greatly reduced the variability of low-dimensional embeddings caused by different scanners and datasets. Compared with the baseline condition, our PSS reduced the variability in the distance from Alzheimer's disease (AD) to clinically normal (CN) and Parkinson's disease (PD) cases by 15.8-22.6% and 18.0-29.9%, respectively. These properties allow DI-PSS to generate lower-dimensional representations that are more amenable to disease classification. In AD and CN classification experiments based on spectral clustering, PSS improved the average accuracy and macro-F1 by 6.2% and 10.7%, respectively. Given the potential of DI-PSS for harmonizing images scanned by MRI scanners that were not used to scan the training data, we expect DI-PSS to be suitable for application to the large number of legacy MRIs scanned in heterogeneous environments. (A minimal sketch of a 3D convolutional encoder follows this record.), Comment: 13 pages, 7 figures
- Published
- 2021
- Full Text
- View/download PDF
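A minimal sketch of the kind of 3D convolutional encoder that could produce the low-dimensional embedding described above. The layer sizes, and the omission of the decoder and the deep-metric-learning loss, are simplifications of ours, not the paper's specification.

```python
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    """Encode a brain volume (B, 1, D, H, W) into a low-dimensional embedding."""
    def __init__(self, latent_dim: int = 150):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # collapse the remaining spatial dims
        )
        self.fc = nn.Linear(32, latent_dim)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(volume).flatten(1))
```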
6. Key Area Acquisition Training for Practical Image-based Plant Disease Diagnosis
- Author
- Kaito Odagiri, Shogo Shibuya, Quan Huu Cap, and Hitoshi Iyatomi
- Published
- 2022
- Full Text
- View/download PDF
7. Super-Resolution for Brain MR Images from a Significantly Small Amount of Training Data
- Author
- Kumpei Ikuta, Hitoshi Iyatomi, and Kenichi Oishi, on behalf of the Alzheimer’s Disease Neuroimaging Initiative
- Published
- 2022
- Full Text
- View/download PDF
8. Loc-VAE: Learning Structurally Localized Representation from 3D Brain MR Images for Content-Based Image Retrieval
- Author
- Kei Nishimaki, Kumpei Ikuta, Yuto Onga, Hitoshi Iyatomi, and Kenichi Oishi
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Quantitative Biology - Neurons and Cognition, Computer Vision and Pattern Recognition (cs.CV), FOS: Biological sciences, Image and Video Processing (eess.IV), Computer Science - Computer Vision and Pattern Recognition, FOS: Electrical engineering, electronic engineering, information engineering, Neurons and Cognition (q-bio.NC), Electrical Engineering and Systems Science - Image and Video Processing, Information Retrieval (cs.IR), Computer Science - Information Retrieval, Machine Learning (cs.LG) - Abstract
Content-based image retrieval (CBIR) systems are an emerging technology that supports reading and interpreting medical images. Since 3D brain MR images are high dimensional, dimensionality reduction is necessary for CBIR using machine learning techniques. In addition, for a reliable CBIR system, each dimension in the resulting low-dimensional representation must be associated with a neurologically interpretable region. We propose a localized variational autoencoder (Loc-VAE) that provides neuroanatomically interpretable low-dimensional representations from 3D brain MR images for clinical CBIR. Loc-VAE is based on $\beta$-VAE with the additional constraint that each dimension of the low-dimensional representation corresponds to a local region of the brain. The proposed Loc-VAE is capable of acquiring a representation that preserves disease features and is highly localized, even under a high compression ratio (4096:1). The low-dimensional representation obtained by Loc-VAE improved the locality measure of each dimension by 4.61 points compared to naive $\beta$-VAE, while maintaining comparable brain reconstruction capability and information about the diagnosis of Alzheimer's disease. (A minimal sketch of the objective follows this record.), Comment: 6 pages, 6 figures. Accepted at the International Conference on Systems, Man, and Cybernetics (IEEE SMC '22)
- Published
- 2022
- Full Text
- View/download PDF
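A minimal sketch of a $\beta$-VAE-style objective with an extra locality term, in the spirit of the abstract above. The reconstruction and KL terms are the standard $\beta$-VAE loss; `locality_penalty` stands in for the paper's constraint tying each latent dimension to a local brain region, whose exact form is not reproduced here.

```python
import torch
import torch.nn.functional as F

def loc_vae_loss(recon, target, mu, logvar, locality_penalty, beta=4.0, lam=1.0):
    """beta-VAE loss plus an assumed locality term (illustrative only)."""
    recon_loss = F.mse_loss(recon, target)
    # KL divergence between N(mu, sigma^2) and the standard normal prior
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl + lam * locality_penalty
```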
9. Ad Creative Discontinuation Prediction with Multi-Modal Multi-Task Neural Survival Networks
- Author
- Shunsuke Kitada, Hitoshi Iyatomi, and Yoshifumi Seki
- Subjects
Fluid Flow and Transfer Processes, FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Computation and Language, ad creative, deep learning, online advertising, survival prediction, Computer Science - Artificial Intelligence, Process Chemistry and Technology, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, General Engineering, Computer Science - Information Retrieval, Computer Science Applications, Machine Learning (cs.LG), Artificial Intelligence (cs.AI), General Materials Science, Instrumentation, Computation and Language (cs.CL), Information Retrieval (cs.IR) - Abstract
Discontinuing ad creatives at an appropriate time is one of the most important ad operations that can have a significant impact on sales. Such operational support for ineffective ads has been less explored than that for effective ads. After pre-analyzing 1,000,000 real-world ad creatives, we found that there are two types of discontinuation: short-term (i.e., cut-out) and long-term (i.e., wear-out). In this paper, we propose a practical prediction framework for the discontinuation of ad creatives, with a hazard function-based loss function inspired by survival analysis. Our framework predicts the discontinuations with a multi-modal deep neural network that takes the ad creative as input (e.g., text, categorical, image, and numerical features). To improve the prediction performance for the two different types of discontinuation and for the ad creatives that contribute to sales, we introduce two new techniques: (1) a two-term estimation technique with multi-task learning and (2) a click-through rate-weighting technique for the loss function. We evaluated our framework using a large-scale ad creative dataset including impressions on the scale of 10 billion. In terms of the concordance index (short: 0.896, long: 0.939, and overall: 0.792), our framework achieved significantly better performance than the conventional method (0.531). Additionally, we confirmed that our framework (i) demonstrated the same degree of discontinuation effect as manual operations for short-term cases, and (ii) accurately predicted the ad discontinuation order, which is important for long-running ad creatives, for long-term cases. (A minimal sketch of a hazard-based survival loss follows this record.), Comment: 23 pages, 5 figures. Accepted by Appl. Sci. on March 29th, 2022
- Published
- 2022
- Full Text
- View/download PDF
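A minimal sketch of a discrete-time, hazard-based survival loss of the kind the framework above builds on. `hazards` are per-interval discontinuation probabilities from the network; `event_time` is the observed interval; `observed` marks whether discontinuation actually occurred (1) or the creative was censored (0). The optional CTR weights mirror the paper's click-through-rate weighting idea; the exact form here is an assumption.

```python
import torch

def discrete_hazard_loss(hazards, event_time, observed, ctr_weight=None, eps=1e-7):
    """Negative log-likelihood for discrete-time survival (illustrative form).

    hazards: (B, T) per-interval discontinuation probabilities in (0, 1)
    event_time: (B,) long, interval index of discontinuation or censoring
    observed: (B,) float, 1.0 if discontinuation occurred, 0.0 if censored
    """
    T = hazards.shape[1]
    t = torch.arange(T, device=hazards.device).unsqueeze(0)       # (1, T)
    survived = (t < event_time.unsqueeze(1)).float()              # intervals survived
    log_survival = (torch.log(1 - hazards + eps) * survived).sum(dim=1)
    h_event = hazards.gather(1, event_time.unsqueeze(1)).squeeze(1)
    nll = -(log_survival + observed * torch.log(h_event + eps))
    if ctr_weight is not None:                                    # CTR weighting idea
        nll = nll * ctr_weight
    return nll.mean()
```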
10. Feedback is Needed for Retakes: An Explainable Poor Image Notification Framework for the Visually Impaired
- Author
- Kazuya Ohata, Shunsuke Kitada, and Hitoshi Iyatomi
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Computation and Language, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Computer Science - Human-Computer Interaction, Computation and Language (cs.CL), Human-Computer Interaction (cs.HC), Machine Learning (cs.LG) - Abstract
We propose a simple yet effective image captioning framework that can determine the quality of an image and notify the user of the reasons for any flaws. Our framework first determines the quality of an image and generates a caption only if the image is determined to be of high quality. If the quality is low, the user is notified of the detected flaws and asked to retake the photo, and this cycle is repeated until the input image is deemed to be of high quality. As a component of the framework, we trained and evaluated a low-quality image detection model that simultaneously learns the difficulty of recognizing images and their individual flaws, and we demonstrated that our proposal can explain the reasons for flaws with sufficient accuracy. We also evaluated a dataset with low-quality images removed by our framework and found improved values for all four common metrics (BLEU-4, METEOR, ROUGE-L, and CIDEr), confirming an improvement in general-purpose image captioning capability. Our framework would assist the visually impaired, who have difficulty judging image quality. (A minimal sketch of the retake loop follows this record.), Comment: 6 pages, 4 figures. Accepted at 2022 IEEE 19th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET) as a full paper
- Published
- 2022
- Full Text
- View/download PDF
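A minimal sketch of the retake loop described above. `capture_image`, `assess_quality`, and `generate_caption` are hypothetical callables standing in for the paper's components.

```python
def caption_with_retakes(capture_image, assess_quality, generate_caption):
    """Loop until an image passes the quality gate, then caption it."""
    while True:
        image = capture_image()
        flaws = assess_quality(image)        # e.g. ["blur", "too dark"], or []
        if not flaws:
            return generate_caption(image)   # only high-quality images are captioned
        print("Please retake; detected flaws:", ", ".join(flaws))
```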
11. Trends and Challenges of Automatic Diagnosis Techniques for Plant Diseases
- Author
- Hitoshi Iyatomi
- Published
- 2019
- Full Text
- View/download PDF
12. PPIG: Productive and Pathogenic Image Generation for Plant Disease Diagnosis
- Author
- Satoi Kanno, Shunta Nagasawa, Satoshi Kagiwada, Hitoshi Iyatomi, Syogo Shibuya, Hiroyuki Uga, and Quan Huu Cap
- Subjects
Image generation, Computer science, Pattern recognition, Overfitting, Diagnostic system, Plant disease, Robustness (computer science), Task analysis, Medical diagnosis, Performance improvement - Abstract
Image-based autonomous diagnosis of plants is a difficult task, since plant symptoms are visually subtle. This subtlety leads the system to overfit, as it sometimes responds to non-essential parts of images such as the background or sunlight conditions, causing a significant drop in performance when diagnosing diseases in different test fields. Several data augmentation methods utilizing generative adversarial networks (GANs) have been proposed to address this overfitting problem, but the performance improvement is limited by the limited variety of generated images. This study proposes productive and pathogenic image generation (PPIG), a framework for generating varied, high-quality plant images to train diagnostic systems. PPIG comprises two phases: a bulk production phase and a pathogenic phase. In the first phase, a number of healthy leaf images are generated to form the basis for the generation of disease images. Then, in the second phase, symptomatic characteristics are added to the leaf parts of the generated healthy images. We conducted experiments evaluating PPIG on test images taken in fields different from those of the training images, assuming six disease classes of cucumber leaves. The proposed PPIG can generate natural-looking healthy and diseased images, and data augmentation using these images effectively improved the robustness of the diagnostic system. Experiments on 8,834 test images taken in fields different from those of the 53,045 training images show that our proposal improved the disease diagnostic performance over the baseline by 9.4% in macro-average F1-score. Moreover, it also outperformed the previous cutting-edge data augmentation methodology by 4.5%.
- Published
- 2021
- Full Text
- View/download PDF
13. Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training
- Author
- Shunsuke Kitada and Hitoshi Iyatomi
- Subjects
FOS: Computer and information sciences, Computer Science - Computation and Language, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Artificial Intelligence, Computation and Language (cs.CL) - Abstract
Although attention mechanisms have become fundamental components of deep learning models, they are vulnerable to perturbations, which may degrade the prediction performance and model interpretability. Adversarial training (AT) for attention mechanisms has successfully reduced such drawbacks by considering adversarial perturbations. However, this technique requires label information, and thus, its use is limited to supervised settings. In this study, we explore the concept of incorporating virtual AT (VAT) into the attention mechanisms, by which adversarial perturbations can be computed even from unlabeled data. To realize this approach, we propose two general training techniques, namely VAT for attention mechanisms (Attention VAT) and "interpretable" VAT for attention mechanisms (Attention iVAT), which extend AT for attention mechanisms to a semi-supervised setting. In particular, Attention iVAT focuses on the differences in attention; thus, it can efficiently learn clearer attention and improve model interpretability, even with unlabeled data. Empirical experiments based on six public datasets revealed that our techniques provide better prediction performance than conventional AT-based as well as VAT-based techniques, and stronger agreement with evidence provided by humans in detecting important words in sentences. Moreover, our proposal offers these advantages without requiring careful selection of the unlabeled data. That is, even if the model using our VAT-based technique is trained on unlabeled data from a source other than the target task, both the prediction performance and model interpretability can be improved. (A minimal sketch of the VAT perturbation step follows this record.), Comment: 18 pages, 3 figures. Accepted for publication in Springer Applied Intelligence (APIN)
- Published
- 2021
- Full Text
- View/download PDF
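A minimal sketch of the virtual adversarial training recipe applied to attention, in the spirit of Attention VAT: find a small perturbation of the attention scores that maximally changes the output distribution (measured by KL divergence), then penalize the model's sensitivity to it. The `model(x, attn_noise=...)` and `model.attention_scores(x)` interfaces are hypothetical; the paper's exact formulation differs.

```python
import torch
import torch.nn.functional as F

def attention_vat_loss(model, x, xi=1e-6, eps=1.0):
    """One VAT step on attention scores (hypothetical model interface)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=-1)                 # unperturbed predictions
    d = torch.randn_like(model.attention_scores(x))     # random initial direction
    d = (xi * F.normalize(d, dim=-1)).requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x, attn_noise=d), dim=-1), p,
                  reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]                # direction of max change
    r_adv = eps * F.normalize(grad.detach(), dim=-1)    # virtual adversarial noise
    q = F.log_softmax(model(x, attn_noise=r_adv), dim=-1)
    return F.kl_div(q, p, reduction="batchmean")        # smoothness penalty
```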
14. MIINet: An Image Quality Improvement Framework for Supporting Medical Diagnosis
- Author
- Quan Huu Cap, Hitoshi Iyatomi, and Atsushi Fukuda
- Subjects
Image quality, Computer science, Computer vision, Artificial intelligence, Medical diagnosis - Abstract
Medical images have been indispensable and useful tools for supporting medical experts in making diagnostic decisions. However, medical images, especially throat and endoscopy images, are often hazy, out of focus, or unevenly illuminated, which can make the diagnosis process difficult for doctors. In this paper, we propose MIINet, a novel image-to-image translation network that improves the quality of medical images by translating low-quality images into high-quality clean versions in an unsupervised manner. Our MIINet is not only capable of generating high-resolution clean images, but also preserves the attributes of the original images, making the diagnostic process more favorable for doctors. Experiments on dehazing 100 practical throat images show that our MIINet largely improves the mean doctor opinion score (MDOS), which assesses the quality and the reproducibility of the images, from the baseline of 2.36 to 4.11, while images dehazed by CycleGAN received a lower score of 3.83. Three physicians confirmed that MIINet is satisfactory in supporting throat disease diagnosis from original low-quality images.
- Published
- 2021
- Full Text
- View/download PDF
15. LASSR: Effective Super-Resolution Method for Plant Disease Diagnosis
- Author
- Quan Huu Cap, Satoshi Kagiwada, Hitoshi Iyatomi, Hiroki Tani, and Hiroyuki Uga
- Subjects
FOS: Computer and information sciences, Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Horticulture, FOS: Electrical engineering, electronic engineering, information engineering, Artifact (error), Training set, Deep learning, Image and Video Processing (eess.IV), Forestry, Pattern recognition, Electrical Engineering and Systems Science - Image and Video Processing, Superresolution, Plant disease, Computer Science Applications, Leaf disease, Artificial intelligence, Agronomy and Crop Science - Abstract
The collection of high-resolution training data is crucial in building robust plant disease diagnosis systems, since such data have a significant impact on diagnostic performance. However, they are very difficult to obtain and are not always available in practice. Deep learning-based techniques, and particularly generative adversarial networks (GANs), can be applied to generate high-quality super-resolution images, but these methods often produce unexpected artifacts that can lower the diagnostic performance. In this paper, we propose a novel artifact-suppression super-resolution method that is specifically designed for diagnosing leaf disease, called Leaf Artifact-Suppression Super-Resolution (LASSR). Thanks to its own artifact removal module, which detects and suppresses artifacts to a considerable extent, LASSR can generate much more pleasing, high-quality images than the state-of-the-art ESRGAN model. Experiments based on a five-class cucumber disease (including healthy) discrimination model show that training with data generated by LASSR significantly boosts the performance on an unseen test dataset, by over 21% compared with the baseline, and that our approach is more than 2% better than a model trained with images generated by ESRGAN.
- Published
- 2020
- Full Text
- View/download PDF
16. AraDIC: Arabic Document Classification using Image-Based Character Embeddings and Class-Balanced Loss
- Author
- Hitoshi Iyatomi, Shunsuke Kitada, and Mahmoud Daif
- Subjects
Feature engineering, FOS: Computer and information sciences, Arabic, Computer science, Classifier (linguistics), Computer Science - Computation and Language, Deep learning, Document classification, Text segmentation, Modern Standard Arabic, Classical Arabic, Artificial intelligence, Computation and Language (cs.CL), Natural language processing - Abstract
Classical and some deep learning techniques for Arabic text classification often depend on complex morphological analysis, word segmentation, and hand-crafted feature engineering. These can be eliminated by using character-level features. We propose a novel end-to-end Arabic document classification framework, Arabic document image-based classifier (AraDIC), inspired by work on image-based character embeddings. AraDIC consists of an image-based character encoder and a classifier, trained in an end-to-end fashion using a class-balanced loss to deal with the long-tailed data distribution problem (a minimal sketch of this weighting follows this record). To evaluate the effectiveness of AraDIC, we created and published two datasets, the Arabic Wikipedia title (AWT) dataset and the Arabic poetry (AraP) dataset. To the best of our knowledge, this is the first image-based character embedding framework addressing the problem of Arabic text classification, and the first deep learning-based text classifier widely evaluated on Modern Standard Arabic, colloquial Arabic, and Classical Arabic. AraDIC shows performance improvements over classical and deep learning baselines of 12.29% and 23.05% for the micro and macro F-score, respectively.
- Published
- 2020
- Full Text
- View/download PDF
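The class-balanced loss mentioned in the abstract follows Cui et al.'s "effective number of samples" weighting. A minimal sketch of that weighting, paired here with an ordinary cross-entropy as an assumption:

```python
import torch
import torch.nn.functional as F

def class_balanced_weights(samples_per_class, beta=0.9999):
    """Per-class weights from the effective number of samples (Cui et al.)."""
    counts = torch.as_tensor(samples_per_class, dtype=torch.float)
    effective_num = 1.0 - torch.pow(beta, counts)   # 1 - beta^n_c
    weights = (1.0 - beta) / effective_num
    return weights / weights.sum() * len(counts)    # normalize to mean 1

# usage with a long-tailed class distribution:
# loss = F.cross_entropy(logits, labels, weight=class_balanced_weights([500, 20, 3]))
```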
17. Video-based Estimation System Using Convolutional Neural Networks for Audiences’ State in the Classroom and Discussion of its Essential Image Features
- Author
- Hitoshi Iyatomi and Daiki Shimada
- Subjects
Estimation, Multimedia, Computer science, Convolutional neural network, Computer vision, Artificial intelligence - Published
- 2017
- Full Text
- View/download PDF
18. Stochastic Gastric Image Augmentation for Cancer Detection from X-ray Images
- Author
- Jun Hashimoto, Quan Huu Cap, Hitoshi Iyatomi, Hideaki Okamoto, and Takakiyo Nomura
- Subjects
Computer science, Cancer, Gastric fold, Cancer detection, Convolutional neural network, Endoscopy, Computer-aided diagnosis, X-ray image, Radiology - Abstract
X-ray examinations are a common choice in mass screenings for gastric cancer. Compared to endoscopy and other common modalities, X-ray examinations have the significant advantage that they can be performed not only by radiologists but also by radiology technicians. However, the diagnosis of gastric X-ray images is very difficult, and it has been reported that the diagnostic accuracy of these images is only 85.5%. In this study, we propose a practical diagnosis support system for gastric X-ray images. An important component of our system is the proposed online data augmentation strategy named stochastic gastric image augmentation (sGAIA), which stochastically generates various enhanced images of gastric folds in X-ray images. The proposed sGAIA improves the detection performance for malignant regions by 6.9% in F1-score, and our system demonstrates promising screening performance for gastric cancer (recall of 92.3% with a precision of 32.4%) from X-ray images in a clinical setting, based on Faster R-CNN with ResNet101 networks. (A minimal sketch of a stochastic enhancement step follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
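A minimal sketch of a stochastic on-line enhancement step in the spirit of sGAIA. The specific transform used here (CLAHE with a randomly drawn clip limit) is an illustrative assumption of ours, not the paper's actual operator.

```python
import cv2
import numpy as np

def stochastic_enhance(gray: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly contrast-enhance an 8-bit grayscale X-ray image each epoch."""
    clip = float(rng.uniform(1.0, 4.0))                 # random enhancement strength
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    return clahe.apply(gray)
```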
19. Towards Explainable Melanoma Diagnosis: Prediction of Clinical Indicators Using Semi-supervised and Multi-task Learning
- Author
- Hitoshi Iyatomi and Seiya Murabayashi
- Subjects
Computer science, Deep learning, Multi-task learning, Machine learning, Labeled data, Artificial intelligence, Melanoma diagnosis, Reliability (statistics) - Abstract
Although image-based melanoma diagnosis has achieved a sufficient level of numerical accuracy, providing objective evidence is essential to enhance the explainability and reliability of this approach. Collecting label information based on quantitative clinical indicators is very expensive, meaning that the amount of labeled data available is limited. In this paper, we propose an effective method for predicting explainable melanoma indicators, defined by the 7-point checklist, in a situation where only a limited number of labeled data are available. Our proposal effectively utilizes virtual adversarial training as a semi-supervised learning framework together with multi-task learning, and gives favorable performance with only a very limited amount of expensive labeled data. The proposed method improves the final accuracy of melanoma diagnosis calculated from these predicted indicators by 7.5% (making it equivalent to expert dermatologists), based on 9,124 unlabeled images with diagnosis information added to the 226 base labeled training images.
- Published
- 2019
- Full Text
- View/download PDF
20. A comparable study: Intrinsic difficulties of practical plant diagnosis from wide-angle images
- Author
- Katsumasa Suwa, Hiroyuki Uga, Quan Huu Cap, Satoshi Kagiwada, Hitoshi Iyatomi, and Ryunosuke Kotani
- Subjects
FOS: Computer and information sciences, Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Disease, Overfitting, Machine learning, Deep learning, Visual object recognition, Plant disease, Object detection, Artificial intelligence, Test data - Abstract
Practical automated detection and diagnosis of plant disease from wide-angle images (i.e., in-field images containing multiple leaves, taken with a fixed-position camera) is a very important application for large-scale farm management, in view of the need to ensure global food security. However, developing automated systems for disease diagnosis is often difficult, because labeling a reliable wide-angle disease dataset from actual field images is very laborious. In addition, the potential similarities between the training and test data lead to a serious problem of model overfitting. In this paper, we investigate changes in performance when applying disease diagnosis systems to different scenarios involving wide-angle cucumber test data captured on real farms, and propose an effective diagnostic strategy. We show that leading object recognition techniques such as SSD and Faster R-CNN achieve excellent end-to-end disease diagnostic performance only on a test dataset collected from the same population as the training dataset (with F1-scores of 81.5-84.1% for diagnosed cases of disease), but their performance deteriorates markedly on a completely different test dataset (F1-scores of 4.4-6.2%). In contrast, our proposed two-stage systems, using independent leaf detection and leaf diagnosis stages, attain a promising diagnostic performance more than six times higher than that of the end-to-end systems (F1-scores of 33.4-38.9%) on an unseen target dataset. We also confirm the efficiency of our proposal by visual assessment, concluding that a two-stage model is a suitable and reasonable choice for practical applications. (A minimal sketch of the two-stage pipeline follows this record.), Comment: 7 pages, 3 figures
- Published
- 2019
- Full Text
- View/download PDF
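A minimal sketch of the two-stage strategy the abstract argues for: detect individual leaves first, then diagnose each crop separately. `leaf_detector` and `leaf_classifier` are hypothetical stand-ins for the paper's detector (e.g., SSD or Faster R-CNN) and its leaf-diagnosis classifier.

```python
def diagnose_wide_angle(image, leaf_detector, leaf_classifier):
    """Stage 1: locate leaves; stage 2: diagnose each detected leaf."""
    results = []
    for (x1, y1, x2, y2) in leaf_detector(image):   # hypothetical box format
        crop = image[y1:y2, x1:x2]                  # numpy-style H x W x C crop
        results.append(((x1, y1, x2, y2), leaf_classifier(crop)))
    return results                                  # per-leaf (location, diagnosis)
```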
21. AOP: An Anti-overfitting Pretreatment for Practical Image-based Plant Diagnosis
- Author
- Takumi Saikawa, Hiroyuki Uga, Satoshi Kagiwada, Hitoshi Iyatomi, and Quan Huu Cap
- Subjects
FOS: Computer and information sciences, Calibration (statistics), Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Pattern recognition, Overfitting, Artificial intelligence, Image-based - Abstract
In image-based plant diagnosis, clues related to the diagnosis are often unclear, and other factors such as image backgrounds often have a significant impact on the final decision. As a result, overfitting due to latent similarities in the dataset often occurs, and the diagnostic performance on unseen data (e.g., images from other farms) usually drops significantly. However, this problem has not been sufficiently explored, since many systems have shown excellent diagnostic performance owing to the bias caused by the similarities in the dataset. In this study, we investigate this problem through experiments using more than 50,000 images of cucumber leaves, and propose an anti-overfitting pretreatment (AOP) for realizing practical image-based plant diagnosis systems. The AOP detects the area of interest (leaf, fruit, etc.) and performs brightness calibration as a preprocessing step. The experimental results demonstrate that our AOP can improve the accuracy of diagnosis for unknown test images from different farms by 12.2% in a practical setting. (A minimal sketch of such a pretreatment follows this record.), Comment: To appear in the IEEE BigData 2019 Workshop on Big Food and Nutrition Data Management and Analysis
- Published
- 2019
- Full Text
- View/download PDF
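A minimal sketch of an AOP-style pretreatment: isolate the area of interest, then calibrate brightness. The ROI detector is a placeholder, and the mean-brightness normalization is a simplifying assumption of ours, not the paper's exact procedure.

```python
import numpy as np

def aop_pretreat(image: np.ndarray, detect_roi) -> np.ndarray:
    """Crop to the area of interest and normalize its brightness."""
    x1, y1, x2, y2 = detect_roi(image)             # hypothetical ROI detector
    crop = image[y1:y2, x1:x2].astype(np.float32)
    crop *= 128.0 / max(float(crop.mean()), 1e-6)  # simple brightness calibration
    return np.clip(crop, 0, 255).astype(np.uint8)
```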
22. Conversion Prediction Using Multi-task Conditional Attention Networks to Support the Creation of Effective Ad Creatives
- Author
- Hitoshi Iyatomi, Shunsuke Kitada, and Yoshifumi Seki
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Computation and Language, Human–computer interaction, Computer science, Computation and Language (cs.CL), Machine Learning (cs.LG) - Abstract
Accurately predicting conversions in advertisements is generally a challenging task, because such conversions do not occur frequently. In this paper, we propose a new framework to support the creation of high-performing ad creatives, including the accurate prediction of ad creative text conversions before delivery to the consumer. The proposed framework includes three key ideas: multi-task learning, conditional attention, and attention highlighting. Multi-task learning improves the prediction accuracy of conversions by predicting clicks and conversions simultaneously, addressing the difficulty of data imbalance. Conditional attention focuses the attention for each ad creative according to its genre and target gender, further improving conversion prediction accuracy, and attention highlighting visualizes important words and/or phrases based on the conditional attention. We evaluated the proposed framework with actual delivery history data (14,000 creatives displayed more than a certain number of times, from Gunosy Inc.), and confirmed that these ideas improve the prediction performance for conversions and visualize noteworthy words according to the creatives' attributes. (A minimal sketch of conditional attention follows this record.), Comment: 9 pages, 6 figures. Accepted at The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2019) as an applied data science paper
- Published
- 2019
- Full Text
- View/download PDF
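A minimal sketch of conditional attention as described above: attention over word representations is conditioned on genre and target-gender embeddings. The dimensions and the dot-product conditioning are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class ConditionalAttention(nn.Module):
    """Attention over word vectors conditioned on ad-creative attributes."""
    def __init__(self, dim: int, n_genres: int, n_genders: int):
        super().__init__()
        self.genre_emb = nn.Embedding(n_genres, dim)
        self.gender_emb = nn.Embedding(n_genders, dim)

    def forward(self, words, genre, gender):
        # words: (B, T, D); genre, gender: (B,) long indices
        cond = self.genre_emb(genre) + self.gender_emb(gender)   # (B, D) condition
        scores = torch.einsum("btd,bd->bt", words, cond)         # conditioned scores
        attn = torch.softmax(scores, dim=1)
        return torch.einsum("bt,btd->bd", attn, words)           # (B, D) summary
```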
23. Super-Resolution for Practical Automated Plant Disease Diagnosis System
- Author
- Satoshi Kagiwada, Hiroki Tani, Hiroyuki Uga, Quan Huu Cap, and Hitoshi Iyatomi
- Subjects
FOS: Computer and information sciences, Boosting (machine learning), Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Diagnostic accuracy, FOS: Electrical engineering, electronic engineering, information engineering, Preprocessor, Deep learning, Image and Video Processing (eess.IV), Disease classification, Pattern recognition, Electrical Engineering and Systems Science - Image and Video Processing, Superresolution, Plant disease, Bicubic interpolation, Artificial intelligence - Abstract
Automated plant diagnosis using images taken from a distance often suffers from insufficient resolution, which degrades diagnostic accuracy because the important external characteristics of symptoms are lost. In this paper, we propose an effective pre-processing method for improving the performance of automated plant disease diagnosis systems using super-resolution techniques. We investigate the efficiency of two different super-resolution methods by comparing disease diagnostic performance on practical original high-resolution, low-resolution, and super-resolved cucumber images. Our method generates super-resolved images that look very close to natural images with a 4$\times$ upscaling factor and is capable of recovering the lost detailed symptoms, largely boosting the diagnostic performance. Our model improves the disease classification accuracy by 26.9% over the bicubic interpolation method's 65.6%, leaving only a small gap (3% lower) from the original high-resolution result of 95.5%., Comment: Published as a conference paper at CISS 2019, Baltimore, MD, USA
- Published
- 2019
- Full Text
- View/download PDF
24. Efficient feature embedding of 3D brain MRI images for content-based image retrieval with deep metric learning
- Author
- Shingo Fujiyama, Hayato Arai, Yuto Onga, Yusuke Chayama, Hitoshi Iyatomi, and Kenichi Oishi
- Subjects
FOS: Computer and information sciences, Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Content-based image retrieval, FOS: Electrical engineering, electronic engineering, information engineering, Cluster analysis, Image retrieval, Image resolution, Image classification, Dimensionality reduction, Image and Video Processing (eess.IV), Data compression ratio, Pattern recognition, Electrical Engineering and Systems Science - Image and Video Processing, Autoencoder, Metric learning, Embedding, Artificial intelligence - Abstract
Increasing numbers of MRI brain scans, improvements in image resolution, and advancements in MRI acquisition technology are causing significant increases in the demand on radiologists' efforts to read and interpret brain MRIs. Content-based image retrieval (CBIR) is an emerging technology for reducing this burden by supporting the reading of medical images. High dimensionality is a major challenge in developing a CBIR system that is applicable to 3D brain MRIs. In this study, we propose a system called disease-oriented data concentration with metric learning (DDCML). In DDCML, we introduce deep metric learning to a 3D convolutional autoencoder (CAE). Our proposed DDCML scheme achieves a high dimensional compression rate (4096:1) while preserving the disease-related anatomical features that are important for medical image classification. The low-dimensional representation obtained by DDCML improved the clustering performance by 29.1% compared to plain 3D-CAE in terms of discriminating Alzheimer's disease patients from healthy subjects, and successfully reproduced the relationships of the severity of disease categories that were not included in the training., Comment: To appear in the IEEE BigData 2019 Workshop on Advances in High Dimensional (AdHD) Big Data
- Published
- 2019
- Full Text
- View/download PDF
25. Significant Dimension Reduction of 3D Brain MRI using 3D Convolutional Autoencoders
- Author
- Hitoshi Iyatomi, Hayato Arai, Yusuke Chayama, and Kenichi Oishi
- Subjects
Databases, Factual, Computer science, Feature extraction, Iterative reconstruction, Voxel, Image retrieval, Neuroradiology, Dimensionality reduction, Brain, Magnetic resonance imaging, Pattern recognition, Effective dimension, Visualization, Artificial intelligence - Abstract
Content-based image retrieval (CBIR) is a technology designed to retrieve images from a database based on visual features. While CBIR is highly desirable, it has not been applied to clinical neuroradiology, because clinically relevant neuroradiological features are swamped by a huge amount of noisy and unrelated voxel information. Thus, effective dimension reduction is the key to successful CBIR. We propose a novel dimensional compression method based on 3D convolutional autoencoders (3D-CAE), which we applied to the ADNI2 3D brain MRI dataset. Our method succeeded in compressing roughly 5 million voxels to only 150 dimensions while preserving clinically relevant neuroradiological features; the RMSE per voxel was as low as 8.4%, suggesting the promise of our method toward application to CBIR.
- Published
- 2018
26. A deep learning approach for on-site plant leaf detection
- Author
- Hiroyuki Uga, Katsumasa Suwa, Erika Fujita, Satoshi Kagiwada, Hitoshi Iyatomi, and Huu Quan Cap
- Subjects
Computer science, Deep learning, Early detection, Image frame, Detection performance, Computer vision, Surveillance camera, Artificial intelligence, Image resolution - Abstract
Plant diseases are a major problem in the worldwide agricultural sector. Early detection is therefore essential for reducing economic losses and mitigating the seriousness of the global food problem. Fast and accurate computer-based methods have been applied to detect plant diseases; however, to the best of our knowledge, all of those methodologies accept only narrow-range images, typically with one or a limited number of targets in the image frame, as their input. Thus, they are time-consuming and difficult to apply to on-site wide-range images (e.g., images or videos from a stationary surveillance camera). In this paper, we propose a leaf localization method for on-site wide-angle images based on a deep learning approach. Our method achieves a detection performance of 78.0% in F1-measure at 2.0 fps.
- Published
- 2018
- Full Text
- View/download PDF
27. One-dimensional convolutional neural networks for Android malware detection
- Author
- Hitoshi Iyatomi and Chihiro Hasegawa
- Subjects
Computer science, Computation, Byte, Convolutional neural network, Convolution, Mobile phone, Embedded system, Android malware, Malware - Abstract
In recent years, malware targeting Android OS has been increasing due to its rapid popularization. Several studies have been conducted on automated malware detection with machine learning approaches and have reported promising performance. However, they require a large amount of computation when running on the client, typically a mobile phone or similar device, so problems remain in terms of practicality. In this paper, we propose an accurate and lightweight Android malware detection method. Our method treats a very limited part of the raw APK (Android application package) file of the target as a short string and analyzes it with a one-dimensional convolutional neural network (1-D CNN). We used two different datasets, each consisting of 5,000 malware samples and 2,000 benign applications. We confirmed that our method, using only the last 512–1K bytes of the APK file, achieved 95.40–97.04% accuracy in discriminating malignancy under a 10-fold cross-validation strategy. (A minimal sketch of such a byte-level 1-D CNN follows this record.)
- Published
- 2018
- Full Text
- View/download PDF
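A minimal sketch of the byte-level 1-D CNN idea: embed the final bytes of an APK file and classify them. Only the input convention (the last 512–1K bytes treated as a string) comes from the record; the layer sizes here are assumptions.

```python
import torch
import torch.nn as nn

class ByteCNN(nn.Module):
    """Classify the tail bytes of an APK as benign vs. malware."""
    def __init__(self, emb: int = 16):
        super().__init__()
        self.embed = nn.Embedding(256, emb)              # one entry per byte value
        self.conv = nn.Sequential(
            nn.Conv1d(emb, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                     # global max over positions
        )
        self.head = nn.Linear(64, 2)

    def forward(self, tail_bytes: torch.Tensor):         # (B, L) byte values as long
        x = self.embed(tail_bytes).transpose(1, 2)       # (B, emb, L)
        return self.head(self.conv(x).squeeze(-1))       # (B, 2) logits
```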
28. Web application firewall using character-level convolutional neural network
- Author
- Hitoshi Iyatomi and Michiaki Ito
- Subjects
Computer science, Feature extraction, Convolutional neural network, Cross-validation, SQL injection, Web application, Pattern matching, Application firewall, Computer network - Abstract
Web applications can be exploited through malicious HTTP requests. Normally, a web application firewall (WAF) protects web applications from known attacks using pattern matching. However, introducing a WAF is usually expensive, as it requires patterns to be defined for each situation, and such a system cannot block unknown malicious requests. In this paper, we present an efficient machine learning approach to solve these issues. Our approach uses a character-level convolutional neural network (CLCNN) with very large global max-pooling to extract features from an HTTP request and classify it as normal or malicious. We evaluated our system on the HTTP DATASET CSIC 2010 dataset, achieving 98.8% accuracy under 10-fold cross validation with an average processing time of 2.35 ms per request.
- Published
- 2018
- Full Text
- View/download PDF
29. End-to-End Text Classification via Image-based Embedding using Character-level Networks
- Author
- Shunsuke Kitada, Ryunosuke Kotani, and Hitoshi Iyatomi
- Subjects
FOS: Computer and information sciences, Computer Science - Computation and Language, Computer science, Deep learning, Document classification, Text segmentation, Overfitting, Convolutional neural network, Artificial intelligence, Language model, Computation and Language (cs.CL), Natural language processing - Abstract
For analyzing and understanding languages with no explicit word boundaries, such as Japanese, Chinese, and Thai, it is desirable to perform appropriate word segmentation based on morphological analysis before computing word embeddings, but this is inherently difficult in these languages. In recent years, various language models based on deep learning have made remarkable progress, and some of these methodologies, by utilizing character-level features, have successfully avoided this difficult problem. However, when a model is fed character-level features of the above languages, it often overfits due to the large number of character types. In this paper, we propose CE-CLCNN, a character-level convolutional neural network using a character encoder, to tackle these problems. The proposed CE-CLCNN is an end-to-end learning model with an image-based character encoder, i.e., the CE-CLCNN handles each character in the target document as an image. Through various experiments, we confirmed that our CE-CLCNN captures closely embedded features for visually and semantically similar characters and achieves state-of-the-art results on several open document classification tasks. In this paper, we report the performance of our CE-CLCNN on the Wikipedia title estimation task and analyze its internal behaviour. (A minimal sketch of the image-based character encoding follows this record.), Comment: To appear in IEEE Applied Imagery Pattern Recognition (AIPR) 2018 workshop
- Published
- 2018
- Full Text
- View/download PDF
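A minimal sketch of the image-based character encoding idea: render each character as a small glyph image so that a CNN encoder can place visually similar characters near each other. PIL's default bitmap font only covers basic Latin, so the demo string is ASCII; Japanese or Chinese text would need `ImageFont.truetype` with a CJK-capable font, and the end-to-end training with the document-level CLCNN is omitted here.

```python
import numpy as np
from PIL import Image, ImageDraw

def render_char(ch: str, size: int = 24) -> np.ndarray:
    """Render one character as a (size, size) grayscale image in [0, 1]."""
    img = Image.new("L", (size, size), color=0)
    ImageDraw.Draw(img).text((2, 2), ch, fill=255)   # default font; demo only
    return np.asarray(img, dtype=np.float32) / 255.0

# a document becomes a sequence of glyph images for the character encoder:
glyphs = np.stack([render_char(c) for c in "CLCNN"])   # shape: (5, 24, 24)
```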
30. Abstracts from the 4th World Congress of the International Dermoscopy Society, April 16-18, 2015, Vienna, Austria
- Author
-
Michael A. Marchetti, Alexandros Stratigos, Claudia Jaeger, Nanja van Geel, Erika Varga, Rachel M Bowden, Nebojsa Pesic, Lauren A. Penn, Francesca Farnetani, Irena Walecka, Otto S. Wolfbeis, Anna Pogorzelska-Antkowiak, Małgorzata Zadurska, Miriam A. Jesús Silva, Mari Grönroos, Fabrizio Ayala, Claudia Sprincenatu, Ausilia Maria Manganoni, Jhonatan Rafael S. Pinheiro, Vincent Descamps, Era C. Murzaku, Josephine Rau, Christian Landi, Josep Malvehy, Othon Papadopoulos, Renato Talamini, Savitha L. Beergouder, Adrian Ballano Ruiz, Karina Scandura, Flavia Persechino, Yunxian Tian, Mark Berneburg, Iara Drakensjö, Luis Javier Del pozo, Elizabeth Lazaridou, Marwah A. Saleh, Wei Zhang, Dalal Mosaad, Aida Carolina Medina, Alka Lalji, Robabeh Abedini, FZ Debagh, Ligia Brzezinska-Wcislo, Nurşah Doğan, Naglaa Ahmed, Tamerlan Shaipov, Ritta Khoury, Lidija Kandolf-Sekulovic, Aldo Bono, Luis Angel Vera, Naotomo Kambe, Jaka Rados, Sergio Talarico, Milvia Maria S. E. S. Enokihara, Iris Zalaudek, Malgorzata Maj, Francesca Specchio, Paloma Arribas, Nazan Emiroglu, Andreea Ioana Popescu, Irina Sergeeva, Virginia Chitu, Michael Kirschbaum, Sergio Yamada, Niken Wulandari, Rotaru Maria, Lore Pil, Lieve Brochez, Anthony Azzi, Vasiliy Y. Sergeev, Raimonds Karls, Zeynep Topkarci, Tanja Planinsek Rucigaj, Osvania Maris, Graham J. Mann, Timótio Dorn, Lubomir Drlik, Pilar Iranzo, Sara Minghetti, Michael Noe, Ahmet R Akar, Jesus Cuevas Santos, Laura Raducu, Salim Ysmail-Dahlouk, Laura Mazzoni, Sidharth Sonthalia, Neşe Çallı Demirkan, Yaei Togawa, Branislava Gajic, Ayelet Rishpon, Chih-Hsun Yang, Barbara Boone, José Luis López-Estebaranz, Markus Albert, George Evangelou, André L.M. Oliveira, Ioana Gencia, Nada Vuckovic, Rosa Perelló, Ana Maria Draganita, Michel Colomb, Ayse Cefle, Hongguang Lu, Annarosa Virgili, Hayriye Saricaoglu, Esther A.W. Wolberink, Michael Russu, Elisabeth Arnoult-Coudoux, Caroline Nicaise-Bergère, Aleksandra M Ignjatović, Necmettin Özdemir, Kristīne Zabludovska, Cemal Bilaç, Jose Luis Lopez Estebaranz, Marie-Christine Lami, Harold S. Rabinovitz, Izabel Bota, Damien Grivet, Dimitrije Brasanac, Andrei Jalba, Joep Hoevenaars, Sofie De Schepper, Deniz Duman, Vladimir Vasku, Anna Belloni Fortina, Rosa Cristina Coppola, Marion Chavez-Bourgeois, Hoon-Soo Kim, Zamira Barragan, Julia Welzel, Thomas Ruzicka, Patricia V. Cristodor, Pierfrancesco Zampieri, Michael Lanthaler, Marc Haspeslagh, Jürgen Christian Becker, Gamze Erfan, Tanja Maier, Hui Mei Cheng, Mauro Enokihara, Ana Arance, Emel Dikicioglu Cetin, Pranaya A. Bagde, Mona M. Elfangary, Stefano Cavicchini, Alicia Barreiro, Odivânia Krüger, Mariana Petaccia Macedo, Itziar Erana Tomas, Elimar Elias Gomes, Monika Vrablova, Marcio Lorencini, Javier Alcántara González, Giuseppe Micali, Kerstin Kellermann, Mauricio Mendonca do Nascimento, Elisabeth Mt Wurm, Elena Sánchez-Largo Uceda, Yury Sergeev, Céleste Lebbé, Manfred Fiebiger, Gisele Gargantini Rezze, Antonio Graziano, Ana Pampín, Márcia Ferreira Candido, Martine Bagot, Jan Lapins, Nahide Onsun, Daniela Göppner, Katie Lee, Josef Schröder, Gisele G Rezze, Reyes Gamo, Mauricio Soto-Gamboa, Giovanni Pellacani, Maria Luiza P. Freitas, Mizuki Sawada, Hyun-Chang Ko, Ramon M Pujol Vallverdú, Jin gyoon Park, Peter Weber, Alberto Mota, Theofanis Spiliopoulos, Renata B. Marques, Daiji Furusho, Barbora Divisova, Pascale Guitera, Johan Heilborn, Alexandr Fedoseev, Athanasios Kyrgidis, Zakia Douhi, Mariame Meziane, Florent Grange, Alister Lilleyman, Juliana C. 
Marques-Da-Costa, Mitsuyasu Nakajima, Camilla Reggiani, Marina Meneses, Anna Sokolova, Zoe Apalla, Leo Čabrijan, Tim Lee, Piergiacomo Calzavara-Pinton, Tomas Fikrle, Georgios Chaidemenos, Braun Ralph, Aikaterini Patsatsi, Ekin Şavk, Marcela Pecora Cohen, Ioannis Efstratiou, Gurol Acikgoz, Pietro Quaglino, Nati Angelica, Luc Thomas, Edileia Bagatin, Kedima C. Nassif, Dimitrios Sotiriadis, Regina Fink-Puches, Anna Maria Wozniak, Salvador González, Agnieszka Buszko, Fezal Ozdemir, Banu Yaman, Vishnu Moodalgiri, Anne Grange, Robert J Meier, Davorin Loncaric, Fatmagül Keleş, Renato Marchiori Bakos, Sergio Chimenti, Sebastian Podlipnik, Pınar Incel Uysal, Devinder M Thappa, Nida Kaçar, Emel Bulbul Baskan, Erna Snellman, Pietro Rubegni, J. Kreusch, Hae Jin Pak, Danijela Dobrosavljevic Vukojevic, Bengü Nisa Akay, Holger A. Haenssle, Horacio Cabo, Anna Rammlmair, Fred Godtliebsen, Chiara Ferrari, Hiroshi Sakai, Christina Kemanetzi, Åsa Ingvar, Jitka Suchmannova, Zlata Janjic, Samira Zobiri, Haishan Zeng, Emine Böyük, Antonello Felli, Je-Ho Mun, Pablo Fernández Peñas, Ercan Caliskan, Satish S. Udare, Borna Pavičić, Max Hundeiker, Cristel Ruini, A. Hakan Cermik, Ülker Gül, Auro ra Parodi, Timothy P. Wu, Bernardo Gontijo, Ivan Klyuzhin, Gabriela Turcu, Sylvia Aidé Martínez-Cabriales, Francisco Alcántara Nicolás, Inge A. Krisanti, Sandra Cecilia García-García, Meriem Benfodda, Nika Madjlessi, Paraskevi Karagianni, Gizem Yağcıoğlu, Didem Dizman, Danielle I. Shitara, Nilda Eliana Gomez-Bernal, Mirna Šitum, Natalia Ilina, Job Van Der Heijden, Małgorzata Kwiatkowska, Bota Izabel, Ismini Vassilaki, Irene Potouridou, Jorge Luis Rosado, Lukas Prantl, María-José Bañuls, Fernando N. Barbosa, Seitaro Nakagawa, Jana Dornheim, Hitoshi Iyatomi, Rifat Saitburkhanov, Çiğdem Çağlayan, Natalie Ong, Stefano Gardini, Temeida Alendar, Zrinka Rendić-Miočević, Ryuhei Okuyama, Wafae Bono, Olga Warszawik-Hendzel, Danica Tiodorovic-Zivkovic, Alise Balcere, Ramazan Kahveci, Sebastian Gehmert, Herbert M. Kirchesch, Fernando Javier Pinedo, Raul Niin, Dan Savastru, Andreas Blum, Valeria Coco, Alexander C. Katoulis, Yosuke Yamamoto, Mumtaz Jabeen, Louise De Brot Andrade, Lidia Rudnicka, Pierre Wolkenstein, Fatma Pelin Cengiz, Woo-il Kim, Rainer Hofmann-Wellenhof, Tine Vestergaard, Maria Valeria B. Pinheiro, Ana Filipa Pedrosa, Caroline M. Takigami, Nilgün Bilen, Feroze Kaliyadan, Lotte Themstrup, Awatef Kelati, Katrien Vossaert, Burak Sezen, Natalia Jaimes, Olga Zhukova, Peter Jung, Nidhi Singh, Uxua Floristan, Ivette Alarcon, Michel Baccard, Flávia V. Bittencourt, Nicolas Dupin, Neslihan Şendur, Flavia Boff, Lydia Garcia Gaba, João Pedreira Duprat Neto, Caius Solovan, Byung Soo Kim, Anamaria Jović, Toshitsugu Sato, Antoni Bennassar, Ilkka Pölönen, Svetlana Rogozarski, Agnieszka Kardynał, Harald P.M. Gollnick, Anastasia Trigoni, Harvey Lui, Hiroshi Koga, Dai Ogata, Zeynep N. Saraçoğlu, Nilton B Rodrigues, Ketty Peris, Vanessa da Silva, Akira Hamada, Monica Corazza, Azmat A. Khan, Cengizhan Erdem, Victor Desmond Mandel, Sabina Zurac, Laura Elena Barbosa-Moreno, Filomena Azevedo, Matsue Hiroyuki, Philippe Saiag, Kara Shah, Stephen W. Dusza, Margaret Song, Francesca Giusti, Lidija Zolotarevski, Romain Vie, Rutao Cui, Aylin Okçu Heper, Kerstin Wöltje, Kyoko Tonomura, Charlotte H. Vuong, Moira Ragazzi, Marta Andreu Barasoain, Stephan Schreml, Branka Marinović, Mona R E Abdel Halim, Selimir Kovacevic, Noriaki Kamada, Adriana Garcia-Herrera, Ayse S. Filiz, Helena Collgros, Joan A. 
Puig-Butille, Ulvi Loite, Meng-Tsan Tsai, Nele Degryse, Philipp Tschandl, Seiichiro Wakabayashi, Korina Tzima, Kari Nielsen, Edith Arzberger, Alain Archimbaud, Makiko Miyamoto, Steffen Emmert, Katharine Hanlon, Stefano Astorino, Andre Sobiecki, Trevino A Pakasi, Giovanni Ghigliotti, Arzu Karataş Toğral, Sara Bassoli, Mahdi Akhbardeh, Martina Ulrich, Mirna Bradamante, Gökhan Uslu, Ross Flewell-Smith, Mauro Alaibac, Bettina Kranzelbinder, Steven Gazal, Nina Malishevskaya, Mikhail Ustinov, Noora Neittaanmäki-Perttu, Olga Simionescu, Saime Irkoren, Mahsa Ansari, Mustafa Turhan Sahin, Priit Kruus, Jana Janovska, Vesna Gajanin, Giovanni Ponti, Alon Scope, Ozkan Kanat, Cesare Massone, Thomas Schopf, Karolina Hadasik, Magnus Karlsson, Ayça Tan, Ignacio Gómez Martín, Armand Bensussan, Dilara Tüysüz, Saleh M. H. El Shiemy, Ine De Wispelaere, Malou Peppelman, Kenan Aydogan, Christian Teutsch, Ryszard A. Antkowiak, Nathalie De Carvahlo, Fatma Shabaka, Matthias Karasek, Christina Fotiadou, Wael M. Saudi, Matthias Weber, Maria Saletta Palumbo, Elisa Benati, Hana Helppikangas, Mariana Grigore, Leonard Witkamp, Rajiv Kumar, Stella Atkins, Eugene Y. Neretin, Dirk Berndt, Piet E.J van Erp, Alessandro Testori, David Duffy, Steluta Ratiu, Tara Bronsnick, Christoph Rinner, Soo-Han Woo, Federica Ferrari, Gabriela Garbin, Eduardo Nagore, Claus Duschl, Caterina Longo, Daniel Alcala-Perez, Helmut Beltraminelli, Sarah Hedtrich, David C McLean, Bojana Spasic, Martin Laimer, Malgorzata Pawlowska-Kisiel, Bohdan Lytvynenko, Heba I. Nagy Abd El-Gawad, Jean-Luc Perrot, Daška Štulhofer Buzina, Dimitrios Rigopoulos, Christian Hallermann, Jeffrey Keir, Adriana Martín Fuentes, Franz Trautinger, Walter L. G. Machado, Emese Gellén, Tatjana Ros, Gabriella Emri, Pinar Y. Basak, Nilay Duman, Reinhart Speeckaert, Peter Komericki, Maciel Zortea, Raphaela Kaestle, Lucía Pérez Carmona, Masaru Tanaka, Ionela Manole, Calin Giurcaneanu, Cristina Carrera, Jianhua Zhao, Marsha Mitchum, Isil Kilinc Karaarslan, Michael Muntifering, Alice Casari, Nicole Basset-Seguin, Seok-Kweon Yun, Vesna Mikulic, Albert Brugués, Kim-Dung Nguyen, Reshmi Madankumar, Joo-Ik Kim, Anna Skrok, Nicolle Mazzotti, Aomar Ammar-Khodja, Alina Avram, Laxmisha Chandrashekar, Dilek Biyik Ozkaya, Refika F. Artuz, Joanna Czuwara-Ladykowska, Hana Szakos, Dejan M Nikolic, Katarzyna Żórawicz, Georg Duftschmid, Natalia Pikelgaupt, Jorge Ocampo-Candiani, Irdina Drljevic, Canten Tataroglu, Esther Jiménez Blázquez, Philippe Gain, Simonetta Piana, Yunus Bulgu, Lars Dornheim, Bruno Labeille, Helmut Schaider, Nitul Khiroya, Sofia Theotokoglou, Christian Morsczeck, Kalliopi Armyra, Serap Öztürkcan, Shricharit h Shetty, Ozlem Su, Susana Puig, Lina Ivert, Katia Ongenae, Hirotsugu Shirabe, Ardalan Benam, Gustav Christensen, Veronika Paťavová, Adria Gual, Laura Pavoni, Mihaita Viorica Mihalceanu, Slobodan Jesic, Abdurrahman Bugra Cengiz, Jerome Becquart, Yasutomo Mikoshiba, Mattia Carbotti, Marcelo O. Samolé, Margherita Raucci, Sven Lanssens, Maria João M. Vasconcelos, Valeriy Semisazhenov, Fabio Facchetti, Monia Maccaferri, Vincenzo Panasiti, Camila M. Carvalho, Elena Tolomio, Ercan Arca, Celia Badenas, Sonia Segura Tigell, Francesco Lacarrubba, Ruzica Jurakic Toncic, Uday Khopkar, Uwe Seidl, Clóvis Antônio Lopes Pinto, Alice Marneffe, Zhenguo Wu, Josefin Lysell, Malgorzata Olszewska, Marta Ruano Del Salado, Alina Gogulescu, Tarl W. Prow, Christine Fink, Jean-Marie Tan, Milana Ivkov Simic, Mahshid S. Ansari, Stamatina Geleki, Sondang P. Sirait, Flavia Baderca, Marcella N. 
Silva, Andra Pehoiu, Joost Koehoorn, Ajay Goyal, Maria Dirlei Ferreira de Souza Begnami, Hui-bin Lu, Hoda A. Moneib, Maria Antonietta Pizzichetta, Scott Menzies, Gulsel Anil Bahali, Vesna Tlaker Zunter, Elfrida Carstea, Ines Chevolet, Septimiu Enache, Aysun Şikar Aktürk, Clara Kirchner, Greg Canning, Dina M. Shahin, Incilay Kalay Tugrul, Kristina Opletalova, Lars Hofmann, Mario Santinami, Anna Elisa Verzì, Asunción Vicente, Nathalia Delcourt, null Mernissi, Duru Tabanlıoglu Onan, Dorothy Polydorou, Irma Korom, Sara Moreno Fernández, Salim Gallouj, Annamari Ranki, Riina Hallik, Saduman Balaban Adim, Erietta Christofidou, Gustavo D. C. Dieamant, Vincenzo De Giorgi, Gregor B.E. Jemec, Kajsa Møllersen, Monisha lalji, Georgiana Simona Mohor, Hans-Jürgen Schulz, Justin R Sharpe, Karinna S. Machado, Efterpi Demiri, Mohammed I. AlJasser, Jelena Stojkovic-Filipovic, Harald Kittler, José M. A. Lopes, Adriana Diaconeasa, Patricia Serrano, Alfonso D’Orazio, Luca Mazzucchelli, Riccardo Bono, Oliver Felthaus, Juan Garcias-Ladaria, Zeljko Mijuskovic, Zsuzsanna Bago-Horvath, Alin Laurentiu Tatu, Christine Prodinger, Roland Blum, Demetrios Ioannides, Nadem Soufir, Diego Serraino, Ahmed M. Sadek, Leticia Calzado Villareal, Elliot Coates, Mariana Costache, Machuel Bruno, Bengu Gerceker Turk, Liliana Gabriela Popa, Han-Uk Kim, Lisa Hoogedoorn, Efstratios Vakirlis, Monika Kotrlá, Gabriel Salerni, Ela Comert, Salvatore Zanframundo, Zsuzsanna Lengyel, Francisco Jose Deleon, Maryam Sadeghi Naeeni, Georgios Kontochristopoulos, Ana Carolina Cherobin, Michiyo Matsumoto-Nakano, Gabriela Fortes Escobar, Maria Concetta Fargnoli, Ayse Oktem, Petra Fedorcova, Slavomir Urbancek, Hyunju Jin, Frédéric Cambazard, Tracey Newlove, Nataliya Sirmays, Cliff Rosendahl, Tamara Micantonio, Shirin Bajaj, Masa Gorsic, Ana Carolina L. Viana, Valentin Popa, Hubert Pehamberger, Anna Maria Carrozzo, Valentina Girgenti, Phil McClenahan, Beata Bergler-Czop, Alex Llambrich, Özgür Bakar, David Polsky, Krishnakant B. Pandya, Andrea Maurichi, Isabelle Hoorens, Paola Sorgi, Marianne Niin, Serena Magi, Malathi Munisamy, Zlatko Marušić, Cristina Mangas, Hakan Yesil, Miriam Potrony, Safaa Y. Negm, Maria T. Corradin, Stefania Seidenari, Işıl Bulur, Evelin Csernus, Gemma Tell-Marti, Alix Thomas, Juliana Casagrande Tavoloni Braga, Marco Manfredini, Karime M. Hassun, Celia Levy-Silbon, Lali Mekokishvili, Cem Yildirim, Hanna Eriksson, John H. Pyne, Angel Pizarro, Hakim Hammadi, Alessandro Borghi, Mariana A. Cordeiro, Fatima Zohra, A. Tülin Güleç, Ivan Ruiz Victoria, Joanna N. Łudzik, Radwa Magdy, Hisashi Uhara, Grażyna Kamińska-Winciorek, Llúcia Alòs, Pegah Kharazmi, Keisuke Suehiro, Lucian Russu, Zorica Đorđević Brlek, Sandrine Massart-Manil Massart-Manil, Moon-Bum Kim, Noha E. Hashem, Domenico Piccolo, Francesca Cicero, Jan Szymszal, Verena Ahlgrimm-Siess, Marian Gonzalez Inchaurraga, Ignazio Stanganelli, Danica Tiodorovic Zivkovic, Bugce Topukcu, Katharina Jaeger, Michael J. Inskip, Sara M. Mohy, Assya Djeridane, Véronique Del Marmol, Isil Kilinc, Nehal Yossif, Geon-Wook Kim, Oleksandr Litus, Ivana Ilić, Richard A Sturm, Mustafa Tunca, Anndressa da Matta, Elisabeth Jecel, Danijela Ćurković, Giuseppe Argenziano, Lynlee L. 
Lin, Elena Sotiriou, Mikela Petkovic, Suzana Kamberova, Sara Ibañes del Agua, Alan Cameron, Judit Oláh, Marc Nahuys, Leila Jeskanen, Zrinjka Paštar, Anna Wojas-Pelc, Ingela Ahnlide, Romana Čeović, Geoffrey Cains, Gilles Thuret, Mary Thomas, Marios Fragoulis, Drahomira Jarosikova, Manfred Beleut, Ferda Artüz, Brigitte Lavole, Francesco Todisco Grande, Carine Dal Pizzol, Erika Richtig, Nathalie Teixeira De Carvalho, Hans Peter Soyer, Amer M Alanazi, Vesna Sossi, Manal Bosseila, Monica Sulitan, Biancamaria Scoppio, Zrinka Bukvić Mokos, Marie-Jeanne P. Gerritsen, Mariano Suppa, Danielle Giambrone, Christoph Sinz, Jernej Kukovic, Martina Bosic, Adriana Rakowska, Eleni Mitsiou, Kely Hernandez, Ashfaq A. Marghoob, Daniel Boda, Alessandro Di Stefani, Luciana Trane, Leo Raudonikis, Akane Minagawa, Itaru Dekio, Athanassios Kyrgidis, Magdalena Wawrzynkiewicz, Katharina T Weiß, Chie Kamada, Lamberto Zara, Cristian Navarrete-Dechent, Serkan Yazici, Frédéric Renard, Leonie Mathemeier, Nissrine Amraoui, Mariana Fabris, Mariola Wyględowska-Kania, Nikolay Potekaev, Elisa Cinotti, Sedef Şahin, Peter van de Kerkhof, Silvana Ciardo, Sara Izzi, Paolo Piemonte, William V. Stoecker, Giampiero Mazzocchetti, Pasquale Frascione, Louise Lovatto, Ayşegül Yalçınkaya Iyidal, Jennifer A. Stein, Selçuk Yüksel, Daniela Ledić Drvar, Stine F. Pedersen, Dimitrios Sgouros, Meriem Bounouar, Balachandra S Ankad, Rahul Bute, Julia Brockley, Paula Aguilera-Otalvaro, Sumiko Ishizaki, Daniela Kulichova, Ilias Papadimitriou, Yeser Genc, Tanja Batinac, Jadran Bandic, Jean-Michel Lagarde, Göksun Karaman, Philipp Babilas, Mari Salmivuori, Lieven Annemans, Lennart K Blomqvist, Karel Pizinger, Duncan Lambie, Alexander Michael Witkowski, Meltem Uslu, Irena Savo, Martin Gosau, Raphaela Kastle, Olli Saksela, Pedro Zaballos, Esther De Eusebio Murillo, Hu Hui-Han, Sanda Mirela Cherciu, Claudia Artenie, Elvira Moscarella, Richard Johns, Ozlem Erdem, Valérie Vuong, Basma Birqdar, Jela Tomkova, Kasturee Jagirdar, Vassilios Lambropoulos, Moshira S. Bahrawy, Seong-Jin Kim, Su Chii Kong, Helen Schmid, Tetsuya Tsuchida, Michele Tonellato, Laura Berbegal, Lumír Pock, Iustin Hancu, Babar K Rao, Juliette Jegou, Lajos Kemény, Teresa Deinlein, Usha N. Khemani, Davive Guardoli, Juliana Arêas de Souza Lima Beltrame Ferreira, Tatiana Cristina Moraes Pinto Blumetti, Adhimukti T. Sampurna, Alexandru Telea, Ana Maria Forsea, Gionata Marazza, Lidija Kandolf Sekulovic, Marta Kurzeja, Marija Buljan, Fatima Zohra Mernissi, Alba Maiques-Diaz, Roger González, Dimitrios Kalabalikis, María Gabriela Vallone, Vanessa P. Martins Da Silva, Gemma Flores-Pons, Giuseppe Bertollo, Rolland Gyulai, Giuliana Crisman, Secil Saral, Simon Nicholson, Aimilios Lallas, Willeke Blokx, Marc A. L. M. Boone, and Oana Sindea
- Subjects
Oncology ,business.industry ,RL1-803 ,Genetics ,Medicine ,Library science ,Environmental ethics ,Dermatology ,business ,Molecular Biology - Published
- 2015
- Full Text
- View/download PDF
31. An ensemble classification approach for melanoma diagnosis
- Author
-
Hitoshi Iyatomi, Gerald Schaefer, Bartosz Krawczyk, and M. Emre Celebi
- Subjects
Control and Optimization ,Training set ,General Computer Science ,Artificial neural network ,business.industry ,Computer science ,Pattern recognition ,medicine.disease ,Machine learning ,computer.software_genre ,Ensemble learning ,Statistical classification ,Medical imaging ,medicine ,Artificial intelligence ,Skin cancer ,business ,computer ,Melanoma diagnosis ,Classifier (UML) - Abstract
Malignant melanoma is the deadliest form of skin cancer and has one of the most rapidly increasing incidence rates among cancer types worldwide. Early diagnosis is crucial, since melanoma detected at an early stage can be cured relatively simply. In this paper, we present an effective approach to melanoma identification from dermoscopic images of skin lesions based on ensemble classification. First, we perform automatic border detection to segment the lesion from the background skin. Based on the extracted border, we extract a series of colour, texture and shape features. The derived features are then employed in a pattern classification stage, for which we use a novel, dedicated ensemble learning approach that addresses the class imbalance in the training data and yields improved classification performance. Our classifier committee trains individual classifiers on balanced subspaces, removes redundant predictors based on a diversity measure, and combines the remaining classifiers using a neural network fuser. Experimental results on a large dataset of dermoscopic skin lesion images show that our approach works well, providing both high sensitivity and high specificity, and that the proposed classifier ensemble achieves statistically better recognition performance than other dedicated classification algorithms.
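The committee scheme described above can be illustrated concretely. The following minimal Python sketch (an illustration, not the authors' implementation) trains members on class-balanced random feature subspaces, prunes members whose predictions agree too closely with an already-kept one, and fuses the survivors with a small neural network; the data, thresholds, and model choices are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=30, weights=[0.85], random_state=0)

def balanced_subspace(X, y, n_feats):
    # Equal numbers of positives and negatives, on a random feature subset.
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    n = min(len(pos), len(neg))
    idx = np.concatenate([rng.choice(pos, n, replace=False),
                          rng.choice(neg, n, replace=False)])
    feats = rng.choice(X.shape[1], n_feats, replace=False)
    return idx, feats

members = []
for _ in range(15):
    idx, feats = balanced_subspace(X, y, n_feats=10)
    members.append((LogisticRegression(max_iter=1000).fit(X[np.ix_(idx, feats)], y[idx]), feats))

# Diversity-based pruning: drop a member if it agrees >95% with a kept one.
preds = np.array([clf.predict(X[:, feats]) for clf, feats in members])
kept = []
for i in range(len(members)):
    if all(np.mean(preds[i] == preds[j]) < 0.95 for j in kept):
        kept.append(i)

# Neural-network fuser over the surviving members' outputs.
meta = np.column_stack([preds[i] for i in kept])
fuser = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(meta, y)
print("members kept:", len(kept), "| fused training accuracy:", fuser.score(meta, y))
```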
- Published
- 2014
- Full Text
- View/download PDF
32. Basic Investigation on a Robust and Practical Plant Diagnostic System
- Author
-
Hiroyuki Uga, Erika Fujita, Yusuke Kawasaki, Satoshi Kagiwada, and Hitoshi Iyatomi
- Subjects
Artificial neural network ,Computer science ,business.industry ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,Machine learning ,computer.software_genre ,Diagnostic system ,Convolutional neural network ,Agriculture ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Sensitivity (control systems) ,business ,computer - Abstract
Accurate plant diagnosis requires expert knowledge and is usually expensive and time consuming. It has therefore become necessary to design an accurate, easy-to-use, and low-cost automated diagnostic system for plant diseases. In this paper, we propose a new practical plant-disease detection system. We use 7,520 cucumber leaf images comprising images of healthy leaves and of leaves infected by almost all types of viral diseases. The leaves were photographed on site under only one requirement, namely that each image contain a leaf roughly at its center; the images therefore show a wide variety of appearances (parameters such as distance, angle, background, and lighting condition were not uniform). Although half of the images used in this experiment were taken under poor conditions, our classification system based on convolutional neural networks attained an average accuracy of 82.3% under a 4-fold cross-validation strategy.
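For readers unfamiliar with such classifiers, a minimal PyTorch sketch of a small CNN of the kind described follows; the architecture, class count, and image size are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class LeafCNN(nn.Module):
    """Tiny CNN for leaf-image classification (illustrative architecture)."""
    def __init__(self, n_classes=8):  # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LeafCNN()
logits = model(torch.randn(4, 3, 128, 128))  # batch of 4 dummy leaf images
print(logits.shape)  # torch.Size([4, 8])
```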
- Published
- 2016
- Full Text
- View/download PDF
33. Simple and effective pre-processing for automated melanoma discrimination based on cytological findings
- Author
-
Hitoshi Iyatomi, Takuya Yoshida, M. Emre Celebi, and Gerald Schaefer
- Subjects
Computer science ,business.industry ,Melanoma ,Feature extraction ,Cancer ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,medicine.disease ,Convolutional neural network ,Cross-validation ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Classifier (UML) - Abstract
In this paper, we propose a simple and effective pre-processing method for melanoma classification that exploits cytological properties of melanomas; in particular, it rotates each tumor so that its major axis points in the same direction. We evaluate our method on a set of 1,760 dermoscopic images (329 melanomas and 1,431 nevi) with a simple convolutional neural network (CNN) classifier under five-fold cross-validation. The proposed tumor alignment method improves the classification performance by 5.8% in terms of the area under the ROC curve (AUC). In addition, it proves to be 2.1% better in terms of AUC than the identically configured CNN trained on images that are nine times larger. Our results also show that considering the intrinsic features of the classification target is important even when the classifier is capable of obtaining effective features automatically through its learning process.
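The alignment step can be sketched as follows: estimate the tumor's orientation from the second-order moments of its binary mask and rotate the image so the major axis becomes horizontal. This is a generic reconstruction under the assumption that a lesion mask is available; the authors' exact procedure may differ.

```python
import numpy as np
from scipy.ndimage import rotate

def align_major_axis(image, mask):
    # Principal direction of the mask's pixel coordinates = major axis.
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs - xs.mean(), ys - ys.mean()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords))
    major = eigvecs[:, np.argmax(eigvals)]
    angle = np.degrees(np.arctan2(major[1], major[0]))
    # Rotate so the major axis is horizontal.
    return rotate(image, angle, reshape=True, order=1)

img = np.random.rand(64, 64, 3)
msk = np.zeros((64, 64), dtype=bool)
msk[20:40, 10:55] = True  # elongated dummy lesion
aligned = align_major_axis(img, msk)
```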
- Published
- 2016
- Full Text
- View/download PDF
34. Document classification through image-based character embedding and wildcard training
- Author
-
Hitoshi Iyatomi, Daiki Shimada, and Ryunosuke Kotani
- Subjects
business.industry ,Character (computing) ,Computer science ,Document classification ,Text segmentation ,Pattern recognition ,Wildcard character ,02 engineering and technology ,Image segmentation ,computer.file_format ,010501 environmental sciences ,computer.software_genre ,Semantics ,01 natural sciences ,Convolutional neural network ,Wildcard ,0202 electrical engineering, electronic engineering, information engineering ,Embedding ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer ,Natural language processing ,0105 earth and related environmental sciences - Abstract
Languages such as Chinese and Japanese use a significantly larger number of characters (several thousand) than other languages, and their sentences consist of concatenated words with a wide variety of inflected forms and no explicit delimiters, so appropriate word segmentation is quite difficult. Consequently, recently proposed sophisticated language-processing methods designed for languages such as English cannot be applied directly. In this paper, we address these issues and propose a new, efficient document classification technique for such languages. The proposed method is characterized by a new “image-based character embedding” method and a character-level convolutional neural network trained with “wildcard training.” The former encodes each character based on its pictorial structure and preserves that structure. The latter treats some of the input characters as wildcards during training and thereby functions as efficient data augmentation. We confirmed that our proposed method shows superior performance compared with conventional methods on Japanese document classification problems.
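A minimal sketch of the two ideas follows: characters encoded as the raw pixels of their rendered glyphs, and "wildcard" replacement of random characters as augmentation. PIL's default font is used for a pure-ASCII demonstration; for Japanese text a CJK TrueType font would need to be supplied (an assumption, not part of the paper).

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

FONT = ImageFont.load_default()

def glyph_vector(ch, size=16):
    # Render one character and flatten its pixels into an embedding vector.
    img = Image.new("L", (size, size), 0)
    ImageDraw.Draw(img).text((2, 2), ch, font=FONT, fill=255)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

def encode(text):
    return np.stack([glyph_vector(c) for c in text])

def wildcard_augment(seq, p=0.1, rng=np.random.default_rng(0)):
    # Randomly replace some character vectors with a shared all-zero
    # "wildcard" vector, as a form of data augmentation.
    out = seq.copy()
    out[rng.random(len(seq)) < p] = 0.0
    return out

seq = encode("document classification")
print(wildcard_augment(seq).shape)  # (23, 256)
```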
- Published
- 2016
- Full Text
- View/download PDF
35. Three-phase general border detection method for dermoscopy images using non-uniform illumination correction
- Author
-
Reiko Suzaki, Ken Kobayashi, Sumiko Ishizaki, Hitoshi Iyatomi, Masaru Tanaka, M. Emre Celebi, Koichi Ogawa, Mizuki Sawada, and Kerri-Ann Norton
- Subjects
Skin Neoplasms ,Lesion segmentation ,business.industry ,Non uniform illumination ,Reproducibility of Results ,Dermoscopy ,Dermatology ,Image Enhancement ,Sensitivity and Specificity ,Accurate segmentation ,Pattern Recognition, Automated ,Artificial Intelligence ,Image Interpretation, Computer-Assisted ,Humans ,Medicine ,Automatic segmentation ,Segmentation ,Computer vision ,Artificial intelligence ,business ,Skin lesion ,Melanoma ,Lighting - Abstract
Background: Computer-aided diagnosis of dermoscopy images has shown great promise as a quantitative, objective way of classifying skin lesions. An important step in the classification process is lesion segmentation. Many studies have successfully segmented melanocytic skin lesions (MSLs), but few have focused on non-melanocytic skin lesions (NoMSLs), as the wide variety of such lesions makes accurate segmentation difficult. Methods: We developed an automatic segmentation program for detecting the borders of skin lesions in dermoscopy images. The method consists of a pre-processing phase, a general lesion segmentation phase that includes illumination correction, and a bright region segmentation phase. Results: We tested our method on a set of 107 NoMSLs and a set of 319 MSLs. Our method achieved precision/recall scores of 84.5% and 88.5% for NoMSLs, and 93.9% and 93.8% for MSLs, in comparison with manual extractions by four or five dermatologists. Conclusion: The accuracy of our method was competitive with or better than that of five recently published methods. Ours is the first method for detecting the borders of both non-melanocytic and melanocytic skin lesions.
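A generic stand-in for the illumination-correction idea: estimate the smooth illumination field with a large Gaussian blur, divide it out, then threshold. This illustrates the principle only; it is not the paper's three-phase method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(gray, sigma=50):
    field = gaussian_filter(gray, sigma)   # smooth illumination estimate
    return gray / np.maximum(field, 1e-6)  # divide the shading out

def segment(gray):
    flat = correct_illumination(gray)
    return flat < flat.mean()              # dark lesion vs. brighter skin

# Dummy image with a left-to-right shading gradient.
gray = np.random.rand(256, 256) * np.linspace(0.5, 1.0, 256)
mask = segment(gray)
```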
- Published
- 2011
- Full Text
- View/download PDF
36. Colour and contrast enhancement for improved skin lesion segmentation
- Author
-
Gerald Schaefer, Maher I. Rajab, Hitoshi Iyatomi, and M. Emre Celebi
- Subjects
Skin Neoplasms ,Computer science ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Scale-space segmentation ,Dermoscopy ,Health Informatics ,Sensitivity and Specificity ,Edge detection ,Lesion ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Contrast (vision) ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Segmentation ,Melanoma ,media_common ,Radiological and Ultrasound Technology ,Artificial neural network ,Pixel ,business.industry ,Reproducibility of Results ,Computer Graphics and Computer-Aided Design ,RGB color model ,Colorimetry ,Neural Networks, Computer ,Computer Vision and Pattern Recognition ,Artificial intelligence ,medicine.symptom ,business ,Filtration - Abstract
Accurate extraction of lesion borders is a critical step in analysing dermoscopic skin lesion images. In this paper, we consider the problems of poor contrast and lack of colour calibration, which are often encountered when analysing dermoscopy images. Different illumination conditions or different devices lead to different image colours for the same lesion, and hence to difficulties in the segmentation stage; similarly, low contrast makes accurate border detection difficult. We present an effective approach to improve the performance of lesion segmentation algorithms through a pre-processing step that enhances colour information and image contrast. We combine this enhancement stage with two different segmentation algorithms. One technique relies on analysis of the image background by iterative measurements of non-lesion pixels, while the other utilises co-operative neural networks for edge detection. Extensive experimental evaluation is carried out on a dataset of 100 dermoscopy images with known ground truths obtained from three expert dermatologists. The results show that both techniques are capable of providing good segmentation performance and that the colour enhancement step is indeed crucial, as demonstrated by comparison with results obtained from the original RGB images.
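The spirit of the enhancement step can be sketched with per-channel contrast stretching between robust percentiles; the paper's actual enhancement operators are not reproduced here.

```python
import numpy as np

def stretch(channel, lo=1, hi=99):
    # Map the lo..hi percentile range of one channel onto [0, 1].
    a, b = np.percentile(channel, [lo, hi])
    return np.clip((channel - a) / max(b - a, 1e-6), 0, 1)

def enhance_rgb(img):  # img: float array in [0, 1], shape (H, W, 3)
    return np.stack([stretch(img[..., c]) for c in range(3)], axis=-1)

img = np.random.rand(128, 128, 3) * 0.4 + 0.3  # low-contrast dummy image
enhanced = enhance_rgb(img)
```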
- Published
- 2011
- Full Text
- View/download PDF
37. A Practical Plant Diagnosis System for Field Leaf Images and Feature Visualization
- Author
-
Satoshi Kagiwada, Hiroyuki Uga, Hitoshi Iyatomi, and E. E. Fujita
- Subjects
0106 biological sciences ,Environmental Engineering ,Computer science ,business.industry ,General Chemical Engineering ,General Engineering ,food and beverages ,Image processing ,Pattern recognition ,02 engineering and technology ,01 natural sciences ,Convolutional neural network ,Visualization ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Downy mildew ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Classifier (UML) ,010606 plant biology & botany ,Biotechnology - Abstract
An accurate, fast, and low-cost automated plant diagnosis system is in strong demand. While several studies utilizing machine learning techniques have been conducted, significant issues remain: in most cases, the dataset is not composed of field images and often includes a substantial number of inappropriate labels. In this paper, we propose a practical automated plant diagnosis system. We first build a highly reliable dataset by cultivating plants in a strictly controlled setting. We then develop a robust classifier capable of analyzing a wide variety of field images. We use a total of 9,000 original cucumber field leaf images to identify seven typical viral diseases, downy mildew, and healthy plants, including initial symptoms. We also visualize the key regions that provide diagnostic evidence. Our system attains 93.6% average accuracy, and we confirm that it captures important features for the diagnosis of downy mildew.
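The visualization of key regions can be illustrated with a Grad-CAM-style heatmap, a common stand-in (the paper does not necessarily use Grad-CAM): activations of a late convolutional layer are weighted by their pooled gradients. The toy model and layer choice below are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 9),
)
acts, grads = {}, {}
target_layer = model[2]  # late conv layer whose activations we visualize
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 64, 64, requires_grad=True)
score = model(x)[0].max()   # score of the top class
score.backward()
weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # channel importance
cam = torch.relu((weights * acts["v"]).sum(dim=1))   # (1, H, W) heatmap
print(cam.shape)
```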
- Published
- 2018
- Full Text
- View/download PDF
38. An End-To-End Practical Plant Disease Diagnosis System for Wide-Angle Cucumber Images
- Author
-
Satoshi Kagiwada, Q. H. Cap, Katsumasa Suwa, Hiroyuki Uga, E. E. Fujita, and Hitoshi Iyatomi
- Subjects
Environmental Engineering ,Computer science ,General Chemical Engineering ,Real-time computing ,General Engineering ,04 agricultural and veterinary sciences ,02 engineering and technology ,Plant disease ,End-to-end principle ,Hardware and Architecture ,040103 agronomy & agriculture ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,0401 agriculture, forestry, and fisheries ,020201 artificial intelligence & image processing ,Biotechnology - Abstract
With the breakthrough of deep learning techniques, many leaf-based automated plant diagnosis methodologies have been proposed. To the best of our knowledge, however, most conventional methodologies accept only narrow-range images that typically contain one, or at most a few, target leaves, because the appearance of leaves is diverse and leaves usually overlap heavily in practical situations. In this paper, we propose a basic and practical end-to-end plant disease diagnosis system for wide-angle images. Our system is principally composed of two specially designed convolutional neural networks. It achieves a leaf detection performance of 73.9% in F1-score and an overall (detection and diagnosis) performance of 68.1% in recall and 65.8% in precision, at around 3 seconds/image, on 500 wide-angle on-site images containing 6,860 healthy and 6,741 infected leaves (13,601 in total).
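Structurally, the pipeline can be sketched as a detector proposing leaf boxes followed by a classifier diagnosing each crop. Both networks below are untrained stand-ins and the box generator is a dummy; only the control flow reflects the description above.

```python
import torch
import torch.nn as nn

# Dummy detector: in a real system, a detection CNN would propose leaf boxes.
detector_boxes = lambda img: [(0, 0, 128, 128), (100, 60, 228, 188)]  # (x1,y1,x2,y2)

classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),  # healthy vs. infected (illustrative)
)

def diagnose(image):  # image: (3, H, W) tensor of the wide-angle shot
    results = []
    for (x1, y1, x2, y2) in detector_boxes(image):
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = nn.functional.interpolate(crop, size=(64, 64))
        label = classifier(crop).argmax(1).item()
        results.append(((x1, y1, x2, y2), label))
    return results

print(diagnose(torch.randn(3, 480, 640)))
```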
- Published
- 2018
- Full Text
- View/download PDF
39. Computerized quantification of psoriasis lesions with colour calibration: preliminary results
- Author
-
A. Miyake, Masafumi Hagiwara, Masayuki Kimoto, Hiroshi Oka, Hitoshi Iyatomi, Masaru Tanaka, and Koichi Ogawa
- Subjects
medicine.medical_specialty ,Erythema ,business.industry ,Color ,Dermatology ,medicine.disease ,Sensitivity and Specificity ,Severity of Illness Index ,Trunk ,Lesion ,Fully automated ,Psoriasis Area and Severity Index ,Psoriasis ,Calibration ,Image Interpretation, Computer-Assisted ,Severity of illness ,Cyclosporine ,Photography ,medicine ,Humans ,Dermatologic Agents ,medicine.symptom ,business - Abstract
An evaluation was made of a fully automated index of psoriasis, termed the Computer-assisted Area and Severity Index (CASI). This method requires taking digital photographs of the target skin area(s) with a colour reference marker, Casmatch. CASI evaluates the severity of the psoriasis from the size and redness of the lesion(s). In five patients with mild psoriasis vulgaris, mainly observed on the trunk, 18 photographs of the trunk were taken every 2 weeks. Three of the five patients [Psoriasis Area and Severity Index (PASI) of 3.0, 3.6 and 10.1, respectively] were treated with oral cyclosporin 3 mg/kg/day for 4 weeks. The mean +/- SD area of lesion selected by a dermatologist was 2.3 +/- 1.3% of the total skin area. The method achieved extraction performance for psoriasis of 72.1 +/- 19.4% sensitivity and 97.4 +/- 2.0% specificity. CASI correlated strongly with PASI (r = 0.92), but not with Skindex16 (r = 0.35). Although only erythema was evaluated, our preliminary results indicate that this method is capable of quantifying psoriasis lesions.
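The CASI idea as described, scoring severity from lesion size and redness after colour calibration, can be sketched as follows; the redness index and the combination rule are illustrative assumptions, not the published definition.

```python
import numpy as np

def casi_like_score(img, lesion_mask):  # img: float RGB in [0, 1]
    area_ratio = lesion_mask.mean()     # lesion pixels / all pixels in the photo
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    erythema = np.clip(r - (g + b) / 2, 0, None)  # crude redness index
    severity = erythema[lesion_mask].mean() if lesion_mask.any() else 0.0
    return area_ratio * severity        # illustrative combination of area and redness

img = np.random.rand(100, 100, 3)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
print(casi_like_score(img, mask))
```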
- Published
- 2009
- Full Text
- View/download PDF
40. Automatic detection of blue-white veil and related structures in dermoscopy images
- Author
-
Harold S. Rabinovitz, Giuseppe Argenziano, William V. Stoecker, Hitoshi Iyatomi, Randy Hays Moss, H. Peter Soyer, and M. Emre Celebi
- Subjects
FOS: Computer and information sciences ,Skin Neoplasms ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,I.4.7 ,I.4.9 ,Dermoscopy ,Skin Pigmentation ,Health Informatics ,Dermatology ,Sensitivity and Specificity ,Article ,Pattern Recognition, Automated ,Lesion ,Artificial Intelligence ,Nevus, Blue ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Melanoma ,Radiological and Ultrasound Technology ,business.industry ,Decision Trees ,Pattern recognition ,medicine.disease ,Computer Graphics and Computer-Aided Design ,Feature (computer vision) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,medicine.symptom ,business ,Skin imaging - Abstract
Dermoscopy is a non-invasive skin imaging technique that permits visualization of features of pigmented melanocytic neoplasms that are not discernible by examination with the naked eye. One of the most important features for the diagnosis of melanoma in dermoscopy images is the blue-white veil (irregular, structureless areas of confluent blue pigmentation with an overlying white "ground-glass" film). In this article, we present a machine learning approach to the detection of the blue-white veil and related structures in dermoscopy images. The method involves contextual pixel classification using a decision tree classifier. The percentage of blue-white areas detected in a lesion, combined with a simple shape descriptor, yielded a sensitivity of 69.35% and a specificity of 89.97% on a set of 545 dermoscopy images. The sensitivity rises to 78.20% for detection of the blue veil in those cases where it is a primary feature for melanoma recognition.
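Per-pixel classification with a decision tree can be sketched as follows: each pixel becomes a feature vector (raw RGB plus a blue-dominance cue here) and a tree labels it veil or non-veil. The features and the synthetic training data are assumptions, not the paper's feature set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
veil = rng.normal([0.4, 0.5, 0.8], 0.05, size=(500, 3))   # bluish-white pixels
other = rng.normal([0.6, 0.35, 0.3], 0.1, size=(500, 3))  # brownish lesion pixels

def pixel_features(rgb):
    # Raw RGB plus how much blue dominates the red/green average.
    blue_dominance = rgb[:, 2:3] - rgb[:, :2].mean(axis=1, keepdims=True)
    return np.hstack([rgb, blue_dominance])

X = pixel_features(np.vstack([veil, other]))
y = np.r_[np.ones(500), np.zeros(500)]
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print("training accuracy:", tree.score(X, y))
```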
- Published
- 2008
- Full Text
- View/download PDF
41. Computer-Based Classification of Dermoscopy Images of Melanocytic Lesions on Acral Volar Skin
- Author
-
Hitoshi Iyatomi, Hiroshi Oka, M. Emre Celebi, Toshiaki Saida, Koichi Ogawa, Hiroshi Koga, Masaru Tanaka, Kuniaki Ohara, H. Peter Soyer, and Giuseppe Argenziano
- Subjects
Adult ,Male ,medicine.medical_specialty ,Skin Neoplasms ,Adolescent ,Extraction algorithm ,Linear classifier ,Dermoscopy ,Dermatology ,Sensitivity and Specificity ,Biochemistry ,medicine ,Humans ,Diagnosis, Computer-Assisted ,Child ,Melanoma diagnosis ,Melanoma ,Nevus ,Molecular Biology ,Dermatoscopy ,medicine.diagnostic_test ,Receiver operating characteristic ,business.industry ,Computer based ,Pattern recognition ,Cell Biology ,Middle Aged ,Fully automated ,Melanocytes ,Female ,Artificial intelligence ,business ,Classifier (UML) - Abstract
We describe a fully automated system for the classification of acral volar melanomas. We used a total of 213 acral dermoscopy images (176 nevi and 37 melanomas). Our automatic tumor area extraction algorithm successfully extracted the tumor in 199 cases (169 nevi and 30 melanomas), and we developed a diagnostic classifier using these images. Our linear classifier achieved a sensitivity (SE) of 100%, a specificity (SP) of 95.9%, and an area under the receiver operating characteristic curve (AUC) of 0.993 using a leave-one-out cross-validation strategy (81.1% SE and 92.1% SP when the 14 unsuccessful extraction cases are counted as misclassifications). In addition, we developed three pattern detectors for typical dermoscopic structures, namely the parallel ridge, parallel furrow, and fibrillar patterns. These also achieved good detection accuracy, as indicated by their AUC values of 0.985, 0.931, and 0.890, respectively. The features used in the melanoma-nevus classifier and the parallel ridge detector have significant overlap.
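The evaluation protocol named above, a linear classifier scored by leave-one-out cross-validation and the AUC, can be sketched generically; synthetic features stand in for the paper's descriptors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=120, n_features=12, weights=[0.8], random_state=0)
scores = np.zeros(len(y))
for train, test in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores[test] = clf.decision_function(X[test])  # score for the held-out case
print("LOO AUC:", roc_auc_score(y, scores))
```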
- Published
- 2008
- Full Text
- View/download PDF
42. Perioperative Cardiac Risk Prediction in Non Cardiac Surgery-Investigation for Efficiency of Nuclear Scanning
- Author
-
Jun Hashimoto, Jingming Bai, Hitoshi Iyatomi, and Tomotaka Kasamatsu
- Subjects
medicine.medical_specialty ,business.industry ,Non cardiac surgery ,Internal medicine ,Cardiology ,medicine ,Perioperative ,Cardiac risk ,business - Abstract
We estimated the risk of cardiac events occurring during non-cardiac surgery. Using 1,351 surgical records, including intermediate- and low-risk procedures for which risk prediction has traditionally been considered difficult, we analyzed the occurrence of "all cardiac events" (any cardiac event) and of "hard events" (cardiac death or myocardial infarction). A total of 22 factors were used in the analysis, categorized into surgical difficulty, clinical factors such as patient age and medical history, and nuclear scanning results; linear and support vector machine (SVM) classifiers were used as prediction models. Under cross-validation, the models achieved better estimation accuracy than previously reported: a sensitivity of 80% and a specificity of 66% for all cardiac events, and a sensitivity of 85% and a specificity of 81% for hard events. Both the linear and SVM classifiers selected common parameters derived from nuclear scanning, confirming that nuclear scanning results are an important factor even for intermediate- and low-risk surgery, where preoperative prediction of cardiac events is difficult.
- Published
- 2008
- Full Text
- View/download PDF
43. Application of Support Vector Machine Classifiers to Preoperative Risk Stratification With Myocardial Perfusion Scintigraphy
- Author
-
Tadaki Nakahara, Naoto Kitamura, Atsushi Kubo, Koichi Ogawa, Hitoshi Iyatomi, Jun Hashimoto, Tomotaka Kasamatsu, and Jingming Bai
- Subjects
Male ,medicine.medical_specialty ,Heart Diseases ,information science ,Single-photon emission computed tomography ,Risk Assessment ,Preoperative care ,Cohort Studies ,Myocardial perfusion imaging ,Preoperative Care ,medicine ,Humans ,Aged ,Retrospective Studies ,Tomography, Emission-Computed, Single-Photon ,medicine.diagnostic_test ,business.industry ,Myocardial Perfusion Imaging ,Linear model ,General Medicine ,Perioperative ,Models, Theoretical ,Support vector machine ,Female ,Radiology ,Cardiology and Cardiovascular Medicine ,Risk assessment ,business ,Emission computed tomography - Abstract
Background: Myocardial perfusion single-photon emission computed tomography (SPECT) has been used for risk stratification before non-cardiac surgery. However, few authors have used mathematical models for evaluating the likelihood of perioperative cardiac events. Methods and Results: This retrospective cohort study collected data on 1,351 patients referred for SPECT before non-cardiac surgery. We generated binary classifiers using support vector machine (SVM) and conventional linear models for predicting perioperative cardiac events. We used clinical and surgical risk and SPECT findings as input data, and the occurrence of all and of hard cardiac events as output data. The area under the receiver-operating characteristic curve (AUC) was calculated to assess prediction accuracy. The AUC values were 0.884 and 0.748 for the SVM and linear models, respectively, in predicting all cardiac events from clinical and surgical risk and SPECT variables; the values were 0.861 (SVM) and 0.677 (linear) when SPECT data were not used as input. For hard events, the AUC values were 0.892 (SVM) and 0.864 (linear) with SPECT, and 0.867 (SVM) and 0.768 (linear) without SPECT. Conclusion: The SVM was superior to the linear model in risk stratification. We also found an incremental prognostic value of SPECT results over information about clinical and surgical risk. (Circ J 2008; 72: 1829 - 1835)
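The comparison reported here (an SVM versus a linear model, each trained with and without one block of input variables, judged by AUC) can be sketched generically, with synthetic data standing in for the clinical/surgical and SPECT variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=400, n_features=22, n_informative=8, random_state=1)
clinical = X[:, :14]  # illustrative split of the 22 inputs into two blocks

for name, cols in [("with SPECT-like block", X), ("without SPECT-like block", clinical)]:
    for label, clf in [("SVM", SVC()), ("linear", LogisticRegression(max_iter=1000))]:
        s = cross_val_predict(clf, cols, y, cv=5, method="decision_function")
        print(name, label, "AUC = %.3f" % roc_auc_score(y, s))
```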
- Published
- 2008
- Full Text
- View/download PDF
44. A methodological approach to the classification of dermoscopy images
- Author
-
Randy Hays Moss, M. Emre Celebi, Y. Alp Aslandogan, Bakhtiyar Uddin, Hassan A. Kingravi, Hitoshi Iyatomi, and William V. Stoecker
- Subjects
Skin Neoplasms ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Dermoscopy ,Health Informatics ,Feature selection ,Sensitivity and Specificity ,Article ,Pattern Recognition, Automated ,Artificial Intelligence ,Image Interpretation, Computer-Assisted ,Humans ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Melanoma ,Mathematics ,Support vector machine classification ,Feature data ,Radiological and Ultrasound Technology ,business.industry ,Model selection ,Reproducibility of Results ,Pattern recognition ,Image Enhancement ,Computer Graphics and Computer-Aided Design ,Support vector machine ,Colorimetry ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Pigmented skin ,business ,Classifier (UML) ,Area under the roc curve ,Algorithms - Abstract
In this paper, a methodological approach to the classification of pigmented skin lesions in dermoscopy images is presented. First, automatic border detection is performed to separate the lesion from the background skin. Shape features are then extracted from this border. For the extraction of color- and texture-related features, the image is divided into various clinically significant regions using the Euclidean distance transform. These feature data are fed into an optimization framework, which ranks the features using various feature selection algorithms and determines the optimal feature subset size according to the area under the ROC curve obtained from support vector machine classification. The issue of class imbalance is addressed using various sampling strategies, and the classifier generalization error is estimated using Monte Carlo cross-validation. Experiments on a set of 564 images yielded a specificity of 92.34% and a sensitivity of 93.33%.
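The optimization framework can be sketched as a loop: rank the features, sweep the subset size, and keep the size with the best cross-validated SVM AUC. The ANOVA F-score ranking below is an illustrative stand-in for the paper's feature selection algorithms.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=25, n_informative=6, random_state=0)
order = np.argsort(f_classif(X, y)[0])[::-1]  # features ranked by F-score

best = (0.0, 1)
for k in range(1, X.shape[1] + 1):
    s = cross_val_predict(SVC(), X[:, order[:k]], y, cv=5, method="decision_function")
    auc = roc_auc_score(y, s)
    if auc > best[0]:
        best = (auc, k)
print("best subset size %d with AUC %.3f" % (best[1], best[0]))
```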
- Published
- 2007
- Full Text
- View/download PDF
45. Building of Readable Decision Trees for Automated Melanoma Discrimination
- Author
-
Keiichi Ohki, Hitoshi Iyatomi, Gerald Schaefer, and M. Emre Celebi
- Subjects
Computer science ,business.industry ,Decision tree ,Pattern recognition ,Feature selection ,Computer vision ,Artificial intelligence ,business ,Readability ,Random forest - Abstract
Even expert dermatologists cannot easily diagnose a melanoma, because its appearance, particularly in the early stage, is often similar to that of a nevus. For this reason, studies of automated melanoma discrimination using image analysis have been conducted; however, no systematic studies exist that present the grounds for the discrimination result in a readable form. In this paper, we propose an automated melanoma discrimination system that is capable of providing not only the discrimination results but also their grounds, by utilizing a Random Forest (RF) technique. Our system was constructed from a total of 1,148 dermoscopy images (168 melanomas and 980 nevi) and uses only their color features in order to keep the grounds for the discrimination results readable. By virtue of our efficient feature selection procedure, our system provides accurate discrimination results (a sensitivity of 79.8% and a specificity of 80.7% with 10-fold cross-validation) under these human-oriented limitations and presents the grounds for the results in an intelligible format.
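How colour-only features plus tree models can yield readable grounds is illustrated below by printing the rules of one tree from a small random forest; the feature names and synthetic data are assumptions, not the paper's feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

rng = np.random.default_rng(0)
# Synthetic colour statistics for two classes (illustrative only).
melanoma = rng.normal([0.35, 0.25, 0.25, 0.15], 0.05, size=(200, 4))
nevus = rng.normal([0.55, 0.40, 0.30, 0.05], 0.05, size=(200, 4))
X = np.vstack([melanoma, nevus])
y = np.r_[np.ones(200), np.zeros(200)]

names = ["mean_R", "mean_G", "mean_B", "colour_std"]
forest = RandomForestClassifier(n_estimators=20, max_depth=3, random_state=0).fit(X, y)
# Shallow trees keep the printed rules short enough to read.
print(export_text(forest.estimators_[0], feature_names=names))
```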
- Published
- 2015
- Full Text
- View/download PDF
46. Automated Habit Detection System: A Feasibility Study
- Author
-
Hitoshi Iyatomi, Hiroki Misawa, and Takashi Obara
- Subjects
Recall ,business.industry ,Computer science ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,System a ,Motion (physics) ,Wavelet ,Principal component analysis ,Conversation ,Artificial intelligence ,Habit ,Remainder ,business ,media_common - Abstract
In this paper, we propose an automated habit detection system. We define a “habit” in this study as a motion that differs significantly from common behavior. The behaviors of two subjects during conversation are tracked by a Kinect sensor, and their skeletal and facial conformations are detected. The proposed system detects motions considered to be habits by analyzing them with principal component analysis (PCA) and wavelet multi-resolution analysis (MRA). In our experiments, we prepared a total of 108 movies, each containing 5 min of conversation. Of these, 100 movies were used to build the average motion model (AMM), and the remainder were used for evaluation. The proposed system detects habits with a precision of 84.0% and a recall of 81.8%.
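The two analysis steps can be sketched as follows: project joint trajectories onto principal components, then flag segments whose wavelet detail energy deviates from a baseline built on common motion. The data, wavelet choice, and threshold are illustrative, not the authors' settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
motion = rng.normal(size=(600, 20))  # 600 frames x 20 skeletal coordinates
motion[300:320] += 3.0               # injected "habit"-like burst

# PCA via SVD: leading component of the centered trajectories.
centered = motion - motion.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]

# Wavelet multi-resolution analysis of the leading component.
coeffs = pywt.wavedec(pc1, "db4", level=3)
detail = np.abs(coeffs[-1])                    # finest-scale detail coefficients
threshold = detail.mean() + 3 * detail.std()   # baseline from "common" motion
print("habit-like segments:", np.where(detail > threshold)[0])
```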
- Published
- 2015
- Full Text
- View/download PDF
47. Prototype of Super-Resolution Camera Array System
- Author
-
Daiki Hirao and Hitoshi Iyatomi
- Subjects
Color calibration ,Computer science ,business.industry ,Noise reduction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Camera array ,Superresolution ,Quality (physics) ,Computer vision ,Artificial intelligence ,Enhanced Data Rates for GSM Evolution ,Noise (video) ,business ,Parallax - Abstract
We present a prototype of a super-resolution camera array system. Because the proposed system consists of a number of low-cost camera devices, all operating synchronously, it is a low-cost, high-quality imaging system capable of handling moving targets. However, when targets are located near the system, parallax and differences in photographic conditions among the cameras become pronounced. In addition, conventional super-resolution techniques frequently emphasize noise as well as edges and contours when the number of observed (i.e., low-resolution) images is limited. Therefore, we also propose the following procedures for our camera array system: (1) color calibration among the cameras, (2) automated region-of-interest (ROI) detection under large parallax, and (3) effective noise reduction with edge preservation. We developed a camera array system comprising 12 low-cost Web camera devices and confirm that it generally reduces the drawbacks of the array design, achieving an approximately 2 dB higher S/N ratio, equivalent to the effect of two additional observed images.
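The core super-resolution idea can be illustrated with a shift-and-add sketch: low-resolution views with known sub-pixel offsets are placed on a common high-resolution grid and averaged. Registration, colour calibration, and denoising from the prototype are omitted; the offsets here are synthetic.

```python
import numpy as np

def shift_and_add(lr_frames, offsets, scale=2):
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, offsets):
        # Place each low-resolution frame at its offset on the HR grid.
        ys = np.arange(h) * scale + dy
        xs = np.arange(w) * scale + dx
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)  # unfilled HR pixels stay zero

rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(4)]
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # sub-grid shifts on the HR grid
hr = shift_and_add(frames, offsets)
```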
- Published
- 2015
- Full Text
- View/download PDF
48. Basic Study of Automated Diagnosis of Viral Plant Diseases Using Convolutional Neural Networks
- Author
-
Hiroyuki Uga, Hitoshi Iyatomi, Yusuke Kawasaki, and Satoshi Kagiwada
- Subjects
Diagnostic methods ,business.industry ,Computer science ,Artificial intelligence ,business ,Machine learning ,computer.software_genre ,Convolutional neural network ,Class (biology) ,computer ,Melon yellow spot virus ,Plant disease - Abstract
Detecting plant diseases is usually difficult without an expert's knowledge, so fast and accurate automated diagnostic methods are highly desired in agricultural fields. Several studies on automated plant disease diagnosis have been conducted using machine learning methods; however, with these methods it can be difficult to detect regions of interest (ROIs) and to design efficient hand-crafted features. In this study, we present a novel plant disease detection system based on convolutional neural networks (CNNs). Using only training images, a CNN can automatically acquire the features required for classification and achieve high classification performance. We used a total of 800 cucumber leaf images to train the CNN using our innovative techniques. Under a 4-fold cross-validation strategy, the proposed CNN-based system, which also extends the training dataset by generating additional images, achieves an average accuracy of 94.9% in classifying cucumbers into two typical disease classes and a non-diseased class.
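The dataset-expansion idea mentioned above can be sketched with simple geometric augmentation; the paper's exact recipe is not reproduced here.

```python
import numpy as np

def augment(image):
    out = []
    for k in range(4):              # 0/90/180/270-degree rotations
        rot = np.rot90(image, k)
        out.append(rot)
        out.append(np.fliplr(rot))  # plus a mirrored copy of each rotation
    return out

leaf = np.random.rand(64, 64, 3)
print(len(augment(leaf)), "images from one original")  # 8
```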
- Published
- 2015
- Full Text
- View/download PDF
49. Quantitative assessment of tumour extraction from dermoscopy images and evaluation of computer-based extraction methods for an automatic melanoma diagnostic system
- Author
-
Seiichiro Kobayashi, Ayako Miyake, Hitoshi Iyatomi, H. Peter Soyer, Giuseppe Argenziano, Masayuki Kimoto, Hiroshi Oka, Akiko Tanikawa, Jun Yamagami, Koichi Ogawa, Masaru Tanaka, Masafumi Hagiwara, and Masataka Saito
- Subjects
Cancer Research ,medicine.medical_specialty ,Skin Neoplasms ,Diagnostic accuracy ,Dermatology ,Diagnostic system ,Sensitivity and Specificity ,Image Processing, Computer-Assisted ,medicine ,Quantitative assessment ,Humans ,skin and connective tissue diseases ,Melanoma ,Receiver operating characteristic ,business.industry ,Computer based ,Reproducibility of Results ,Pattern recognition ,Thresholding ,Surgery ,ROC Curve ,Oncology ,Area Under Curve ,Extraction methods ,Artificial intelligence ,Precision and recall ,business ,Algorithms - Abstract
The aims of this study were to provide a quantitative assessment of the tumour area extracted by dermatologists and to evaluate computer-based extraction methods for dermoscopy images, with the goal of refining a computer-based melanoma diagnostic system. Dermoscopic images of 188 Clark naevi, 56 Reed naevi and 75 melanomas were examined. Five dermatologists manually drew the border of each lesion with a tablet computer. The inter-observer variability was evaluated and the standard tumour area (STA) for each dermoscopy image was defined. Manual extractions by 10 non-medical individuals and by two computer-based methods were evaluated with STA-based assessment criteria: precision and recall. Our new computer-based method introduced a region-growing approach in order to yield results close to those obtained by dermatologists. The effectiveness of our extraction method with regard to diagnostic accuracy was evaluated. Two linear classifiers were built using the results of the conventional and the new computer-based tumour area extraction methods. The final diagnostic accuracy was evaluated by drawing the receiver operating characteristic (ROC) curve of each classifier, and the area under each curve was measured. The standard deviations of the tumour area extracted by the five dermatologists and the 10 non-medical individuals were 8.9% and 10.7%, respectively. After assessment of the extraction results by dermatologists, the STA was defined as the area that was selected by more than two dermatologists. Dermatologists selected the melanoma area with statistically smaller divergence than that of Clark naevus or Reed naevus (P = 0.05). By contrast, non-medical individuals did not show this difference. Our new computer-based extraction algorithm performed better overall (precision, 94.1%; recall, 95.3%) than the conventional thresholding method (precision, 99.5%; recall, 87.6%). These results indicate that our new algorithm extracted a tumour area close to that obtained by dermatologists and, in particular, that the border part of the tumour was adequately extracted. With this refinement, the area under the ROC curve increased from 0.795 to 0.875, and the diagnostic accuracy showed an increase of approximately 20% in specificity at a sensitivity of 80%. It can be concluded that our computer-based tumour extraction algorithm extracted almost the same area as that obtained by dermatologists and provided improved computer-based diagnostic accuracy.
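A minimal region-growing sketch in the spirit of the new extraction method: starting from a seed inside the tumour, the region grows while neighbouring pixels stay within a tolerance of the region's running mean. The seed choice and tolerance are illustrative assumptions.

```python
import numpy as np

def region_grow(gray, seed, tol=0.1):
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    stack, total, count = [seed], 0.0, 0
    while stack:
        y, x = stack.pop()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        mean = total / count if count else gray[seed]  # running region mean
        if abs(gray[y, x] - mean) > tol:
            continue  # pixel too different from the region: stop growing here
        mask[y, x] = True
        total, count = total + gray[y, x], count + 1
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return mask

gray = np.ones((64, 64))
gray[20:44, 20:44] = 0.3  # dark dummy tumour on bright skin
mask = region_grow(gray, seed=(32, 32))
print(mask.sum(), "pixels extracted")
```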
- Published
- 2006
- Full Text
- View/download PDF
50. Multipurpose image recognition based on active search and adaptive fuzzy inference neural network
- Author
-
Hitoshi Iyatomi and Masafumi Hagiwara
- Subjects
Computational Theory and Mathematics ,Hardware and Architecture ,Information Systems ,Theoretical Computer Science - Published
- 2006
- Full Text
- View/download PDF