65 results for "Ayman M. Eldeib"
Search Results
2. Studying the effects of haplotype partitioning methods on the RA-associated genomic results from the North American Rheumatoid Arthritis Consortium (NARAC) dataset
- Author
-
Mohamed N. Saad, Mai S. Mabrouk, Ayman M. Eldeib, and Olfat G. Shaker
- Subjects
Medicine (General), R5-920, Science (General), Q1-390
- Abstract
The human genome, which includes thousands of genes, represents a big data challenge. Rheumatoid arthritis (RA) is a complex autoimmune disease with a genetic basis. Many single-nucleotide polymorphism (SNP) association methods partition a genome into haplotype blocks. The aim of this genome-wide association study (GWAS) was to select the most appropriate haplotype block partitioning method for the North American Rheumatoid Arthritis Consortium (NARAC) dataset. The methods applied to the NARAC dataset were the individual SNP approach and three haplotype block methods: the four-gamete test (FGT), the confidence interval test (CIT), and the solid spine of linkage disequilibrium (SSLD). The strength of the association between each biomarker and RA was measured by the P-value after Bonferroni correction, along with other parameters used to compare the output of each haplotype block method. This work compares the individual SNP approach with the three haplotype block methods to identify the method that, applied alone, can detect all the significant SNPs. The GWAS results obtained from the NARAC dataset with the different methods are presented. The individual SNP, CIT, FGT, and SSLD methods detected 541, 1516, 1551, and 1831 RA-associated SNPs, respectively, and the individual SNP, FGT, CIT, and SSLD methods detected 65, 156, 159, and 450 significant SNPs, respectively, that were not detected by the other methods. Three hundred eighty-three SNPs were discovered by both the haplotype block methods and the individual SNP approach, while 1021 SNPs were discovered by all three haplotype block methods. The 383 SNPs detected by all the methods are promising candidates for studying RA susceptibility. A hybrid technique involving all four methods should be applied to detect the significant SNPs associated with RA in the NARAC dataset, although the SSLD method may be preferred for its advantages when only one method is used.
Keywords: Confidence interval test, Four-gamete test, Genome-wide association study, NARAC, Rheumatoid arthritis, Solid spine of linkage disequilibrium
- Published
- 2019
- Full Text
- View/download PDF
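The abstract above calls SNPs significant by their P-values after Bonferroni correction. A minimal sketch of that correction follows; the function names and P-values are illustrative, not taken from the NARAC results.

```python
# Sketch of the Bonferroni correction used to flag SNP-RA associations
# as significant. The P-values below are made up for illustration.

def bonferroni_threshold(alpha, n_tests):
    """Family-wise significance threshold: alpha divided by the test count."""
    return alpha / n_tests

def significant_snps(p_values, alpha=0.05):
    """Return indices of SNPs whose raw P-value survives the correction."""
    cutoff = bonferroni_threshold(alpha, len(p_values))
    return [i for i, p in enumerate(p_values) if p < cutoff]

# Hypothetical raw P-values for five SNPs:
p_vals = [1e-9, 0.02, 0.004, 0.3, 1e-6]
print(significant_snps(p_vals))  # SNPs 0, 2 and 4 survive 0.05/5 = 0.01
```

In a real GWAS the divisor is the full number of tested SNPs (tens of thousands), which is what makes the correction "stringent".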
3. Identification of rheumatoid arthritis biomarkers based on single nucleotide polymorphisms and haplotype blocks: A systematic review and meta-analysis
- Author
-
Mohamed N. Saad, Mai S. Mabrouk, Ayman M. Eldeib, and Olfat G. Shaker
- Subjects
Haplotype block, Linkage disequilibrium, Major histocompatibility complex, Rheumatoid arthritis, Single nucleotide polymorphism, Medicine (General), R5-920, Science (General), Q1-390
- Abstract
The genetics of autoimmune diseases is a growing domain that is producing biomarker results at a rapid pace. The exact cause of Rheumatoid Arthritis (RA) is unknown, but it is thought to have both a genetic and an environmental basis. Genetic biomarkers are capable of changing the management of RA by allowing not only the detection of susceptible individuals but also early diagnosis, evaluation of disease severity, selection of therapy, and monitoring of response to therapy. This review is concerned not only with the genetic biomarkers of RA but also with the methods of identifying them. Many of the identified genetic biomarkers of RA were found in populations of European and Asian ancestries, so the study of additional human populations may yield novel results. Most researchers in the field of identifying RA biomarkers use single nucleotide polymorphism (SNP) approaches to express the significance of their results, although haplotype block methods are expected to play a complementary role in the future of the field.
- Published
- 2016
- Full Text
- View/download PDF
4. EEG Signal Classification Using Convolutional Neural Networks on Combined Spatial and Temporal Dimensions for BCI Systems.
- Author
-
Ayman M. Anwar and Ayman M. Eldeib
- Published
- 2020
- Full Text
- View/download PDF
5. Protein Subcellular Localization Prediction Based on Internal Micro-similarities of Markov Chains.
- Author
-
Asem Alaa, Ayman M. Eldeib, and Ahmed A. Metwally
- Published
- 2019
- Full Text
- View/download PDF
6. A Two Stage Heuristics for Improvement of Existing Multi Floor Healthcare Facility Layout.
- Author
-
Ahmed El Kady, Sherif A. Sami, and Ayman M. Eldeib
- Published
- 2017
- Full Text
- View/download PDF
7. Alzheimer's disease diagnosis from diffusion tensor images using convolutional neural networks.
- Author
-
Eman N Marzban, Ayman M Eldeib, Inas A Yassine, Yasser M Kadah, and Alzheimer’s Disease Neurodegenerative Initiative
- Subjects
Medicine, Science
- Abstract
Machine learning algorithms are being implemented at an escalating rate to classify and/or predict the onset of some neurodegenerative diseases, including Alzheimer's Disease (AD); this trend can be attributed to the abundance of data and powerful computers. The objective of this work was to deliver a robust classification system for AD and Mild Cognitive Impairment (MCI) against healthy controls (HC) using a low-cost network in terms of shallow architecture and processing. In this study, the dataset was downloaded from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The classification methodology implemented was the convolutional neural network (CNN), where the diffusion maps and gray-matter (GM) volumes were the input images. The number of scans included was 185, 106, and 115 for HC, MCI, and AD, respectively. A ten-fold cross-validation scheme was adopted, and the stacked mean diffusivity (MD) and GM volume produced an AUC of 0.94 and 0.84, an accuracy of 93.5% and 79.6%, a sensitivity of 92.5% and 62.7%, and a specificity of 93.9% and 89% for AD/HC and MCI/HC classification, respectively. This work elucidates the impact of incorporating data from different imaging modalities, i.e., structural Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), where deep learning was employed for classification. To the best of our knowledge, this is the first study to assess the impact of having more than one scan per subject and to propose the proper maneuver to confirm the robustness of the system. The results were competitive with the existing literature, which paves the way for improving medications that could slow down the progression of AD or prevent it.
- Published
- 2020
- Full Text
- View/download PDF
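The abstract reports accuracy, sensitivity, and specificity for each classification task; all three follow directly from a binary confusion matrix. A short sketch with made-up counts (not the ADNI results):

```python
# Sketch of the reported evaluation metrics computed from a binary
# confusion matrix. The counts below are illustrative only.

def metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (e.g. AD recall)
    specificity = tn / (tn + fp)   # true-negative rate (HC recall)
    return accuracy, sensitivity, specificity

acc, sens, spec = metrics(tp=37, fp=2, tn=31, fn=3)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # 0.932 0.925 0.939
```

With ten-fold cross-validation, such metrics are averaged over the ten held-out folds.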
8. Interactive high resolution reconstruction of 3D ultrasound volumes on the GPU.
- Author
-
Marwan Abdellak, Asem Abdelaziz, and Ayman M. Eldeib
- Published
- 2016
- Full Text
- View/download PDF
9. Motor imagery based brain computer interface using transform domain features.
- Author
-
Ahmed M. Elbaz, Ahmed T. Ahmed, Ayman M. Mohamed, Mohamed A. Oransa, Khaled S. Sayed, and Ayman M. Eldeib
- Published
- 2016
- Full Text
- View/download PDF
10. Parallel generation of digitally reconstructed radiographs on heterogeneous multi-GPU workstations.
- Author
-
Marwan Abdellah, Asem Abdelaziz, Eslam Ali, Sherief Abdelaziz, Abdel Rahman Sayed, Mohamed I. Owis, and Ayman M. Eldeib
- Published
- 2016
- Full Text
- View/download PDF
11. Efficient rendering of digitally reconstructed radiographs on heterogeneous computing architectures using central slice theorem.
- Author
-
Marwan Abdellah, Mohamed Abdallah, Mohamed Alzanati, and Ayman M. Eldeib
- Published
- 2016
- Full Text
- View/download PDF
12. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability.
- Author
-
Marwan Abdellah, Ayman M. Eldeib, and Mohamed I. Owis
- Published
- 2015
- Full Text
- View/download PDF
13. Accelerating DRR generation using Fourier slice theorem on the GPU.
- Author
-
Marwan Abdellah, Ayman M. Eldeib, and Mohamed I. Owis
- Published
- 2015
- Full Text
- View/download PDF
14. An integrated evaluation for the performance of clinical engineering department.
- Author
-
Ahmed M. Yousry, Bassem K. Ouda, and Ayman M. Eldeib
- Published
- 2014
- Full Text
- View/download PDF
15. Interactive telemedicine solution based on a secure mHealth application.
- Author
-
Ayman M. Eldeib
- Published
- 2014
- Full Text
- View/download PDF
16. Comparative study for haplotype block partitioning methods - Evidence from chromosome 6 of the North American Rheumatoid Arthritis Consortium (NARAC) dataset.
- Author
-
Mohamed N Saad, Mai S Mabrouk, Ayman M Eldeib, and Olfat G Shaker
- Subjects
Medicine, Science
- Abstract
Haplotype-based methods compete with "one-SNP-at-a-time" approaches as the preferred choice for association studies. Chromosome 6 contains most of the known genetic biomarkers for rheumatoid arthritis (RA), so it serves as a benchmark for testing haplotype methods. The aim of this study was to test the North American Rheumatoid Arthritis Consortium (NARAC) dataset to determine whether haplotype block methods or single-locus approaches alone can sufficiently identify the significant single nucleotide polymorphisms (SNPs) associated with RA, and whether a single haplotype block method would suffice for partitioning chromosome 6 of the NARAC dataset. In the NARAC dataset, chromosome 6 comprises 35,574 SNPs for 2,062 individuals (868 cases, 1,194 controls). The individual SNP approach and three haplotype block methods were applied to the NARAC dataset to identify the RA biomarkers. We employed three haplotype partitioning methods: the confidence interval test (CIT), the four-gamete test (FGT), and the solid spine of linkage disequilibrium (SSLD). P-values after stringent Bonferroni correction for multiple testing were measured to assess the strength of association between the genetic variants and RA susceptibility. Moreover, the block size (in base pairs (bp) and number of SNPs included), the number of blocks, the percentage of SNPs not covered by the block method, the percentage of significant blocks out of the total number of blocks, and the numbers of significant haplotypes and SNPs were used to compare the three haplotype block methods. The individual SNP, CIT, FGT, and SSLD methods detected 432, 1,086, 1,099, and 1,322 associated SNPs, respectively. Each method identified significant SNPs that were not detected by any other method (individual SNP: 12, FGT: 37, CIT: 55, and SSLD: 189 SNPs). A total of 916 SNPs were discovered by all three haplotype block methods, and 367 SNPs were discovered by both the haplotype block methods and the individual SNP approach.
The P-values of these 367 SNPs were lower than those of the SNPs uniquely detected by only one method. The 367 SNPs detected by all the methods represent promising candidates for RA susceptibility and should be further investigated for the European population. A hybrid technique including the four methods should be applied to detect the significant SNPs associated with RA for chromosome 6 of the NARAC dataset; moreover, the SSLD method may be preferred for its favorable properties if only one method is to be selected.
- Published
- 2018
- Full Text
- View/download PDF
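One of the partitioning methods named in the abstract, the four-gamete test (FGT), keeps a pair of biallelic SNPs in the same block only when fewer than all four possible two-locus haplotypes (gametes) are observed. A minimal sketch on hypothetical phased haplotypes:

```python
# Minimal sketch of the four-gamete test (FGT). The haplotype strings
# below are hypothetical, not NARAC data.

def passes_fgt(haplotypes, i, j):
    """True if SNP pair (i, j) shows at most 3 of the 4 possible gametes."""
    gametes = {(h[i], h[j]) for h in haplotypes}
    return len(gametes) < 4

# Each string is one phased haplotype over three SNPs (0/1 alleles):
haps = ["001", "011", "101", "111", "000"]
print(passes_fgt(haps, 0, 1), passes_fgt(haps, 1, 2))  # False True
```

A block partitioner based on this test would extend a block along the chromosome while every SNP pair inside it passes, since a fourth gamete is evidence of historical recombination between the two loci.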
17. Volume Registration by Surface Point Signature and Mutual Information Maximization with Applications in Intra-Operative MRI Surgeries.
- Author
-
Ayman M. Eldeib, Sameh M. Yamany, and Aly A. Farag
- Published
- 2000
- Full Text
- View/download PDF
18. Multi-modal medical volumes fusion by surface matching.
- Author
-
Ayman M. Eldeib, Sameh M. Yamany, and Aly A. Farag
- Published
- 1999
- Full Text
- View/download PDF
19. Contactless Vital Signs Monitoring for Public Health Welfare
- Author
-
Laila Abbas, Soha Samy, Reem Ghazal, Ayman M. Eldeib, and Sherif H. ElGohary
- Published
- 2021
20. 3D visualization for medical volume segmentation validation.
- Author
-
Ayman M. Eldeib
- Published
- 2002
- Full Text
- View/download PDF
21. EEG Signal Classification Using Convolutional Neural Networks on Combined Spatial and Temporal Dimensions for BCI Systems
- Author
-
Ayman M. Eldeib and Ayman M. Anwar
- Subjects
Computer science, Electroencephalography, Convolutional neural network, Machine Learning, Brain-computer interface, Contextual image classification, Artificial neural network, Deep learning, Pattern recognition, Topographic map, Frequency domain, Brain-Computer Interfaces, Artificial intelligence, Neural Networks (Computer), Algorithms
- Abstract
EEG signal classification is an important task in building an accurate Brain Computer Interface (BCI) system. Many machine learning and deep learning approaches have been used to classify EEG signals, and many studies have employed time and frequency domain features for this task. On the other hand, only a very limited number of studies combine the spatial and temporal dimensions of the EEG signal. Brain dynamics are very complex across different mental tasks, so it is difficult to design efficient algorithms with features based on prior knowledge. Therefore, in this study, we utilized the 2D AlexNet Convolutional Neural Network (CNN) to learn EEG features across different mental tasks without prior knowledge. First, this study maps the spatial and temporal dimensions of EEG signals onto a 2D EEG topographic map. Second, topographic maps at different time indices were cascaded to populate a 2D image for a given time window. Finally, the topographic maps enabled the AlexNet to learn features from the spatial and temporal dimensions of the brain signals. The classification performance of the proposed method was evaluated on a multiclass dataset from BCI Competition IV dataset 2a. The proposed system obtained an average classification accuracy of 81.09%, outperforming the previous state-of-the-art methods by a margin of 4% on the same dataset. The results showed that converting the EEG classification problem from a 1D time-series problem to a 2D image classification problem improves the classification accuracy for BCI systems. Moreover, our EEG topographic maps enabled the CNN to learn subtle features from the spatial and temporal dimensions, which represent mental tasks better than individual time or frequency domain features.
- Published
- 2020
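The abstract describes placing EEG channel values on a 2D topographic map and cascading the maps at successive time indices into one image. A toy sketch of that construction, assuming a hypothetical 2x2 electrode layout (a real montage projects many channels onto interpolated scalp coordinates):

```python
# Toy sketch of building a cascaded topographic image from multichannel
# EEG samples. The 2x2 grid and sample values are hypothetical.

def topo_map(sample, grid):
    """Place one time sample's channel values onto a 2D grid layout."""
    return [[sample[ch] for ch in row] for row in grid]

def cascade(samples, grid):
    """Concatenate the maps of successive time indices side by side."""
    maps = [topo_map(s, grid) for s in samples]
    return [sum((m[r] for m in maps), []) for r in range(len(grid))]

grid = [[0, 1],   # channel indices laid out as a 2x2 "scalp" grid
        [2, 3]]
samples = [[0.1, 0.2, 0.3, 0.4],   # t = 0
           [0.5, 0.6, 0.7, 0.8]]   # t = 1
print(cascade(samples, grid))  # 2 rows x 4 cols: two maps joined in time
```

The resulting 2D image is what a CNN such as AlexNet can then consume, letting convolutional filters span both electrode position and time.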
22. Cultural diversity in the adoption of open education in Middle Eastern universities: collectivism and power distance
- Author
-
Daniel Villar-Onrubia, Ayman M. Eldeib, Katherine Wimpenny, Isidro Maya Jariego, Mohammed Aldaoud, Omar Hiari, Romina Cachia, Adiy Tweissi, Universidad de Sevilla. Departamento de Psicología Social, and Universidad de Sevilla. HUM059: Laboratorio de Redes Personales y Comunidades
- Subjects
Sociology and Political Science, Higher education, Participatory action research, Open education, Middle East, Cultural diversity, Community readiness, Hofstede's cultural dimensions theory, Collectivism, Power distance, Individualism-Collectivism, Open educational resources (OER), Open education practices, Mediterranean
- Abstract
In this paper, we examine how open education is adopted in the Middle East region in the context of a European-funded project for capacity building in Higher Education. Basing our study on Hofstede's model, we examine how culture, in particular collectivism and power distance, influences the adoption of open education. In addition, we look at the relationship between the internationalisation of tertiary education and open education. Based on in-depth interviews, a focus group, and participatory action research with experts in the field from Egypt, Jordan, Lebanon and Palestine, our findings suggest that beyond the technical aspect and the development of content, the adoption of open education in the Middle East region is influenced by cultural aspects, which need to be taken into consideration. As an emerging sub-culture, open education has the potential to transform and change some cultural barriers related to both power distance and collectivist cultures.
- Published
- 2020
23. Alzheimer's disease diagnosis from diffusion tensor images using convolutional neural networks
- Author
-
Yasser M. Kadah, Eman N. Marzban, Inas A. Yassine, and Ayman M. Eldeib
- Subjects
Support Vector Machines, Computer science, Diffusion map, Hippocampus, Alzheimer Disease, Convolutional neural network, Brain Mapping, Diagnostic Radiology, Machine Learning, Gray Matter, Cognitive Dysfunction, Radiology and Imaging, Brain, Neurodegenerative Diseases, Magnetic Resonance Imaging, Diffusion Tensor Imaging, Neurology, Disease Progression, Neuroimaging, Deep Learning, Artificial Intelligence, Dementia, Neural Networks (Computer), Diffusion MRI, Neuroscience, Algorithms, Research Article
- Abstract
Machine learning algorithms are being implemented at an escalating rate to classify and/or predict the onset of some neurodegenerative diseases, including Alzheimer's Disease (AD); this trend can be attributed to the abundance of data and powerful computers. The objective of this work was to deliver a robust classification system for AD and Mild Cognitive Impairment (MCI) against healthy controls (HC) using a low-cost network in terms of shallow architecture and processing. In this study, the dataset was downloaded from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The classification methodology implemented was the convolutional neural network (CNN), where the diffusion maps and gray-matter (GM) volumes were the input images. The number of scans included was 185, 106, and 115 for HC, MCI, and AD, respectively. A ten-fold cross-validation scheme was adopted, and the stacked mean diffusivity (MD) and GM volume produced an AUC of 0.94 and 0.84, an accuracy of 93.5% and 79.6%, a sensitivity of 92.5% and 62.7%, and a specificity of 93.9% and 89% for AD/HC and MCI/HC classification, respectively. This work elucidates the impact of incorporating data from different imaging modalities, i.e., structural Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), where deep learning was employed for classification. To the best of our knowledge, this is the first study to assess the impact of having more than one scan per subject and to propose the proper maneuver to confirm the robustness of the system. The results were competitive with the existing literature, which paves the way for improving medications that could slow down the progression of AD or prevent it.
- Published
- 2020
24. BCI Integrated with VR for Rehabilitation
- Author
-
Alshaimaa Aamer, Amira Esawy, Ayman M. Anwar, Omnia Swelam, Toaa Nabil, and Ayman M. Eldeib
- Subjects
Rehabilitation, Computer science, Feature extraction, Virtual reality, Support vector machine, Motor imagery, Feature selection, Brain-computer interface
- Abstract
Stroke is one of the leading causes of paralysis. It is defined as the sudden death of some brain cells due to lack of oxygen when blood flow to the brain is lost. Stroke leaves a large number of people with cognitive, affective and motor impairments, depending on assistance in their daily lives. Many studies have shown that motor imagery (MI) based Brain Computer Interface (BCI) is an effective technique for post-stroke rehabilitation. It requires a real-time response with high accuracy. In this paper, we present a real-time rehabilitation system for post-stroke patients. The system consists of a Virtual Reality (VR) 3D game controlled by a BCI system. We utilized various schemes of pre-processing, feature extraction, feature selection and classification to improve the performance of the system on dataset IV 2a [Four class motor imagery (001-2014), Institute for Knowledge Discovery, Graz University of Technology]. We achieved a maximum accuracy of 79% for the signal-processing part using a CSP feature extractor and an SVM classifier, with a signal-processing time of 0.02 seconds and a response time of 0.6 seconds for the whole system in its current state.
- Published
- 2019
25. EEG-based motor imagery classification using digraph Fourier transforms and extreme learning machines
- Author
-
M. H. Said, Ayman M. Eldeib, Muhammad A. Rushdi, and Mahmoud H. Annaby
- Subjects
Polynomial, Computer science, Biomedical Engineering, Health Informatics, Electroencephalography, Motor imagery, Extreme learning machine, Brain-computer interface, Pattern recognition, Directed graph, Autoregressive model, Signal Processing, Graph (abstract data type), Artificial intelligence
- Abstract
Motor imagery patterns are extensively exploited in brain-computer interface systems to control external devices without using peripheral nerves or muscles. Classification of these patterns can be based on the associated electroencephalogram (EEG) signals. Recent approaches addressed this classification problem through techniques exploiting information mainly from one or two EEG channels; however, these approaches overlook correlations between multiple EEG channels. In this paper, we create motor-imagery classification systems based on graph-theoretic models of multichannel EEG signals. In particular, multivariate autoregressive models are used to establish the relations between the EEG channels and construct directed graph signals. We also constructed undirected graph signal models with Gaussian-weighted distances between graph nodes. Then, a novel variant of the graph Fourier transform is applied to the directed and undirected graph models with and without edge weights, and distinctive features are extracted from the transform coefficients. Additional features were computed using common spatial patterns, polynomial representations and principal components of EEG signals. Significant performance improvements were achieved using extreme learning machine (ELM) classifiers. For Dataset Ia of the BCI Competition 2003, our approach led to a classification accuracy of 96.58% with fully-connected weighted directed graph features computed on the delta-band EEG signals. For the six subjects of Dataset 1 of the BCI Competition IV, our approach compared well with other state-of-the-art methods in the alpha and beta EEG bands.
- Published
- 2021
26. Analyzing cytogenetic chromosomal aberrations on fibrolamellar hepatocellular carcinoma detected by single-nucleotide polymorphism array
- Author
-
Ayman M. Eldeib, Esraa M. Hashem, and Mai S. Mabrouk
- Subjects
Liver tumor, Genome, DNA sequencing, Gene expression profiling, Fibrolamellar hepatocellular carcinoma, Artificial Intelligence, Hepatocellular carcinoma, Cancer research, Target signature, Gene, Software
- Abstract
Fibrolamellar hepatocellular carcinoma is a unique malignant liver tumor type that arises in young adults and children. It is an uncommon subtype of hepatocellular carcinoma that remains poorly documented. Knowledge of cytogenetic changes in fibrolamellar hepatocellular carcinoma has lately lagged behind the information obtained from other entities of hepatocellular carcinoma. Gene expression profiling may yield new biomarkers that improve diagnostic precision for distinguishing fibrolamellar hepatocellular carcinoma. The molecular cytogenetic approach permits positional identification of gains, amplifications, and deletions of DNA sequences across the whole tumor genome, in order to search for recurrent and particular cytogenetic changes in human fibrolamellar hepatocellular carcinoma. In this work, 13 cell lines of fibrolamellar carcinoma and 30 hepatocellular carcinoma samples were examined by a single-nucleotide polymorphism array using two techniques to improve the accuracy of the results. The majority of the abnormalities found in the fibrolamellar hepatocellular carcinoma positive cases were gains in 1q, 4q, 6q, 7p, 8q, 17q, and 20q and losses in 1p, 4p-q, 8p, 11p, 13q, 17p, 18q, 19p, and 22q. The most frequent were focal amplifications at 1q (in 54% of the 13 samples), 4q (54%), and 7p (46%), and deletions at 19p13 (28%). The study revealed three distinct structural variations related to the genes MDM4, PRDM5, and WHSC1; these genes form a novel target signature that can help predict the survival of patients with fibrolamellar hepatocellular carcinoma.
- Published
- 2017
27. Breast Cancer Classification in Ultrasound Images using Transfer Learning
- Author
-
Mohammed Mohammed Mohammed Gomaa, Muhammad A. Rushdi, Ahmed Hijab, and Ayman M. Eldeib
- Subjects
Training set, Computer science, Deep learning, Ultrasound, Pattern recognition, Overfitting, Convolutional neural network, Medical imaging data, Transfer learning, Breast cancer classification
- Abstract
Computer-aided detection of malignant breast tumors in ultrasound images has been receiving growing attention. In this paper, we propose a deep learning methodology to tackle this problem. The training data, which contain several hundred images of benign and malignant cases, were used to train a deep convolutional neural network (CNN). Three training approaches are proposed: a baseline approach in which the CNN architecture is trained from scratch, a transfer-learning approach in which the pre-trained VGG16 CNN architecture is further trained with the ultrasound images, and a fine-tuned learning approach in which the deep learning parameters are fine-tuned to overcome overfitting. The experimental results demonstrate that the fine-tuned model, pre-trained on ultrasound images, had the best performance (0.97 accuracy, 0.98 AUC). Creating pre-trained models using medical imaging data would certainly improve deep learning outcomes in biomedical applications.
- Published
- 2019
28. Protein Subcellular Localization Prediction Based on Internal Micro-similarities of Markov Chains
- Author
-
Ahmed A. Metwally, Ayman M. Eldeib, and Asem Alaa
- Subjects
Computer science, Feature vector, Markov process, Computational biology, Markov model, Proteomics, Sequence Analysis (Protein), Databases (Protein), Markov chain, Drug discovery, Probabilistic logic, Computational Biology, Proteins, Protein engineering, Subcellular localization, Protein subcellular localization prediction, Markov Chains, Amino acid, Protein Transport, Software, Subcellular Fractions
- Abstract
Elucidating protein subcellular localization is an essential topic in proteomics research due to its importance in the process of drug discovery. Unfortunately, experimentally uncovering protein subcellular targets is an arduous process that may not result in a successful localization. In contrast, computational methods can rapidly predict protein subcellular targets and are an efficient alternative to experimental methods for unannotated proteins. In this work, we introduce a new method to predict protein subcellular localization that increases the predictive power of generative probabilistic models while preserving their explanatory benefit. Our method exploits Markov models to produce a feature vector that records micro-similarities between the underlying probability distributions of a given sequence and their counterparts in reference models. Compared to ordinary Markov chain inference, we show that our method improves overall accuracy by 10% under 10-fold cross-validation on a dataset consisting of 10 subcellular locations. The source code is publicly available at https://github.com/aametwally/MC_MicroSimilarities.
- Published
- 2019
29. Using open education practices across the Mediterranean for intercultural curriculum development in higher education
- Author
-
Ayman M. Eldeib, Fabio Nascimbeni, Ahmed Almakari, Isidro Maya Jariego, Saida Affouneh, Katherine Wimpenny, and Universidad de Sevilla. Departamento de Psicología Social
- Subjects
Mediterranean, Intercultural, Higher education, Dewesternization, Open education, International collaboration, Curriculum development, Open education practices, Education
- Abstract
This multinational authored article presents the findings and recommendations of a three-year, European-funded project ‘OpenMed: Opening up education in South Mediterranean countries’, which brought together five higher education partners from Europe and nine from the South Mediterranean region. This was the first cross-European initiative to promote the adoption of Open Educational Practices (OEP) within higher education involving educational institutions in each of the countries. A three-phase project design included gathering and analyzing case studies of OEPs globally, and, in particular, in the South Mediterranean; the organization of regional forums to encourage priorities for change; and the multi-national design and pilot implementation of a ‘training of trainers’ course for academic capacity building in OEPs as part of curricula reform. We will discuss how the cultural approaches used among experts and project partners with different national, linguistic, and educational backgrounds have instigated change in policy and practice at a personal, institutional, and national level. © 2019, © 2019 Informa UK Limited, trading as Taylor & Francis Group.
- Published
- 2019
30. Decision Support System for Medical Equipment Failure Analysis
- Author
-
Ayman M. Eldeib, A. Osman, Neven Saleh, and Walid Al-Atabany
- Subjects
Decision support system ,021103 operations research ,Process (engineering) ,Computer science ,0211 other engineering and technologies ,Medical equipment management ,Medical equipment ,Analytic hierarchy process ,02 engineering and technology ,Plan (drawing) ,Set (abstract data type) ,Risk analysis (engineering) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Decision-making models - Abstract
Medical equipment management raises a range of complex problems, including those associated with the maintenance process. In developing countries, hospitals rarely implement a coherent medical equipment management plan. One of the most significant challenges is to distinguish medical equipment that requires repair from equipment that requires replacement. Thus, a Decision Support System (DSS) is required to manage this pitfall properly. In this paper, a multi-criteria decision making model, the Analytic Hierarchy Process (AHP), is presented to select an optimum maintenance strategy. With reference to the literature and experts' opinions, a set of criteria is employed to calculate a criticality score for each piece of equipment. The equipment is then ranked by score, and an optimum threshold is selected to differentiate between maintenance and replacement requirements. Fifty different types of medical equipment located in multiple public hospitals were used to validate the proposed model. Results show that the proposed model can efficiently differentiate between equipment that requires repair and equipment that needs to be scrapped.
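The AHP-based criticality scoring described in the abstract above can be sketched as follows. This is an illustration only: it uses the common geometric-mean approximation of the AHP priority vector rather than the exact eigenvector method, and the criteria and ratings are hypothetical, not the paper's.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the geometric-mean (row geometric mean) method."""
    pairwise = np.asarray(pairwise, dtype=float)
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

def criticality_scores(ratings, weights):
    """Weighted sum of per-criterion ratings (rows = equipment, cols = criteria)."""
    return np.asarray(ratings, dtype=float) @ weights
```

Equipment would then be sorted by score, with items above a chosen threshold flagged for replacement and the rest scheduled for maintenance.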
- Published
- 2018
31. Breast cancer classification using deep belief networks
- Author
-
Ayman M. Eldeib and Ahmed M. Abdel-Zaher
- Subjects
020205 medical informatics ,Computer science ,02 engineering and technology ,computer.software_genre ,Machine learning ,World health ,Deep belief network ,Breast cancer ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Artificial neural network ,business.industry ,General Engineering ,Cancer ,medicine.disease ,Backpropagation ,Computer Science Applications ,020201 artificial intelligence & image processing ,Data mining ,Artificial intelligence ,Differential diagnosis ,Breast cancer classification ,business ,computer ,Classifier (UML) - Abstract
We present a CAD scheme using a DBN unsupervised path followed by an NN supervised path. Our two-phase 'DBN-NN' classification accuracy is higher than using one phase. The overall accuracy of DBN-NN reaches 99.68% with 100% sensitivity and 99.47% specificity. DBN-NN was tested on the Wisconsin Breast Cancer Dataset (WBCD). DBN-NN results show classifier performance improvements over previous studies. Over the last decade, the ever-increasing world-wide demand for early detection of breast cancer at many screening sites and hospitals has resulted in the need for new research avenues. According to the World Health Organization (WHO), early detection of cancer greatly increases the chances of making the right decision on a successful treatment plan. Computer-Aided Diagnosis (CAD) systems are applied widely in the detection and differential diagnosis of many kinds of abnormalities. Therefore, improving the accuracy of CAD systems has become one of the major research areas. In this paper, a CAD scheme for the detection of breast cancer has been developed using a deep belief network unsupervised path followed by a back-propagation supervised path. The classifier is a back-propagation neural network with a Levenberg-Marquardt learning function, with weights initialized from the deep belief network path (DBN-NN). Our technique was tested on the Wisconsin Breast Cancer Dataset (WBCD). The classifier gives an accuracy of 99.68%, indicating promising results compared with previously published studies. The proposed system provides an effective classification model for breast cancer. In addition, we examined the architecture at several train-test partitions.
- Published
- 2016
32. Identification of rheumatoid arthritis biomarkers based on single nucleotide polymorphisms and haplotype blocks: A systematic review and meta-analysis
- Author
-
Ayman M. Eldeib, Olfat G. Shaker, Mohamed N. Saad, and Mai S. Mabrouk
- Subjects
0301 basic medicine ,Linkage disequilibrium ,Major histocompatibility complex ,Single-nucleotide polymorphism ,Review ,Bioinformatics ,03 medical and health sciences ,medicine ,SNP ,Rheumatoid arthritis ,General ,lcsh:Science (General) ,ComputingMethodologies_COMPUTERGRAPHICS ,lcsh:R5-920 ,Multidisciplinary ,biology ,business.industry ,Haplotype ,medicine.disease ,3. Good health ,Single nucleotide polymorphism ,Haplotype block ,030104 developmental biology ,Meta-analysis ,biology.protein ,Biomarker (medicine) ,business ,lcsh:Medicine (General) ,lcsh:Q1-390 - Abstract
The genetics of autoimmune diseases is a rapidly progressing domain that continues to yield outstanding biomarker results. The exact cause of Rheumatoid Arthritis (RA) is unknown, but it is thought to have both a genetic and an environmental basis. Genetic biomarkers are capable of changing the management of RA by allowing not only the detection of susceptible individuals, but also early diagnosis, evaluation of disease severity, selection of therapy, and monitoring of response to therapy. This review is concerned not only with the genetic biomarkers of RA but also with the methods of identifying them. Many of the identified genetic biomarkers of RA were found in populations of European and Asian ancestries; the study of additional human populations may yield novel results. Most researchers in the field of identifying RA biomarkers use single nucleotide polymorphism (SNP) approaches to express the significance of their results. However, haplotype block methods are expected to play a complementary role in the future of this field.
- Published
- 2016
33. Exploring intercultural learning through a blended course about open education practices across the Mediterranean
- Author
-
Katherine Wimpenny, Daniel Burgos, Fabio Nascimbeni, Ayman M. Eldeib, Stefania Aceto, Cristina Stefanelli, and Isidro Maya
- Subjects
open educational resources ,Training course ,05 social sciences ,Scopus(2) ,blended learning ,010501 environmental sciences ,01 natural sciences ,Open educational resources ,Course (navigation) ,Intercultural learning ,Blended learning ,Learning experience ,Open education ,intercultural learning ,0502 economics and business ,Pedagogy ,open education ,Sociology ,Palestine ,Mediterranean region ,050203 business & management ,0105 earth and related environmental sciences - Abstract
Conference paper: 2018 IEEE Global Engineering Education Conference, Emerging Trends and Challenges of Engineering Education (EDUCON 2018), Santa Cruz de Tenerife, Canary Islands, Spain, 17-20 April 2018; Volume 2018-April, pp. 285-288. This paper presents and reflects upon the training course organized by the OpenMed project, aimed at building capacity in Open Educational Resources (OER) and Open Education approaches across universities in the South Mediterranean, namely in Egypt, Jordan, Lebanon, Morocco and Palestine. The course, which is currently being piloted among 10 universities, represents an example of an intercultural and multilingual learning experience, from the way in which it was conceived and developed to the way it is being delivered. In this paper, we reflect on the challenges and benefits of adopting such an open approach towards intercultural learning.
- Published
- 2018
34. Unsupervised processing methods for motor imagery-based brain-computer interface
- Author
-
Ola Sarhan, Manal Abdel Wahed, and Ayman M. Eldeib
- Subjects
Computer science ,business.industry ,Interface (computing) ,SIGNAL (programming language) ,Process (computing) ,Wavelet transform ,Experimental data ,02 engineering and technology ,Machine learning ,computer.software_genre ,Motor imagery ,020204 information systems ,Frequency domain ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer ,Brain–computer interface - Abstract
Brain-computer interface (BCI) research is growing rapidly, and numerous innovative techniques have been proposed for implementing BCIs. One of the major drawbacks in BCI applications is the difficulty of extracting a reliable response from a single trial. Hence, many trials are performed for every element to decrease the prediction error, which leads to long delays before the user's intent can be predicted accurately and requires intensive user training. The objective of this paper is to investigate a new technique for processing brain signals in real time without any prior training. The new approach is applied to experimental data for a motor imagery-based BCI and is compared to the classification results of the same data using conventional processing techniques that require prior calibration. Different classification methodologies were used in both the time and frequency domains. We conclude that the wavelet transform gives the best performance, reaching 82.14%. These promising results suggest that this approach can reach accuracies not far from those obtained with training.
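Wavelet-based feature extraction of the kind evaluated in the abstract above can be sketched in a few lines. The abstract does not specify the mother wavelet, so the Haar wavelet is used here purely for simplicity; the relative band-energy features are a common choice but an assumption on our part.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (length must be even)."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def band_energy_features(trial, levels=3):
    """Relative energy of the detail coefficients at each decomposition level,
    a compact feature vector for a single EEG trial."""
    feats = []
    total = np.sum(np.asarray(trial, dtype=float) ** 2) or 1.0
    x = trial
    for _ in range(levels):
        x, d = haar_dwt(x)
        feats.append(np.sum(d ** 2) / total)
    return np.array(feats)
```

Because the Haar transform is orthogonal, the approximation and detail energies at each level sum to the input energy, which makes the relative energies well-behaved features in [0, 1].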
- Published
- 2018
35. Zero training processing technique for P300-based brain-computer interface
- Author
-
Ola Sarhan, Manal Abdel Wahed, and Ayman M. Eldeib
- Subjects
Computer science ,business.industry ,Probabilistic logic ,Experimental data ,Pattern recognition ,02 engineering and technology ,Construct (python library) ,Measure (mathematics) ,03 medical and health sciences ,0302 clinical medicine ,020204 information systems ,Principal component analysis ,Singular value decomposition ,0202 electrical engineering, electronic engineering, information engineering ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Brain–computer interface ,Block (data storage) - Abstract
In the past, recognition in P300-based Brain-Computer Interfaces (BCI) depended on experience gained through calibration or training sessions before use, applying supervised training sets to construct the classification model. In this paper, a new technique that requires no prior training has been introduced for BCI data. The fundamental rule of this unsupervised technique is that the trial with the true activation signal inside every block must be distinct from the remaining trials inside that block. Consequently, a measure that is sensitive to this difference can be utilized to make a decision based on a single block with no prior training. In addition, different algorithms for aggregating data from many trials have been applied to increase communication speed and bit rate. Aggregation strategies include the simple average, Principal Component Analysis (PCA), and Probabilistic Principal Component Analysis (PPCA). This new technique is applied to experimental data for a P300-based BCI for both healthy and disabled subjects. Comparing our results to the classification output of the same data using the same processing techniques but a different classification method, averaging across individual cases shows that the proposed technique, supported by Singular Value Decomposition (SVD), reaches an effective performance of 98.61%, while the other reaches 95.83%.
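The core zero-training rule in the abstract above, that the target trial must be the most distinctive one in its block, can be sketched with a simple distance-from-mean criterion. This is an illustration of the idea only, not the paper's SVD-based measure.

```python
import numpy as np

def pick_target_trial(block):
    """Return the index of the trial most distinct from the rest of its block.

    `block` has shape (n_trials, n_samples). The trial containing the true
    P300 response is assumed to deviate most from the block average, so no
    supervised training is needed to select it.
    """
    block = np.asarray(block, dtype=float)
    mean = block.mean(axis=0)
    distances = np.linalg.norm(block - mean, axis=1)
    return int(np.argmax(distances))
```

In practice several blocks would first be aggregated (simple average, PCA, or PPCA, as in the abstract) before applying such a decision rule, trading speed for robustness.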
- Published
- 2018
36. High Performance GPU-Based Fourier Volume Rendering
- Author
-
Ayman M. Eldeib, Amr A. Sharawi, and Marwan Abdellah
- Subjects
lcsh:Medical physics. Medical radiology. Nuclear medicine ,lcsh:Medical technology ,Article Subject ,Computer science ,lcsh:R895-920 ,OpenGL ,Graphics processing unit ,Volume rendering ,Graphics pipeline ,Rendering (computer graphics) ,Computational science ,CUDA ,lcsh:R855-855.5 ,Radiology, Nuclear Medicine and imaging ,Central processing unit ,Tiled rendering ,ComputingMethodologies_COMPUTERGRAPHICS ,Research Article - Abstract
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial domain volume rendering algorithms, which are O(N³). Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) has become an attractive platform that can deliver enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This implementation achieves a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together, by executing the rendering pipeline entirely on recent GPU architectures.
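The projection-slice step at the heart of FVR can be verified in a few lines of NumPy. This CPU sketch is not the paper's CUDA pipeline; it only demonstrates, for the simplest axis-aligned case, that inverse-transforming the central k-space slice of a volume's 3D spectrum reproduces the attenuation-only projection.

```python
import numpy as np

def xray_projection_kspace(volume, axis=2):
    """Attenuation-only projection via the Fourier projection-slice theorem.

    The 2D inverse FFT of the central slice (k = 0 along `axis`) of the
    volume's 3D FFT equals the sum of the volume along that axis.
    """
    spectrum = np.fft.fftn(volume)
    central_slice = np.take(spectrum, 0, axis=axis)  # the k=0 plane
    return np.fft.ifft2(central_slice).real
```

The full FVR pipeline extracts arbitrarily oriented slices (with interpolation and filtering in k-space), which is where the O(N² log N) advantage over O(N³) ray marching comes from.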
- Published
- 2015
37. A Two Stage Heuristics for Improvement of Existing Multi Floor Healthcare Facility Layout
- Author
-
Ayman M. Eldeib, Ahmed El Kady, and Sherif A. Sami
- Subjects
0209 industrial biotechnology ,medicine.medical_specialty ,Engineering ,Engineering drawing ,021103 operations research ,Operations research ,business.industry ,0211 other engineering and technologies ,02 engineering and technology ,Space (commercial competition) ,020901 industrial engineering & automation ,Resource (project management) ,Hungarian algorithm ,Health care ,medicine ,Key (cryptography) ,Stage (hydrology) ,business ,Heuristics ,Clinical engineering - Abstract
Many existing healthcare facilities face resource restrictions when reallocating departments or expanding space and infrastructure, which is a key requirement for providing proper services. The purpose of this paper is to present a user-friendly two-stage method using exchange heuristics that finds improved solutions to the existing multi-floor healthcare facility layout problem. The first stage is a Hungarian assignment method that re-assigns departments to floors so that the total departmental interaction cost between floors is minimized. The second stage uses the CRAFT algorithm to solve the layout of each floor independently of the other floors. The two stages are simulated using Microsoft Excel, and the proposed method can provide several improved layouts for medium and large-scale problem instances.
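The first-stage assignment objective described in the abstract above can be illustrated with an exhaustive search. The actual Hungarian method solves this in polynomial time; the brute-force stand-in below is only practical for toy instances and is meant to make the objective concrete, with a hypothetical cost matrix.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Assign departments (rows) to floors (columns) minimizing total cost.

    A brute-force stand-in for the Hungarian method, practical only for
    small instances. Returns (best_total_cost, assignment) where
    assignment[i] is the floor chosen for department i.
    """
    n = len(cost)
    best_total, best_assign = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_assign = total, list(perm)
    return best_total, best_assign
```

In the two-stage scheme, cost[i][j] would encode the vertical interaction cost of placing department i on floor j; each floor's internal layout is then refined independently (CRAFT-style pairwise exchanges) in the second stage.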
- Published
- 2017
38. Motor imagery based brain computer interface using transform domain features
- Author
-
Ayman M. Eldeib, Mohamed A. Oransa, Ahmed M. Elbaz, Ayman M. Mohamed, Ahmed T. Ahmed, and Khaled Sayed
- Subjects
Adult ,Male ,Computer science ,0206 medical engineering ,Feature extraction ,Wavelet Analysis ,Feature selection ,02 engineering and technology ,Motor Activity ,Discrete Fourier transform ,symbols.namesake ,Wavelet ,0202 electrical engineering, electronic engineering, information engineering ,Image Processing, Computer-Assisted ,Humans ,Computer vision ,Invariant (mathematics) ,Brain–computer interface ,Signal processing ,Quantitative Biology::Neurons and Cognition ,Fourier Analysis ,business.industry ,Pattern recognition ,Electroencephalography ,Signal Processing, Computer-Assisted ,Mutual information ,020601 biomedical engineering ,Statistical classification ,Fourier analysis ,Brain-Computer Interfaces ,Principal component analysis ,symbols ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Algorithms - Abstract
Brain Computer Interface (BCI) is a channel of communication between the human brain and an external device through brain electrical activity. In this paper, we extracted different features to boost the classification accuracy as well as the mutual information of BCI systems. The extracted features include the magnitude of the discrete Fourier transform and the wavelet coefficients for the EEG signals in addition to distance series values and invariant moments calculated for the reconstructed phase space of the EEG measurements. Different preprocessing, feature selection, and classification schemes were utilized to evaluate the performance of the proposed system for dataset III from BCI competition II. The maximum accuracy achieved was 90.7% while the maximum mutual information was 0.76 bit obtained using the distance series features.
- Published
- 2017
39. A wireless real-time remote control and tele-monitoring system for mechanical ventilators
- Author
-
Heba Seddik and Ayman M. Eldeib
- Subjects
Mechanical ventilation ,Telemedicine ,Intranet ,020205 medical informatics ,Computer science ,business.industry ,medicine.medical_treatment ,02 engineering and technology ,law.invention ,03 medical and health sciences ,0302 clinical medicine ,law ,Arduino ,Embedded system ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Keypad ,Wireless ,The Internet ,030212 general & internal medicine ,business ,Remote control ,Simulation - Abstract
Many factors increase the mortality of respiratory failure patients undergoing mechanical ventilation. Among them is the inability of an expert physician to follow up such patients continuously, since he/she cannot stay with them at all times. In addition, in many cases, a lack of physician experience leads to an inability to provide the necessary therapeutic service at the appropriate time for such patients. In this work, we provide a new approach for tele-controlling and tele-monitoring a mechanical ventilator, while also tele-monitoring the patient on demand. In this approach, we use a simple, small, and lightweight unit placed inside the device in an easy and safe way. This unit allows experts to control the device remotely using a virtual keypad via the internet/intranet, while monitoring the patient him/herself. Moreover, this system can tele-control the device from anywhere without causing any defect to the internal circuitry of the device. To evaluate this system, we applied it to three types of hospital mechanical ventilators with three different types of keypads. It was easy and familiar for the operator to deal with the web control page, which has the same layout as the ventilator keypad. In addition, the operator could observe the patient and the device clearly and realistically. This system can be applied not only to mechanical ventilators but also to several other medical devices, providing wireless real-time remote control without any considerable delay.
- Published
- 2016
40. Comparative study for haplotype block partitioning methods – Evidence from chromosome 6 of the North American Rheumatoid Arthritis Consortium (NARAC) dataset
- Author
-
Ayman M. Eldeib, Mai S. Mabrouk, Mohamed N. Saad, and Olfat G. Shaker
- Subjects
Male ,0301 basic medicine ,Linkage disequilibrium ,Heredity ,Four-gamete test ,Biochemistry ,Linkage Disequilibrium ,Computer Applications ,Major Histocompatibility Complex ,Arthritis, Rheumatoid ,Medicine and Health Sciences ,Multidisciplinary ,Chromosome Mapping ,Genetic Mapping ,symbols ,Medicine ,Chromosomes, Human, Pair 6 ,Female ,Research Article ,Computer and Information Sciences ,Genotype ,Science ,Immunology ,Rheumatoid Arthritis ,Single-nucleotide polymorphism ,Computational biology ,Biology ,Research and Analysis Methods ,Polymorphism, Single Nucleotide ,Autoimmune Diseases ,Molecular Genetics ,03 medical and health sciences ,symbols.namesake ,Rheumatology ,Genetics ,Humans ,SNP ,Genetic Predisposition to Disease ,Molecular Biology Techniques ,Molecular Biology ,Genetic Association Studies ,Genetic association ,Arthritis ,Gene Mapping ,Haplotype ,Biology and Life Sciences ,030104 developmental biology ,Bonferroni correction ,Haplotypes ,Case-Control Studies ,Multiple comparisons problem ,Clinical Immunology ,Clinical Medicine ,Biomarkers - Abstract
Haplotype-based methods compete with "one-SNP-at-a-time" approaches as the preferred choice for association studies. Chromosome 6 contains most of the known genetic biomarkers for rheumatoid arthritis (RA). Therefore, chromosome 6 serves as a benchmark for testing haplotype methods. The aim of this study is to test the North American Rheumatoid Arthritis Consortium (NARAC) dataset to find out whether haplotype block methods or single-locus approaches alone can sufficiently identify the significant single nucleotide polymorphisms (SNPs) associated with RA, and whether a single haplotype block method is sufficient for partitioning chromosome 6 of the NARAC dataset. In the NARAC dataset, chromosome 6 comprises 35,574 SNPs for 2,062 individuals (868 cases, 1,194 controls). The individual SNP approach and three haplotype block methods were applied to the NARAC dataset to identify the RA biomarkers. We employed three haplotype partitioning methods: the confidence interval test (CIT), the four-gamete test (FGT), and the solid spine of linkage disequilibrium (SSLD). P-values after stringent Bonferroni correction for multiple testing were measured to assess the strength of association between the genetic variants and RA susceptibility. Moreover, the block size (in base pairs (bp) and number of SNPs included), the number of blocks, the percentage of SNPs left uncovered by each block method, the percentage of significant blocks out of the total number of blocks, and the numbers of significant haplotypes and SNPs were used to compare the three haplotype block methods. The individual SNP, CIT, FGT, and SSLD methods detected 432, 1,086, 1,099, and 1,322 associated SNPs, respectively. Each method identified significant SNPs that were not detected by any other method (individual SNP: 12, FGT: 37, CIT: 55, and SSLD: 189 SNPs). 916 SNPs were discovered by all three haplotype block methods, and 367 SNPs were discovered by both the haplotype block methods and the individual SNP approach.
The P-values of these 367 SNPs were lower than those of the SNPs uniquely detected by only one method. The 367 SNPs detected by all the methods represent promising candidates for RA susceptibility and should be further investigated for the European population. A hybrid technique involving the four methods should be applied to detect the significant SNPs associated with RA on chromosome 6 of the NARAC dataset. Moreover, the SSLD method may be preferred for its advantages when only one method is selected.
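Of the three partitioning methods compared above, the four-gamete test (FGT) has the simplest decision rule and can be sketched directly: two SNPs belong to different blocks if all four possible two-locus gametes are observed, since that pattern implies historical recombination between them. The function below is an illustration of that rule for binary-coded haplotypes, not the study's actual pipeline.

```python
def four_gamete_block_boundary(haplotypes, i, j):
    """Four-gamete test between SNP columns i and j of binary haplotypes.

    Each haplotype is a sequence of 0/1 alleles. If all four gametes
    (00, 01, 10, 11) are observed, historical recombination is implied
    and the two SNPs fall in different haplotype blocks.
    """
    gametes = {(h[i], h[j]) for h in haplotypes}
    return len(gametes) == 4  # True => block boundary between i and j
```

A block partitioner would slide this test along adjacent SNP pairs, closing a block wherever the boundary condition fires; the CIT and SSLD methods replace this rule with confidence intervals on D' and a "solid spine" of strong LD, respectively.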
- Published
- 2018
41. Efficient rendering of digitally reconstructed radiographs on heterogeneous computing architectures using central slice theorem
- Author
-
Mohamed Alzanati, Marwan Abdellah, Ayman M. Eldeib, and Mohamed Abdallah
- Subjects
Computer science ,Image registration ,Iterative reconstruction ,030218 nuclear medicine & medical imaging ,Rendering (computer graphics) ,Computer graphics ,03 medical and health sciences ,Imaging, Three-Dimensional ,0302 clinical medicine ,Computer Systems ,Projection-slice theorem ,Computer graphics (images) ,Computer Graphics ,Humans ,Computer vision ,Fourier Analysis ,business.industry ,05 social sciences ,050301 education ,Volume rendering ,Cone-Beam Computed Tomography ,Radiographic Image Enhancement ,Programming Languages ,Artificial intelligence ,business ,0503 education ,Algorithms ,Software - Abstract
Digitally reconstructed radiographs (DRRs) play a significant role in modern clinical radiation therapy. They are used to verify patient alignment during image-guided therapies with 2D-3D image registration. The generation of DRRs can be implemented intuitively in O(N³) using direct volume rendering (DVR) methods, such as ray marching. This complexity imposes certain limitations on rendering performance when high quality DRR images are needed. Alternatively, DRRs can be generated in k-space using the central slice theorem in O(N² log N). Several rendering pipelines have been designed to create DRRs in k-space, but they were either limited to specific vendors or entailed particular software requirements. We present a high performance implementation of a k-space-based DRR generation pipeline that is executable on various heterogeneous computing architectures using OpenCL. Our implementation generates a DRR for a 512³ CT volume in 6, 2.7, and 0.68 milliseconds on a commodity CPU, a mid-range GPU, and a high-end GPU, respectively.
- Published
- 2016
42. Parallel generation of digitally reconstructed radiographs on heterogeneous multi-GPU workstations
- Author
-
Abdelrahman Sayed, Ayman M. Eldeib, Marwan Abdellah, Asem Abdelaziz, E M B S Eslam Ali, Mohamed I. Owis, and Sherief Abdelaziz
- Subjects
Cone beam computed tomography ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image registration ,Symmetric multiprocessor system ,02 engineering and technology ,030218 nuclear medicine & medical imaging ,Rendering (computer graphics) ,Computer graphics ,03 medical and health sciences ,Imaging, Three-Dimensional ,0302 clinical medicine ,Computer graphics (images) ,Computer Graphics ,Image Processing, Computer-Assisted ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Computers ,X-Rays ,020207 software engineering ,Cone-Beam Computed Tomography ,Parallel generation ,Frame rate ,Graphics pipeline ,Radiographic Image Enhancement ,Programming Languages ,Tomography, X-Ray Computed ,Algorithms ,Software - Abstract
The growing importance of three-dimensional radiotherapy treatment has been accompanied by advanced computational workflows that simulate conventional X-ray films from computed tomography (CT) volumetric data to create digitally reconstructed radiographs (DRRs). These simulated X-ray images are used to continuously verify patient alignment in image-guided therapies with 2D-3D image registration. Present DRR rendering pipelines are quite limited in handling the huge imaging stacks generated by recent state-of-the-art CT imaging modalities. We present a high performance X-ray rendering pipeline that is capable of generating high quality DRRs from large scale CT volumes. The pipeline is designed to harness the immense computing power of all the heterogeneous computing platforms connected to the system, relying on OpenCL. Load-balancing optimization is also addressed to equalize the rendering load across the entire system. The performance benchmarks demonstrate the capability of our pipeline to generate high quality DRRs from relatively large CT volumes at interactive frame rates using cost-effective multi-GPU workstations. A 512² DRR frame can be rendered from 1024 × 2048 × 2048 CT volumes at 85 frames per second.
- Published
- 2016
43. Interactive high resolution reconstruction of 3D ultrasound volumes on the GPU
- Author
-
Asem Abdelaziz, Marwan Abdellak, and Ayman M. Eldeib
- Subjects
medicine.diagnostic_test ,Computer science ,business.industry ,Ultrasound ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Frame rate ,Graphics pipeline ,3D rendering ,030218 nuclear medicine & medical imaging ,Rendering (computer graphics) ,03 medical and health sciences ,CUDA ,0302 clinical medicine ,Computer graphics (images) ,medicine ,3D ultrasound ,Computer vision ,Artificial intelligence ,Graphics ,business ,030217 neurology & neurosurgery - Abstract
Since its inception, ultrasound has been recognized as one of the most significant technologies in clinical imaging. The rapid and continuous development of ultrasound scanning technologies has added the challenge of rendering relatively large scale 3D volumes at high resolution. Moreover, some applications require particularly high quality ultrasound rendering at interactive frame rates for improved interpretation of the data. Based on the vast computing power of modern Graphics Processing Units (GPUs), we present an efficient ultrasound reconstruction pipeline that is capable of rendering high quality, high resolution ultrasound images at interactive frame rates for large scale volumes. The reconstruction results of our rendering pipeline have been applied to real fetal ultrasound data. The performance benchmarks of our pipeline demonstrate its capability of rendering high quality ultrasound images for 1024³ volumes at resolutions of 2048² at interactive frame rates using a recent commodity GPU.
- Published
- 2016
44. Optimized GPU-Accelerated Framework for X-Ray Rendering Using k-space Volume Reconstruction
- Author
-
Yassin Amer, Ayman M. Eldeib, and Marwan Abdellah
- Subjects
Computer science ,business.industry ,0206 medical engineering ,OpenGL ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volume rendering ,02 engineering and technology ,020601 biomedical engineering ,Graphics pipeline ,3D rendering ,030218 nuclear medicine & medical imaging ,Rendering (computer graphics) ,Computational science ,Visualization ,03 medical and health sciences ,CUDA ,0302 clinical medicine ,Projection-slice theorem ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
X-ray rendering is recognized to be an important visualization technique in several scientific and engineering domains. It is capable of generating digital radiographs of volumetric data in the spatial domain using the X-ray transform with O(N³) complexity. Alternatively, these radiographs can be reconstructed in k-space in O(N² log N). This paper presents the architecture of an optimized X-ray volume rendering framework based on the Fourier slice theorem. The framework exploits the modern designs of Graphics Processing Units (GPUs). The rendering pipeline is designed to run entirely on the GPUs, relying on the Compute Unified Device Architecture (CUDA) technology for computing all the data-parallel kernels and OpenGL for executing complementary geometrical operations. The interoperability between CUDA and OpenGL operations is addressed to optimize the workflow. The benchmarking results show that our framework is capable of rendering an X-ray projection of size 512² in 0.5 milliseconds using a GeForce GTX 970 GPU.
- Published
- 2016
45. Novel Altered Region for Biomarker Discovery in Hepatocellular Carcinoma (HCC) Using Whole Genome SNP Array
- Author
-
Ayman M. Eldeib, Mai S. Mabrouk, and Esraa M. Hashem
- Subjects
0301 basic medicine ,General Computer Science ,Liver Carcinogenesis ,Computer science ,Cancer ,Single-nucleotide polymorphism ,Bioinformatics ,medicine.disease ,Gene expression profiling ,03 medical and health sciences ,030104 developmental biology ,Carcinoma Cell ,Hepatocellular carcinoma ,Chromosomal region ,Cancer research ,medicine ,Biomarker (medicine) ,Biomarker discovery ,Genotyping ,SNP array - Abstract
Cancer represents one of the greatest medical causes of mortality. The majority of hepatocellular carcinoma cases arise from the accumulation of genetic abnormalities, possibly induced by exterior etiological factors, especially HCV and HBV infections. New tools are needed to analyze the large amounts of data and reveal the relevant genetic changes that may be critical both for understanding how cancers develop and for determining how they could ultimately be treated. Gene expression profiling may lead to new biomarkers that improve diagnostic accuracy for detecting hepatocellular carcinoma. In this work, a statistical technique (the discrete stationary wavelet transform) is proposed for detecting copy number alterations in high-density single-nucleotide polymorphism arrays of 30 cell lines on specific chromosomes that are frequently altered in hepatocellular carcinoma. The results demonstrate the feasibility of whole-genome fine mapping of copy number alterations via high-density single-nucleotide polymorphism genotyping. A novel altered chromosomal region was discovered: amplification of region 4q22.1 was detected in 22 out of 30 hepatocellular carcinoma cell lines (73%). This region strikes the AFF1 and DSPP tumor suppressor genes. This finding has not previously been reported to be involved in liver carcinogenesis; it can be used to discover a new HCC biomarker and contributes to a better understanding of hepatocellular carcinoma.
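As a rough illustration of the idea, not the paper's actual method or data, a per-SNP log2 copy-number signal can be smoothed with a simplified undecimated (stationary) Haar step and thresholded to flag amplified probes; all probe counts, noise levels, and thresholds below are invented:

```python
import numpy as np

def haar_swt_smooth(signal, levels=3):
    # Repeated undecimated Haar averaging: no downsampling, so the
    # smoothed values stay aligned with the SNP probes. (np.roll wraps
    # at the array ends; real pipelines pad instead.)
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx = (approx + np.roll(approx, -1)) / 2.0
    return approx

def flag_amplified(log_ratio, levels=3, z=3.0):
    # Flag probes whose smoothed log2 ratio exceeds the median by
    # z robust standard deviations (median absolute deviation).
    approx = haar_swt_smooth(log_ratio, levels)
    mad = 1.4826 * np.median(np.abs(approx - np.median(approx)))
    return approx > np.median(approx) + z * mad

# Synthetic chromosome: 200 probes, a gained segment at [80, 120).
rng = np.random.default_rng(1)
signal = np.zeros(200)
signal[80:120] = 1.0                    # log2 ratio ~1: amplification
signal += rng.normal(0.0, 0.2, 200)
calls = flag_amplified(signal)
```

Smoothing before thresholding is what lets a segment-level gain stand out from per-probe noise, which is the motivation for wavelet-based detectors.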
- Published
- 2016
46. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability
- Author
-
Mohamed I. Owis, Marwan Abdellah, and Ayman M. Eldeib
- Subjects
Computer science ,OpenGL ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Graphics processing unit ,Rendering (computer graphics) ,Computer graphics ,CUDA ,Texture mapping unit ,Computer graphics (images) ,S3 Texture Compression ,Computer Graphics ,Image Processing, Computer-Assisted ,Humans ,Graphics ,ComputingMethodologies_COMPUTERGRAPHICS ,Models, Statistical ,Computers ,X-Rays ,Software rendering ,Volume rendering ,Graphics pipeline ,Radiographic Image Enhancement ,Real-time computer graphics ,Radiographic Image Interpretation, Computer-Assisted ,Programming Languages ,General-purpose computing on graphics processing units ,Alternate frame rendering ,Head ,Texture memory ,Algorithms ,Software ,3D computer graphics - Abstract
This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the vast computing power of current commodity graphics processors to accelerate the generation of high-resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048^2 and 4096^2 at interactive and semi-interactive frame rates using an NVIDIA GeForce GTX 970 device.
- Published
- 2015
47. Accelerating DRR generation using Fourier slice theorem on the GPU
- Author
-
Mohamed I. Owis, Ayman M. Eldeib, and Marwan Abdellah
- Subjects
Computer science ,Pipeline (computing) ,Graphics processing unit ,Image registration ,Image processing ,Patient Positioning ,Computer graphics ,Imaging, Three-Dimensional ,Projection-slice theorem ,Digital image processing ,Computer Graphics ,Image Processing, Computer-Assisted ,Humans ,Computer vision ,Scaling ,Fourier Analysis ,business.industry ,X-Rays ,Volume rendering ,Magnetic Resonance Imaging ,Radiographic Image Enhancement ,Real-time computer graphics ,Ray casting ,Artificial intelligence ,Tomography, X-Ray Computed ,business ,2D computer graphics ,Algorithms ,Software - Abstract
Digitally Reconstructed Radiographs (DRRs) play a vital role in medical imaging procedures and radiotherapy applications. They allow the continuous monitoring of patient positioning during image-guided therapies using multi-dimensional image registration. Conventional generation of DRRs using spatial-domain algorithms such as ray casting is associated with a computational complexity of O(N^3). The Fourier slice theorem is an alternative approach that generates the DRRs in k-space with reduced time complexity. In this work, we present a high-performance, scalable, and optimized DRR generation pipeline on the Graphics Processing Unit (GPU). The strong scaling performance of the presented pipeline is investigated and demonstrated using two contemporary GPUs. Our pipeline is capable of generating DRRs for 512^3 volumes in less than a millisecond.
- Published
- 2015
48. Effect of MTHFR, TGFβ1, and TNFB polymorphisms on osteoporosis in rheumatoid arthritis patients
- Author
-
Mai S. Mabrouk, Olfat G. Shaker, Mohamed N. Saad, and Ayman M. Eldeib
- Subjects
Oncology ,Adult ,medicine.medical_specialty ,Population ,Osteoporosis ,Single-nucleotide polymorphism ,Biology ,Polymorphism, Single Nucleotide ,Arthritis, Rheumatoid ,Transforming Growth Factor beta1 ,Gene Frequency ,Internal medicine ,Genetics ,medicine ,Humans ,Genetic Predisposition to Disease ,Risk factor ,education ,Genotyping ,Lymphotoxin-alpha ,Genetic Association Studies ,Methylenetetrahydrofolate Reductase (NADPH2) ,education.field_of_study ,General Medicine ,Sequence Analysis, DNA ,Middle Aged ,medicine.disease ,Genetic marker ,Methylenetetrahydrofolate reductase ,Rheumatoid arthritis ,Case-Control Studies ,biology.protein ,Female - Abstract
Diseases of the immune and skeletal systems should be studied together because of the deep interaction between them. Many studies consider osteoporosis (OP) a risk factor for the prediction of disease progression in rheumatoid arthritis (RA). The aim of this research is to study the effect of four single nucleotide polymorphisms (SNPs) on RA patients with and without OP. The examined SNPs (MTHFR (C677T and A1298C), TGFβ1 (T869C), and TNFB (A252G)) were tested by genotyping 17 RA patients with OP and 72 RA patients without OP. Associations were tested using four models (multiplicative, dominant, recessive, and co-dominant). The studied SNPs were not significantly associated with the risk of OP in RA. MTHFR, TGFβ1, and TNFB polymorphisms do not appear to be clinically useful genetic markers for predicting RA severity in the Egyptian female population.
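The genetic models differ only in how the three genotype counts are collapsed into a contingency table before testing. A self-contained sketch of the dominant model with a plain Pearson chi-square statistic (the genotype counts are hypothetical, not the study's data):

```python
import numpy as np

def chi_square(table):
    # Pearson chi-square statistic for a contingency table
    # (rows: cases/controls, columns: pooled genotype classes).
    table = np.asarray(table, dtype=float)
    expected = (table.sum(axis=1, keepdims=True)
                * table.sum(axis=0, keepdims=True) / table.sum())
    return float(((table - expected) ** 2 / expected).sum())

def dominant_table(case_counts, control_counts):
    # Genotype counts ordered (AA, Aa, aa); the dominant model pools
    # minor-allele carriers (Aa + aa) against AA homozygotes.
    pool = lambda c: [c[1] + c[2], c[0]]
    return [pool(case_counts), pool(control_counts)]

# Hypothetical counts for one SNP in cases vs. controls.
cases, controls = (5, 10, 5), (40, 25, 7)
stat = chi_square(dominant_table(cases, controls))
significant = stat > 3.841   # chi-square critical value, df=1, alpha=0.05
```

The recessive model would instead pool (AA + Aa) against aa, and the co-dominant model keeps all three genotype columns (df = 2).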
- Published
- 2015
49. Interactive telemedicine solution based on a secure mHealth application
- Author
-
Ayman M. Eldeib
- Subjects
Telemedicine ,Internet ,Voice over IP ,Computer science ,business.industry ,media_common.quotation_subject ,Mobile computing ,Cloud computing ,Computer security ,computer.software_genre ,User-Computer Interface ,Electronic health record ,Health care ,Wireless ,Humans ,Quality (business) ,business ,mHealth ,computer ,Delivery of Health Care ,Cell Phone ,Computer Security ,media_common - Abstract
In dynamic healthcare environments, caregivers and patients are constantly moving. To increase healthcare quality when necessary, caregivers need the ability to reach each other and to securely access medical information and services from wherever they happen to be. This paper presents an Interactive Telemedicine Solution (ITS) to facilitate and automate communication within a healthcare facility via Voice over Internet Protocol (VOIP), regular mobile phones, and Wi-Fi connectivity. Our system has the capability to securely exchange and provide healthcare information and services across geographic barriers through a 3G/4G wireless communication network. Our system assumes the availability of an Electronic Health Record (EHR) system locally in the healthcare organization and/or on a cloud network, such as a nation-wide EHR system. This paper demonstrates the potential of our system to provide an effective and secure remote healthcare solution.
- Published
- 2015
50. An integrated evaluation for the performance of clinical engineering department
- Author
-
Ayman M. Eldeib, Ahmed Yousry, and Bassem K. Ouda
- Subjects
Safety indicators ,Engineering ,medicine.medical_specialty ,Process management ,business.industry ,Maintenance ,Software tool ,fungi ,Biomedical Engineering ,Benchmarking ,Documentation ,Hospitals ,Reliability engineering ,Work (electrical) ,Economic indicator ,Health care ,Task Performance and Analysis ,medicine ,Performance indicator ,business ,Clinical engineering - Abstract
Performance benchmarking has become a very important component of all successful organizations and must be used by the Clinical Engineering Department (CED) in hospitals. Many researchers have identified essential mainstream performance indicators needed to improve a CED's performance. These studies revealed mainstream performance indicators that use a CED's database to evaluate its performance. In this work, we argue that those indicators are insufficient for hospitals; additional important indicators should be included to improve the evaluation accuracy. Therefore, we added new indicators: technical/maintenance indicators, economic indicators, intrinsic criticality indicators, basic hospital indicators, equipment acquisition indicators, and safety indicators. Data were collected from 10 hospitals that cover different types of healthcare organizations. We developed a software tool that analyzes the collected data to provide a score for each CED under evaluation. Our results indicate that there is an average gap of 67% between the CEDs' performance and the ideal target. The reasons for noncompliance are discussed in order to improve the performance of the CEDs under evaluation.
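A single CED score can be obtained by normalizing each indicator group against its ideal target and aggregating with weights. The sketch below is only illustrative: the group names follow the paper, but the achieved fractions and equal weights are assumptions chosen so that the resulting gap matches the reported 67% average:

```python
def ced_score(achieved, weights):
    # achieved: group -> fraction of the ideal target reached (0..1)
    # weights:  group -> relative importance (need not sum to 1)
    total_weight = sum(weights.values())
    return sum(weights[g] * achieved[g] for g in achieved) / total_weight

achieved = {
    "technical_maintenance": 0.40,
    "economic":              0.30,
    "intrinsic_criticality": 0.35,
    "basic_hospital":        0.25,
    "equipment_acquisition": 0.30,
    "safety":                0.40,
}
weights = {group: 1.0 for group in achieved}  # equal weights to start

score = ced_score(achieved, weights)
gap = 1.0 - score   # distance of this CED from the ideal target
```

Unequal weights would let a hospital emphasize, for example, safety over equipment acquisition without changing the aggregation logic.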
- Published
- 2015