43 results for "Tamer Abuhmed"
Search Results
2. Early Detection of Alzheimer’s Disease Based on Laplacian Re-Decomposition and XGBoosting
- Author
-
Hala Ahmed, Hassan Soliman, Shaker El-Sappagh, Tamer Abuhmed, and Mohammed Elmogy
- Subjects
General Computer Science, Control and Systems Engineering, Theoretical Computer Science - Published
- 2023
- Full Text
- View/download PDF
3. Multitask Deep Learning for Cost-Effective Prediction of Patient's Length of Stay and Readmission State Using Multimodal Physical Activity Sensory Data
- Author
-
Sajid Ali, Shaker El-Sappagh, Farman Ali, Muhammad Imran, and Tamer Abuhmed
- Subjects
Health Information Management, Health Informatics, Electrical and Electronic Engineering, Computer Science Applications - Abstract
In a hospital, accurate and rapid prediction of a patient's Length of Stay (LOS) is essential, since LOS is one of the key measures in treating patients with severe diseases. These models gain a new level of significance when predictions of patient mortality and readmission are combined. The likelihood of a patient being readmitted to the hospital is directly proportional to the LOS. LOS and readmission rates are therefore among the most expensive components of patient care, which is why they are emphasized in health care management. Several studies have assessed hospital readmission as a single-task problem; however, optimizing several correlated tasks together improves a model's performance, robustness, and stability. This study develops a multimodal multitask Long Short-Term Memory (LSTM) deep learning model that predicts both LOS and readmission using wrist-worn Bosch multi-sensory data from 47 patients. The continuous sensory data are divided into eight sections, each recorded for an hour. Time steps are constructed using a dual 10-second window-based technique, resulting in six steps per hour, and 30 statistical features are computed from the sensory input to form the resulting feature vector. The proposed multitask model predicts 30-day readmission as a binary classification problem and LOS as a regression task by constructing discrete time-step data based on the length of physical activity during a hospital stay. The proposed deep learning model is compared with conventional machine learning approaches, such as a random forest, in several contexts; because typical machine learning algorithms cannot handle the multitask setting, the random forest comparison is made on single-task problems (classification or regression) only.
In addition, the sensory data are combined with other cost-effective modalities, such as demographics, laboratory tests, and comorbidities, to construct reliable models for personalized, cost-effective, and medically acceptable prediction. The proposed multitask multimodal deep learning model classifies the patient's readmission status with a high accuracy of 94.84% and determines the patient's LOS with a minimal Mean Square Error (MSE) of 0.025 and Root Mean Square Error (RMSE) of 0.077, which is promising, effective, and trustworthy.
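The window-based feature extraction described in the abstract can be sketched roughly as follows; the 50 Hz sampling rate and the particular five statistics are illustrative assumptions, not details from the paper (which uses 30 features):

```python
import numpy as np

def window_features(signal, fs=50, window_s=10):
    """Split a 1-D sensor stream into fixed windows and compute
    simple statistical features per window (illustrative set)."""
    win = fs * window_s                      # samples per window
    n = len(signal) // win                   # complete windows only
    chunks = signal[: n * win].reshape(n, win)
    feats = np.column_stack([
        chunks.mean(axis=1),
        chunks.std(axis=1),
        chunks.min(axis=1),
        chunks.max(axis=1),
        np.median(chunks, axis=1),
    ])
    return feats                             # shape: (n_windows, n_features)

rng = np.random.default_rng(0)
stream = rng.normal(size=50 * 60)            # one minute of data at 50 Hz
X = window_features(stream)
```

The resulting (windows × features) matrix is the kind of discrete time-step input an LSTM-based multitask model consumes.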
- Published
- 2022
- Full Text
- View/download PDF
4. Automatic detection of Alzheimer’s disease progression: An efficient information fusion approach with heterogeneous ensemble classifiers
- Author
-
Shaker El-Sappagh, Farman Ali, Tamer Abuhmed, Jaiteg Singh, and Jose M. Alonso
- Subjects
Artificial Intelligence, Cognitive Neuroscience, Computer Science Applications - Published
- 2022
- Full Text
- View/download PDF
5. An approach for measuring spatial similarity among COVID-19 epicenters
- Author
-
Abolghasem Sadeghi-Niaraki, Tamer Abuhmed, Neda Kaffash Charandabi, and Soo-Mi Choi
- Subjects
Geography, Planning and Development, Computers in Earth Sciences - Published
- 2022
- Full Text
- View/download PDF
6. Two-stage deep learning model for Alzheimer’s disease detection and prediction of the mild cognitive impairment time
- Author
-
Shaker El-Sappagh, Hager Saleh, Farman Ali, Eslam Amer, and Tamer Abuhmed
- Subjects
Artificial Intelligence, Software - Published
- 2022
- Full Text
- View/download PDF
7. Crystal structure guided machine learning for the discovery and design of intrinsically hard materials
- Author
-
Jung-Gu Kim, Russlan Jaafreh, Kotiba Hamad, and Tamer AbuHmed
- Subjects
Materials science, Crystal chemistry, Metals and Alloys, Crystal structure, Machine learning, Surfaces, Coatings and Films, Electronic, Optical and Magnetic Materials, Artificial intelligence, Extreme gradient boosting - Abstract
In this work, a machine learning (ML) model was created to predict the intrinsic hardness of various compounds from their crystal chemistry. For this purpose, an initial dataset containing the hardness values of 270 compounds and the corresponding applied loads was employed in the learning process. Based on various features generated from crystal information, an ML model with high accuracy (R2 = 0.942) was built using the extreme gradient boosting (XGB) algorithm. Experimental validation via hardness measurements of various compounds, including MSi2 (M = Nb, Ce, V, and Ta), Al2O3, and FeB4, showed that the XGB model was able to reproduce the load-dependent hardness behavior of these compounds. In addition, the model was used to predict this behavior for prototype crystal structures randomly substituted with elements.
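As a rough sketch of the modeling step, the following uses scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost, with synthetic descriptors in place of the crystal-chemistry features (everything here is invented for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for crystal-chemistry descriptors plus applied
# load; not the paper's dataset, just a shape-compatible toy.
rng = np.random.default_rng(42)
X = rng.uniform(size=(270, 6))
y = 30 * X[:, 0] - 12 * X[:, 2] + rng.normal(scale=0.5, size=270)

# Gradient boosting regression, analogous to the XGB model above
model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X, y)
r2 = r2_score(y, model.predict(X))   # fit quality on the training set
```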
- Published
- 2022
- Full Text
- View/download PDF
8. Explainable probabilistic deep learning framework for seismic assessment of structures using distribution‐free prediction intervals
- Author
-
Mohamed Noureldin, Tamer Abuhmed, Melike Saygi, and Jinkoo Kim
- Subjects
Computational Theory and Mathematics, Building and Construction, Computer Graphics and Computer-Aided Design, Computer Science Applications, Civil and Structural Engineering - Published
- 2023
- Full Text
- View/download PDF
9. Trustworthy artificial intelligence in Alzheimer’s disease: state of the art, opportunities, and challenges
- Author
-
Shaker El-Sappagh, Jose M. Alonso-Moral, Tamer Abuhmed, Farman Ali, and Alberto Bugarín-Diz
- Subjects
Linguistics and Language, Artificial Intelligence, Language and Linguistics - Published
- 2023
- Full Text
- View/download PDF
10. An IoT-Based Approach for Learning Geometric Shapes in Early Childhood
- Author
-
Jalal Safari Bazargani, Abolghasem Sadeghi-Niaraki, Fatema Rahimi, Tamer Abuhmed, and Soo-Mi Choi
- Subjects
General Computer Science, General Engineering, General Materials Science, Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
11. Sepsis prediction in intensive care unit based on genetic feature optimization and stacked deep ensemble learning
- Author
-
Nora El-Rashidy, Hazem M. El-Bakry, Samir Abdelrazek, Louai Alarabi, Farman Ali, Tamer AbuHmed, and Shaker El-Sappagh
- Subjects
Artificial neural network, Computer science, Deep learning, Organ dysfunction, Machine learning, Ensemble learning, Regression, Sepsis, Artificial Intelligence, Intensive care, Feature (machine learning), Artificial intelligence, Software - Abstract
Sepsis is a life-threatening disease that is associated with organ dysfunction. It occurs due to the body's dysregulated response to infection. It is difficult to identify sepsis in its early stages, and this delay in identification has a dramatic effect on the mortality rate. Developing prognostic tools for sepsis prediction has been the focus of various studies over previous decades. However, most of these studies relied on tracking a limited number of features; as such, these approaches may not predict sepsis sufficiently accurately in many cases. Therefore, in this study, we concentrate on building a more accurate and medically relevant predictive model for identifying sepsis. First, both NSGA-II (a multi-objective genetic algorithm optimization approach) and artificial neural networks are used concurrently to extract the optimal feature subset from patient data. In the next stage, a deep learning model is built based on the selected optimal feature set. The proposed model has two layers. The first is a deep learning classification model used to predict sepsis: a stacking ensemble of neural network models that predicts which patients will develop sepsis. For patients who are predicted to have sepsis, data from their first six hours after admission to the ICU are retrieved; these data are then used for further model optimization. Optimization based on this small, recent timeframe increases the effectiveness of our classification model compared with models from previous works. In the second layer of our model, a multitask regression deep learning model identifies the onset time of sepsis and the blood pressure at that time for patients flagged by the first layer. Our study was performed using medical information from the real-world MIMIC-III intensive care dataset. The proposed classification model achieved 0.913, 0.921, 0.832, and 0.906 for accuracy, specificity, sensitivity, and AUC, respectively.
In addition, the multitask regression model obtained RMSEs of 10.26 and 9.22 for predicting the onset time of sepsis and the blood pressure at that time, respectively. No other studies in the literature accurately predict the status of sepsis in terms of its onset time while also predicting medically verifiable quantities such as blood pressure to build confidence in the onset-time prediction. The proposed model is medically intuitive and achieves superior performance compared with current state-of-the-art approaches.
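The first-layer classifier described above, a stacking ensemble of neural networks, can be sketched with scikit-learn on toy data; the base-learner sizes and the logistic-regression meta-learner are illustrative assumptions, not the paper's exact architecture:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Toy binary-classification data standing in for patient features
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Stacking ensemble: small neural nets as base learners, a
# logistic-regression meta-learner combining their predictions
stack = StackingClassifier(
    estimators=[
        ("mlp1", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)),
        ("mlp2", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=2)),
    ],
    final_estimator=LogisticRegression(),
    cv=3,
)
stack.fit(X, y)
acc = stack.score(X, y)   # fit quality on the training set
```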
- Published
- 2021
- Full Text
- View/download PDF
12. A deep learning based dual encoder–decoder framework for anatomical structure segmentation in chest X-ray images
- Author
-
Ihsan Ullah, Farman Ali, Babar Shah, Shaker El-Sappagh, Tamer Abuhmed, and Sang Hyun Park
- Subjects
Multidisciplinary - Abstract
Automated multi-organ segmentation plays an essential part in computer-aided diagnosis (CAD) for chest X-ray fluoroscopy. However, developing a CAD system for anatomical structure segmentation remains challenging due to several indistinct structures, variations in anatomical structure shape among individuals, the presence of medical tools such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning segmentation framework for anatomical structures in chest radiographs that utilizes a dual encoder–decoder convolutional neural network (CNN). The first network in the dual encoder–decoder structure uses a pre-trained VGG19 as the encoder for the segmentation task. The pre-trained encoder's output is fed into a squeeze-and-excitation (SE) module to boost the network's representation power, which enables it to perform dynamic channel-wise feature recalibration. The recalibrated features are passed into the first decoder to generate the mask. We integrate the generated mask with the input image and pass it through a second encoder–decoder network with recurrent residual blocks and an attention gate module to capture additional contextual features and improve the segmentation of smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method for multi-organ segmentation (heart, lungs, and clavicles) and single-organ segmentation (lungs only). The experimental results show that our proposed technique outperforms existing multi-class and single-class segmentation methods.
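The squeeze-and-excitation recalibration mentioned above can be sketched in plain NumPy; the reduction ratio and random weights are illustrative, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(fmap, w1, w2):
    """Channel-wise recalibration (SE block) on a (C, H, W) feature map.
    w1 (C -> C/r) and w2 (C/r -> C) are the two fully connected layers."""
    z = fmap.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0))   # excitation: FC-ReLU-FC-sigmoid
    return fmap * s[:, None, None]            # rescale each channel by its gate

rng = np.random.default_rng(0)
C, r = 8, 2                                   # channels and reduction ratio
fmap = rng.normal(size=(C, 4, 4))
w1 = rng.normal(size=(C // r, C))
w2 = rng.normal(size=(C, C // r))
out = squeeze_excite(fmap, w1, w2)
```

Because the gate values lie in (0, 1), the block can only attenuate channels, never amplify them.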
- Published
- 2023
- Full Text
- View/download PDF
13. Real-time human detection and behavior recognition using low-cost hardware
- Author
-
Bojun Wang, Sajid Ali, Xinyi Fan, and Tamer Abuhmed
- Published
- 2023
- Full Text
- View/download PDF
14. Large-scale and Robust Code Authorship Identification with Deep Feature Learning
- Author
-
Tamer AbuHmed, David Mohaisen, DaeHun Nyang, and Mohammed Abuhamad
- Subjects
Source code, General Computer Science, Java, Programming language, Computer science, Deep learning, Python (programming language), Toolchain, Software, Obfuscation, Artificial intelligence, Executable, Safety, Risk, Reliability and Quality - Abstract
Successful software authorship de-anonymization has both software forensics applications and privacy implications. However, the process requires efficient extraction of authorship attributes. Extracting such attributes is very challenging due to the variety of software code formats, from executable binaries with different toolchain provenance to source code in different programming languages. Moreover, the quality of the attributes is bounded by the availability of software samples, both the number of samples per author and the size of each sample. To this end, this work proposes a deep-learning-based approach for software authorship attribution that facilitates large-scale, format-independent, language-oblivious, and obfuscation-resilient software authorship identification. The approach learns deep authorship attributions using a recurrent neural network and scales to de-anonymize programmers with an ensemble random forest classifier. Comprehensive experiments are conducted to evaluate the proposed approach over the entire Google Code Jam (GCJ) dataset across all years (2008 to 2016) and over real-world code samples from 1,987 public repositories on GitHub. The results show high accuracy despite requiring a small number of samples per author. Experimenting with source code, our approach identifies 8,903 GCJ authors, the largest-scale dataset used so far, with an accuracy of 92.3%. Using the real-world dataset, we achieve an identification accuracy of 94.38% for 745 C programmers on GitHub. Moreover, the proposed approach is resilient to language specifics: it can identify authors across four programming languages (C, C++, Java, and Python) and authors writing in mixed languages (e.g., Java/C++, Python/C++). Finally, our system is resistant to sophisticated obfuscation (e.g., using Tigress for C) with an accuracy of 93.42% for a set of 120 authors.
Experimenting with executable binaries, our approach achieves 95.74% accuracy for identifying 1,500 programmers of software binaries. Similar results are obtained when software binaries are generated with different compilation options and optimization levels, and when symbol information is removed. Moreover, our approach achieves 93.86% accuracy for identifying 1,500 programmers of binaries obfuscated using all features of the Obfuscator-LLVM tool.
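A minimal sketch of the "learned representation followed by a classifier" pipeline, with TF-IDF standing in for the RNN-learned deep features; the code snippets and author labels are invented for illustration:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny toy corpus: two authors with distinguishable coding habits
samples = [
    "for (int i = 0; i < n; ++i) sum += a[i];",
    "while (n--) total += *p++;",
    "for (int i = 0; i < n; ++i) acc += v[i];",
    "while (count--) result += *ptr++;",
]
authors = ["alice", "bob", "alice", "bob"]

# Vectorize code tokens, then train a random forest on the features
vec = TfidfVectorizer(token_pattern=r"\S+")
X = vec.fit_transform(samples)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, authors)
pred = clf.predict(vec.transform(["for (int i = 0; i < n; ++i) s += x[i];"]))
```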
- Published
- 2021
- Full Text
- View/download PDF
15. Explainable machine learning models based on multimodal time-series data for the early detection of Parkinson’s disease
- Author
-
Muhammad Junaid, Sajid Ali, Fatma Eid, Shaker El-Sappagh, and Tamer Abuhmed
- Subjects
Health Informatics, Software, Computer Science Applications - Published
- 2023
- Full Text
- View/download PDF
16. Alzheimer’s disease progression detection model based on an early fusion of cost-effective multimodal data
- Author
-
Radhya Sahal, Hager Saleh, S. M. Riazul Islam, Eslam Amer, Tamer AbuHmed, Shaker El-Sappagh, and Farman Ali
- Subjects
Medication history, Computer Networks and Communications, Computer science, Disease progression, Cognition, Disease, Machine learning, Comorbidity, Support vector machine, Chronic disease, Neuroimaging, Hardware and Architecture, Artificial intelligence, Cognitive impairment, Software - Abstract
Alzheimer’s disease (AD) is a severe neurodegenerative disease. The identification of patients at high risk of conversion from mild cognitive impairment to AD via earlier close monitoring, targeted investigations, and appropriate management is crucial. Recently, several machine learning (ML) algorithms have been used for AD progression detection. Most of these studies only utilized neuroimaging data from baseline visits. However, AD is a complex chronic disease, and a medical expert will usually analyze the patient’s whole history when making a progression diagnosis. Furthermore, neuroimaging data are often limited or unavailable, especially in developing countries, due to their cost. In this paper, we compare the performance of five widely used ML algorithms, namely the support vector machine, random forest, k-nearest neighbor, logistic regression, and decision tree, to predict AD progression with a prediction horizon of 2.5 years. We use 1029 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. In contrast to previous literature, our models are optimized using a collection of cost-effective time-series features including the patient’s comorbidities, cognitive scores, medication history, and demographics. Medication and comorbidity text data are semantically prepared: drug terms are collected and cleaned before encoding using the Anatomical Therapeutic Chemical (ATC) classification ontology, and then semantically aggregated to the appropriate level of granularity using ATC to yield a less sparse dataset. Our experiments show that the early fusion of comorbidity and medication features with the other features reveals significant predictive power with all models. The random forest model achieves the most accurate performance. This study is the first of its kind to investigate the role of such multimodal time-series data in AD prediction.
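The ATC-based semantic aggregation can be illustrated directly, since ATC codes are hierarchical by prefix length (levels 1 through 5 use the first 1, 3, 4, 5, and 7 characters of a code):

```python
# ATC code levels map to prefix lengths of the full 7-character code
ATC_LEVEL_LEN = {1: 1, 2: 3, 3: 4, 4: 5, 5: 7}

def atc_aggregate(code, level):
    """Truncate a full ATC code (e.g. 'N06DA02', donepezil) to a level."""
    return code[: ATC_LEVEL_LEN[level]]

# Aggregating to level 3 collapses related drugs into one category,
# reducing sparsity as described in the abstract
codes = ["N06DA02", "N06DA03", "C10AA05"]   # donepezil, rivastigmine, atorvastatin
level3 = {atc_aggregate(c, 3) for c in codes}
```

Here the two anti-dementia drugs fall into the same level-3 group (N06D), while the statin stays separate (C10A).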
- Published
- 2021
- Full Text
- View/download PDF
17. Depth, Breadth, and Complexity: Ways to Attack and Defend Deep Learning Models
- Author
-
Firuz Juraev, Eldor Abdukhamidov, Mohammed Abuhamad, and Tamer Abuhmed
- Published
- 2022
- Full Text
- View/download PDF
18. MLxPack: Investigating the Effects of Packers on ML-based Malware Detection Systems Using Static and Dynamic Traits
- Author
-
Qirui Sun, Mohammed Abuhamad, Eldor Abdukhamidov, Eric Chan-Tin, and Tamer Abuhmed
- Published
- 2022
- Full Text
- View/download PDF
19. Leveraging Spectral Representations of Control Flow Graphs for Efficient Analysis of Windows Malware
- Author
-
Qirui Sun, Eldor Abdukhamidov, Tamer Abuhmed, and Mohammed Abuhamad
- Published
- 2022
- Full Text
- View/download PDF
20. Alzheimer’s Disease Diagnosis Based on a Semantic Rule-Based Modeling and Reasoning Approach
- Author
-
Amira Rezk, Shaker El-Sappagh, Sherif Barakat, Nora Shoaip, Tamer AbuHmed, and Mohammed Elmogy
- Subjects
Rule-based modeling, Computer science, Disease, Computer Science Applications, Biomaterials, Mechanics of Materials, Modeling and Simulation, Artificial intelligence, Electrical and Electronic Engineering, Natural language processing - Published
- 2021
- Full Text
- View/download PDF
21. Timing and Classification of Patellofemoral Osteoarthritis Patients Using Fast Large Margin Classifier
- Author
-
Alaa Eldin Balbaa, Mai Ramadan Ibraheem, Shaker El-Sappagh, Jilan adel, Tamer AbuHmed, and Mohammed Elmogy
- Subjects
Biomaterials, Physical medicine and rehabilitation, Mechanics of Materials, Modeling and Simulation, Patellofemoral osteoarthritis, Margin classifier, Electrical and Electronic Engineering, Computer Science Applications - Published
- 2021
- Full Text
- View/download PDF
22. Multimodal multitask deep learning model for Alzheimer’s disease progression detection based on time series data
- Author
-
Kyung Sup Kwak, Shaker El-Sappagh, S. M. Riazul Islam, and Tamer AbuHmed
- Subjects
Modality (human–computer interaction), Artificial neural network, Computer science, Cognitive Neuroscience, Deep learning, Stability (learning theory), Machine learning, Convolutional neural network, Computer Science Applications, Artificial Intelligence, Robustness (computer science), Artificial intelligence - Abstract
Early prediction of Alzheimer’s disease (AD) is crucial for delaying its progression. Because AD is a chronic disease, ignoring the temporal dimension of AD data degrades progression-detection performance and is medically unacceptable. In addition, AD patients are represented by heterogeneous, yet complementary, modalities. Multitask modeling improves progression-detection performance, robustness, and stability. However, multimodal multitask modeling has not been evaluated using time series and the deep learning paradigm, especially for AD progression detection. In this paper, we propose a robust ensemble deep learning model based on a stacked convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. This multimodal multitask model jointly predicts multiple variables based on the fusion of five types of multimodal time-series data plus a set of background (BG) knowledge. The predicted variables comprise a multiclass AD progression task and four critical cognitive-score regression tasks. The proposed model extracts local and longitudinal features of each modality using a stacked CNN and BiLSTM network. Concurrently, local features are extracted from the BG data using a feed-forward neural network. The resulting features are fused in a deep network to detect common patterns, which are jointly used to predict the classification and regression tasks. To validate our model, we performed six experiments on five modalities from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) covering 1536 subjects. The proposed approach achieves state-of-the-art performance for both the multiclass progression and regression tasks. Moreover, our approach can be generalized to other medical domains to analyze heterogeneous temporal data for predicting a patient’s future status.
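The joint multitask objective implied above (one classification loss plus several regression losses) can be sketched in NumPy; the equal task weighting is an illustrative choice, not the paper's:

```python
import numpy as np

def multitask_loss(class_logits, class_true, reg_pred, reg_true, alpha=1.0):
    """Joint objective: softmax cross-entropy for the progression class
    plus MSE averaged over the cognitive-score regression tasks.
    alpha weights the regression term (alpha=1 is illustrative)."""
    # numerically stable softmax cross-entropy
    z = class_logits - class_logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(class_true)), class_true].mean()
    # MSE over the regression targets (e.g. four cognitive scores)
    mse = ((reg_pred - reg_true) ** 2).mean()
    return ce + alpha * mse

logits = np.array([[2.0, 0.1, -1.0], [0.2, 1.5, 0.3]])   # 2 samples, 3 classes
labels = np.array([0, 1])
loss = multitask_loss(logits, labels, np.zeros((2, 4)), np.zeros((2, 4)))
```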
- Published
- 2020
- Full Text
- View/download PDF
23. Multi-χ: Identifying Multiple Authors from Source Code Files
- Author
-
DaeHun Nyang, Mohammed Abuhamad, David Mohaisen, and Tamer AbuHmed
- Subjects
Ethics, Source code, program features, Computer science, Programming language, software forensics, code authorship identification, deep learning identification, General Earth and Planetary Sciences, General Environmental Science - Abstract
Most authorship identification schemes assume that code samples are written by a single author. However, real software projects are typically the result of a team effort, making it essential to consider fine-grained multi-author identification within a single code sample, which we address with Multi-χ. Multi-χ leverages a deep-learning-based approach for multi-author identification in source code, is lightweight, uses a compact representation for efficiency, and does not require any code parsing, syntax tree extraction, or feature selection. In Multi-χ, code samples are divided into small segments, which are represented as sequences of n-dimensional term representations. Each sequence is fed into an RNN-based verification model that assists a segment-integration process, which merges positively verified segments, i.e., segments that have a high probability of being written by one author. Finally, the segments resulting from the integration process are represented using word2vec or TF-IDF and fed into the identification model. We evaluate Multi-χ on several GitHub projects (Caffe, Facebook’s Folly, TensorFlow, etc.) and show remarkable accuracy. For example, Multi-χ achieves an authorship example-based accuracy (A-EBA) of 86.41% and per-segment authorship identification of 93.18% for identifying 562 programmers. We examine the performance across multiple dimensions and design choices, and demonstrate its effectiveness.
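The segment-integration step can be sketched in a few lines: consecutive segments are merged whenever a verifier score says they likely share an author. The threshold and scores here are invented placeholders for the RNN verifier's output:

```python
def integrate_segments(segments, same_author_prob, threshold=0.8):
    """Merge consecutive code segments whose verifier probability of
    sharing an author exceeds the threshold (illustrative stand-in
    for the RNN-based verification model)."""
    merged = [segments[0]]
    for seg, p in zip(segments[1:], same_author_prob):
        if p >= threshold:
            merged[-1] = merged[-1] + "\n" + seg   # same author: extend block
        else:
            merged.append(seg)                      # author change: new block
    return merged

segs = ["int a;", "int b;", "while(1){}", "break;"]
probs = [0.95, 0.2, 0.9]   # verifier scores between consecutive segments
blocks = integrate_segments(segs, probs)
```

The merged blocks would then be vectorized (word2vec or TF-IDF) and passed to the author-identification model.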
- Published
- 2020
- Full Text
- View/download PDF
24. AUToSen: Deep-Learning-Based Implicit Continuous Authentication Using Smartphone Sensors
- Author
-
DaeHun Nyang, Tamer AbuHmed, David Mohaisen, and Mohammed Abuhamad
- Subjects
Authentication, Mobile banking, Exploit, Computer Networks and Communications, Computer science, Deep learning, Real-time computing, Computer Science Applications, Information sensitivity, Hardware and Architecture, Signal Processing, Artificial intelligence, Personally identifiable information, Information Systems - Abstract
Smartphones have become crucial for our daily life activities and are increasingly loaded with our personal information to perform several sensitive tasks, including mobile banking and communication, and to store private photos and files. Therefore, there is a high demand for usable authentication techniques that prevent unauthorized access to sensitive information. In this article, we propose AUToSen, a deep-learning-based active authentication approach that exploits the sensors in consumer-grade smartphones to authenticate a user. Unlike conventional approaches, AUToSen uses deep learning to identify a user's distinct behavior from the embedded sensors, with and without the user's interaction with the smartphone. We investigate different deep learning architectures for modeling and capturing users' behavioral patterns for the purpose of authentication. Moreover, we explore how much sensory data is sufficient to accurately authenticate users. We evaluate AUToSen on a real-world dataset that includes sensor data from 84 participants' smartphones, collected using our data-collection application. The experiments show that AUToSen operates accurately using readings of only three sensors (accelerometer, gyroscope, and magnetometer) at a high authentication frequency, e.g., one authentication attempt every 0.5 s. Using one second of sensory data enables an authentication F1-score of approximately 98%, a false acceptance rate (FAR) of 0.95%, a false rejection rate (FRR) of 6.67%, and an equal error rate (EER) of 0.41%, while using half a second of sensory data enables an F1-score of 97.52%, a FAR of 0.96%, an FRR of 8.08%, and an EER of 0.09%. Moreover, we investigate the effects of using different sensory data at variable sampling periods on the performance of the authentication models under various settings and learning architectures.
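The FAR, FRR, and EER metrics reported above can be computed from genuine and impostor score distributions as follows (synthetic scores; a minimal sketch, not the paper's evaluation code):

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted (>= threshold);
    FRR: fraction of genuine scores rejected (< threshold)."""
    far = np.mean(impostor >= threshold)
    frr = np.mean(genuine < threshold)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds; EER is where FAR and FRR are closest."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    rates = [far_frr(genuine, impostor, t) for t in thresholds]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(0.9, 0.05, 500)    # scores for the true user
impostor = rng.normal(0.3, 0.15, 500)   # scores for other users
eer = equal_error_rate(genuine, impostor)
```

Well-separated score distributions, as in this toy example, yield an EER close to zero.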
- Published
- 2020
- Full Text
- View/download PDF
25. A Levy-Based Hybrid PSO-SSA Optimization Algorithm for Large Economic Load Dispatch
- Author
-
Omed Hassan Ahmed, Joan Lu, Aram Mahmood Ahmed, Tarik Rashid, Tamer Abuhmed, and Zaher Mundher Yaseen
- Published
- 2022
- Full Text
- View/download PDF
26. Multilayer dynamic ensemble model for intensive care unit mortality prediction of neonate patients
- Author
-
Firuz Juraev, Shaker El-Sappagh, Eldor Abdukhamidov, Farman Ali, and Tamer Abuhmed
- Subjects
Machine Learning, Intensive Care Units, Intensive Care Units, Neonatal, Infant, Newborn, Humans, Health Informatics, Length of Stay, Algorithms, Computer Science Applications - Abstract
Robust and rapid mortality prediction is crucial in intensive care units because it is considered one of the critical steps for treating patients in serious condition. Combining mortality prediction with length of stay (LoS) prediction adds another level of importance to these models. No studies in the literature predict such tasks for neonates, especially using time-series data and dynamic ensemble techniques. Dynamic ensembles are novel techniques that dynamically select the base classifiers for each new case. Medically, implementing an accurate machine learning model is insufficient to gain the trust of physicians; the model must be able to justify its decisions. While explainable AI (XAI) techniques can be used to handle this challenge, no studies have been done in this regard for neonate monitoring in the neonatal intensive care unit (NICU). This study utilizes advanced machine learning approaches to predict mortality and LoS through data-driven learning. We propose a multilayer dynamic ensemble-based model to predict mortality as a classification task and LoS as a regression task for neonates admitted to the NICU. The model is built on the patient's time-series data from the first 24 h in the NICU. We utilized a cohort of 3,133 infants from the real-world MIMIC-III dataset to build and optimize the selected algorithms. The results show that the dynamic ensemble models achieved better results than other classifiers, and static ensemble regressors achieved better results than classical machine learning regressors. The proposed optimized model is supported by three well-known explainability techniques: SHAP, decision tree visualization, and a rule-based system. To provide online assistance to physicians in monitoring and managing neonates in the NICU, we implemented a web-based clinical decision support system based on the most accurate models and the selected XAI techniques.
The code of the proposed models is publicly available at https://github.com/InfoLab-SKKU/neonateMortalityPrediction.
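The local-accuracy flavor of dynamic classifier selection, choosing a base model per query by its accuracy on the query's nearest validation neighbours, can be sketched with scikit-learn; this is a generic illustration, not the paper's exact multilayer method:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for the neonatal features
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Pool of base classifiers of varying capacity
pool = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_tr, y_tr)
        for d in (2, 4, 8)]
nn = NearestNeighbors(n_neighbors=7).fit(X_val)

def dynamic_predict(x):
    """Predict with the base model most accurate in the query's
    local validation neighbourhood (dynamic selection)."""
    _, idx = nn.kneighbors([x])
    local = idx[0]
    scores = [np.mean(m.predict(X_val[local]) == y_val[local]) for m in pool]
    return pool[int(np.argmax(scores))].predict([x])[0]

preds = np.array([dynamic_predict(x) for x in X_val[:50]])
acc = np.mean(preds == y_val[:50])
```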
- Published
- 2022
- Full Text
- View/download PDF
27. Code authorship identification using convolutional neural networks
- Author
-
Ji su Rhim, Mohammed Abuhamad, Sanggil Kang, DaeHun Nyang, Tamer AbuHmed, and Sana Ullah
- Subjects
Source code, Word embedding, Information retrieval, Computer Networks and Communications, Computer science, Static program analysis, Convolutional neural network, Field (computer science), Identification (information), Hardware and Architecture, Code (cryptography), Feature learning, Software - Abstract
Although source code authorship identification creates a privacy threat for many open source contributors, it is an important topic in the forensics field and enables many successful forensic applications, including ghostwriting detection, copyright dispute settlement, and other code analysis applications. This work proposes a convolutional neural network (CNN) based code authorship identification system. Our proposed system exploits term frequency-inverse document frequency, word embedding modeling, and feature learning techniques for code representation. This representation is then fed into a CNN-based code authorship identification model to identify the code's author. Evaluation results using our approach on data from Google Code Jam demonstrate an identification accuracy of up to 99.4% with 150 candidate programmers and 96.2% with 1,600 programmers. Our approach also shows high accuracy for programmer identification over real-world code samples from 1,987 public repositories on GitHub, with 95% accuracy for 745 C programmers and 97% for C++ programmers. These results indicate that the proposed approaches are not language-specific and can identify programmers of different programming languages.
- Published
- 2019
- Full Text
- View/download PDF
28. Integration of machine learning algorithms and GIS-based approaches to cutaneous leishmaniasis prevalence risk mapping
- Author
-
Negar Shabanpour, Seyed Vahid Razavi-Termeh, Abolghasem Sadeghi-Niaraki, Soo-Mi Choi, and Tamer Abuhmed
- Subjects
Global and Planetary Change ,Management, Monitoring, Policy and Law ,Computers in Earth Sciences ,Earth-Surface Processes - Published
- 2022
- Full Text
- View/download PDF
29. AdvEdge: Optimizing Adversarial Perturbations Against Interpretable Deep Learning
- Author
-
Eldor Abdukhamidov, Mohammed Abuhamad, Firuz Juraev, Eric Chan-Tin, and Tamer AbuHmed
- Published
- 2021
- Full Text
- View/download PDF
30. Alzheimer Disease Prediction Model Based on Decision Fusion of CNN-BiLSTM Deep Neural Networks
- Author
-
Kyung Sup Kwak, Shaker El-Sappagh, and Tamer AbuHmed
- Subjects
0301 basic medicine ,Modalities ,business.industry ,Computer science ,Process (engineering) ,Deep learning ,Cognition ,Machine learning ,computer.software_genre ,medicine.disease ,Convolutional neural network ,Regression ,Task (project management) ,03 medical and health sciences ,030104 developmental biology ,0302 clinical medicine ,medicine ,Artificial intelligence ,Alzheimer's disease ,business ,computer ,030217 neurology & neurosurgery - Abstract
Alzheimer’s disease (AD) is a chronic neurodegenerative disorder. Early prediction of Alzheimer’s progression is a crucial process for the patients and their families. As a chronic disease, AD data are multimodal and time series in nature. Building a deep learning model to optimize multi-objective cost function produces a more stable and accurate model. In this paper, we propose a multimodal multitask deep learning model for AD progression detection based five time series modalities and a collection of static data. The model predicts AD progression as a multi-class classification task and four critical cognitive scores as regression tasks. The experimental results show that our model is medically intuitive, more accurate, and more stable than the state-of-the-art studies.
- Published
- 2020
- Full Text
- View/download PDF
31. Age-hardening behavior guided by the multi-objective evolutionary algorithm and machine learning
- Author
-
Kotiba Hamad, Umer Masood Chaudry, Russlan Jaafreh, and Tamer AbuHmed
- Subjects
Computer science ,business.industry ,Mechanical Engineering ,Deep learning ,Metals and Alloys ,Decision tree ,Evolutionary algorithm ,Feature selection ,Machine learning ,computer.software_genre ,Random forest ,Support vector machine ,Mechanics of Materials ,Approximation error ,Materials Chemistry ,Preprocessor ,Artificial intelligence ,business ,computer - Abstract
In the present work, multi-objective evolutionary (MOE) algorithm and machine learning (ML) techniques were employed to predict the age-hardening behavior of aluminum (Al) alloys in a wide range of processing conditions. For this purpose, data containing hardness, information on alloy compositions, and aging conditions (aging time and temperature) were extracted from previous works that reported the age-hardening of Al-Cu-Mg base alloys. Accordingly, 1591 cases were collected for various alloy compositions and processing conditions. Composition features (140) generated based on the alloy composition and element properties (atomic weight, electronegativity, etc.), and processing features (time and temperature) were subjected to a preprocessing using the MOE algorithm to reduce the number of features and use those which highly influence the hardness. MOE-processed features and counterpart hardness values are then employed in the learning process using various ML algorithms, including decision tree (DT), deep learning (DL), linear general model (GM), gradient boosted trees (GBT), random forest (RF), and support vector machine (SVM). The results show that the MOE algorithm's leveraging with ML learning processes can be successfully used to refine the features and build accurate ML predictive models compared to those created using other feature selection and preprocessing methods. In addition, the learning results showed that the predictive model built using the ensemble GBT algorithm exhibits the best performance among all models built based on other ML algorithms, where a relative error of 3.5% was recorded for the GBT-based model, and it could reproduce the experimental aging behavior of Al alloy.
- Published
- 2022
- Full Text
- View/download PDF
32. An Enhanced Grey Wolf Optimizer with a Velocity-Aided Global Search Mechanism
- Author
-
Farshad Rezaei, Hamid Reza Safavi, Mohamed Abd Elaziz, Shaker H. Ali El-Sappagh, Mohammed Azmi Al-Betar, and Tamer Abuhmed
- Subjects
global search ,General Mathematics ,QA1-939 ,Computer Science (miscellaneous) ,swarm intelligence algorithms ,meta-heuristic algorithms ,exploration ,optimization ,Engineering (miscellaneous) ,exploitation ,Mathematics ,grey wolf optimizer - Abstract
This paper proposes a novel variant of the Grey Wolf Optimization (GWO) algorithm, named Velocity-Aided Grey Wolf Optimizer (VAGWO). The original GWO lacks a velocity term in its position-updating procedure, and this is the main factor weakening the exploration capability of this algorithm. In VAGWO, this term is carefully set and incorporated into the updating formula of the GWO. Furthermore, both the exploration and exploitation capabilities of the GWO are enhanced in VAGWO via stressing the enlargement of steps that each leading wolf takes towards the others in the early iterations while stressing the reduction in these steps when approaching the later iterations. The VAGWO is compared with a set of popular and newly proposed meta-heuristic optimization algorithms through its implementation on a set of 13 high-dimensional shifted standard benchmark functions as well as 10 complex composition functions derived from the CEC2017 test suite and three engineering problems. The complexity of the proposed algorithm is also evaluated against the original GWO. The results indicate that the VAGWO is a computationally efficient algorithm, generating highly accurate results when employed to optimize high-dimensional and complex problems.
- Published
- 2022
- Full Text
- View/download PDF
33. Effective Multitask Deep Learning for IoT Malware Detection and Identification Using Behavioral Traffic Analysis
- Author
-
Sajid Ali, Omar Abusabha, Farman Ali, Muhammad Imran, and Tamer ABUHMED
- Subjects
Computer Networks and Communications ,Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
34. A Short Review on the Machine Learning-Guided Oxygen Uptake Prediction for Sport Science Applications
- Author
-
Kotiba Hamad, Tamer AbuHmed, and Haneen Alzamer
- Subjects
sport science ,TK7800-8360 ,Computer Networks and Communications ,Computer science ,business.industry ,Sports science ,Work (physics) ,Physical activity ,Feature selection ,Machine learning ,computer.software_genre ,Oxygen uptake ,oxygen uptake ,machine learning ,feature selection ,Hardware and Architecture ,Control and Systems Engineering ,Signal Processing ,Artificial intelligence ,Electronics ,Electrical and Electronic Engineering ,Graded exercise test ,business ,graded exercise test ,computer - Abstract
In recent years, the rapid improvement in computing facilities combined with that achieved in algorithms and the immense amount of available data led to a great interest in machine learning (ML), which is a subset of artificial intelligence. Nowadays, the ML technique is used mostly in all applications for various purposes, whereby ML will be possible to learn from data, predict, identify patterns, and make decisions. In this regard, the ML was successfully used to predict the oxygen uptake during physical activity without the need for complicated procedures used in the direct measurement. Accordingly, in the present work, the state-of-art and recent advances related to the oxygen uptake prediction using ML were presented. Various exercise and non-exercise predictive models also were discussed.
- Published
- 2021
- Full Text
- View/download PDF
35. Machine learning-aided design of aluminum alloys with high performance
- Author
-
Tamer AbuHmed, Umer Masood Chaudry, and Kotiba Hamad
- Subjects
Materials science ,business.industry ,chemistry.chemical_element ,02 engineering and technology ,010402 general chemistry ,021001 nanoscience & nanotechnology ,Machine learning ,computer.software_genre ,01 natural sciences ,0104 chemical sciences ,Improved performance ,Precipitation hardening ,chemistry ,Mechanics of Materials ,Aluminium ,Materials Chemistry ,General Materials Science ,Artificial intelligence ,0210 nano-technology ,business ,computer - Abstract
In this work, various machine learning (ML) techniques were employed to accelerate the designing of aluminum (Al) alloys with improved performance based on the age hardening concept. For this purpose, data of Al-Cu-Mg-x (x: Zn, Zr, etc.) alloys, including composition, aging condition (time and temperature), important physical and chemical properties, and hardness were collected from the literature to train the ML algorithms for predicting Al alloys with superior hardness. The results showed that the model obtained by the gradient boosted tree (GBT) could efficiently predict the hardness of unexplored alloys.
- Published
- 2021
- Full Text
- View/download PDF
36. A Fuzzy Ontology and SVM–Based Web Content Classification System
- Author
-
Kyung Sup Kwak, Daehan Kwak, Tamer AbuHmed, Pervez Khan, Daeyoung Park, Kashif Riaz, and Farman Ali
- Subjects
Information retrieval ,General Computer Science ,Computer science ,business.industry ,General Engineering ,020206 networking & telecommunications ,02 engineering and technology ,Ontology (information science) ,Blacklist ,Support vector machine ,Web page ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,General Materials Science ,The Internet ,Web content ,business ,Block (data storage) - Abstract
The volume of adult content on the world wide web is increasing rapidly. This makes an automatic detection of adult content a more challenging task, when eliminating access to ill-suited websites. Most pornographic webpage–filtering systems are based on n-gram, naive Bayes, K-nearest neighbor, and keyword-matching mechanisms, which do not provide perfect extraction of useful data from unstructured web content. These systems have no reasoning capability to intelligently filter web content to classify medical webpages from adult content webpages. In addition, it is easy for children to access pornographic webpages due to the freely available adult content on the Internet. It creates a problem for parents wishing to protect their children from such unsuitable content. To solve these problems, this paper presents a support vector machine (SVM) and fuzzy ontology–based semantic knowledge system to systematically filter web content and to identify and block access to pornography. The proposed system classifies URLs into adult URLs and medical URLs by using a blacklist of censored webpages to provide accuracy and speed. The proposed fuzzy ontology then extracts web content to find website type (adult content, normal, and medical) and block pornographic content. In order to examine the efficiency of the proposed system, fuzzy ontology, and intelligent tools are developed using Protege 5.1 and Java, respectively. Experimental analysis shows that the performance of the proposed system is efficient for automatically detecting and blocking adult content.
- Published
- 2017
- Full Text
- View/download PDF
37. Performance Analysis and Constellation Design for the Parallel Quadrature Spatial Modulation
- Author
-
Manar Mohaisen, Tasnim Holoubi, and Tamer AbuHmed
- Subjects
Computer science ,MIMO ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Topology ,Article ,spatial modulation (SM) ,0203 mechanical engineering ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Science ,Pairwise error probability ,parallel QSM (PQSM) ,Computer Science::Information Theory ,pairwise error probability ,Transmitter ,020302 automobile design & engineering ,020206 networking & telecommunications ,Keying ,Spectral efficiency ,lcsh:QC1-999 ,QAM ,quadrature SM (QSM) ,constellation design ,Modulation ,lcsh:Q ,Antenna (radio) ,lcsh:Physics - Abstract
Spatial modulation (SM) is a multiple-input multiple-output (MIMO) technique that achieves a MIMO capacity by conveying information through antenna indices, while keeping the transmitter as simple as that of a single-input system. Quadrature SM (QSM) expands the spatial dimension of the SM into in-phase and quadrature dimensions, which are used to transmit the real and imaginary parts of a signal symbol, respectively. A parallel QSM (PQSM) was recently proposed to achieve more gain in the spectral efficiency. In PQSM, transmit antennas are split into parallel groups, where QSM is performed independently in each group using the same signal symbol. In this paper, we analytically model the asymptotic pairwise error probability of the PQSM. Accordingly, the constellation design for the PQSM is formulated as an optimization problem of the sum of multivariate functions. We provide the proposed constellations for several values of constellation size, number of transmit antennas, and number of receive antennas. The simulation results show that the proposed constellation outperforms the phase-shift keying (PSK) constellation by more than 10 dB and outperforms the quadrature-amplitude modulation (QAM) schemes by approximately 5 dB for large constellations and number of transmit antennas.
- Published
- 2020
- Full Text
- View/download PDF
38. Large-Scale and Language-Oblivious Code Authorship Identification
- Author
-
Mohammed Abuhamad, Aziz Mohaisen, DaeHun Nyang, and Tamer AbuHmed
- Subjects
Java ,Computer science ,business.industry ,020207 software engineering ,02 engineering and technology ,Python (programming language) ,computer.software_genre ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer ,Natural language processing ,computer.programming_language - Abstract
Efficient extraction of code authorship attributes is key for successful identification. However, the extraction of such attributes is very challenging, due to various programming language specifics, the limited number of available code samples per author, and the average code lines per file, among others. To this end, this work proposes a Deep Learning-based Code Authorship Identification System (DL-CAIS) for code authorship attribution that facilitates large-scale, language-oblivious, and obfuscation-resilient code authorship identification. The deep learning architecture adopted in this work includes TF-IDF-based deep representation using multiple Recurrent Neural Network (RNN) layers and fully-connected layers dedicated to authorship attribution learning. The deep representation then feeds into a random forest classifier for scalability to de-anonymize the author. Comprehensive experiments are conducted to evaluate DL-CAIS over the entire Google Code Jam (GCJ) dataset across all years (from 2008 to 2016) and over real-world code samples from 1987 public repositories on GitHub. The results of our work show the high accuracy despite requiring a smaller number of files per author. Namely, we achieve an accuracy of 96% when experimenting with 1,600 authors for GCJ, and 94.38% for the real-world dataset for 745 C programmers. Our system also allows us to identify 8,903 authors, the largest-scale dataset used by far, with an accuracy of 92.3%. Moreover, our technique is resilient to language-specifics, and thus it can identify authors of four programming languages (e.g. C, C++, Java, and Python), and authors writing in mixed languages (e.g. Java/C++, Python/C++). Finally, our system is resistant to sophisticated obfuscation (e.g. using C Tigress) with an accuracy of 93.42% for a set of 120 authors.
- Published
- 2018
- Full Text
- View/download PDF
39. UOIT Keyboard: A Constructive Keyboard for Small Touchscreen Devices
- Author
-
DaeHun Nyang, Tamer AbuHmed, and Kyunghee Lee
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Computer science ,Mobile computing ,Total error rate ,Human Factors and Ergonomics ,Constructive ,Computer Science Applications ,law.invention ,Human-Computer Interaction ,Touchscreen ,Artificial Intelligence ,Control and Systems Engineering ,law ,Human–computer interaction ,Error analysis ,Signal Processing ,Typing - Abstract
Many techniques have been proposed for reducing errors during text input on touchscreens. However, the majority of these techniques suffer from the same limitation, i.e., the keyboard keys are overcrowded on a small screen, resulting in high error rates and slow text inputs. To address this situation and resolve the problems associated with overcrowdedness, we introduce a new text-entry method called the “UOIT keyboard.” The idea behind the UOIT keyboard is to compose letters using “drawing-like typing” on the UOIT keyboard, which has 13 large keys that replace the 26 small keys that exist in the QWERTY keyboard. We describe the design, keys, and mechanism of the UOIT keyboard. A 24-participant user study was conducted to evaluate the speed and accuracy of the proposed entry method as compared with the QWERTY and multitap entry methods. As part of the evaluation, a questionnaire was used to collect participants’ preferences. The UOIT keyboard has a mean entry speed of 11.3 words/min. The UOIT keyboard significantly reduces the typing errors with 3.8% total error rate comparing with 11.2% and 16.3% for QWERTY and multitap entry methods, respectively.
- Published
- 2015
- Full Text
- View/download PDF
40. Two-level Key Pool Design-based Random Key Pre-distribution in Wireless Sensor Networks
- Author
-
DaeHun Nyang, Abedelaziz Mohaisen, and Tamer AbuHmed
- Subjects
Scheme (programming language) ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,Pre distribution ,Key distribution in wireless sensor networks ,Security association ,Protocol design ,Key (cryptography) ,Overhead (computing) ,business ,Wireless sensor network ,computer ,Information Systems ,Computer network ,computer.programming_language - Abstract
In this paper, the random key pre-distribution scheme introduced in ACM CCS'02 by Eschenauer and Gligor is reexamined, and a generalized form of key establishment is introduced. As the communication overhead is one of the most critical constraints of any successful protocol design, we introduce an alternative scheme in which the connectivity is maintained at the same level as in the original work, while the communication overhead is reduced by about 40% of the original overhead, for various carefully chosen parameters. The main modification relies on the use of a two-level key pool design and two round assignment/key establishment phases. Further analysis demonstrates the efficiency of our modification.
- Published
- 2008
- Full Text
- View/download PDF
41. Collaboration in social network-based information dissemination
- Author
-
Aziz Mohaisen, Manar Mohaisen, Ting Zhu, and Tamer AbuHmed
- Subjects
Routing protocol ,Social network ,Computer science ,business.industry ,Distributed computing ,Policy-based routing ,Geographic routing ,Routing domain ,Link-state routing protocol ,Multipath routing ,business ,Hierarchical routing ,Computer network ,Triangular routing - Abstract
Connectivity and trust within social networks have been exploited to build applications on top of these networks, including information dissemination, Sybil defenses, and anonymous communication systems. In these networks, and for such applications, connectivity ensures good performance of applications while trust is assumed to always hold, so as collaboration and good behavior are always guaranteed. In this paper, we study the impact of differential behavior of users on performance in typical social network-based information dissemination applications. We classify users into either collaborative or rational (probabilistically collaborative) and study the impact of this classification and the associated behavior of users on the performance on such applications. By experimenting with real-world social network traces, we make several interesting observations. First, we show that some of the existing social graphs have high routing costs, demonstrating poor structure that prevents their use in such applications. Second, we study the factors that make probabilistically collaborative nodes important for the performance of the routing protocol within the entire network and demonstrate that the importance of these nodes stems from their topological features rather than their percentage of all the nodes within the network.
- Published
- 2012
- Full Text
- View/download PDF
42. Computationally Efficient Cooperative Public Key Authentication Protocols in Ubiquitous Sensor Network
- Author
-
Tamer AbuHmed, Abedelaziz Mohaisen, and Daehun Nyang
- Subjects
Computer science ,business.industry ,Public key authentication ,business ,Computer security ,computer.software_genre ,Wireless sensor network ,computer ,Computer network - Abstract
The use of public key algorithms to sensor networks brings all merits of these algorithms to such networks: nodes do not need to encounter each other in advance in order to be able to communicate securely. However, this will not be possible unless “good” key management primitives that guarantee the functionality of these algorithms in the wireless sensor networks are provided. Among these primitives is public key authentication: before sensor nodes can use public keys of other nodes in the network to encrypt traffic to them, they need to make sure that the key provided for a particular node is authentic. In the near past, several researchers have addressed the problem and proposed solutions for it as well. In this chapter we review these solutions. We further discuss a new scheme which uses collaboration among sensor nodes for public key authentication. Unlike the existing solutions for public key authentication in sensor network, which demand a fixed, yet high amount of resources, the discussed work is dynamic; it meets a desirable security requirement at a given overhead constraints that need to be provided. It is scalable where the accuracy of the authentication and level of security are merely dependent upon the desirable level of resource consumption that the network operator wants to put into the authentication operation.
- Published
- 2011
- Full Text
- View/download PDF
43. Software-Based Remote Code Attestation in Wireless Sensor Network
- Author
-
DaeHun Nyang, Nandinbold Nyamaa, and Tamer AbuHmed
- Subjects
business.industry ,Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Network delay ,Key distribution in wireless sensor networks ,Embedded system ,Sensor node ,Mobile wireless sensor network ,business ,Communications protocol ,Wireless sensor network ,Tamper resistance ,Communication channel ,Computer network - Abstract
Sensor nodes are usually vulnerable to be compromised due to their unattended deployment. The low cost requirement of the sensor node precludes using an expensive tamper resistant hardware for sensor physical protection. Thus, the adversary can reprogram the compromised sensors and deviates sensor network functionality. In this paper, we propose two simple software-based remote code attestation schemes for different WSN criterion. Our schemes use different independent memory noise filling techniques called pre-deployment and post-deployment noise filling, and also different communication protocols for attestation purpose. The protocols are well-suited for wireless sensor networks, where external factors, such as channel collision, result in network delay. Hence, the success of our schemes of attestation does not depend on the accurate measurement of the execution time, which is the main drawback of previously proposed wireless sensor network attestation schemes.
- Published
- 2009
- Full Text
- View/download PDF
Catalog
Discovery Service for Jio Institute Digital Library
For full access to our library's resources, please sign in.