244 results on '"Shaikh, Asadullah"'
Search Results
202. A Deep Learning CNN Approach Regarding Drone Surveillance in Fire-Fighting Scenarios
- Author
-
Travediu, Ana-Maria, Vladareanu, Luige, Munteanu, Radu, Niu, Jianye, Melinte, Daniel Octavian, Pușcașu, Ionel, Celebi, Emre, Series Editor, Chen, Jingdong, Series Editor, Gopi, E. S., Series Editor, Neustein, Amy, Series Editor, Liotta, Antonio, Series Editor, Di Mauro, Mario, Series Editor, Shaikh, Asadullah, editor, Alghamdi, Abdullah, editor, Tan, Qing, editor, and El Emary, Ibrahiem M. M., editor
- Published
- 2024
- Full Text
- View/download PDF
203. Rethinking the E-HR Function: The Case of ERP at the Hassan II University Hospital Center
- Author
-
Chaaibi, Imane, Ouddasser, Abderrahmane, Baghdad, Achraf, Celebi, Emre, Series Editor, Chen, Jingdong, Series Editor, Gopi, E. S., Series Editor, Neustein, Amy, Series Editor, Liotta, Antonio, Series Editor, Di Mauro, Mario, Series Editor, Shaikh, Asadullah, editor, Alghamdi, Abdullah, editor, Tan, Qing, editor, and El Emary, Ibrahiem M. M., editor
- Published
- 2024
- Full Text
- View/download PDF
204. Exploring the Impact of Digital Art Therapy on People with Dementia: A Framework and Research-Based Discussion
- Author
-
Shojaei, Fereshtehossadat, Shojaei, Fatemehalsadat, Bergvist, Erik Stolterman, Shih, Patrick C., Celebi, Emre, Series Editor, Chen, Jingdong, Series Editor, Gopi, E. S., Series Editor, Neustein, Amy, Series Editor, Liotta, Antonio, Series Editor, Di Mauro, Mario, Series Editor, Shaikh, Asadullah, editor, Alghamdi, Abdullah, editor, Tan, Qing, editor, and El Emary, Ibrahiem M. M., editor
- Published
- 2024
- Full Text
- View/download PDF
205. Reinventing Public Health: From LEAN Management to Optimizing Hospital Logistics
- Author
-
Baghdad, Achraf, Ouddasser, Abderrahmane, Chaaibi, Imane, Celebi, Emre, Series Editor, Chen, Jingdong, Series Editor, Gopi, E. S., Series Editor, Neustein, Amy, Series Editor, Liotta, Antonio, Series Editor, Di Mauro, Mario, Series Editor, Shaikh, Asadullah, editor, Alghamdi, Abdullah, editor, Tan, Qing, editor, and El Emary, Ibrahiem M. M., editor
- Published
- 2024
- Full Text
- View/download PDF
206. Heart patient health monitoring system using invasive and non-invasive measurement.
- Author
-
Mastoi, Qurat-ul-Ain, Alqahtani, Ali, Almakdi, Sultan, Sulaiman, Adel, Rajab, Adel, Shaikh, Asadullah, and Alqhtani, Samar M.
- Subjects
- *
MACHINE learning , *CARDIAC patients , *ARRHYTHMIA , *PATIENT monitoring , *HEART conduction system , *MEDICAL personnel , *SUPPORT vector machines , *HEART - Abstract
Abnormal heart conduction, known as arrhythmia, can contribute to cardiac diseases that carry the risk of fatal consequences. Healthcare professionals typically use electrocardiogram (ECG) signals and certain preliminary tests to identify abnormal patterns in a patient's cardiac activity. To assess the overall cardiac health condition, cardiac specialists monitor these activities separately. This procedure may be arduous and time-intensive, potentially impacting the patient's well-being. This study introduces a novel automated solution for predicting cardiac health conditions, specifically identifying cardiac morbidity and arrhythmia in patients by using invasive and non-invasive measurements. The experimental analyses conducted in medical studies entail extremely sensitive data, and any partial or biased diagnoses in this field are deemed unacceptable. Therefore, this research introduces a new concept for determining the uncertainty level of machine learning algorithms using information entropy. Information entropy can be considered a distinctive performance evaluator of machine learning algorithms, one that has not previously been adopted in studies within the realm of bio-computational research. The experiment was conducted on arrhythmia and heart disease datasets collected from the Massachusetts Institute of Technology-Beth Israel Hospital arrhythmia database (DB-1) and the Cleveland Heart Disease database (DB-2), respectively. Our framework consists of four significant steps: 1) data acquisition, 2) feature preprocessing, 3) implementation of learning algorithms, and 4) information entropy. The results demonstrate the average accuracy achieved by the classification algorithms: Neural Network (NN) 99.74%, K-Nearest Neighbor (KNN) 98.98%, Support Vector Machine (SVM) 99.37%, Random Forest (RF) 99.76%, and Naïve Bayes (NB) 98.66%. We believe that this study paves the way for further research, offering a framework for identifying cardiac health conditions through machine learning techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
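For readers of entry 206, the information-entropy idea can be illustrated with a minimal sketch (not the authors' code): Shannon entropy of a classifier's predicted class probabilities used as an uncertainty score. The dataset, classifier, and log base here are assumptions.

```python
# Hypothetical sketch: Shannon entropy of predicted class probabilities
# as an uncertainty score for a trained classifier (not the paper's code).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)          # stand-in dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)                      # shape: (n_samples, n_classes)

# Per-sample Shannon entropy H = -sum(p * log2(p)); small epsilon avoids log(0).
entropy = -np.sum(proba * np.log2(proba + 1e-12), axis=1)
print(f"accuracy={clf.score(X_te, y_te):.4f}, mean predictive entropy={entropy.mean():.4f}")
```

A lower mean entropy here would indicate a model that is, on average, more certain about its predictions on unseen data.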
207. Vulnerability detection in Java source code using a quantum convolutional neural network with self-attentive pooling, deep sequence, and graph-based hybrid feature extraction.
- Author
-
Hussain, Shumaila, Nadeem, Muhammad, Baber, Junaid, Hamdi, Mohammed, Rajab, Adel, Al Reshan, Mana Saleh, and Shaikh, Asadullah
- Subjects
- *
CONVOLUTIONAL neural networks , *DEEP learning , *FEATURE extraction , *SOURCE code , *COMPUTER security vulnerabilities , *FLOWGRAPHS - Abstract
Software vulnerabilities pose a significant threat to system security, necessitating effective automatic detection methods. Current techniques face challenges such as dependency issues, language bias, and coarse detection granularity. This study presents a novel deep learning-based vulnerability detection system for Java code. Leveraging hybrid feature extraction through graph and sequence-based techniques enhances semantic and syntactic understanding. The system utilizes control flow graphs (CFG), abstract syntax trees (AST), program dependencies (PD), and greedy longest-match first vectorization for graph representation. A hybrid neural network (GCN-RFEMLP) and the pre-trained CodeBERT model extract features, feeding them into a quantum convolutional neural network with self-attentive pooling. The system addresses issues like long-term information dependency and coarse detection granularity, employing intermediate code representation and inter-procedural slice code. To mitigate language bias, a benchmark software assurance reference dataset is employed. Evaluations demonstrate the system's superiority, achieving 99.2% accuracy in detecting vulnerabilities, outperforming benchmark methods. The proposed approach comprehensively addresses vulnerabilities, including improper input validation, missing authorizations, buffer overflow, cross-site scripting, and SQL injection attacks listed by common weakness enumeration (CWE). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
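One ingredient of entry 207, extracting deep features from source code with the pre-trained CodeBERT model, can be sketched as follows; the Java snippet, mean-pooling step, and downstream use are illustrative assumptions rather than the authors' pipeline, which also adds graph-based features and a quantum convolutional network.

```python
# Hypothetical sketch: pooled CodeBERT embeddings for a Java snippet
# (illustrative only; the paper combines such features with graph-based ones).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

java_snippet = "public int add(int a, int b) { return a + b; }"
inputs = tokenizer(java_snippet, return_tensors="pt", truncation=True, max_length=256)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single fixed-size feature vector.
embedding = outputs.last_hidden_state.mean(dim=1).squeeze(0)
print(embedding.shape)   # torch.Size([768])
```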
208. Hybrid feature selection and classification technique for early prediction and severity of diabetes type 2.
- Author
-
Talari, Praveen, N, Bharathiraja, Kaur, Gaganpreet, Alshahrani, Hani, Al Reshan, Mana Saleh, Sulaiman, Adel, and Shaikh, Asadullah
- Subjects
- *
TYPE 2 diabetes , *MEDICAL specialties & specialists , *DECISION trees , *RECEIVER operating characteristic curves , *INSULIN resistance , *FORECASTING , *HEART , *FEATURE selection - Abstract
Diabetes prediction is an ongoing research topic in which medical specialists are attempting to forecast the condition with greater precision. Diabetes typically remains latent, and if patients are additionally diagnosed with another illness, such as damage to the kidney vessels, problems with the retina of the eye, or a heart problem, it can cause metabolic problems and various complications in the body. Several ensemble learning procedures, including voting, boosting, and bagging, have been applied in this study. The Synthetic Minority Oversampling Technique (SMOTE), along with the K-fold cross-validation approach, was utilized to achieve class balancing and validate the findings. The Pima Indian Diabetes (PID) dataset, obtained from the UCI Machine Learning (UCI ML) repository, was chosen for this study. A feature engineering technique was used to calculate the influence of lifestyle factors. A two-phase classification model has been developed to predict insulin resistance using the Sequential Minimal Optimization (SMO) and SMOTE approaches together. The SMOTE technique is used to preprocess data in the model's first phase, while the SMO classifier is used in the second phase. Bagging with decision trees outperformed all other categorization techniques in terms of misclassification error rate, accuracy, specificity, precision, recall, F1 measure, and ROC curve. The model, created using the combined SMOTE and SMO strategy, achieved 99.07% accuracy with 0.1 ms of runtime. The aim of the suggested system is to enhance the classifier's performance in spotting illness early. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
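The two-phase SMOTE-then-SMO design of entry 208 can be approximated with a hedged sketch: imbalanced-learn's SMOTE for class balancing followed by an SVM (scikit-learn's SVC relies on an SMO-style solver). A synthetic imbalanced dataset stands in for the Pima Indian Diabetes data, and the fold count and kernel are assumptions.

```python
# Hypothetical sketch of the SMOTE -> SVM (SMO-style solver) pipeline;
# a synthetic imbalanced dataset stands in for the Pima Indian Diabetes data.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=768, n_features=8, weights=[0.65, 0.35],
                           random_state=42)

pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),   # phase 1: balance the classes
    ("svm", SVC(kernel="rbf", C=1.0)),   # phase 2: SMO-based SVM classifier
])

scores = cross_val_score(pipe, X, y, cv=10)   # K-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.4f}")
```

Using the imbalanced-learn Pipeline ensures SMOTE is refit only on each training fold, so oversampled points never leak into the validation folds.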
209. An intelligent LinkNet-34 model with EfficientNetB7 encoder for semantic segmentation of brain tumor.
- Author
-
Sulaiman, Adel, Anand, Vatsala, Gupta, Sheifali, Al Reshan, Mana Saleh, Alshahrani, Hani, Shaikh, Asadullah, and Elmagzoub, M. A.
- Subjects
- *
BRAIN tumors , *DEEP learning , *CONVOLUTIONAL neural networks , *NEUROLOGICAL disorders , *CANCER diagnosis , *MAGNETIC resonance imaging - Abstract
A brain tumor is an uncontrolled, unnatural expansion of brain cells, making it one of the deadliest diseases of the nervous system. Segmenting brain tumors for earlier diagnosis is a difficult task in the field of medical image analysis. Previously, brain tumor segmentation was done manually by radiologists, but that requires a lot of time and effort; in addition, manual segmentation is prone to mistakes arising from human intervention. It has been shown that deep learning models can outperform human experts in the diagnosis of brain tumors in MRI images. These algorithms employ a huge number of MRI scans to learn the difficult patterns of brain tumors and segment them automatically and accurately. Here, an encoder-decoder architecture based on a deep convolutional neural network is proposed for semantic segmentation of brain tumors in MRI images. The proposed method focuses on image downsampling in the encoder part. For this, an intelligent LinkNet-34 semantic segmentation model with an EfficientNetB7 encoder is proposed. The performance of the LinkNet-34 model is compared with that of three other models, namely FPN, U-Net, and PSPNet. Further, the performance of EfficientNetB7, used as the encoder in the LinkNet-34 model, is compared with that of three other encoders, namely ResNet34, MobileNet_V2, and ResNet50. After that, the proposed model is optimized using three different optimizers: RMSProp, Adamax, and Adam. The LinkNet-34 model with the EfficientNetB7 encoder and the Adamax optimizer performed best, with a Jaccard index of 0.89 and a Dice coefficient of 0.915. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
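An architecture along the lines of entry 209, LinkNet with an EfficientNetB7 encoder, can be instantiated with the segmentation_models_pytorch library; the input size, Dice loss, and single training step below are assumptions, not the authors' training code.

```python
# Hypothetical sketch: LinkNet with an EfficientNet-B7 encoder for binary
# brain-tumor segmentation (segmentation_models_pytorch), not the paper's code.
import torch
import segmentation_models_pytorch as smp

model = smp.Linknet(
    encoder_name="efficientnet-b7",      # encoder used for downsampling
    encoder_weights="imagenet",          # pre-trained encoder weights
    in_channels=3,                       # RGB (or replicated grayscale) MRI slices
    classes=1,                           # binary tumor mask
)

loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)

x = torch.randn(2, 3, 256, 256)                      # dummy batch of MRI slices
y = (torch.rand(2, 1, 256, 256) > 0.5).float()       # dummy binary masks

pred = model(x)                                      # raw logits
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
print(pred.shape, float(loss))
```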
210. Enhancing Breast Cancer Detection and Classification Using Advanced Multi-Model Features and Ensemble Machine Learning Techniques.
- Author
-
Reshan, Mana Saleh Al, Amin, Samina, Zeb, Muhammad Ali, Sulaiman, Adel, Alshahrani, Hani, Azar, Ahmad Taher, and Shaikh, Asadullah
- Subjects
- *
BREAST , *MACHINE learning , *BREAST cancer , *RECEIVER operating characteristic curves , *TUMOR classification , *EARLY detection of cancer , *NAIVE Bayes classification , *NEEDLE biopsy - Abstract
Breast cancer (BC) is the most common cancer among women, making it essential to have an accurate and dependable system for diagnosing benign or malignant tumors. It is essential to detect this cancer early in order to inform subsequent treatments. Currently, fine needle aspiration (FNA) cytology and machine learning (ML) models can be used to detect and diagnose this cancer more accurately. Consequently, an effective and dependable approach needs to be developed to enhance the clinical capacity to diagnose this illness. This study aims to detect BC and classify it into two categories using the Wisconsin Diagnostic Breast Cancer (WDBC) benchmark feature set and to select the fewest features that attain the highest accuracy. To this end, this study explores automated BC prediction using multi-model features and ensemble machine learning (EML) techniques. To achieve this, we propose an advanced ensemble technique, which incorporates voting, bagging, stacking, and boosting as combination techniques for the classifier in the proposed EML methods to distinguish benign breast tumors from malignant cancers. In the feature extraction process, we suggest a recursive feature elimination technique to find the most important features of the WDBC that are pertinent to BC detection and classification. Furthermore, we conducted cross-validation experiments, and the comparative results demonstrated that our method can effectively enhance classification performance and attain the highest value in six evaluation metrics, including precision, sensitivity, area under the curve (AUC), specificity, accuracy, and F1-score. Overall, the stacking model achieved the best average accuracy, at 99.89%, and its sensitivity, specificity, F1-score, precision, and AUC/ROC were 1.00, 0.999, 1.00, 1.00, and 1.00, respectively, thus generating excellent results. The findings of this study can be used to establish a reliable clinical detection system, enabling experts to make more precise and effective decisions in the future. Additionally, the proposed technology might be used to detect a variety of cancers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
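The recursive feature elimination plus ensemble idea of entry 210 can be sketched on the WDBC data (available as scikit-learn's load_breast_cancer); the base learners, the number of retained features, and the stacking meta-learner are assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: recursive feature elimination followed by a stacking
# ensemble on the WDBC data; not the authors' exact setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

pipe = make_pipeline(
    StandardScaler(),
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=10),  # keep 10 features
    stack,
)

scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.4f}")
```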
211. An Intelligent Attention-Based Transfer Learning Model for Accurate Differentiation of Bone Marrow Stains to Diagnose Hematological Disorder.
- Author
-
Alshahrani, Hani, Sharma, Gunjan, Anand, Vatsala, Gupta, Sheifali, Sulaiman, Adel, Elmagzoub, M. A., Reshan, Mana Saleh Al, Shaikh, Asadullah, and Azar, Ahmad Taher
- Subjects
- *
BLOOD diseases , *BONE marrow cells , *BONE marrow , *HEMATOPOIETIC system , *MYELODYSPLASTIC syndromes , *BLOOD cells - Abstract
Bone marrow (BM) is an essential part of the hematopoietic system, which generates all of the body's blood cells and maintains the body's overall health and immune system. The classification of bone marrow cells is pivotal in both clinical and research settings because many hematological diseases, such as leukemia, myelodysplastic syndromes, and anemias, are diagnosed based on specific abnormalities in the number, type, or morphology of bone marrow cells. There is a requirement for developing a robust deep-learning algorithm to diagnose bone marrow cells to keep a close check on them. This study proposes a framework for categorizing bone marrow cells into seven classes. In the proposed framework, five transfer learning models—DenseNet121, EfficientNetB5, ResNet50, Xception, and MobileNetV2—are implemented into the bone marrow dataset to classify them into seven classes. The best-performing DenseNet121 model was fine-tuned by adding one batch-normalization layer, one dropout layer, and two dense layers. The proposed fine-tuned DenseNet121 model was optimized using several optimizers, such as AdaGrad, AdaDelta, Adamax, RMSprop, and SGD, along with different batch sizes of 16, 32, 64, and 128. The fine-tuned DenseNet121 model was integrated with an attention mechanism to improve its performance by allowing the model to focus on the most relevant features or regions of the image, which can be particularly beneficial in medical imaging, where certain regions might have critical diagnostic information. The proposed fine-tuned and integrated DenseNet121 achieved the highest accuracy, with a training success rate of 99.97% and a testing success rate of 97.01%. The key hyperparameters, such as batch size, number of epochs, and different optimizers, were all considered for optimizing these pre-trained models to select the best model. This study will help in medical research to effectively classify the BM cells to prevent diseases like leukemia. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
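The fine-tuning recipe described in entry 211 (DenseNet121 plus one batch-normalization layer, one dropout layer, and two dense layers for seven bone-marrow classes) can be sketched in Keras; the input size, layer widths, dropout rate, and learning rate are assumptions.

```python
# Hypothetical Keras sketch: DenseNet121 backbone with one BatchNormalization,
# one Dropout, and two Dense layers for 7 bone-marrow cell classes.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # freeze the pre-trained backbone

model = models.Sequential([
    base,
    layers.BatchNormalization(),
    layers.Dropout(0.3),
    layers.Dense(256, activation="relu"),
    layers.Dense(7, activation="softmax"),  # seven bone-marrow cell classes
])

model.compile(optimizer=tf.keras.optimizers.Adamax(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```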
212. Artificial Intelligence-Based Secured Power Grid Protocol for Smart City.
- Author
-
Sulaiman, Adel, Nagu, Bharathiraja, Kaur, Gaganpreet, Karuppaiah, Pradeepa, Alshahrani, Hani, Reshan, Mana Saleh Al, AlYami, Sultan, and Shaikh, Asadullah
- Subjects
- *
ARTIFICIAL intelligence , *SMART power grids , *SMART cities , *ELECTRIC power , *RECURRENT neural networks , *COMPUTER engineering - Abstract
Due to the modern power system's rapid development, more scattered smart grid components are securely linked into the power system by encircling a wide electrical power network with the underpinning communication system. By enabling a wide range of applications, such as distributed energy management, system state forecasting, and cyberattack security, these components generate vast amounts of data that automate and improve the efficiency of the smart grid. Due to traditional computer technologies' inability to handle the massive amount of data that smart grid systems generate, AI-based alternatives have received a lot of interest. Long Short-Term Memory (LSTM) and Recurrent Neural Networks (RNN) are specifically developed in this study to address this issue by incorporating the attributes of the adaptively time-evolving energy system, enhancing the model of the dynamic properties of the contemporary Smart Grid (SG) that are impacted by the Revised Encoding Scheme (RES) or system reconfiguration, and differentiating legitimate changes from real-time threats. More specifically, we provide a federated learning strategy for consumer sharing of power data with the Power Grid (PG) that is supported by edge clouds, protects consumer privacy, and is communication-efficient. We then design two optimization problems for Energy Data Owners (EDO) and energy service providers (ESP), as well as a local information assessment method in Federated Learning (FL), taking non-independent and identically distributed (non-IID) effects into consideration. The test results revealed that LSTM had a longer training duration, four hidden layers, and higher training loss than other models. The provided method works very well in several situations to identify false data injection attacks (FDIA). Extensive simulations show that the suggested approach may successfully induce EDOs to employ high-quality local models, increase the payout of the ESP, and decrease task latencies. According to the verification results, every attack sample could be effectively recognized utilizing the current detection methods and the proposed LSTM RNN-based structure. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
213. Automatic Identification of Glomerular in Whole-Slide Images Using a Modified UNet Model.
- Author
-
Kaur, Gurjinder, Garg, Meenu, Gupta, Sheifali, Juneja, Sapna, Rashid, Junaid, Gupta, Deepali, Shah, Asadullah, and Shaikh, Asadullah
- Subjects
- *
AUTOMATIC identification , *KIDNEY cortex , *CHRONIC kidney failure , *KIDNEY failure , *BLOOD filtration - Abstract
Glomeruli are interconnected capillaries in the renal cortex that are responsible for blood filtration. Damage to these glomeruli often signifies the presence of kidney disorders like glomerulonephritis and glomerulosclerosis, which can ultimately lead to chronic kidney disease and kidney failure. The timely detection of such conditions is essential for effective treatment. This paper proposes a modified UNet model to accurately detect glomeruli in whole-slide images of kidney tissue. The UNet model was modified by changing the number of filters and feature map dimensions from the first to the last layer to enhance the model's capacity for feature extraction. Moreover, the depth of the UNet model was also improved by adding one more convolutional block to both the encoder and decoder sections. The dataset used in the study comprised 20 large whole-slide images. Due to their large size, the images were cropped into 512 × 512-pixel patches, resulting in a dataset comprising 50,486 images. The proposed model performed well, with 95.7% accuracy, 97.2% precision, 96.4% recall, and 96.7% F1-score. These results demonstrate the proposed model's superior performance compared to the original UNet model, the UNet model with EfficientNetb3, and the current state-of-the-art. Based on these experimental findings, it has been determined that the proposed model accurately identifies glomeruli in extracted kidney patches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
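The patching step in entry 213, cropping whole-slide images into 512 × 512-pixel patches, can be sketched with NumPy; the dummy array and the choice to discard edge remainders are assumptions.

```python
# Hypothetical sketch: crop a large whole-slide image array into 512x512 patches
# (edge remainders are simply discarded here; the paper's handling may differ).
import numpy as np

def crop_into_patches(image: np.ndarray, patch: int = 512):
    """Yield non-overlapping patch x patch tiles from an H x W x C array."""
    h, w = image.shape[:2]
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            yield image[top:top + patch, left:left + patch]

slide = np.random.randint(0, 256, size=(4096, 4096, 3), dtype=np.uint8)  # dummy slide
patches = list(crop_into_patches(slide))
print(len(patches), patches[0].shape)   # 64 patches of shape (512, 512, 3)
```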
214. Correction to: Contribution of Sustainable and Responsible Finance: Issues and Perspectives
- Author
-
Bellaali, Fatima, El Bouhadi, Abdelhamid, Zaryouhi, Mohammed, Bouchao, Najah, Celebi, Emre, Series Editor, Chen, Jingdong, Series Editor, Gopi, E. S., Series Editor, Neustein, Amy, Series Editor, Liotta, Antonio, Series Editor, Di Mauro, Mario, Series Editor, Shaikh, Asadullah, editor, Alghamdi, Abdullah, editor, Tan, Qing, editor, and El Emary, Ibrahiem M. M., editor
- Published
- 2024
- Full Text
- View/download PDF
215. Improving in-text citation reason extraction and classification using supervised machine learning techniques.
- Author
-
Ihsan, Imran, Rahman, Hameedur, Shaikh, Asadullah, Sulaiman, Adel, Rajab, Khairan, and Rajab, Adel
- Subjects
- *
EXTRACTION (Linguistics) , *DIGITAL libraries , *COMPUTATIONAL linguistics , *SUPPORT vector machines , *MACHINE learning - Abstract
In the last decade, automatic extraction and classification of in-text citations have gained immense popularity and have become one of the most frequently used techniques to evaluate research. Due to the large volume of in-text citations in various digital libraries such as Web of Science, Scopus, Google Scholar, Microsoft Academic, etc., machine learning models and natural language processing techniques are being used to extract, classify, and analyze them. Typical automatic in-text classification techniques use sentiment-based classes (Positive, Negative, and Neutral). However, there are cognitive-based schemes as well that classify in-text citations based on the author's perspective. In such schemes, extracting citation reasons with high recall is challenging. To address this challenge, we have used the eight citation context and reason classes defined by CCRO (Citation's Context and Reasons Ontology) to develop a machine learning model that achieves high recall without compromising on precision. We have worked on the Association for Computational Linguistics corpus with over 7000 in-text citations, randomly annotated by experts with CCRO classes. Afterwards, an array of machine-learning models is implemented on the annotated dataset: Support Vector Machine (SVM), Naïve Bayesian (NB), and Random Forest (RF). We have used various parts of speech (nouns, verbs, adverbs, and adjectives) as novel features. Our results show that we have outperformed the three comparative models by achieving 91% accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
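A toy version of the in-text citation classification task in entry 215 is sketched below; the paper uses part-of-speech features over CCRO classes, whereas plain TF-IDF n-grams and made-up example sentences and labels stand in here.

```python
# Hypothetical sketch: a toy in-text citation classifier. The paper uses
# part-of-speech features over CCRO classes; plain TF-IDF n-grams stand in here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

contexts = [
    "Smith et al. [3] proposed a faster parsing algorithm.",
    "We adopt the evaluation protocol of [7] in our experiments.",
    "Unlike [12], our method does not require labeled data.",
]
labels = ["contribution", "use", "contrast"]     # toy stand-ins for CCRO classes

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(contexts, labels)
print(model.predict(["The results of [5] are used as a baseline."]))
```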
216. EfficientNetB0 cum FPN Based Semantic Segmentation of Gastrointestinal Tract Organs in MRI Scans.
- Author
-
Sharma, Neha, Gupta, Sheifali, Reshan, Mana Saleh Al, Sulaiman, Adel, Alshahrani, Hani, and Shaikh, Asadullah
- Subjects
- *
GASTROINTESTINAL system , *RADIATION exposure , *MAGNETIC resonance imaging , *RADIOTHERAPY , *CANCER treatment , *AKAIKE information criterion - Abstract
The segmentation of gastrointestinal (GI) organs is crucial in radiation therapy for treating GI cancer. It allows for developing a targeted radiation therapy plan while minimizing radiation exposure to healthy tissue, improving treatment success, and decreasing side effects. Medical diagnostics in GI tract organ segmentation is essential for accurate disease detection, precise differential diagnosis, optimal treatment planning, and efficient disease monitoring. This research presents a hybrid encoder–decoder-based model for segmenting healthy organs in the GI tract in biomedical images of cancer patients, which might help radiation oncologists treat cancer more quickly. Here, EfficientNet B0 is used as a bottom-up encoder architecture for downsampling to capture contextual information by extracting meaningful and discriminative features from input images. The performance of the EfficientNet B0 encoder is compared with that of three other encoders: ResNet 50, MobileNet V2, and Timm Gernet. The Feature Pyramid Network (FPN) is a top-down decoder architecture used for upsampling to recover spatial information. The performance of the FPN decoder is compared with that of three other decoders: PAN, Linknet, and MAnet. This paper therefore proposes a segmentation model that combines the Feature Pyramid Network (FPN) decoder with EfficientNet B0 as the encoder. Furthermore, the proposed hybrid model is analyzed using the Adam, Adadelta, SGD, and RMSprop optimizers. Four performance criteria are used to assess the models: the Jaccard and Dice coefficients, model loss, and processing time. The proposed model achieves Dice coefficient and Jaccard index values of 0.8975 and 0.8832, respectively. The proposed method can assist radiation oncologists in precisely targeting areas hosting cancer cells in the gastrointestinal tract, allowing for more efficient and timely cancer treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
217. Count Me Too: Sentiment Analysis of Roman Sindhi Script.
- Author
-
Alvi, Muhammd Bux, Mahoto, Naeem Ahmed, Al Reshan, Mana Saleh, Unar, Mukhtiar, Elmagzoub, M. A., and Shaikh, Asadullah
- Subjects
- *
SENTIMENT analysis , *SOCIAL media , *PHONETICS , *LEXICAL access , *SINDHI language - Abstract
Social media has given voice to people around the globe. However, not all voices are counted, due to the scarcity of lexical computational resources that could harness the torrent of social media text data. Computational resources for rich languages such as English are available, and more are being developed while the current ones are strengthened and enhanced. However, Roman Sindhi, a resource-poor writing style of a phonetically rich language, lacks computational resources, creating a working space for researchers. This work attempts to develop lexical sentiment resources that help gauge the public opinion expressed in Roman Sindhi and bring those points of view into the limelight. It is one of the initial efforts to develop lexical Roman Sindhi sentiment dictionary resources to help detect sentiment orientation in a text. Furthermore, it also developed two interfaces to leverage the lexical resources: a Roman Sindhi to English translator (RoSET), which translates a Roman Sindhi feature into an equivalent English word, and a Roman Sindhi rule-based sentiment scorer (RBRS3), which assigns a sentiment score to Roman Sindhi script features. The developed system accommodated the bilingual dataset (Roman Sindhi + English) more adequately. An increase of 20.8% was recorded for positive sentence detection and a 16% increase for negative sentences, whereas the number of neutral sentences decreased by 59.31%. The resulting system ensures that public voices expressed in the Roman Sindhi script are counted, where they would otherwise go unheard. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
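The rule-based scoring idea behind RBRS3 in entry 217 can be sketched with a tiny dictionary lookup; the Roman Sindhi lexicon entries, negation markers, and example sentences below are invented for illustration and are not taken from the paper's resources.

```python
# Hypothetical sketch of a rule-based sentiment scorer in the spirit of RBRS3;
# the tiny lexicon and negation list below are invented examples.
LEXICON = {"sutho": 1.0, "bhalo": 1.0, "kharab": -1.0, "bekar": -1.0}   # assumed entries
NEGATIONS = {"na", "nahe"}                                              # assumed markers

def sentiment_score(sentence: str) -> float:
    """Sum lexicon scores, flipping the sign of a term preceded by a negation."""
    tokens = sentence.lower().split()
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            weight = LEXICON[tok]
            if i > 0 and tokens[i - 1] in NEGATIONS:
                weight = -weight
            score += weight
    return score

print(sentiment_score("hi khano sutho aahe"))      # -> 1.0 (positive)
print(sentiment_score("hi khano na sutho aahe"))   # -> -1.0 (negated positive)
```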
218. ResRandSVM: Hybrid Approach for Acute Lymphocytic Leukemia Classification in Blood Smear Images.
- Author
-
Sulaiman, Adel, Kaur, Swapandeep, Gupta, Sheifali, Alshahrani, Hani, Reshan, Mana Saleh Al, Alyami, Sultan, and Shaikh, Asadullah
- Subjects
- *
LYMPHOBLASTIC leukemia , *DEEP learning , *FEATURE selection , *SUPPORT vector machines , *LEUCOCYTES - Abstract
Acute Lymphocytic Leukemia is a type of cancer that occurs when the bone marrow produces abnormal white blood cells that do not function properly, crowding out healthy cells and weakening the body's immunity and thus its ability to resist infections. It spreads quickly in children's bodies, and if not treated promptly it may lead to death. The manual detection of this disease is a tedious and slow task. Machine learning and deep learning techniques are faster than manual detection and more accurate. In this paper, a deep-feature-selection-based approach, ResRandSVM, is proposed for the detection of Acute Lymphocytic Leukemia in blood smear images. The proposed approach uses seven deep-learning models for deep feature extraction from blood smear images: ResNet152, VGG16, DenseNet121, MobileNetV2, InceptionV3, EfficientNetB0, and ResNet50. After that, three feature selection methods are used to extract valuable and important features: analysis of variance (ANOVA), principal component analysis (PCA), and Random Forest. The selected feature map is then fed to four different classifiers, AdaBoost, Support Vector Machine, Artificial Neural Network, and Naïve Bayes, to classify the images into leukemia and normal images. The model performs best with the combination of ResNet50 as the feature extractor, Random Forest for feature selection, and Support Vector Machine as the classifier, with an accuracy of 0.900, precision of 0.902, recall of 0.957, and F1-score of 0.929. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
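The best-performing ResRandSVM combination reported in entry 218 (ResNet50 features, Random-Forest-based feature selection, SVM classifier) can be sketched as follows; the dummy image array, selection threshold, and kernel are assumptions.

```python
# Hypothetical sketch of the ResRandSVM idea: ResNet50 deep features,
# random-forest-based feature selection, then an SVM classifier.
# Dummy arrays stand in for the blood-smear images and labels.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

images = np.random.rand(16, 224, 224, 3).astype("float32") * 255.0   # dummy images
labels = np.random.randint(0, 2, size=16)                            # leukemia vs. normal

extractor = ResNet50(include_top=False, weights="imagenet", pooling="avg")
features = extractor.predict(preprocess_input(images), verbose=0)    # shape (16, 2048)

clf = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0)),
    SVC(kernel="rbf"),
)
clf.fit(features, labels)
print(clf.score(features, labels))
```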
219. BBSF: Blockchain-Based Secure Weather Forecasting Information through Routing Protocol in Vanet.
- Author
-
Sohail, Hamza, Hassan, Mahmood ul, Elmagzoub, M. A., Rajab, Adel, Rajab, Khairan, Ahmed, Adeel, Shaikh, Asadullah, Ali, Abid, and Jamil, Harun
- Subjects
- *
VEHICULAR ad hoc networks , *NETWORK performance , *SENSE data , *WEATHER forecasting , *INFORMATION networks , *DEMAND forecasting - Abstract
A vehicular ad hoc network (VANET) is a technology that uses vehicles with the ability to sense data from the environment and use it for their safety measures. Flooding is a commonly used technique for sending network packets, but in a VANET it may cause redundancy, delay, collisions, and the incorrect delivery of messages to their destination. Weather information is one of the most important types of information used for network control and provides an enhanced version of the network simulation environments. Network traffic delay and packet losses are the main problems identified inside the network. In this research, we propose a routing protocol that can transmit weather forecasting information on demand from a source vehicle to destination vehicles, with the minimum number of hops, and provide significant control over network performance parameters. We propose a BBSF-based routing approach. The proposed technique effectively enhances the routing information and provides secure and reliable service delivery and network performance. The results taken from the network are based on hop count, network latency, network overhead, and packet delivery ratio. They effectively show that the proposed technique is reliable in reducing network latency, and that the hop count is minimized when transferring the weather information. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
220. A Convolutional Neural Network Architecture for Segmentation of Lung Diseases Using Chest X-ray Images.
- Author
-
Sulaiman, Adel, Anand, Vatsala, Gupta, Sheifali, Asiri, Yousef, Elmagzoub, M. A., Reshan, Mana Saleh Al, and Shaikh, Asadullah
- Subjects
- *
CONVOLUTIONAL neural networks , *X-ray imaging , *LUNG diseases , *DEEP learning , *THERAPEUTICS - Abstract
The segmentation of lungs from medical images is a critical step in the diagnosis and treatment of lung diseases. Deep learning techniques have shown great promise in automating this task, eliminating the need for manual annotation by radiologists. In this research, a convolutional neural network architecture is proposed for lung segmentation using chest X-ray images. In the proposed model, a concatenate block is embedded to learn a series of filters, or features, used to extract meaningful information from the image. Moreover, a transpose layer is employed in the concatenate block to improve the spatial resolution of feature maps generated by a prior convolutional layer. The proposed model is trained using k-fold cross-validation, as it is a powerful and flexible tool for evaluating the performance of deep learning models. The model is evaluated on five different subsets of the data, taking the value of k as 5, to obtain an optimized model and more accurate results. The performance of the proposed model is analyzed for the chosen hyperparameters: a batch size of 32, the Adam optimizer, and 40 epochs. The dataset used for the segmentation of disease is taken from the Kaggle repository. Various performance parameters, such as accuracy, IoU, and Dice coefficient, are calculated, and the values obtained are 0.97, 0.93, and 0.96, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
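The decoder idea in entry 220, a transpose layer inside a concatenate block to recover spatial resolution, can be sketched in Keras with a small toy network; the filter counts, input size, and overall depth are assumptions, not the paper's architecture.

```python
# Hypothetical Keras sketch of a decoder step that upsamples with a transpose
# convolution and concatenates an encoder skip connection (not the paper's model).
from tensorflow.keras import Input, Model, layers

def up_concat_block(x, skip, filters):
    """Conv2DTranspose upsampling followed by concatenation with a skip feature map."""
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

inp = Input((256, 256, 1))                       # chest X-ray slice
e1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
p1 = layers.MaxPooling2D()(e1)
e2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
p2 = layers.MaxPooling2D()(e2)

d1 = up_concat_block(p2, e2, 64)                 # 64x64 -> 128x128, concat with e2
d2 = up_concat_block(d1, e1, 32)                 # 128x128 -> 256x256, concat with e1
out = layers.Conv2D(1, 1, activation="sigmoid")(d2)

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```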
221. Weighted Average Ensemble Deep Learning Model for Stratification of Brain Tumor in MRI Images.
- Author
-
Anand, Vatsala, Gupta, Sheifali, Gupta, Deepali, Gulzar, Yonis, Xin, Qin, Juneja, Sapna, Shah, Asadullah, and Shaikh, Asadullah
- Subjects
- *
BRAIN tumors , *MAGNETIC resonance imaging , *DEEP learning , *CONVOLUTIONAL neural networks , *CANCER diagnosis , *ARTIFICIAL intelligence , *STRENGTH training - Abstract
Brain tumor diagnosis at an early stage can improve the chances of successful treatment and better patient outcomes. In the biomedical industry, non-invasive diagnostic procedures, such as magnetic resonance imaging (MRI), can be used to diagnose brain tumors. Deep learning, a type of artificial intelligence, can analyze MRI images in a matter of seconds, reducing the time it takes for diagnosis and potentially improving patient outcomes. Furthermore, an ensemble model can help increase the accuracy of classification by combining the strengths of multiple models and compensating for their individual weaknesses. Therefore, in this research, a weighted average ensemble deep learning model is proposed for the classification of brain tumors. For the weighted ensemble classification model, three different feature spaces are taken from the transfer learning VGG19 model, Convolution Neural Network (CNN) model without augmentation, and CNN model with augmentation. These three feature spaces are ensembled with the best combination of weights, i.e., weight1, weight2, and weight3 by using grid search. The dataset used for simulation is taken from The Cancer Genome Atlas (TCGA), having a lower-grade glioma collection with 3929 MRI images of 110 patients. The ensemble model helps reduce overfitting by combining multiple models that have learned different aspects of the data. The proposed ensemble model outperforms the three individual models for detecting brain tumors in terms of accuracy, precision, and F1-score. Therefore, the proposed model can act as a second opinion tool for radiologists to diagnose the tumor from MRI images of the brain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
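The weighted-average ensemble with grid-searched weights described in entry 221 can be illustrated with a minimal sketch; the toy probabilities, labels, and weight grid below are assumptions.

```python
# Hypothetical sketch of a weighted-average ensemble: grid-search the weights
# that combine three models' predicted probabilities (toy probabilities below).
import itertools
import numpy as np

y_true = np.array([0, 1, 1, 0, 1, 0])
# Toy validation probabilities from three models (e.g. VGG19, CNN, CNN+augmentation).
p1 = np.array([0.2, 0.8, 0.6, 0.4, 0.9, 0.1])
p2 = np.array([0.3, 0.6, 0.7, 0.2, 0.7, 0.3])
p3 = np.array([0.1, 0.9, 0.5, 0.5, 0.8, 0.2])

best = (0.0, None)
for w1, w2, w3 in itertools.product(np.arange(0, 1.05, 0.05), repeat=3):
    if not np.isclose(w1 + w2 + w3, 1.0):
        continue                                   # weights must sum to one
    blended = w1 * p1 + w2 * p2 + w3 * p3
    acc = np.mean((blended >= 0.5).astype(int) == y_true)
    if acc > best[0]:
        best = (acc, (round(w1, 2), round(w2, 2), round(w3, 2)))

print(f"best accuracy={best[0]:.3f} with weights={best[1]}")
```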
222. Detection of Multitemporal Changes with Artificial Neural Network-Based Change Detection Algorithm Using Hyperspectral Dataset.
- Author
-
Dahiya, Neelam, Singh, Sartajvir, Gupta, Sheifali, Rajab, Adel, Hamdi, Mohammed, Elmagzoub, M. A., Sulaiman, Adel, and Shaikh, Asadullah
- Subjects
- *
SURFACE of the earth , *HAZARD Analysis & Critical Control Point (Food safety system) , *FOREST monitoring , *K-nearest neighbor classification , *ALGORITHMS - Abstract
Monitoring the Earth's surface and objects is important for many applications, such as managing natural resources, crop yield prediction, and natural hazard analysis. Remote sensing is one of the most efficient and cost-effective solutions for analyzing land-use and land-cover (LULC) changes over the Earth's surface through advanced computer algorithms, such as classification and change detection. In the past literature, various developments were made to change detection algorithms to detect LULC multitemporal changes using optical or microwave imagery. Optical hyperspectral data highlights critical information, but it is sometimes difficult to analyze due to the presence of atmospheric distortion, radiometric errors, and misregistration. In this work, an artificial neural network-based post-classification comparison (ANPC) has been utilized as change detection to detect the multi-temporal LULC changes over a part of Uttar Pradesh, India, using the Hyperion EO-1 dataset. The experimental outcomes confirmed the effectiveness of ANPC (92.6%) as compared to existing models, such as a spectral angle mapper (SAM) based post-classification comparison (SAMPC) (89.7%) and a k-nearest neighbor (KNN) based post-classification comparison (KNNPC) (91.2%). The study will be beneficial in extracting critical information about the Earth's surface and in the analysis of crop diseases, crop diversity, agriculture, weather forecasting, and forest monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
223. Sustainability in Blockchain: A Systematic Literature Review on Scalability and Power Consumption Issues.
- Author
-
Alshahrani, Hani, Islam, Noman, Syed, Darakhshan, Sulaiman, Adel, Al Reshan, Mana Saleh, Rajab, Khairan, Shaikh, Asadullah, Shuja-Uddin, Jaweed, and Soomro, Aadar
- Subjects
- *
BLOCKCHAINS , *DIGITAL asset management , *SCALABILITY , *SUSTAINABILITY , *BITCOIN , *ELECTRONIC records - Abstract
Blockchain is a peer-to-peer trustless network that keeps records of digital assets without any central authority. With the passage of time, the sustainability concerns around blockchain are growing. This paper discusses two major sustainability issues of blockchain: power consumption and scalability. It discusses the challenge of power consumption by analyzing various approaches to estimating power consumption in the literature. A case study of bitcoin is presented for this purpose. The study presents a review of the growing energy consumption of bitcoin along with a solution for immersion cooling in blockchain mining. The second challenge addressed in this research is scalability. With the increase in network size, scalability issues are also increasing as the number of transactions per second decreases; in other words, blockchain throughput falls as the network grows. The paper discusses research studies and techniques proposed in the literature and then investigates how to scale blockchain for better performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
224. U-Net Model with Transfer Learning Model as a Backbone for Segmentation of Gastrointestinal Tract.
- Author
-
Sharma, Neha, Gupta, Sheifali, Koundal, Deepika, Alyami, Sultan, Alshahrani, Hani, Asiri, Yousef, and Shaikh, Asadullah
- Subjects
- *
SMALL intestine , *SPINE , *GASTROINTESTINAL system , *STOMACH , *GASTROINTESTINAL diseases , *INTESTINES - Abstract
The human gastrointestinal (GI) tract is an important part of the body. According to World Health Organization (WHO) research, GI tract infections kill 1.8 million people each year. In 2019, almost 5 million individuals were diagnosed with gastrointestinal disease. Radiation therapy has the potential to improve cure rates in GI cancer patients. Radiation oncologists direct X-ray beams at the tumour while avoiding the stomach and intestines, the objective being to improve dose delivery to the tumour while sparing healthy organs. This study offers a technique for segmenting GI tract organs (small bowel, large intestine, and stomach) to assist radiation oncologists in treating cancer patients more quickly and accurately. The suggested model is a U-Net model designed from scratch and used for the segmentation of small-sized images to extract local features more efficiently. Furthermore, six transfer learning models were employed as the backbone of the U-Net topology: Inception V3, SeResNet50, VGG19, DenseNet121, InceptionResNetV2, and EfficientNet B0. The suggested model was analysed with model loss, Dice coefficient, and IoU. The results indicate that the suggested model outperforms all transfer learning models, with a model loss of 0.122, a Dice coefficient of 0.8854, and an IoU of 0.8819. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
225. An Efficient Machine Learning Model Based on Improved Features Selections for Early and Accurate Heart Disease Predication.
- Author
-
Ullah, Farhat, Chen, Xin, Rajab, Khairan, Al Reshan, Mana Saleh, Shaikh, Asadullah, Hassan, Muhammad Abul, Rizwan, Muhammad, and Davidekova, Monika
- Subjects
- *
HEART diseases , *CORONARY disease , *SUPPORT vector machines , *K-nearest neighbor classification , *DIAGNOSIS , *MACHINE learning - Abstract
Coronary heart disease has an intense impact on human life. Medical history-based diagnosis of heart disease has been practiced but deemed unreliable. Machine learning algorithms are more reliable and efficient at classifying patients as, e.g., with or without cardiac disease. Heart disease detection must be precise and accurate to prevent human loss. However, previous research studies have several shortcomings; for example, some take considerable time to compute, while other techniques are quick but not accurate. This research study is conducted to address the existing problem and to construct an accurate machine learning model for predicting heart disease. Our model is evaluated based on five feature selection algorithms and performance assessment metrics such as accuracy, precision, recall, F1-score, MCC, and time complexity. The proposed work has been tested on all of the dataset's features as well as a subset of them. The reduction of features has an impact on the performance of classifiers in terms of the evaluation metrics and execution time. Experimental results of the support vector machine, K-nearest neighbor, and logistic regression are 97.5%, 95%, and 93% (accuracy), with reduced computation times of 4.4, 7.3, and 8 seconds, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
226. Enhanced brain tumor detection and segmentation using densely connected convolutional networks with stacking ensemble learning.
- Author
-
Shaikh A, Amin S, Zeb MA, Sulaiman A, Al Reshan MS, and Alshahrani H
- Abstract
Brain tumors (BT), both benign and malignant, have a substantial impact on human health and need precise and early detection for successful treatment. Analysing magnetic resonance imaging (MRI) images is a common method for BT diagnosis and segmentation, yet misdiagnoses can yield ineffective medical responses, impacting patient survival rates. Recent technological advancements have popularized deep learning-based medical image analysis, leveraging transfer learning to reuse pre-trained models for various applications. BT segmentation with MRI remains challenging despite advancements in image acquisition techniques. Accurate detection and segmentation are essential for proper diagnosis and treatment planning. This study aims to enhance BT detection and segmentation accuracy and the effectiveness of categorization through the implementation of an advanced stacking ensemble learning (SEL) approach, and explores the efficiency of the SEL architecture in augmenting the precision of BT segmentation. SEL, a prominent approach within the machine learning paradigm, combines the predictions of base-level models and improves the overall performance of predictions in order to reduce the errors and biases of each model. The proposed approach involves designing a stacked DenseNet201 as the meta-model, called SEL-DenseNet201, complemented by six diverse base models: mobile network version 3 (MobileNet-v3), a 3-dimensional convolutional neural network (3D-CNN), visual geometry group networks with 16 and 19 layers (VGG-16 and VGG-19), a residual network with 50 layers (ResNet50), and Alex network (AlexNet). The strengths of the base models are combined to capture distinct aspects of the BT MRI scans, aiming for enhanced segmentation performance. The proposed SEL-DenseNet201 is trained using BT MRI datasets. Augmentation techniques are applied to the MRI scans to balance the data and enhance model performance through the application of image enhancement and segmentation techniques. The proposed SEL-DenseNet201 achieves impressive results with an accuracy of 99.65% and a Dice coefficient of 97.43%. These outcomes underscore the superiority of the proposed model over existing approaches. This study holds the potential to be an initial screening approach for early BT detection, with a high success rate., Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2025 Elsevier Ltd. All rights reserved.)
- Published
- 2025
- Full Text
- View/download PDF
227. Enhanced Grey Wolf Optimization (EGWO) and random forest based mechanism for intrusion detection in IoT networks.
- Author
-
Alqahtany SS, Shaikh A, and Alqazzaz A
- Abstract
Smart devices are enabled via the Internet of Things (IoT) and are connected in an uninterrupted world. These connected devices pose a challenge to cybersecurity systems due to attacks on network communications. Such attacks have continued to threaten the operation of systems and end-users. Therefore, Intrusion Detection Systems (IDS) remain one of the most used tools for mitigating such flaws against cyber-attacks. The dynamic and multi-dimensional threat landscape in IoT networks increases the challenge for traditional IDS. This paper aims to find the key features for developing an IDS that is reliable but also efficient in terms of computation. Therefore, Enhanced Grey Wolf Optimization (EGWO) for Feature Selection (FS) is implemented. The function of EGWO is to remove unnecessary features from the datasets used for intrusion detection. To test the new FS technique and decide on an optimal set of features, based on the accuracy achieved and the features retained by the filters, the approach relies on the NF-ToN-IoT dataset. The selected features are evaluated using the Random Forest (RF) algorithm, which combines multiple decision trees to create an accurate result. The experimental outcomes against the most recent procedures demonstrate the capacity of the recommended FS and classification methods to detect attacks in the IDS. Analysis of the results shows that the recommended approach performs more effectively than the other recent techniques, with optimized features (i.e., 23 out of 43 features), a high accuracy of 99.93%, and improved convergence., Competing Interests: Declarations. Competing interests: The authors declare no competing interests., (© 2024. The Author(s).)
- Published
- 2025
- Full Text
- View/download PDF
228. Attention based UNet model for breast cancer segmentation using BUSI dataset.
- Author
-
Sulaiman A, Anand V, Gupta S, Rajab A, Alshahrani H, Al Reshan MS, Shaikh A, and Hamdi M
- Subjects
- Humans, Female, Ultrasonography, Mammary methods, Algorithms, Image Interpretation, Computer-Assisted methods, Databases, Factual, Image Processing, Computer-Assisted methods, Breast Neoplasms diagnostic imaging, Breast Neoplasms pathology
- Abstract
Breast cancer, a prevalent and life-threatening disease, necessitates early detection for effective intervention and improved patient health outcomes. This paper focuses on the critical problem of identifying breast cancer using a model called Attention U-Net. The model is applied to the Breast Ultrasound Image Dataset (BUSI), comprising 780 breast images categorized into three distinct groups: 437 cases classified as benign, 210 as malignant, and 133 as normal. The proposed model leverages the attention-driven U-Net's encoder blocks to capture hierarchical features effectively. The model comprises four decoder blocks, a pivotal component of the U-Net architecture responsible for expanding the encoded feature representation obtained from the encoder blocks and for reconstructing spatial information. Four attention gates are incorporated strategically to enhance feature localization during decoding, a design that facilitates accurate segmentation of breast tumors in ultrasound images and proves effective in accurately delineating and separating tumor borders. The experimental findings demonstrate outstanding performance, achieving an overall accuracy of 0.98, precision of 0.97, recall of 0.90, and a Dice score of 0.92, demonstrating the model's effectiveness in precisely defining and separating tumor boundaries. This research aims to advance automated breast cancer segmentation algorithms, emphasizing the importance of early detection in boosting diagnostic capabilities and enabling prompt and targeted medical interventions., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
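The attention gates mentioned in entry 228 can be sketched in Keras as an additive attention block that re-weights an encoder skip connection with a learned attention map; the feature-map sizes and intermediate channel count are assumptions, and this is not the authors' implementation.

```python
# Hypothetical Keras sketch of an additive attention gate of the kind used in
# Attention U-Net: the skip connection is re-weighted by a learned attention map.
from tensorflow.keras import Input, Model, layers

def attention_gate(skip, gating, inter_channels):
    """Additive attention: align skip and gating features, then gate the skip."""
    theta_x = layers.Conv2D(inter_channels, 1)(skip)
    phi_g = layers.Conv2D(inter_channels, 1)(gating)
    phi_g = layers.UpSampling2D(size=2, interpolation="bilinear")(phi_g)  # match skip size
    att = layers.Activation("relu")(layers.Add()([theta_x, phi_g]))
    att = layers.Conv2D(1, 1, activation="sigmoid")(att)                  # attention map
    return layers.Multiply()([skip, att])                                 # gated skip

skip = Input((128, 128, 64))      # encoder feature map
gating = Input((64, 64, 128))     # coarser decoder feature map
gated = attention_gate(skip, gating, inter_channels=32)

Model([skip, gating], gated).summary()
```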
229. Analyzing anonymous activities using Interrupt-aware Anonymous User-System Detection Method (IAU-S-DM) in IoT.
- Author
-
Alshahrani H, Anjum M, Shahab S, Al Reshan MS, Sulaiman A, and Shaikh A
- Abstract
The intrusion detection process is important in various applications to identify unauthorized Internet of Things (IoT) network access. IoT devices are accessed by intermediaries while transmitting information, which causes security issues. Several intrusion detection systems have been developed to identify intruders and unauthorized access in different software applications. Existing systems consume high computation time, making it difficult to identify intruders accurately. This research issue is mitigated by applying the Interrupt-aware Anonymous User-System Detection Method (IAU-S-DM). The method uses concealed service sessions to identify anonymous interrupts. During this process, the system is trained with the help of different parameters such as origin, session access demands, and legitimate and illegitimate users of various sessions. These parameters help to recognize the intruder's activities with minimum computation time. In addition, the collected data is processed using a deep recurrent learning approach that identifies service failures and breaches, improving the overall intruder detection rate. The system uses the TON-IoT dataset, which helps to identify intruder activities while different data resources are accessed. The method's consistency is verified using the metrics of service failures of 10.65%, detection precision of 14.63%, detection time of 15.54%, and classification ratio of 20.51%., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
230. Paddy insect identification using deep features with lion optimization algorithm.
- Author
-
Elmagzoub MA, Rahman W, Roksana K, Islam MT, Sadi AHMS, Rahman MM, Rajab A, Rajab K, and Shaikh A
- Abstract
Pests are a significant challenge in paddy cultivation, resulting in a global loss of approximately 20 % of rice yield. Early detection of paddy insects can help to save these potential losses. Several ways have been suggested for identifying and categorizing insects in paddy fields, employing a range of advanced, noninvasive, and portable technologies. However, none of these systems have successfully incorporated feature optimization techniques with Deep Learning and Machine Learning. Hence, the current research provided a framework utilizing these techniques to detect and categorize images of paddy insects promptly. Initially, the suggested research will gather the image dataset and categorize it into two groups: one without paddy insects and the other with paddy insects. Furthermore, various pre-processing techniques, such as augmentation and image filtering, will be applied to enhance the quality of the dataset and eliminate any unwanted noise. To determine and analyze the deep characteristics of an image, the suggested architecture will incorporate 5 pre-trained Convolutional Neural Network models. Following that, feature selection techniques, including Principal Component Analysis (PCA), Recursive Feature Elimination (RFE), Linear Discriminant Analysis (LDA), and an optimization algorithm called Lion Optimization, were utilized in order to further reduce the redundant number of features that were collected for the study. Subsequently, the process of identifying the paddy insects will be carried out by employing 7 ML algorithms. Finally, a set of experimental data analysis has been conducted to achieve the objectives, and the proposed approach demonstrates that the extracted feature vectors of ResNet50 with Logistic Regression and PCA have achieved the highest accuracy, precisely 99.28 %. However, the present idea will significantly impact how paddy insects are diagnosed in the field., Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (© 2024 The Authors.)
- Published
- 2024
- Full Text
- View/download PDF
231. UMobileNetV2 model for semantic segmentation of gastrointestinal tract in MRI scans.
- Author
-
Sharma N, Gupta S, Gupta D, Gupta P, Juneja S, Shah A, and Shaikh A
- Subjects
- Humans, Semantics, Image Processing, Computer-Assisted methods, Female, Male, Stomach diagnostic imaging, Stomach pathology, Magnetic Resonance Imaging methods, Gastrointestinal Neoplasms diagnostic imaging, Gastrointestinal Neoplasms pathology, Gastrointestinal Tract diagnostic imaging
- Abstract
Gastrointestinal (GI) cancer is the leading tumour of the gastrointestinal tract and the fourth most significant cause of tumour-related death in men and women. The common treatment for GI cancer is radiation therapy, which involves directing a high-energy X-ray beam onto the tumour while avoiding healthy organs. Delivering high doses of X-rays requires a system for accurately segmenting the GI tract organs. The study presents a UMobileNetV2 model for semantic segmentation of the small intestine, large intestine, and stomach in MRI images of the GI tract. The model uses MobileNetV2 as an encoder in the contraction path and UNet layers as a decoder in the expansion path. The UW-Madison database, which contains MRI scans from 85 patients and 38,496 images, is used for evaluation. This automated technology has the capability to enhance the pace of cancer therapy by aiding the radiation oncologist in segmenting the organs of the GI tract. The UMobileNetV2 model is compared to three transfer learning models, Xception, ResNet 101, and NASNet mobile, which are used as encoders in the UNet architecture. The model is analyzed using three distinct optimizers, i.e., Adam, RMS, and SGD. The UMobileNetV2 model with the Adam optimizer outperforms all other transfer learning models. It obtains a Dice coefficient of 0.8984, an IoU of 0.8697, and a validation loss of 0.1310, proving its ability to reliably segment the stomach and intestines in MRI images of gastrointestinal cancer patients., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2024 Sharma et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
- Published
- 2024
- Full Text
- View/download PDF
232. Demand prediction for urban air mobility using deep learning.
- Author
-
Ahmed F, Memon MA, Rajab K, Alshahrani H, Abdalla ME, Rajab A, Houe R, and Shaikh A
- Abstract
Urban air mobility, also known as UAM, is currently being researched in a variety of metropolitan regions throughout the world as a potential new mode of transport for travelling shorter distances inside a territory. In this article, we investigate whether or not the market can back the necessary financial commitments to deploy UAM. A critical phase of such guidance is defining and addressing the demand forecast problem. To achieve this goal, a deep learning model for forecasting temporal data is proposed and used to find and study the scientific issues involved. A benchmark dataset of 150,000 records was used for this purpose. Our experiments used different state-of-the-art DL models for UAM demand prediction: LSTM, GRU, and Transformer. The Transformer showed high performance, with an RMSE of 0.64, allowing decision-makers to analyze the feasibility and viability of their investments., Competing Interests: The authors declare there are no competing interests., (©2024 Ahmed et al.)
- Published
- 2024
- Full Text
- View/download PDF
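The entry above compares LSTM, GRU, and Transformer forecasters and reports RMSE. The sketch below is a hypothetical minimal version of the LSTM baseline on a synthetic demand series; the window length, layer sizes, and data are assumptions for illustration, not the paper's setup.

```python
# Hypothetical sketch of a sequence-to-one demand forecaster, loosely in the
# spirit of the LSTM baseline compared in the paper. Window length, layer
# sizes, and the synthetic series are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW = 24  # past 24 time steps used to predict the next demand value (assumed)

def make_windows(series: np.ndarray, window: int = WINDOW):
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

# Synthetic stand-in for the UAM demand series used in the study.
rng = np.random.default_rng(0)
demand = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.standard_normal(2000)
X, y = make_windows(demand)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=32, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))  # metric reported in the paper
print(f"RMSE on held-out windows: {rmse:.3f}")
```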
233. Extending user control for image stylization using hierarchical style transfer networks.
- Author
-
Khowaja SA, Almakdi S, Memon MA, Khuwaja P, Sulaiman A, Alqahtani A, Shaikh A, and Alghamdi A
- Abstract
Neural style transfer refers to re-rendering a content image while fusing in the features of a style image. Recent studies focus on either multiple style transfer or arbitrary style transfer, using perceptual and fixpoint content losses in their respective network architectures. These losses produce notable stylization results but give the user little control over the style, and consequently the stylization compromises the preservation of detail from the content image. This work proposes the hierarchical style transfer network (HSTN) for image stylization, which lets the user control the degree of applied style via a denoising parameter. The HSTN incorporates the proposed fixpoint control loss, which preserves details from the content image, together with a denoising CNN (DnCNN) and a denoising loss that allow the user to control the level of stylization. The encoder-decoder block, the DnCNN block, and the loss network block are the basic building blocks of HSTN. Extensive experiments have been carried out and compared with existing works to demonstrate the effectiveness of HSTN. The subjective user evaluation shows that HSTN's stylization represents the best fusion of style, generating distinctive results while preserving content image details, and achieves 12% better scores than the second-best performing method. The proposed work is also among the studies that achieve the best trade-off between content and style classification scores, i.e. 37.64% and 60.27%, respectively., Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (© 2024 The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
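The entry above describes combining content preservation with a user-controlled denoising term. The following sketch is a generic, hypothetical illustration of how such a weighted objective can be assembled (a Gram-matrix style loss plus a denoising term scaled by a user parameter); it is not the paper's exact loss formulation.

```python
# Hypothetical sketch of a user-controlled stylization objective, in the spirit
# of HSTN's content, style, and denoising losses. The Gram-matrix style term
# and the weighting scheme are generic neural-style-transfer ingredients, not
# the paper's formulation.
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of a (B, C, H, W) feature map."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def total_loss(content_feats, style_feats, output_feats, denoised_output, output,
               style_weight: float = 1e4, denoise_weight: float = 0.5) -> torch.Tensor:
    """denoise_weight plays the role of the user-facing control parameter (assumed)."""
    content_loss = F.mse_loss(output_feats, content_feats)
    style_loss = F.mse_loss(gram_matrix(output_feats), gram_matrix(style_feats))
    denoise_loss = F.mse_loss(output, denoised_output)  # pull the output toward a denoised version
    return content_loss + style_weight * style_loss + denoise_weight * denoise_loss

# Toy usage with random tensors standing in for encoder feature maps and images.
feats = lambda: torch.randn(1, 64, 32, 32)
img = torch.randn(1, 3, 128, 128)
print(total_loss(feats(), feats(), feats(), img, img).item())
```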
234. A machine learning and deep learning-based integrated multi-omics technique for leukemia prediction.
- Author
-
Abbasi EY, Deng Z, Ali Q, Khan A, Shaikh A, Reshan MSA, Sulaiman A, and Alshahrani H
- Abstract
In recent years, scientific data on cancer has expanded, providing potential for a better understanding of malignancies and improved tailored care. Advances in Artificial Intelligence (AI) processing power and algorithmic development position Machine Learning (ML) and Deep Learning (DL) as crucial players in predicting leukemia, a blood cancer, using integrated multi-omics technology. However, realizing these goals demands novel approaches to harness this data deluge. This study introduces a novel leukemia diagnosis approach that analyzes multi-omics data using ML and DL algorithms. ML techniques, including Random Forest (RF), Naive Bayes (NB), Decision Tree (DT), Logistic Regression (LR), and Gradient Boosting (GB), and DL methods such as Recurrent Neural Networks (RNN) and Feedforward Neural Networks (FNN), are compared. Among the ML models, GB achieved 97% accuracy, while among the DL models, RNN performed best with 98% accuracy. This approach filters unclassified data effectively, demonstrating the significance of DL for leukemia prediction. Validation was based on 17 different features, such as patient age, sex, mutation type, treatment methods, and chromosomes. Our study compares ML and DL techniques and selects the technique that gives optimal results. The study emphasizes the implications of high-throughput technology in healthcare, offering improved patient care., Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (© 2024 The Authors.)
- Published
- 2024
- Full Text
- View/download PDF
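A minimal sketch of the classifier comparison described in the entry above, using scikit-learn on synthetic data with 17 features standing in for the clinical and multi-omics attributes; the dataset, split, and hyperparameters are assumptions, and the deep learning models (RNN, FNN) are omitted for brevity.

```python
# Hypothetical sketch: comparing the five ML classifiers named in the abstract
# on synthetic data with 17 features. Dataset, split, and hyperparameters are
# illustrative assumptions, not the study's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=17, n_informative=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {acc:.3f}")
```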
235. Detection of offensive terms in resource-poor language using machine learning algorithms.
- Author
-
Raza MO, Mahoto NA, Hamdi M, Reshan MSA, Rajab A, and Shaikh A
- Abstract
The use of offensive terms in user-generated content on different social media platforms is a major concern for these platforms. Offensive terms have a negative impact on individuals and may contribute to the degradation of societal and civilized manners. The immense amount of content generated at high speed makes it humanly impossible to categorise and detect offensive terms, and their automatic detection remains an open challenge for natural language processing (NLP). Substantial efforts have been made for high-resource languages such as English; the task becomes more challenging for resource-poor languages such as Urdu because of the lack of standard datasets and pre-processing tools for automatic offensive term detection. This paper introduces a combinatorial pre-processing approach for developing a classification model for cross-platform (Twitter and YouTube) use. The approach uses datasets from the two platforms for training and testing the model, which is trained using decision tree, random forest, and naive Bayes algorithms. The proposed combinatorial pre-processing approach examines how machine learning models behave with different combinations of standard pre-processing techniques for a low-resource language in the cross-platform setting. The experimental results demonstrate the effectiveness of the machine learning model over different subsets of traditional pre-processing approaches in building a classification model for automatic offensive term detection in a low-resource language, i.e., Urdu, in the cross-platform scenario. In the experiments, when dataset D1 is used for training and D2 for testing, the stopword-removal pre-processing step produced the best results, with an accuracy of 83.27%. When dataset D2 is used for training and D1 for testing, the combination of stopword removal and punctuation removal performed best, with an accuracy of 74.54%. The combinatorial approach proposed in this paper outperformed the benchmark for the considered datasets using classical as well as ensemble machine learning, with accuracies of 82.9% and 97.2% for datasets D1 and D2, respectively., Competing Interests: The authors declare that there are no competing interests., (©2023 Raza et al.)
- Published
- 2023
- Full Text
- View/download PDF
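The combinatorial pre-processing idea in the entry above, trying every subset of cleaning steps while training on one platform's data (D1) and testing on the other's (D2), can be sketched as follows. The toy corpora, placeholder stopword list, and classifier settings are illustrative assumptions, not the paper's Urdu resources.

```python
# Hypothetical sketch of the combinatorial pre-processing approach: each subset
# of cleaning steps is tried, training on D1 and testing on D2. Toy English
# corpora and the tiny stopword list are assumptions for illustration.
import itertools
import string
from functools import reduce
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

STOPWORDS = {"the", "a", "is", "and"}  # assumed placeholder stopword list

def remove_punctuation(text: str) -> str:
    return text.translate(str.maketrans("", "", string.punctuation))

def remove_stopwords(text: str) -> str:
    return " ".join(w for w in text.split() if w.lower() not in STOPWORDS)

STEPS = {"punctuation_removal": remove_punctuation, "stopword_removal": remove_stopwords}

def apply_steps(texts, combo):
    # Apply the chosen subset of pre-processing steps, in order, to every text.
    return [reduce(lambda s, name: STEPS[name](s), combo, t) for t in texts]

# Toy stand-ins for the two platform datasets (D1 for training, D2 for testing).
d1_texts, d1_labels = ["you are awful", "have a nice day", "total idiot", "great post"], [1, 0, 1, 0]
d2_texts, d2_labels = ["what a terrible idiot", "lovely video, thanks"], [1, 0]

for r in range(len(STEPS) + 1):
    for combo in itertools.combinations(STEPS, r):
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(apply_steps(d1_texts, combo), d1_labels)
        acc = model.score(apply_steps(d2_texts, combo), d2_labels)
        print(f"steps={list(combo) or ['none']}: cross-platform accuracy = {acc:.2f}")
```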
236. Achieving model explainability for intrusion detection in VANETs with LIME.
- Author
-
Hassan F, Yu J, Syed ZS, Ahmed N, Reshan MSA, and Shaikh A
- Abstract
Vehicular ad hoc networks (VANETs) are intelligent transport subsystems in which vehicles communicate through a wireless medium. VANETs have many applications, such as traffic safety and the prevention of vehicle accidents. Many attacks affect VANET communication, such as denial of service (DoS) and distributed denial of service (DDoS). The number of DoS attacks has been increasing in recent years, making network security and the protection of communication systems challenging topics; intrusion detection systems need to be improved to identify these attacks effectively and efficiently. Many researchers are currently interested in enhancing the security of VANETs. Machine learning (ML) techniques were employed within an intrusion detection system (IDS) to develop high-security capabilities, and a massive dataset containing application-layer network traffic is deployed for this purpose. The local interpretable model-agnostic explanations (LIME) interpretability technique is used for better interpretation of model functionality and accuracy. Experimental results demonstrate that a random forest (RF) classifier achieves 100% accuracy, demonstrating its capability to identify intrusion-based threats in a VANET setting. In addition, LIME is applied to the RF machine learning model to explain and interpret the classification, and the performance of the machine learning models is evaluated in terms of accuracy, recall, and F1 score., Competing Interests: The authors declare that they have no competing interests., (© 2023 Hassan et al.)
- Published
- 2023
- Full Text
- View/download PDF
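A hypothetical sketch of pairing a random forest intrusion classifier with LIME for per-prediction explanations, as described in the entry above. Synthetic traffic features stand in for the VANET dataset, and the feature and class names are assumptions.

```python
# Hypothetical sketch: random forest intrusion classifier explained with LIME.
# Synthetic features, feature names, and class names are assumptions.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

feature_names = [f"flow_feature_{i}" for i in range(10)]  # placeholder names
X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
print(f"Test accuracy: {rf.score(X_te, y_te):.3f}")

explainer = LimeTabularExplainer(
    X_tr, feature_names=feature_names, class_names=["benign", "attack"], mode="classification"
)
# Explain the classifier's decision for one test flow.
explanation = explainer.explain_instance(X_te[0], rf.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```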
237. Detection of Pneumonia from Chest X-ray Images Utilizing MobileNet Model.
- Author
-
Reshan MSA, Gill KS, Anand V, Gupta S, Alshahrani H, Sulaiman A, and Shaikh A
- Abstract
Pneumonia has been directly responsible for a huge number of deaths across the globe. Pneumonia shares visual features with other respiratory diseases, such as tuberculosis, which can make it difficult to distinguish between them. Moreover, there is significant variability in the way chest X-ray images are acquired and processed, which can affect the quality and consistency of the images and make it challenging to develop robust algorithms that accurately identify pneumonia in all types of images. Hence, there is a need for robust, data-driven algorithms trained on large, high-quality datasets and validated using a range of imaging techniques and expert radiological analysis. In this research, a deep-learning-based model is demonstrated for differentiating between normal and severe cases of pneumonia. The proposed system comprises eight pre-trained models: ResNet50, ResNet152V2, DenseNet121, DenseNet201, Xception, VGG16, EfficientNet, and MobileNet. These eight pre-trained models were evaluated on two chest X-ray datasets of 5,856 and 112,120 images, respectively. The best accuracy is obtained with the MobileNet model, with values of 94.23% and 93.75% on the two datasets. Key hyperparameters, including batch size, number of epochs, and choice of optimizer, were considered in the comparative interpretation of these models to determine the most appropriate one.
- Published
- 2023
- Full Text
- View/download PDF
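A minimal sketch of the kind of MobileNet transfer-learning classifier compared in the entry above: a frozen ImageNet backbone with a small binary head for normal-versus-pneumonia chest X-rays. The image size, head layers, and directory layout are illustrative assumptions.

```python
# Hypothetical sketch of a MobileNet transfer-learning classifier for chest
# X-rays. Image size, head layers, training schedule, and the directory layout
# are assumptions, not the paper's configuration.
import tensorflow as tf

IMG_SIZE = (224, 224)

base = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg"
)
base.trainable = False  # keep ImageNet features fixed during initial training

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. pneumonia
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Assumed directory layout: chest_xray/train/{NORMAL,PNEUMONIA}/*.jpeg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary"
)
train_ds = train_ds.map(lambda x, y: (tf.keras.applications.mobilenet.preprocess_input(x), y))
model.fit(train_ds, epochs=5)
```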
238. Performance comparison of machine learning driven approaches for classification of complex noises in quick response code images.
- Author
-
Waziry S, Wardak AB, Rasheed J, Shubair RM, Rajab K, and Shaikh A
- Abstract
Quick response codes (QRCs) are found on many consumer products and often encode security information. However, information retrieval at the receiving end may become challenging due to the degraded clarity of QRC images. This degradation may occur because of the transmission of digital images over noisy channels or limited printing technology. Although the ability to reduce noise is critical, it is just as important to determine the type and quantity of noise present in QRC images. Therefore, this study proposes a simple deep learning-based architecture that classifies an image as either an original (normal) QRC or a noisy QRC and identifies the noise type present. The study is divided into two stages. First, a QRC image dataset of 80,000 images is generated by introducing seven different noise types (speckle, salt & pepper, Poisson, pepper, localvar, salt, and Gaussian) to the original QRC images. Second, the generated dataset is used to train the proposed convolutional neural network (CNN)-based model, seventeen pre-trained deep learning models, and two classical machine learning algorithms (Naïve Bayes (NB) and Decision Tree (DT)). XceptionNet attained the highest accuracy (87.48%) and kappa (85.7%). It is worth noting, however, that the proposed CNN with few layers competes with the state-of-the-art models and attained close to the best accuracy (86.75%). Furthermore, detailed analysis shows that all models failed to classify images with Gaussian and localvar noise correctly., Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (© 2023 The Authors.)
- Published
- 2023
- Full Text
- View/download PDF
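The seven noise types listed in the entry above map directly onto the modes of scikit-image's random_noise utility, so the dataset-generation step could be sketched roughly as follows; the input file name and the local-variance map used for 'localvar' noise are assumptions.

```python
# Hypothetical sketch of the dataset-generation step: the seven noise types
# named in the abstract applied to a QR-code image via scikit-image. The input
# path and the local-variance map for 'localvar' noise are assumptions.
import numpy as np
from skimage import io, img_as_float
from skimage.util import random_noise

qrc = img_as_float(io.imread("original_qrc.png", as_gray=True))  # assumed input image

noisy_versions = {
    "speckle": random_noise(qrc, mode="speckle"),
    "salt_pepper": random_noise(qrc, mode="s&p"),
    "poisson": random_noise(qrc, mode="poisson"),
    "pepper": random_noise(qrc, mode="pepper"),
    "localvar": random_noise(qrc, mode="localvar", local_vars=np.full(qrc.shape, 0.05)),
    "salt": random_noise(qrc, mode="salt"),
    "gaussian": random_noise(qrc, mode="gaussian"),
}
for name, img in noisy_versions.items():
    # Save each noisy variant as an 8-bit image for the classification dataset.
    io.imsave(f"qrc_{name}.png", (img * 255).astype(np.uint8))
```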
239. Early detection of student degree-level academic performance using educational data mining.
- Author
-
Meghji AF, Mahoto NA, Asiri Y, Alshahrani H, Sulaiman A, and Shaikh A
- Abstract
Higher educational institutes generate massive amounts of student data. This data needs to be explored in depth to better understand various facets of student learning behavior. Educational data mining provides the means to extract useful and non-trivial knowledge from large collections of student data. Using the educational data mining method of classification, this research analyzes data of 291 university students to predict student performance at the end of a 4-year degree program. A student segmentation framework is also proposed to identify students at various levels of academic performance. Coupled with the prediction model, the proposed segmentation framework provides a useful mechanism for devising pedagogical policies that increase the quality of education by mitigating academic failure and encouraging higher performance. The experimental results indicate the effectiveness of the proposed framework and the feasibility of classifying students into multiple performance levels using a small subset of the courses taught in the first two years of the 4-year degree program., Competing Interests: The authors declare there are no competing interests., (©2023 Meghji et al.)
- Published
- 2023
- Full Text
- View/download PDF
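A hypothetical sketch of the educational-data-mining idea in the entry above: predicting a student's final performance level from grades in early courses only. The synthetic grade matrix, the number of early courses, and the performance-band thresholds are illustrative assumptions, not the study's dataset or schema.

```python
# Hypothetical sketch: classify students into performance levels using only
# early-course grades. Synthetic data, course count, and CGPA band thresholds
# are assumptions for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n_students, n_early_courses = 291, 12          # 291 students, as in the abstract
early_grades = rng.uniform(40, 100, size=(n_students, n_early_courses))

# Final CGPA loosely correlated with early grades, then banded into performance levels.
final_cgpa = 0.04 * early_grades.mean(axis=1) + rng.normal(0, 0.2, n_students)
levels = np.digitize(final_cgpa, bins=[2.6, 2.8, 3.0])   # 0 = lowest band ... 3 = highest

clf = DecisionTreeClassifier(max_depth=4, random_state=7)
scores = cross_val_score(clf, early_grades, levels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```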
240. Coding roles of long non-coding RNAs in breast cancer: Emerging molecular diagnostic biomarkers and potential therapeutic targets with special reference to chemotherapy resistance.
- Author
-
Kashyap D, Sharma R, Goel N, Buttar HS, Garg VK, Pal D, Rajab K, and Shaikh A
- Abstract
Dysregulation of epigenetic mechanisms has been implicated in several pathological conditions, including cancer. Different modes of epigenetic regulation have been discovered, including DNA methylation (hypomethylation or hypermethylation of promoters), histone modifications, and abnormal expression of microRNAs (miRNAs), long non-coding RNAs (lncRNAs), and small nucleolar RNAs. In particular, lncRNAs are known to play pivotal roles in different types of cancer, including breast cancer, and lncRNAs with both oncogenic and tumour-suppressive potential have been reported. Differentially expressed lncRNAs contribute remarkably to the development of primary and acquired resistance to radiotherapy, endocrine therapy, immunotherapy, and targeted therapy. A wide range of molecular subtype-specific lncRNAs have been assessed in breast cancer research. A number of studies have also shown that lncRNAs may be used clinically as non-invasive diagnostic biomarkers for early detection of breast cancer, and such molecular biomarkers have also been found in the cancer stem cells of breast tumours. The objectives of the present review are to summarize the important roles of oncogenic and tumour-suppressive lncRNAs in the early diagnosis of breast cancer, metastatic potential, and chemotherapy resistance across the molecular subtypes., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2023 Kashyap, Sharma, Goel, Buttar, Garg, Pal, Rajab and Shaikh.)
- Published
- 2023
- Full Text
- View/download PDF
241. Investigating the impact of IoT-Based smart laboratories on students' academic performance in higher education.
- Author
-
Asad MM, Naz A, Shaikh A, Alrizq M, Akram M, and Alghamdi A
- Abstract
The enormous developments in technology have transformed the way we interact with the world around us. Among these advanced technological interventions, one of the most sophisticated is the Internet of Things, a means of connecting physical objects to the virtual world using small sensors and internet protocols to reduce human intervention. The domain of education has also adopted these technological services, moving from traditional methods to advanced teaching and learning approaches to meet learning needs and raise quality. This paper conceptualizes the impact of integrating IoT-based smart laboratories in higher education on students' academic performance in the engineering domain. Several international studies were selected and thoroughly reviewed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines to build insights into the topic in terms of its conceptual as well as practical foundations. The key insights gathered from the reviewed studies indicate that Internet of Things-based laboratories have significant advantages in raising students' academic performance through interaction, motivation, creativity, and practical learning. The integration of the Internet of Things in higher educational institutes improves students' academic performance because it allows them to engage in authentic tasks and experience practical and active learning., Competing Interests: Conflict of interest: The authors declare that they have no conflicts of interest to report regarding this study., (© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022, Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.)
- Published
- 2022
- Full Text
- View/download PDF
242. Prediction Model of Adverse Effects on Liver Functions of COVID-19 ICU Patients.
- Author
-
Mashraqi A, Halawani H, Alelyani T, Mashraqi M, Makkawi M, Alasmari S, Shaikh A, and Alshehri A
- Subjects
- Bayes Theorem, Communicable Disease Control, Humans, Intensive Care Units, SARS-CoV-2, COVID-19, Drug-Related Side Effects and Adverse Reactions, Liver Diseases
- Abstract
SARS-CoV-2 is a recently discovered virus that poses an urgent threat to global health. The disease caused by this virus is termed COVID-19. Death tolls in different countries continue to rise, leading to continued social distancing and lockdowns. Patients of different ages are susceptible to severe disease, in particular those who have been admitted to an ICU. Machine learning (ML) predictive models based on medical data patterns are an emerging topic in areas such as the prediction of liver disease. Prediction models that combine several variables or features to estimate the risk of being infected or of experiencing a poor outcome from infection could assist medical staff in the treatment of patients, especially those who develop organ failure such as liver failure. In this paper, we propose a model called the detecting model for liver damage (DMLD), which predicts the risk of liver damage in COVID-19 ICU patients. The DMLD model applies machine learning algorithms to assess the risk of liver failure based on patient data. To assess the DMLD model, the collected data were preprocessed and used as input for several classifiers. SVM, decision tree (DT), Naïve Bayes (NB), KNN, and ANN classifiers were tested for performance. SVM and DT performed best in predicting illness severity based on laboratory testing., Competing Interests: The authors declare that they have no conflicts of interest., (Copyright © 2022 Aisha Mashraqi et al.)
- Published
- 2022
- Full Text
- View/download PDF
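A minimal sketch of benchmarking the five classifiers named in the entry above on preprocessed laboratory features; synthetic data stands in for the ICU liver-function measurements, and the scaling step and model settings are assumptions.

```python
# Hypothetical sketch: comparing SVM, DT, NB, KNN, and ANN classifiers on
# scaled synthetic features standing in for preprocessed laboratory data.
# Feature counts and hyperparameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=12, n_informative=8, random_state=3)

classifiers = {
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=3),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=3),
}
for name, clf in classifiers.items():
    pipeline = make_pipeline(StandardScaler(), clf)   # scaling as a simple preprocessing stand-in
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```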
243. QoS Review: Smart Sensing in Wake of COVID-19, Current Trends and Specifications With Future Research Directions.
- Author
-
Adil M, Alshahrani H, Rajab A, Shaikh A, Song H, and Farouk A
- Abstract
Smart sensing has made notable contributions to the healthcare industry and has driven immense advancement. Existing smart sensing applications, such as Internet of Medical Things (IoMT) applications, were extended during the COVID-19 outbreak to support victims and reduce the extensive transmission rate of this pathogenic virus. Although existing IoMT applications have been used productively in this pandemic, their Quality of Service (QoS) metrics, a basic need of these applications for patients, physicians, nursing staff, and others, have been somewhat overlooked. In this review article, we give a comprehensive assessment of the QoS of IoMT applications used in this pandemic from 2019 to 2021 to identify their requirements and current challenges, taking into account various network components and communication metrics. To establish the contribution of this work, we explore layer-wise QoS challenges in the existing literature to identify particular requirements and set the footprint for future research. Finally, we compare each section with existing review articles to acknowledge the uniqueness of this work, and we answer the question of why this survey is needed in the presence of current state-of-the-art review papers.
- Published
- 2022
- Full Text
- View/download PDF
244. Secure Telemedicine System Design for COVID-19 Patients Treatment Using Service Oriented Architecture.
- Author
-
Shaikh A, Al Reshan MS, Sulaiman A, Alshahrani H, and Asiri Y
- Subjects
- Hospitals, Humans, Pandemics, SARS-CoV-2, COVID-19, Telemedicine
- Abstract
The coronavirus pandemic, also known as the COVID-19 pandemic, is an ongoing outbreak caused by the SARS-CoV-2 virus. The virus was first identified in December 2019 in Wuhan, China, and later spread to 192 countries. As of now, 251,266,207 people have been affected, and 5,070,244 deaths have been reported. Due to the growing number of COVID-19 patients, the demand for COVID wards is increasing, and telemedicine applications are growing rapidly because they offer convenient treatment options. The healthcare sector is rapidly adopting telemedicine applications for the treatment of COVID-19 patients. Most telemedicine applications are developed for heterogeneous environments, and because of this diversity, data transmission between similar and dissimilar telemedicine applications is a difficult task. In this paper, we propose a Tele-COVID system architecture design, along with its security aspects, to provide treatment for COVID-19 patients at a distance. The secure Tele-COVID system architecture is designed to resolve the problems of data interchange between different telemedicine applications, interoperability, and vendor lock-in. Tele-COVID is a web-based and Android telemedicine application that provides suitable treatment to COVID-19 patients. With the help of Tele-COVID, patients can be treated at a distance without the need to visit hospitals, and necessary services can also be provided in case of emergency. The application was tested on COVID-19 patients in the county hospital, and initial results are reported.
- Published
- 2022
- Full Text
- View/download PDF
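The entry above centres on data interchange and interoperability between telemedicine applications. The sketch below is a generic, hypothetical illustration of a service-oriented endpoint that exposes patient vitals in one agreed JSON format to different clients; the route, field names, and in-memory store are assumptions, not the actual Tele-COVID API.

```python
# Hypothetical, generic sketch of a service-oriented data-interchange endpoint
# of the kind a Tele-COVID-style system could expose so that web and Android
# clients consume the same record format. Route, fields, and the in-memory
# store are assumptions, not the actual Tele-COVID API.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for a patient-record store shared across client applications.
PATIENTS = {
    "p001": {"name": "Demo Patient", "spo2": 94, "temperature_c": 38.1, "status": "stable"},
}

@app.route("/api/v1/patients/<patient_id>/vitals", methods=["GET"])
def get_vitals(patient_id: str):
    """Return vitals in one agreed JSON schema so heterogeneous clients interoperate."""
    record = PATIENTS.get(patient_id)
    if record is None:
        return jsonify({"error": "patient not found"}), 404
    return jsonify({"patient_id": patient_id, **record})

if __name__ == "__main__":
    app.run(port=5000)  # clients would call GET /api/v1/patients/p001/vitals
```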