125 results for "Irfan Mehmood"
Search Results
2. A deep learning-based framework for accurate identification and crop estimation of olive trees
- Author
-
Umair Khan, Muazzam Maqsood, Saira Gillani, Mehr Yahya Durrani, Irfan Mehmood, and Sanghyun Seo
- Subjects
Hardware and Architecture, Software, Information Systems, Theoretical Computer Science - Published
- 2022
- Full Text
- View/download PDF
3. Role of deep learning models and analytics in industrial multimedia environment
- Author
-
Nawab Muhammad Faseeh Qureshi, Varun G. Menon, Ali Kashif Bashir, Shahid Mumtaz, and Irfan Mehmood
- Subjects
Computer Networks and Communications, Hardware and Architecture, Media Technology, Software, Information Systems - Published
- 2023
- Full Text
- View/download PDF
4. A Meta-Heuristic Optimization Based Less Imperceptible Adversarial Attack on Gait Based Surveillance Systems
- Author
-
Muazzam Maqsood, Mustansar Ali Ghazanfar, Irfan Mehmood, Eenjun Hwang, and Seungmin Rho
- Subjects
Hardware and Architecture, Control and Systems Engineering, Modeling and Simulation, Signal Processing, Information Systems, Theoretical Computer Science - Published
- 2022
- Full Text
- View/download PDF
5. An Automated Real-Time Face Mask Detection System Using Transfer Learning with Faster-RCNN in the Era of the COVID-19 Pandemic
- Author
-
Maha Farouk S. Sabir, Irfan Mehmood, Wafaa Adnan Alsaggaf, Enas Fawai Khairullah, Samar Alhuraiji, Ahmed S. Alghamdi, and Ahmed A. Abd El-Latif
- Subjects
Biomaterials, Mechanics of Materials, Modeling and Simulation, Electrical and Electronic Engineering, Computer Science Applications - Published
- 2022
- Full Text
- View/download PDF
6. Alzheimer Disease Detection Techniques and Methods: A Review
- Author
-
Sitara Afzal, Oh-Young Song, Farhan Aadil, Yunyoung Nam, Muazzam Maqsood, Hina Nawaz, Umair Khan, and Irfan Mehmood
- Subjects
Statistics and Probability ,Technology ,neuroimaging ,Computer Networks and Communications ,Computer science ,literature review ,IJIMAI ,deep learning ,Alzheimer's disease ,medicine.disease ,Computer Science Applications ,mild cognitive impairment ,machine learning ,classification ,Artificial Intelligence ,Signal Processing ,medicine ,alzheimer's disease ,Computer Vision and Pattern Recognition ,Neuroscience - Abstract
Brain pathological changes linked with Alzheimer's disease (AD) can be measured with neuroimaging. In the past few years, these measures have been rapidly integrated into classification frameworks that offer tools for AD diagnosis and prognosis. This work is a systematic review of published research on AD, with a focus on computer-aided diagnosis based on neuroimaging and cognitive impairment classification. The imaging modalities covered include 1) magnetic resonance imaging (MRI), 2) functional MRI (fMRI), 3) diffusion tensor imaging, 4) positron emission tomography (PET), and 5) amyloid-PET. The review finds that feature-based classification shows promising results for diagnosing the disease and tracking clinical progression. The most widely used machine learning classifiers for AD diagnosis are Support Vector Machines, Bayesian classifiers, Linear Discriminant Analysis, and K-Nearest Neighbor, along with deep learning. Deep learning techniques and support vector machines give the highest accuracies for identifying Alzheimer's disease. Possible challenges and future directions are also discussed in the paper.
- Published
- 2021
7. Assessing the spatial distribution and impacts of recent oil spill along the Western Coast of Karachi, Pakistan
- Author
-
Majid Nazeer, Muhammad Imran Shahzad, Gomal Amin, Irfan Mehmood, Sundas Jaweria, and Ibrahim Zia
- Subjects
business.industry, Satellite remote sensing, Geography, Planning and Development, Environmental resource management, Oil spill, Environmental science, Ecosystem, business, Spatial distribution, Natural (archaeology), Water Science and Technology - Abstract
An oil spill is an environmental challenge that affects various aspects of coastal zones, including human health, the economy, and coastal ecosystems. The source of an oil spill can be both natural and ...
- Published
- 2021
- Full Text
- View/download PDF
8. Toni Morrison as an African American Voice: A Marxist Analysis
- Author
-
M. K. Sangi, Irfan Mehmood, and Komal Ansari
- Subjects
Cultural Studies, African american, Proletariat, Psychoanalysis, Sociology and Political Science, media_common.quotation_subject, Gender Studies, Literary theory, Bourgeoisie, Marxist philosophy, Ideology, Materialism, Class conflict, media_common - Abstract
This article endeavors to discover the presence of Marxist ideology in Morrison's novels The Bluest Eye and Beloved. Marxism is a literary theory that focuses on class struggle and materialism. Toni Morrison writes about the culture in which she lives and from which she neither consciously nor emotionally escapes. Marxist literary ideology focuses on the weak economic position of the proletariat and spotlights the bourgeoisie as the dominant capitalist class. This article explores how Morrison has used Marxist ideology in her fiction to highlight the suppression of the African-American community.
- Published
- 2021
- Full Text
- View/download PDF
9. An Efficient Gait Recognition Method for Known and Unknown Covariate Conditions
- Author
-
Seungmin Rho, Khalid Bashir Bajwa, Maryam Bukhari, Muazzam Maqsood, Irfan Mehmood, Saira Gillani, Mehr Yahya Durrani, and Hassan Ugail
- Subjects
General Computer Science ,Computer science ,Local binary patterns ,Feature extraction ,02 engineering and technology ,Convolutional neural network ,Gait (human) ,Covariate ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Invariant (mathematics) ,020203 distributed computing ,business.industry ,Dimensionality reduction ,General Engineering ,covariate conditions ,Pattern recognition ,Linear discriminant analysis ,Gait ,Support vector machine ,Gait analysis ,Multilayer perceptron ,discriminative feature learning ,020201 artificial intelligence & image processing ,Artificial intelligence ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,Gait recognition ,lcsh:TK1-9971 ,FLDA - Abstract
Gait is a unique non-invasive biometric form that can be utilized to effectively recognize persons, even when they prove to be uncooperative. Computer-aided gait recognition systems usually use image sequences without considering covariates like clothing and possessions of carrier bags whilst on the move. Similarly, in gait recognition, there may exist unknown covariate conditions that may affect the training and testing conditions for a given individual. Consequently, common techniques for gait recognition and measurement require a degree of intervention leading to the introduction of unknown covariate conditions, and hence this significantly limits the practical use of the present gait recognition and analysis systems. To overcome these key issues, we propose a method of gait analysis accounting for both known and unknown covariate conditions. For this purpose, we propose two methods, i.e., a Convolutional Neural Network (CNN) based gait recognition and a discriminative features-based classification method for unknown covariate conditions. The first method can handle known covariate conditions efficiently while the second method focuses on identifying and selecting unique covariate invariant features from the gallery and probe sequences. The feature set utilized here includes Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), and Haralick texture features. Furthermore, we utilize the Fisher Linear Discriminant Analysis for dimensionality reduction and selecting the most discriminant features. Three classifiers, namely Random Forest, Support Vector Machine (SVM), and Multilayer Perceptron are used for gait recognition under strict unknown covariate conditions. We evaluated our results using CASIA and OUR-ISIR datasets for both clothing and speed variations. As a result, we report that on average we obtain an accuracy of 90.32% for the CASIA dataset with unknown covariates and similarly performed excellently on the ISIR dataset. Therefore, our proposed method outperforms existing methods for gait recognition under known and unknown covariate conditions.
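To make the handcrafted-feature branch described above concrete, here is a minimal illustrative sketch (not the authors' code): LBP and HOG features extracted from a gait energy image, reduced with linear discriminant analysis, and classified with an SVM. Haralick texture features, also used in the paper, are omitted here, and function names, parameter values, and data shapes are assumptions for illustration only.

```python
# Illustrative sketch of the handcrafted-feature branch described above:
# LBP + HOG features per Gait Energy Image (GEI), LDA reduction, SVM classifier.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gait_features(gei):
    """LBP histogram + HOG descriptor for one grayscale GEI (2-D array)."""
    lbp = local_binary_pattern(gei, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(gei, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

def train_gait_classifier(geis, labels):
    """geis: list of 2-D GEIs, labels: subject IDs (hypothetical inputs)."""
    X = np.stack([gait_features(g) for g in geis])
    model = make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="linear"))
    return model.fit(X, labels)
```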
- Published
- 2021
10. An LSTM Based Forecasting for Major Stock Sectors Using COVID Sentiment
- Author
-
Sitara Afzal, Irfan Mehmood, Yunyoung Nam, Muazzam Maqsood, Sadaf Yasmin, Muhammad Tabish Niaz, and Ayesha Jabeen
- Subjects
Stock market prediction, Mean squared error, Computer science, business.industry, Computer Science Applications, Biomaterials, Mechanics of Materials, Moving average, Modeling and Simulation, Business decision mapping, Business intelligence, Market data, Econometrics, Stock market, Electrical and Electronic Engineering, business, Stock (geology) - Abstract
Stock market forecasting is an important research area, especially for better business decision making. Efficient stock predictions continue to be significant for business intelligence. Traditional short-term stock market forecasting is usually based on the analysis of historical market data such as stock prices, moving averages, or daily returns. However, news about major events also contains significant information regarding market drivers. An effective stock market forecasting system helps investors and analysts use supportive information regarding the future direction of the stock market. This research proposes an efficient model for stock market prediction. The study explores the positive and negative effects of coronavirus events on major stock sectors such as the airline, pharmaceutical, e-commerce, technology, and hospitality sectors. We use a Twitter dataset for calculating coronavirus sentiment together with a Long Short-Term Memory (LSTM) model to improve stock prediction. The LSTM has the advantage of analyzing relationships in time-series data through its memory functions. The performance of the system is evaluated by Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE). The results show that performance improves by using coronavirus event sentiments along with the LSTM prediction model.
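As a rough illustration of the forecasting setup described above (layer sizes, window length, and feature layout are assumptions, not the paper's exact configuration), an LSTM can be fed a sliding window of daily price and sentiment values and trained to predict the next day's price with an MSE loss:

```python
# Minimal sketch: LSTM over windows of [closing price, tweet sentiment] pairs,
# predicting the next day's closing price; evaluated with MAE/MSE as above.
import tensorflow as tf

WINDOW = 30       # days of history per sample (assumed)
N_FEATURES = 2    # [price, sentiment score]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),      # next-day price
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# X: array of shape (n_samples, WINDOW, N_FEATURES), y: (n_samples,)
# model.fit(X, y, epochs=50, validation_split=0.2)
```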
- Published
- 2021
- Full Text
- View/download PDF
11. An Efficient False-Positive Reduction System for Cerebral Microbleeds Detection
- Author
-
Muazzam Maqsood, Sanghyun Seo, Sitara Afzal, Muhammad Tabish Niaz, and Irfan Mehmood
- Subjects
Biomaterials, Reduction (complexity), medicine.medical_specialty, Mechanics of Materials, Computer science, Modeling and Simulation, Internal medicine, medicine, Cardiology, Electrical and Electronic Engineering, Computer Science Applications - Published
- 2021
- Full Text
- View/download PDF
12. Hegemonic Domination of White over Black in 'The Bluest Eye' by Toni Morrison
- Author
-
Komal Ansari, Irfan Mehmood, and M. K. Sangi
- Subjects
Politics, White (horse), Hegemony, Power politics, media_common.quotation_subject, Beauty, Wish, Gender studies, Ideology, Sociology, Safe delivery, media_common - Abstract
Every human being is beautiful in his or her own colour and appearance. No colour makes one beautiful, yet white Americans have propagated the idea of white beauty as a political tool to present themselves as superior to Black people. They focused on colour because whiteness is biologically unattainable for a Black person. They also tried to create self-hatred among Black people by spreading white ideology, and they hegemonized them into accepting the concept of white beauty through advertisements, media, actors, and education. They further forced Black people to be regarded as ugly, creating the fewest workplace opportunities for the Black community of America, alienating them from society, and torturing them both mentally and physically. In The Bluest Eye, Pecola and her family are the worst victims of this politics. Pecola, together with her family members, is mentally and physically tortured and tormented into accepting the white ideology. Pecola and her mother accept it, and Pecola comes to desire the bluest eye. Claudia, on the other hand, resists the white men and their ideology. At the end, Pecola accepts Cholly Breedlove's baby as a token of love and self-reliance, and both Claudia and Frieda wish for its safe delivery. This article therefore shows how white men used colour to dominate Black people in America as part of power politics, and it also examines Black people's reactions to white ideology, with reference to The Bluest Eye by Toni Morrison.
- Published
- 2020
- Full Text
- View/download PDF
13. A deep feature-based real-time system for Alzheimer disease stage detection
- Author
-
Irfan Mehmood, Sitara Afzal, Muazzam Maqsood, Seungmin Rho, Hina Nawaz, and Farhan Aadil
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Deep learning ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,Overfitting ,Convolutional neural network ,Random forest ,Support vector machine ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Feature (machine learning) ,Artificial intelligence ,business ,Transfer of learning ,Real-time operating system ,Software - Abstract
The origin of dementia can be largely attributed to Alzheimer's disease (AD). The progressive nature of AD causes brain cell deterioration that eventually leads to physical dependency and mental disability, hindering a person's normal life. A computer-aided diagnostic system is required that can aid physicians in diagnosing AD in real time. The classification of AD stages remains an important research area. To extract deep features, traditional machine learning-based and deep learning-based methods often require large datasets, which leads to class imbalance and overfitting issues. To overcome this problem, we use an efficient transfer learning architecture to extract deep features that are then used for AD stage classification. In this study, an Alzheimer's stage detection system based on deep features is proposed using a pre-trained AlexNet model, by transferring the initial layers of the pre-trained AlexNet model and extracting deep features from the Convolutional Neural Network (CNN). For the classification of the extracted deep features, we use widely adopted machine learning algorithms including the support vector machine (SVM), k-nearest neighbor (KNN), and Random Forest (RF). The evaluation results show that the deep feature-based model outperformed handcrafted-feature and deep learning methods with 99.21% accuracy. The proposed model also outperforms existing state-of-the-art methods.
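A hedged sketch of the feature-extraction step described above (the layer choice and preprocessing are assumptions): the final classification layer of a pre-trained AlexNet is dropped so that each MRI slice yields a 4096-dimensional deep feature vector, which is then passed to a classical classifier such as an SVM.

```python
# Sketch: extract AlexNet penultimate-layer features, classify with an SVM.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = alexnet.classifier[:-1]   # drop the 1000-way output layer
alexnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(pil_images):
    """pil_images: list of RGB PIL images (hypothetical input format)."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return alexnet(batch).numpy()              # (n, 4096) feature vectors

# clf = SVC(kernel="rbf").fit(deep_features(train_images), train_labels)
```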
- Published
- 2020
- Full Text
- View/download PDF
14. Analyzing the Stock Exchange Markets of EU Nations: A Case Study of Brexit Social Media Sentiment
- Author
-
Haider Maqsood, Muazzam Maqsood, Sadaf Yasmin, Irfan Mehmood, Jihoon Moon, and Seungmin Rho
- Subjects
Information Systems and Management, data analytics, stock prediction, social media sentiment analysis, Brexit event, COVID-19, Computer Networks and Communications, Control and Systems Engineering, Modeling and Simulation, Software - Abstract
Stock exchange analysis is regarded as a stochastic and demanding real-world setting in which fluctuations in stock prices are influenced by a wide range of aspects and events. In recent years, there has been a great deal of interest in social media-based data analytics for analyzing stock exchange markets. This is due to the fact that the sentiments around major global events like Brexit or COVID-19 significantly affect business decisions and investor perceptions, as well as transactional trading statistics and index values. Hence, in this research, we examined a case study from the Brexit event to assess the influence that feelings on the subject have had on the stock markets of European Union (EU) nations. Brexit has implications for Britain and other countries under the umbrella of the European Union (EU). However, a common point of debate is the EU’s contribution preferences and benefit imbalance. For this reason, the Brexit event and its impact on stock markets for major contributors and countries with minimum donations need to be evaluated accurately. As a result, to achieve accurate analysis of the stock exchanges of different EU nations from two different viewpoints, i.e., the major contributors and countries contributing least, in response to the Brexit event, we suggest an optimal deep learning and machine learning model that incorporates social media sentiment analysis regarding Brexit to perform stock market prediction. More precisely, the machine learning-based models include support vector machines (SVM) and linear regression (LR), while convolutional neural networks (CNNs) are used as a deep learning model. In addition, this method incorporates around 1.82 million tweets regarding the major contributors and countries contributing least to the EU budget. The findings show that sentiment analysis of Brexit events using a deep learning model delivers better results in comparison with machine learning models, in terms of root mean square values (RMSE). The outcomes of stock exchange analysis for the least contributing nations in relation to the Brexit event can aid them in making stock market judgments that will eventually benefit their country and improve their poor economies. Likewise, the results of stock exchange analysis for major contributing nations can assist in lowering the possibility of loss in relation to investments, as well as helping them to make effective decisions.
- Published
- 2022
- Full Text
- View/download PDF
15. Design and Development of a Vivaldi Antenna Array for Airborne X-Band Applications
- Author
-
Irfan Mehmood, Awais Munawar Qureshi, Muhammad Muaz, and Channa Babar Ali
- Published
- 2021
- Full Text
- View/download PDF
16. Towards smarter cities: Learning from Internet of Multimedia Things-generated big data
- Author
-
Paolo Bellavista, Kaoru Ota, Zhihan Lv, Seungmin Rho, Irfan Mehmood, Bellavista P., Ota K., Lv Z., Mehmood I., and Rho S.
- Subjects
IoT ,Smart city ,Multimedia ,Computer Networks and Communications ,Computer science ,business.industry ,Big data ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Field (computer science) ,Domain (software engineering) ,Hardware and Architecture ,Analytics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,The Internet ,business ,Internet of Things ,computer ,Multimedia sensor ,Software - Abstract
In today's technological era, smart devices connected through the IoT and giant IoT infrastructures are playing a vital role in making daily life easier and simpler than it ever was. Numerous sensors, including IoT-interconnected multimedia sensors communicating with each other, generate a huge amount of data. In particular, IoT multimedia sensors play a vital role for green cities, providing secure and efficient analytics to monitor routine activities. The big data generated by these sensors contain dense information that needs to be processed for various applications such as summarization, security, and privacy. The heterogeneity and complexity of video data is the biggest hurdle, and a number of techniques have already been developed for the efficient processing of big video data. IoT big data processing is an emerging field, and many researchers are keen to contribute to making cities smarter. Among all these methods, deep learning-based techniques dominate traditional multimedia data processing algorithms, having recently produced convincing results. This special issue targets current problems in smart-city development, outlines future challenges in this domain, and invites researchers working in the IoT domain to make cities smarter. It also focuses on some related technologies comprising the Internet of Multimedia Things (IoMTs) and machine learning for big data. Furthermore, it covers deep learning-based solutions for real-time data processing, learning from big data, distributed learning paradigms with embedded processing, and efficient inference.
- Published
- 2020
- Full Text
- View/download PDF
17. Mobility Enabled Security for Optimizing IoT based Intelligent Applications
- Author
-
Khan Muhammad, Sandeep Pirbhulal, Irfan Mehmood, Wanqing Wu, Victor Hugo C. de Albuquerque, and Guanglin Li
- Subjects
Factor cost ,Computer Networks and Communications ,Computer science ,Distributed computing ,Reliability (computer networking) ,020206 networking & telecommunications ,02 engineering and technology ,Criticality ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Resource allocation ,Baseline (configuration management) ,Mobility management ,Wireless sensor network ,Software ,Information Systems ,Data transmission - Abstract
The critical challenges for IoT-based intelligent applications for the adaptive environment are effective resource allocation, secure data transmission, and mobility management. It is highly significant that these mandatory requirements be considered in the overall IoT-enabled smart networks. In this research, we initially propose the resource allocation model for IoT-based systems by adopting the security, energy drain, and cost factors. Furthermore, MMSA, incorporating security with mobility, is developed. The experimental analysis demonstrates that the proposed MMSA outperforms conventional techniques in terms of mobility management, delay, security, and criticality factors with optimal resource allocation. The findings of this research demonstrate that the security level, energy optimization, and reliability for the proposed MMSA, traditional, and baseline methods are (15 percent, 13 percent, 11 percent), (20mJ, 14mJ, and 9.5mJ), and (9.78 percent, 6.8 percent, and 5.6 percent), respectively. Consequently, it can be concluded that the proposed approach can be widely used for intelligent IoT systems because it delivers better reliability and mobility-enabled security than its counterparts.
- Published
- 2020
- Full Text
- View/download PDF
18. Edge Intelligence-Assisted Smoke Detection in Foggy Surveillance Environments
- Author
-
Khan Muhammad, Irfan Mehmood, Salman Khan, Victor Hugo C. de Albuquerque, and Vasile Palade
- Subjects
Smoke ,Emergency management ,business.industry ,Computer science ,020208 electrical & electronic engineering ,Real-time computing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Convolutional neural network ,Computer Science Applications ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Benchmark (computing) ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Information Systems - Abstract
Smoke detection in foggy surveillance environments is a challenging task and plays a key role in disaster management for industrial systems. The current smoke detection methods are applicable to only normal surveillance videos, providing unsatisfactory results for video streams captured from foggy environments, due to challenges related to clutter and unclear contents. In this paper, an energy-friendly edge intelligence-assisted smoke detection method is proposed using deep convolutional neural networks for foggy surveillance environments. Our method uses a light-weight architecture, considering all necessary requirements regarding accuracy, running time, and deployment feasibility for smoke detection in an industrial setting, compared to other complex and computationally expensive architectures including AlexNet, GoogleNet, and visual geometry group (VGG). Experiments are conducted on available benchmark smoke detection datasets, and the obtained results show better performance of the proposed method over state-of-the-art for early smoke detection in foggy surveillance.
- Published
- 2020
- Full Text
- View/download PDF
19. A Spectrogram-Based Deep Feature Assisted Computer-Aided Diagnostic System for Parkinson’s Disease
- Author
-
Laiba Zahid, Muazzam Maqsood, Mehr Yahya Durrani, Maheen Bakhtyar, Junaid Baber, Habibullah Jamal, Irfan Mehmood, and Oh-Young Song
- Subjects
Parkinson disease, deep features, classification, lcsh:Electrical engineering. Electronics. Nuclear engineering, transfer learning, lcsh:TK1-9971, speech signals - Abstract
Parkinson's disease is a neurodegenerative disease. It slowly progresses from a mild to a severe stage, resulting in the degeneration of dopamine-producing neurons. The deficiency of dopamine in the brain leads to motor (tremor, slowness, impaired posture) and non-motor (speech, olfactory) defects. Early detection of Parkinson's disease is a difficult chore as the symptoms of the disease appear over time. However, different diagnostic systems have contributed towards disease detection by considering gait, tremor, and speech characteristics. Recent work has shown that speech impairments can be considered a possible predictor for Parkinson's disease classification, and this remains an open research area. Speech signals show major differences and variations for Parkinson's patients compared to normal speakers; therefore, variation in speech should be modeled using acoustic features to identify it. In this research, we propose three methods: the first employs a transfer learning-based approach using spectrograms of speech recordings, the second evaluates deep features extracted from speech spectrograms using machine learning classifiers, and the third evaluates simple acoustic features of the recordings using machine learning classifiers. The proposed frameworks are evaluated on the Spanish pc-Gita dataset. The results show that the second framework gives promising results with deep features. The highest accuracy of 99.7% on the vowel /o/ and read text is observed using a multilayer perceptron, whereas 99.1% accuracy is observed on vowel /i/ deep features using a random forest. The deep feature-based method performs better than simple acoustic features and the transfer learning approach. The proposed methodology outperforms existing techniques on the pc-Gita dataset for Parkinson's disease detection.
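For readers unfamiliar with the spectrogram step mentioned above, a minimal sketch (sampling rate and mel-band count are assumptions) is: load a speech recording and convert it to a log-mel spectrogram, which can then be treated as an image for transfer learning or deep-feature extraction.

```python
# Sketch: speech recording -> log-mel spectrogram "image".
import numpy as np
import librosa

def speech_to_logmel(wav_path, sr=16000, n_mels=128):
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)   # shape: (n_mels, frames)

# spec = speech_to_logmel("vowel_o.wav")   # hypothetical recording path
```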
- Published
- 2020
20. Region-of-Interest Based Transfer Learning Assisted Framework for Skin Cancer Detection
- Author
-
Rehan Ashraf, Irfan Mehmood, Maheen Bakhtyar, Sarah Gul, Attiq Ur Rehman, Sitara Afzal, Muazzam Maqsood, Junaid Baber, and Oh-Young Song
- Subjects
General Computer Science ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,transfer learning ,Convolutional neural network ,Discriminative model ,Dermis ,Region of interest ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Nevus ,General Materials Science ,business.industry ,Melanoma ,Deep learning ,ROI ,General Engineering ,Cancer ,020206 networking & telecommunications ,Pattern recognition ,skin cancer detection ,medicine.disease ,ComputingMethodologies_PATTERNRECOGNITION ,medicine.anatomical_structure ,020201 artificial intelligence & image processing ,Melanoma detection ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,Skin cancer ,business ,Transfer of learning ,lcsh:TK1-9971 ,Feature learning ,CNN - Abstract
Melanoma is considered the most serious type of skin cancer. All over the world, the mortality rate for melanoma is much higher than for other cancers. Various computer-aided solutions have been proposed to correctly identify melanoma. However, the difficult visual appearance of the nevus makes it very hard to design a reliable Computer-Aided Diagnosis (CAD) system for accurate melanoma detection. Existing systems either use traditional machine learning models focused on handpicked features or use deep learning-based methods that learn features from complete images. Automatically extracting the most discriminative features for skin cancer remains an important research problem and can further improve deep learning training. Furthermore, the limited number of available images also creates a problem for deep learning models. From this line of research, we propose an intelligent Region of Interest (ROI) based system to identify and discriminate melanoma from nevus by using a transfer learning approach. An improved k-means algorithm is used to extract ROIs from the images. This ROI-based approach helps to identify discriminative features, since only images containing melanoma cells are used to train the system. We further use a Convolutional Neural Network (CNN) based transfer learning model with data augmentation on ROI images of the DermIS and DermQuest datasets. The proposed system gives 97.9% and 97.4% accuracy for DermIS and DermQuest, respectively. The proposed ROI-based transfer learning approach outperforms existing methods that use complete images for classification.
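The ROI step can be illustrated with a simplified sketch (the paper uses an improved k-means; this uses plain k-means on pixel colours, and the "darker cluster is the lesion" rule is an assumption): cluster the pixels, keep the lesion-like cluster, and crop its bounding box to form the ROI image.

```python
# Sketch: k-means pixel clustering -> lesion mask -> cropped ROI.
import numpy as np
from sklearn.cluster import KMeans

def lesion_roi(rgb_image, k=2):
    """rgb_image: HxWx3 uint8 array; returns the cropped lesion region."""
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    means = [pixels[labels == i].mean() for i in range(k)]
    mask = (labels == int(np.argmin(means))).reshape(h, w)  # darker cluster
    ys, xs = np.where(mask)
    return rgb_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```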
- Published
- 2020
- Full Text
- View/download PDF
21. Expert ranking techniques for online rated forums
- Author
-
Rabeeh Ayaz Abbasi, Naif Radi Aljohani, Ali Daud, Irfan Mehmood, Abubakr Usman Akram, and Muhammad Shahzad Faisal
- Subjects
Online discussion ,Information retrieval ,Computer science ,media_common.quotation_subject ,05 social sciences ,050301 education ,050801 communication & media studies ,Social web ,Knowledge sharing ,Human-Computer Interaction ,0508 media and communications ,Arts and Humanities (miscellaneous) ,Ranking ,Credibility ,Quality (business) ,Adaptation (computer science) ,0503 education ,General Psychology ,media_common ,Reputation - Abstract
Web 2.0 or social web applications such as online discussion forums, blogs, and Wikipedia have improved knowledge sharing by providing an environment in which users can generate and find their favorite content in a flexible way. With the passage of time, online discussion forums accumulate a huge amount of content, and this can introduce issues of content quality and user credibility. A poor-quality answer in a discussion forum indicates the presence of unprofessional or unqualified users; therefore, a priority is to find experts or reputable users. Most existing expert-ranking approaches consider basic features, such as the total number of answers provided by a user, but ignore the quality and consistency of the user's answers. In this paper, expert-ranking techniques using the g-index are proposed and applied to a StackOverflow forum dataset. Three techniques are proposed: Exp-PC, Rep-FS, and Weighted Exp-PC. Exp-PC is an adaptation of the g-index for ranking experts in the StackOverflow forum. In Rep-FS, several features such as voter reputation and vote ratio are proposed to measure users' expertise, while Weighted Exp-PC computes user expertise by combining the Exp-PC and Rep-FS scores. We measure users' reputation and expertise according to both the quality of their answers and their consistency in providing quality answers. The experimental results of the proposed expert-ranking techniques, Exp-PC and Weighted Exp-PC in particular, validate that these methods identify genuine experts in a more effective way.
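Since Exp-PC is described as an adaptation of the g-index, a small worked sketch may help (adapted here to answer votes; variable names are illustrative): the g-index is the largest g such that a user's top g answers have collected at least g² votes in total, so it rewards both the number of answers and their sustained quality.

```python
# Worked sketch of the g-index idea underlying Exp-PC.
def g_index(votes_per_answer):
    votes = sorted(votes_per_answer, reverse=True)
    total, g = 0, 0
    for i, v in enumerate(votes, start=1):
        total += v
        if total >= i * i:       # top i answers hold at least i^2 votes
            g = i
    return g

# Example: g_index([10, 7, 5, 1, 0]) == 4, since 10+7+5+1 = 23 >= 16 but 23 < 25.
```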
- Published
- 2019
- Full Text
- View/download PDF
22. Toward Generating Human-Centered Video Annotations
- Author
-
Khalid Mahmood Awan, Aniqa Dilawari, Zahoor ur Rehman, M. Usman Ghani Khan, Seungmin Rho, and Irfan Mehmood
- Subjects
0209 industrial biotechnology ,business.industry ,Computer science ,Applied Mathematics ,Deep learning ,Video sequence ,02 engineering and technology ,Task (project management) ,Annotation ,020901 industrial engineering & automation ,Manual annotation ,Minimum bounding box ,Signal Processing ,Computer vision algorithms ,Computer vision ,Artificial intelligence ,business - Abstract
In the past few decades, research has been carried out to automatically find humans in a video sequence. Automatically detecting humans in videos is gaining interest for numerous applications such as driver assistance systems, security, people counting, human gait characterization, video annotation, retrieval, and crowd flow analysis. Manual annotation of a video is a time-consuming task that involves human annotators with varying biases. In this paper, we present three computer vision algorithms (contour-based, HOG-based, and SURF-based) and propose a deep learning technique that automatically extracts spatiotemporal annotations of humans and represents them by bounding boxes. We performed experiments, and the accuracy obtained for each method is 86%, 92.5%, 94%, and 95.5%, respectively. The results show that annotation accuracy increases while human effort is reduced with respect to manual annotation. We have also introduced a new dataset, ASSVS_KICS, which is captured through a high-quality stationary camera and contains scenarios from our community for video surveillance research.
- Published
- 2019
- Full Text
- View/download PDF
23. Mobile cloud-assisted paradigms for management of multimedia big data in healthcare systems: Research challenges and opportunities
- Author
-
Muhammad Sajjad, Yudong Zhang, Kaoru Ota, Zhihan Lv, Irfan Mehmood, and Amit Singh
- Subjects
Multimedia, Computer Networks and Communications, Computer science, Multimedia big data, Mobile cloud, Library and Information Sciences, computer.software_genre, computer, Information Systems, Healthcare system - Published
- 2019
- Full Text
- View/download PDF
24. Automated Gland Detection in Colorectal Histopathological Images
- Author
-
Irfan Mehmood, Hassan Ugail, and Maisun Mohamed Al Zorgani
- Subjects
Pathology, medicine.medical_specialty, business.industry, Computer science, Deep learning, Haematoxylin, Convolutional neural network, chemistry.chemical_compound, chemistry, Morphological analysis, medicine, Automatic segmentation, Colorectal adenocarcinoma, Segmentation, Artificial intelligence, business - Abstract
Clinical morphological analysis of histopathological specimens is a well-established way of diagnosing benign and malignant diseases. Analysing glandular architecture is a major challenge for colon histopathologists because of the difficulty of identifying morphological structures in malignant glandular tumours, owing to the distortion of gland boundaries and the variation in the appearance of stained specimens. Despite these challenges, several deep learning methods have exhibited encouraging performance in the automatic segmentation of glands for reliable analysis of colon specimens. In the histopathology field, the large number of annotated images required to train deep learning algorithms is the major obstacle. In this work, we propose an end-to-end trainable Convolutional Neural Network (CNN) for detecting glands automatically. More specifically, a Modified Res-U-Net is employed for segmenting colorectal glands in Haematoxylin and Eosin (H&E) stained images from the challenging Gland Segmentation (GlaS) dataset. The proposed Res-U-Net outperformed prior methods that utilise the U-Net architecture on the GlaS dataset images.
- Published
- 2021
- Full Text
- View/download PDF
25. Learning Transferable Features for Diagnosis of Breast Cancer from Histopathological Images
- Author
-
Irfan Mehmood, Maisun Mohamed Al Zorgani, and Hassan Ugail
- Subjects
Learning classifier system ,Invasive carcinoma ,Computer science ,business.industry ,Deep learning ,Pattern recognition ,medicine.disease ,Convolutional neural network ,Support vector machine ,Breast cancer ,Robustness (computer science) ,medicine ,Artificial intelligence ,business ,Transfer of learning - Abstract
Nowadays, there is no argument that deep learning algorithms provide impressive results in many applications of medical image analysis. However, the data scarcity problem and its consequences are challenges in implementing deep learning in the digital histopathology domain. Deep transfer learning is one possible solution to these challenges. Extracting off-the-shelf features from pre-trained convolutional neural networks (CNNs) is one of the common deep transfer learning approaches. The architecture of a deep CNN plays a significant role in the choice of the optimal transferable features to adopt for classifying cancerous histopathological images. In this study, we have investigated three CNNs pre-trained on the ImageNet dataset, namely ResNet-50, DenseNet-201, and ShuffleNet, for classifying the Breast Cancer Histopathology (BACH) Challenge 2018 dataset. The deep features extracted from these three models were utilised to train two machine learning classifiers, the K-Nearest Neighbour (KNN) and the Support Vector Machine (SVM), to classify breast cancer grades. Four grades of breast cancer are present in the BACH challenge dataset: normal tissue, benign tumour, in-situ carcinoma, and invasive carcinoma. The performance of the target classifiers was evaluated. Our experimental results showed that the off-the-shelf features extracted from the DenseNet-201 model provide the best predictive accuracy with both SVM and KNN classifiers, yielding image-wise classification accuracies of 93.75% and 88.75%, respectively. These results indicate the high robustness of our proposed framework.
- Published
- 2021
- Full Text
- View/download PDF
26. Deep YOLO-Based Detection of Breast Cancer Mitotic-Cells in Histopathological Images
- Author
-
Maisun Mohamed Al Zorgani, Hassan Ugail, and Irfan Mehmood
- Subjects
Breast cancer ,Computer science ,business.industry ,Deep learning ,Pattern recognition (psychology) ,medicine ,CAD ,Pattern recognition ,Artificial intelligence ,medicine.disease ,business ,Cad system - Abstract
Coinciding with advances in whole-slide imaging scanners, it has become essential to automate conventional image-processing techniques to assist pathologists with tasks such as mitotic-cell detection. In histopathological image analysis, the mitotic-cell count is a significant biomarker for the prognosis of breast cancer grade and aggressiveness. However, counting mitotic cells is tiresome, tedious, and time-consuming because of the difficulty of distinguishing mitotic cells from normal cells. To tackle this challenge, several deep learning-based Computer-Aided Diagnosis (CAD) approaches have recently been developed to count mitotic cells in histopathological images. Such CAD systems achieve outstanding performance, so histopathologists can use them as a second-opinion system; nevertheless, improving CAD systems remains important as deep network architectures progress. In this work, we investigate the deep YOLO (You Only Look Once) v2 network for mitotic-cell detection on the ICPR (International Conference on Pattern Recognition) 2012 breast cancer histopathology dataset. The results show that the proposed architecture achieves a good F1-measure of 0.839.
- Published
- 2021
- Full Text
- View/download PDF
27. A Study of Deep Learning-Based Face Recognition Models for Sibling Identification
- Author
-
Irfan Mehmood, Hassan Ugail, and Rita Goel
- Subjects
Databases, Factual ,Computer science ,02 engineering and technology ,TP1-1185 ,Biochemistry ,Facial recognition system ,Article ,Analytical Chemistry ,VGGFace ,Deep Learning ,Similarity (network science) ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,FaceNet ,Electrical and Electronic Engineering ,Instrumentation ,business.industry ,Siblings ,Deep learning ,Chemical technology ,Cosine similarity ,Minkowski distance ,020207 software engineering ,Pattern recognition ,Atomic and Molecular Physics, and Optics ,Euclidean distance ,Identification (information) ,Face (geometry) ,sibling recognition ,020201 artificial intelligence & image processing ,VGG16 ,Neural Networks, Computer ,Artificial intelligence ,business ,Facial Recognition ,VGG19 ,face recognition - Abstract
Accurate identification of siblings through face recognition is a challenging task, predominantly because of the high degree of similarity among the faces of siblings. In this study, we investigate the use of state-of-the-art deep learning face recognition models to evaluate their capacity for discrimination between sibling faces using various similarity indices. The specific models examined for this purpose are FaceNet, VGGFace, VGG16, and VGG19. For each pair of images provided, the embeddings are calculated using the chosen deep learning model. Five standard similarity measures, namely cosine similarity, Euclidean distance, structural similarity, Manhattan distance, and Minkowski distance, are used to classify image pairs by identity against a threshold defined for each similarity measure. The accuracy, precision, and misclassification rate of each model are calculated using standard confusion matrices. Four different experimental datasets for the full frontal face, eyes, nose, and forehead of sibling pairs are constructed using the publicly available HQf subset of the SiblingDB database. The experimental results show that the accuracy of the chosen deep learning models in distinguishing siblings varies with the face area compared. VGGFace is best when comparing the full frontal face and the eyes, with a classification accuracy of more than 95% in this case. However, its accuracy degrades significantly when noses are compared, while FaceNet provides the best result for classification based on the nose. Similarly, VGG16 and VGG19 are not the best models for classification using the eyes, but they provide favorable results when foreheads are compared.
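As a minimal sketch of the comparison step (the threshold value is an assumption, not the paper's), given two embeddings produced by any of the models above, cosine similarity against a fixed threshold decides whether the pair is declared the same identity:

```python
# Sketch: cosine-similarity decision over two face embeddings.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_identity(emb1, emb2, threshold=0.7):   # threshold is hypothetical
    return cosine_similarity(emb1, emb2) >= threshold
```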
- Published
- 2021
- Full Text
- View/download PDF
28. A Smart Surveillance System for Uncooperative Gait Recognition Using Cycle Consistent Generative Adversarial Networks (CCGANs)
- Author
-
Maha Farouk S. Sabir, Wafaa Alsaggaf, Irfan Mehmood, Ahmed S. Alghamdi, Samar Alhuraiji, Enas F. Khairullah, and Ahmed A. Abd El-Latif
- Subjects
Biometry ,General Computer Science ,Article Subject ,Computer science ,General Mathematics ,media_common.quotation_subject ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Neurosciences. Biological psychiatry. Neuropsychiatry ,Walking ,Machine learning ,computer.software_genre ,Domain (software engineering) ,Pattern Recognition, Automated ,Gait (human) ,Covariate ,Image Processing, Computer-Assisted ,Humans ,Function (engineering) ,Gait ,media_common ,business.industry ,General Neuroscience ,Deep learning ,General Medicine ,Identity (object-oriented programming) ,Unsupervised learning ,Public service ,Artificial intelligence ,business ,computer ,RC321-571 ,Research Article - Abstract
Surveillance remains an important research area with many applications. Smart surveillance requires a high level of accuracy even when persons are uncooperative. Gait recognition is the study of recognizing people by the way they walk, even when they are unwilling to cooperate. It is a form of behavioral biometrics in which unique attributes of an individual's gait are analyzed to determine their identity. One of the big limitations of gait recognition systems, however, is uncooperative environments in which gallery and probe sets are captured under different and unknown walking conditions. In order to tackle this problem, we propose a deep learning-based method that is trained on individuals under the normal walking condition; to deal with an uncooperative environment and recognize an individual under any dynamic walking condition, a cycle-consistent generative adversarial network is used. This method translates a Gait Energy Image (GEI) distorted by different covariate factors into a normal GEI. It works like unsupervised learning: during training, the GEI of each individual distorted by different covariate factors acts as the source domain, while the normal walking condition of individuals is the target domain to which translation is required. The cycle-consistent GAN automatically finds an individual pair with the help of the cycle loss function and generates the required GEI, which is tested by the CNN model to predict the person ID. The proposed system is evaluated on the publicly available CASIA-B dataset and achieves excellent results. Moreover, this system can be implemented in sensitive areas, like banks, seminar halls (events), airports, embassies, shopping malls, police stations, military areas, and other public service areas for security purposes.
- Published
- 2021
29. Multi-Modal Data Analysis Based Game Player Experience Modeling Using LSTM-DNN
- Author
-
Sehar Shahzad Farooq, Ali Kashif Bashir, Raheel Nawaz, Mustansar Fiaz, Irfan Mehmood, Soon Ki Jung, and Kyung-Joong Kim
- Subjects
Computational model ,Exploit ,Artificial neural network ,Computer science ,business.industry ,ComputingMilieux_PERSONALCOMPUTING ,Computer Science Applications ,Biomaterials ,Entertainment ,Cold start ,Mechanics of Materials ,Human–computer interaction ,Analytics ,Modeling and Simulation ,Electrical and Electronic Engineering ,Game Developer ,business ,Transfer of learning - Abstract
Game player modeling is a paradigm of computational models to exploit players' behavior and experience using game and player analytics. Player modeling refers to descriptions of players based on frameworks of data derived from the interaction of a player's behavior within the game as well as the player's experience with the game. Player behavior focuses on dynamic and static information gathered at the time of gameplay. Player experience concerns the association of the human player during gameplay, which is based on cognitive and affective physiological measurements collected from sensors mounted on the player's body or in the player's surroundings. In this paper, player experience modeling is studied based on the board puzzle game "Candy Crush Saga" using cognitive data of players accessed by physiological and peripheral devices. A Long Short-Term Memory-based Deep Neural Network (LSTM-DNN) is used to predict players' affective states in terms of valence, arousal, dominance, and liking by employing the concept of transfer learning. Transfer learning focuses on gaining knowledge while solving one problem and using the same knowledge to solve different but related problems. The homogeneous transfer learning approach has not been implemented in the game domain before, and this novel study opens a new research area for the game industry, where the main challenge is predicting the significance of innovative games for entertainment and players' engagement. Relevant not only from a player's point of view, it is also a benchmark study for game developers who have been facing "cold start" problems for innovative games that strengthen the game industry's economy.
- Published
- 2021
30. An Efficient Liver Tumor Detection using Machine Learning
- Author
-
Irfan Mehmood, Anum Kalsoom, Muazzam Maqsood, Seungmin Rho, and Anam Moin
- Subjects
Liver tumor ,business.industry ,Computer science ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Machine learning ,computer.software_genre ,medicine.disease ,Support vector machine ,Task (computing) ,ComputingMethodologies_PATTERNRECOGNITION ,medicine ,Unsupervised learning ,Segmentation ,Artificial intelligence ,Cluster analysis ,business ,computer - Abstract
Liver cancer is among the most commonly diagnosed diseases in today's modern era. Liver tumor segmentation is a fundamental task for performing early diagnosis and recommending treatment. Manual segmentation is the traditional approach to achieving the required results, but it has always been a time-consuming process. There are also anomalies, such as ambiguous gray-level ranges similar to neighboring organs, irregular tumor shapes, and uneven tumor sizes, that are easily overlooked. For these reasons, some semi-automated and even fully automated techniques have been put forward, and the advancement of machine learning has been very accommodating in addressing this issue. In this paper, we propose an unsupervised machine learning technique combined with a supervised mechanism that accurately performs liver tumor segmentation. We perform clustering on our collected dataset and extract LBP features as well as HOG features from these clusters. We then perform classification based on these extracted features using KNN, and we compare our results with two further classifiers, SVM and Ensemble, for a better understanding. Our proposed technique outperformed existing techniques and showed encouraging results when compared to other methods.
- Published
- 2020
- Full Text
- View/download PDF
31. A Trust Assisted Matrix Factorization based Improved Product Recommender System
- Author
-
Mucheol Kim, Muazzam Maqsood, Irfan Mehmood, Asma Rahim, and Khan Muhammad
- Subjects
Computer science ,business.industry ,Context (language use) ,Recommender system ,Machine learning ,computer.software_genre ,Pearson product-moment correlation coefficient ,Matrix decomposition ,Product (business) ,symbols.namesake ,Cold start ,Similarity (psychology) ,Metric (mathematics) ,symbols ,Artificial intelligence ,business ,computer - Abstract
Smart services aim to provide services to citizens in an efficient manner. Online shopping and recommender systems play an important role in this scenario by providing efficient item recommendations to citizens. However, most recent recommender systems cannot achieve effective and efficient prediction accuracy because of the sparsity of the item matrix for each user, and their recommendations become unreliable when tested on larger datasets. To handle these problems, a trust-based technique called trustasvd++ is proposed, which fuses a user's trust data into the Matrix Factorization (MF) context; MF has been recognized as a persuasive method for building an effective recommender system. The offered strategy combines trust data and rating values to deal with sparsity and cold-start user issues, and the Pearson correlation coefficient (PCC) is used as the similarity metric. To assess the efficiency of the offered strategy, extensive experiments have been carried out on datasets including Epinions, Filmtrust, and Ciao.
- Published
- 2020
- Full Text
- View/download PDF
32. An Activity Recognition Framework for Overlapping Activities using Transfer Learning
- Author
-
Muhammad Bilal, Seungmin Rho, Irfan Mehmood, Muazzam Maqsood, and Mubashir Javaid
- Subjects
business.industry ,Computer science ,Data stream mining ,Digital content ,Deep learning ,Video content analysis ,Machine learning ,computer.software_genre ,Convolutional neural network ,Activity recognition ,Analytics ,Artificial intelligence ,Transfer of learning ,business ,computer - Abstract
Activity recognition is gaining popularity with the increase in digital content. In video data, there is a lot of hidden information that needs to be explored. Human Activity Recognition (HAR) in video streams applies to many areas, such as video surveillance, patient health monitoring, and behavior analytics. Variation in the environment, viewpoint changes, occlusion, and illumination are some of the main challenges in HAR. Among other challenges, there is also the issue of similar or overlapping activities, which has not been explored much in the past. Resolving overlapping activity classes can be a major contribution to Human Activity Recognition overall. Hand-crafted methods and traditional machine learning methods were extensively explored in the past; recently, many deep learning-based methods have achieved high accuracy, with Convolutional Neural Network (CNN) and 3D CNN methods outperforming others. In this paper, we propose a Transfer Learning-based Human Activity Recognition (TLHAR) framework for video data streams. We use VGG16 and InceptionV3, two pre-trained CNN models, and utilize their prior training knowledge for efficient activity recognition. The proposed system outperforms existing activity recognition methods, showing state-of-the-art accuracy and lower computational cost than other techniques by taking advantage of transfer learning.
- Published
- 2020
- Full Text
- View/download PDF
33. An Optimisation Model for Designing Social Distancing Enhanced Physical Spaces
- Author
-
Sarah Gleghorn, Andrés Iglesias, Khasrouf Taif, Riya Aggarwal, Hassan Ugail, Muazzam Maqsood, Farhan Aadil, Irfan Mehmood, Patricia Suárez, and Almudena Campuzano
- Subjects
Mathematical optimization ,Distancing ,Computer science ,Social distance ,02 engineering and technology ,03 medical and health sciences ,Nonlinear system ,0302 clinical medicine ,Development (topology) ,Circle packing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,030212 general & internal medicine ,Element (category theory) - Abstract
In the wake of the COVID-19 pandemic, social distancing has become an essential element of our daily lives. As a result, the development of technological solutions for the design and re-design of physical spaces with the necessary physical distancing measures is an important problem that must be addressed. In this paper, we show how automatic design optimisation can be used to simulate the layout of physical spaces subject to a given social distancing requirement. We use a well known mathematical technique based on the circle packing to address this challenge. Thus, given the dimensions and the necessary constraints on the physical space, we formulate the design as a solution to a constrained nonlinear optimisation problem. We then solve the optimisation problem to arrive at a number of feasible design solutions from which the user can pick the most desirable option. By way of examples, in this paper, we show how the proposed model can be practically applied.
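As a toy illustration of the general idea (room size, headcount, objective, and solver are assumptions and not the paper's model), positions can be found by a constrained nonlinear optimisation that spreads n occupants inside a W x H room and then checks the achieved separation against the required social distance:

```python
# Toy sketch: spread n points in a rectangle via constrained optimisation,
# then check the smallest pairwise separation against a required distance.
import numpy as np
from scipy.optimize import minimize

def layout(n=6, W=10.0, H=8.0, required=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x0 = np.concatenate([rng.uniform(0, W, n), rng.uniform(0, H, n)])
    iu = np.triu_indices(n, k=1)

    def repulsion(p):                      # smooth penalty on close pairs
        xs, ys = p[:n], p[n:]
        d2 = (xs[:, None] - xs) ** 2 + (ys[:, None] - ys) ** 2
        return np.sum(1.0 / (d2[iu] + 1e-9))

    bounds = [(0.0, W)] * n + [(0.0, H)] * n
    res = minimize(repulsion, x0, bounds=bounds, method="L-BFGS-B")
    xs, ys = res.x[:n], res.x[n:]
    min_sep = np.min(np.hypot(xs[:, None] - xs, ys[:, None] - ys)[iu])
    return xs, ys, min_sep >= required     # positions and feasibility flag
```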
- Published
- 2020
- Full Text
- View/download PDF
34. A New Chaotic Map with Dynamic Analysis and Encryption Application in Internet of Health Things
- Author
-
Nestor Tsafack, Oh-Young Song, K. C. Jithin, Akram Belazi, Ali Kashif Bashir, Jacques Kengne, Irfan Mehmood, Syam Sankar, Ahmed A. Abd El-Latif, and Bassem Abd-El-Atty
- Subjects
General Computer Science ,Computer science ,Stability (learning theory) ,02 engineering and technology ,Mandelbrot set ,Encryption ,lightweight security ,Image (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Cryptosystem ,General Materials Science ,encryption ,dynamics analysis ,Computer Science::Cryptography and Security ,business.industry ,General Engineering ,020207 software engineering ,Hamming distance ,Internet of health things ,chaotic systems ,Key (cryptography) ,020201 artificial intelligence & image processing ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,Algorithm ,lcsh:TK1-9971 - Abstract
In this paper, we report an effective cryptosystem aimed at securing the transmission of medical images in an Internet of Healthcare Things (IoHT) environment. This contribution investigates the dynamics of a 2-D trigonometric map designed using some well-known maps: Logistic-sine-cosine maps. Stability analysis reveals that the map has an infinite number of solutions. The Lyapunov exponent, bifurcation diagram, and phase portrait are used to demonstrate the complex dynamics of the map. The sequences of the map are utilized to construct a robust cryptosystem. First, three sets of key streams are generated from the newly designed trigonometric map and are used jointly with the image components (R, G, B) for Hamming distance calculation. The output distance-vector, corresponding to each component, is then Bit-XORed with each of the key streams. The output is saved for further processing. The decomposed components are again Bit-XORed with key streams to produce an output, which is then fed into the conditional shift algorithm. The Mandelbrot Set is used as the input to the conditional shift algorithm so that the algorithm efficiently applies the confusion operation (complete shuffling of pixels). The resultant shuffled vectors are then Bit-XORed (diffusion) with the saved outputs from the earlier stage, and eventually the image vectors are combined to produce the encrypted image. Performance analyses of the proposed cryptosystem indicate high security, and the scheme can be effectively incorporated in an IoHT framework for secure medical image transmission.
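A much-simplified sketch of the keystream-plus-XOR (diffusion) idea is below; the paper's actual 2-D logistic-sine-cosine map, Hamming-distance step, and Mandelbrot-set-driven shuffling are not reproduced, and the map parameters here are illustrative only.

```python
# Simplified sketch: chaotic keystream generation and byte-wise XOR diffusion.
import numpy as np

def chaotic_keystream(length, x=0.4, r=3.99):
    """1-D logistic-sine combination (illustrative, not the paper's 2-D map)."""
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = (r * x * (1.0 - x) + np.sin(np.pi * x)) % 1.0
        out[i] = int(x * 256) % 256
    return out

def xor_layer(image_bytes, keystream):
    """image_bytes: flat uint8 array; XOR is its own inverse, so the same call
    both encrypts and decrypts."""
    return np.bitwise_xor(image_bytes, keystream[:image_bytes.size])
```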
- Published
- 2020
35. An IoT based efficient hybrid recommender system for cardiovascular disease
- Author
-
Farhan Aadil, Irfan Mehmood, Muazzam Maqsood, Fouzia Jabeen, Mustansar Ali Ghazanfar, Salabat Khan, and Muhammad Fahad Khan
- Subjects
Acute coronary syndrome ,medicine.medical_specialty ,020205 medical informatics ,Heart disease ,Computer Networks and Communications ,Computer science ,Atrial fibrillation ,02 engineering and technology ,Disease ,medicine.disease ,Left ventricular hypertrophy ,Heart failure ,Internal medicine ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Cardiology ,020201 artificial intelligence & image processing ,cardiovascular diseases ,Supraventricular tachycardia ,Myocardial infarction ,Software - Abstract
A fog-based IoT model can be helpful for patients with cardiovascular disease in remote areas, where an expert cardiologist is usually not available. Some systems exist to classify heart disease and provide recommendations, but these systems rely on classification alone. Following this line of research, we propose an efficient IoT-based, community-based recommender system that diagnoses cardiac disease and its type and provides recommendations on physical activity and diet. The first part collects data from the patient remotely using biosensors, and the IoT environment transmits the data to the server. Afterward, a heart disease prediction model is applied that diagnoses cardiovascular disease and classifies it into the following classes: Myocardial Infarction (MI stable), Myocardial Infarction (MI unstable), Acute Coronary Syndrome (ACS), Atrial Fibrillation (AF), Hypertension (HTN), Ischemic Heart Disease (IHD), Left Ventricular Hypertrophy (LVH), Chronic Heart Failure/Left Ventricle Function (CCF/LVF), and Supraventricular Tachycardia (SVT). The second part provides physical and dietary plan recommendations to the cardiac patient according to gender and age group. A dataset of diseases and corresponding recommendations was collected from a renowned hospital with the help of an expert cardiologist. The performance of the system is evaluated in terms of precision, recall, and mean absolute error, and it achieves 98% accuracy.
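A minimal sketch of the two-stage idea with entirely hypothetical features, labels, and recommendation plans (only a small subset of the listed classes is used): a classifier predicts the cardiac class from incoming sensor readings, and a lookup table keyed by class, gender, and age group returns a plan. This is not the authors' model or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_train = np.random.rand(200, 6)                          # hypothetical HR/BP/ECG-derived features
y_train = np.random.choice(["AF", "HTN", "IHD"], 200)     # hypothetical labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

plans = {                                                 # hypothetical recommendation table
    ("AF",  "male",   "40-60"): "low-sodium diet, 30 min light walking",
    ("HTN", "female", "40-60"): "low-salt diet, moderate aerobic exercise",
}

def recommend(sensor_reading, gender, age_group):
    disease = clf.predict([sensor_reading])[0]            # stage 1: diagnose the class
    plan = plans.get((disease, gender, age_group), "refer to cardiologist")
    return disease, plan                                  # stage 2: look up the plan

print(recommend(np.random.rand(6), "male", "40-60"))
```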
- Published
- 2019
- Full Text
- View/download PDF
36. A Data Augmentation-Based Framework to Handle Class Imbalance Problem for Alzheimer’s Stage Detection
- Author
-
Sitara Afzal, Muazzam Maqsood, Faria Nazir, Umair Khan, Farhan Aadil, Khalid M Awan, Irfan Mehmood, and Oh-Young Song
- Subjects
augmentation ,convolutional neural network ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Alzheimer’s disease ,lcsh:TK1-9971 ,Transfer learning ,AlexNet - Abstract
Alzheimer's disease (AD) is the most common form of dementia. It gradually progresses from a mild stage to a severe one, affecting the ability to perform common daily tasks without assistance. It is a neurodegenerative illness that presently has no specific cure. Computer-aided diagnostic systems have played an important role in helping physicians identify AD. However, the diagnosis of AD into its four stages (No Dementia, Very Mild Dementia, Mild Dementia, and Moderate Dementia) remains an open research area. Deep learning-assisted computer-aided solutions have proved more useful because of their high accuracy. However, the most common problem with deep learning architectures is that large amounts of training data are required. Furthermore, the samples should be evenly distributed among the classes to avoid the class imbalance problem. The publicly available OASIS dataset has a serious class imbalance problem. In this research, we employ a transfer learning-based technique using data augmentation for 3D magnetic resonance imaging (MRI) views from the OASIS dataset. The accuracy of the proposed model is 98.41% when using a single view of the brain MRI and 95.11% when using 3D views. The proposed system outperforms existing techniques for detecting Alzheimer's disease stages.
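A minimal sketch of augmentation plus transfer learning in Keras. The subject tags mention AlexNet; the MobileNetV2 backbone, augmentation settings, four-class folder layout, and the `oasis_slices/` path below are stand-in assumptions, not the authors' exact pipeline.

```python
import tensorflow as tf

# Augmentation to rebalance/expand the training data (illustrative settings).
aug = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10, width_shift_range=0.1, height_shift_range=0.1,
    horizontal_flip=True, rescale=1.0 / 255, validation_split=0.2)
train = aug.flow_from_directory("oasis_slices/",            # hypothetical folder per dementia stage
                                target_size=(224, 224), subset="training")

# Transfer learning: reuse a pre-trained backbone, train only a new classifier head.
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax")          # 4 dementia stages
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=5)
```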
- Published
- 2019
37. IEEE ACCESS SPECIAL SECTION EDITORIAL: MULTIMEDIA ANALYSIS FOR INTERNET-OF-THINGS
- Author
-
Irfan Mehmood, Mario Vento, Minh-Son Dao, Zhihan Lv, Alessia Saggese, and Kaoru Ota
- Subjects
General Computer Science ,Multimedia ,Association rule learning ,Computer science ,business.industry ,Deep learning ,Data management ,General Engineering ,020206 networking & telecommunications ,02 engineering and technology ,Predictive analytics ,computer.software_genre ,Knowledge modeling ,Knowledge extraction ,0202 electrical engineering, electronic engineering, information engineering ,Data analysis ,020201 artificial intelligence & image processing ,General Materials Science ,Artificial intelligence ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Transfer of learning ,business ,computer ,lcsh:TK1-9971 - Abstract
Big data processing includes both data management and data analytics. The data management step requires efficient cleaning, knowledge extraction, and integration and aggregation methods, whereas Internet-of-Multimedia-Things (IoMT) analysis is based on knowledge modeling and interpretation, which is more often performed by exploiting deep learning architectures. In the past couple of years, merging conventional and deep learning methodologies has exhibited great promise in ingesting multimedia big data, exploring paradigms such as transfer learning, association rule mining, and predictive analytics.
- Published
- 2019
38. Kernel Context Recommender System (KCR): A Scalable Context-Aware Recommender System Algorithm
- Author
-
Mustansar Ali Ghazanfar, Muazzam Maqsood, Salabat Khan, Misbah Iqbal, Sung Wook Baik, Asma Sattar, and Irfan Mehmood
- Subjects
Context model ,General Computer Science ,Computer science ,recommender system kernel ,General Engineering ,Context ,Context (language use) ,02 engineering and technology ,Recommender system ,Mood ,Kernel method ,020204 information systems ,Kernel (statistics) ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,General Materials Science ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Social circle ,context-aware kernel mapping recommender systems ,Algorithm ,lcsh:TK1-9971 - Abstract
Recommender systems are intelligent data mining applications that significantly address the problem of information overload. The available literature discusses several methodologies for generating recommendations and proposes different techniques in accordance with users' needs. The majority of work in the recommender system domain focuses on increasing recommendation accuracy, where the main motive is to maximize the accuracy of recommendations while ignoring other design objectives, such as a user's or an item's context. The biggest challenge for a recommender system is to produce meaningful recommendations by using contextual user-item rating information. Context is a broad term that may cover various aspects, for example, a user's social circle, time, mood, location, weather, company, and day type, or an item's genre, location, and language. Typically, the rating behavior of users varies under different contexts. Along this line of research, we propose a new algorithm, the Kernel Context Recommender System (KCR), a flexible, fast, and accurate kernel mapping framework that recognizes the importance of context and incorporates contextual information using the kernel trick while making predictions. We benchmark the proposed algorithm against pre- and post-filtering approaches, as they have been the favored approaches in the literature for solving the context-aware recommendation problem. Our experiments reveal that considering contextual information can increase the performance of a system and provide better, more relevant, and more meaningful results on various evaluation metrics.
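A minimal sketch of the general idea behind kernel-based, context-aware rating prediction (not the authors' exact KCR formulation): an RBF kernel measures the similarity of item-plus-context feature vectors, and a new rating is predicted as a kernel-weighted average of the user's past ratings. The features, history, and kernel width are illustrative.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """RBF kernel similarity between feature vectors (gamma is an assumed width)."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

# Hypothetical history for one user: rows are [item one-hot ..., context one-hot ...].
history_X = np.array([[1, 0, 0,  1, 0],      # item A, watched at home
                      [0, 1, 0,  0, 1],      # item B, watched while travelling
                      [1, 0, 0,  0, 1]])     # item A, watched while travelling
history_r = np.array([5.0, 2.0, 3.0])        # ratings given in those contexts

def predict(query):
    w = rbf(history_X, query)                # kernel similarity to each past (item, context) pair
    return np.dot(w, history_r) / w.sum()    # kernel-weighted average rating

print(predict(np.array([1, 0, 0, 1, 0])))    # item A at home -> prediction pulled towards 5
```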
- Published
- 2019
39. A Data Augmentation-Based Framework to Handle Class Imbalance Problem for Alzheimer’s Stage Detection
- Author
-
Khalid Mahmood Awan, Irfan Mehmood, Muazzam Maqsood, Oh-Young Song, Sitara Afzal, Umair Khan, Farhan Aadil, and Faria Nazir
- Subjects
General Computer Science ,Computer science ,02 engineering and technology ,Disease ,Machine learning ,computer.software_genre ,03 medical and health sciences ,Class imbalance ,0302 clinical medicine ,Open research ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Dementia ,General Materials Science ,Stage (cooking) ,business.industry ,Deep learning ,General Engineering ,020207 software engineering ,medicine.disease ,Artificial intelligence ,Alzheimer's disease ,business ,Transfer of learning ,computer ,030217 neurology & neurosurgery - Abstract
Alzheimer's disease (AD) is the most common form of dementia. It gradually progresses from a mild stage to a severe one, affecting the ability to perform common daily tasks without assistance. It is a neurodegenerative illness that presently has no specific cure. Computer-aided diagnostic systems have played an important role in helping physicians identify AD. However, the diagnosis of AD into its four stages (No Dementia, Very Mild Dementia, Mild Dementia, and Moderate Dementia) remains an open research area. Deep learning-assisted computer-aided solutions have proved more useful because of their high accuracy. However, the most common problem with deep learning architectures is that large amounts of training data are required. Furthermore, the samples should be evenly distributed among the classes to avoid the class imbalance problem. The publicly available OASIS dataset has a serious class imbalance problem. In this research, we employ a transfer learning-based technique using data augmentation for 3D magnetic resonance imaging (MRI) views from the OASIS dataset. The accuracy of the proposed model is 98.41% when using a single view of the brain MRI and 95.11% when using 3D views. The proposed system outperforms existing techniques for detecting Alzheimer's disease stages.
- Published
- 2019
- Full Text
- View/download PDF
40. Image steganography using uncorrelated color space and its application for security of visual contents in online social networks
- Author
-
Seungmin Rho, Khan Muhammad, Irfan Mehmood, Muhammad Sajjad, and Sung Wook Baik
- Subjects
Channel (digital image) ,Steganography ,Computer Networks and Communications ,business.industry ,Image quality ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,YCbCr ,02 engineering and technology ,HSL and HSV ,Color space ,Encryption ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,RGB color model ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Software - Abstract
Image steganography is a growing research field in which sensitive contents are embedded in images while keeping their visual quality intact. Researchers have typically used correlated color spaces such as RGB, where modification of one channel affects the overall quality of the stego-image, which decreases their suitability for steganographic algorithms. Therefore, in this paper, we propose an adaptive LSB substitution method using an uncorrelated color space, increasing imperceptibility while minimizing the chances of detection by the human visual system. In the proposed scheme, the input image is passed through an image scrambler, resulting in an encrypted image that preserves the privacy of the image contents, and is then converted to the HSV color space for further processing. The secret contents are encrypted using an iterative magic matrix encryption algorithm (IMMEA) for better security, producing the cipher contents. An adaptive LSB substitution method is then used to embed the encrypted data inside the V plane of the HSV color model based on a secret key-directed block magic LSB mechanism. The idea of utilizing the HSV color space for data hiding is inspired by its properties, including de-correlation, cost-effective processing, better stego-image quality, and suitability for steganography, as verified by our experiments in comparison with other color spaces such as RGB, YCbCr, HSI, and Lab. The quantitative and qualitative experimental results of the proposed framework, and its application to the security and privacy of visual contents in online social networks (OSNs), confirm its effectiveness in contrast to state-of-the-art methods.
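A simplified sketch of plain LSB embedding in the V plane of HSV with OpenCV; the scrambling, IMMEA encryption, and key-directed block selection steps of the paper are omitted, and the random cover image is a stand-in. The stego data is kept in HSV here so the round trip is exactly reversible.

```python
import cv2
import numpy as np

def embed(cover_bgr, message: bytes):
    """Hide message bits in the least significant bits of the V (value) plane."""
    hsv = cv2.cvtColor(cover_bgr, cv2.COLOR_BGR2HSV)
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    v = hsv[:, :, 2].ravel()
    assert bits.size <= v.size, "message too long for this cover"
    v[:bits.size] = (v[:bits.size] & 0xFE) | bits        # overwrite the LSBs
    hsv[:, :, 2] = v.reshape(hsv.shape[:2])
    return hsv                                           # stego kept in HSV for a lossless demo

def extract(stego_hsv, n_bytes):
    v = stego_hsv[:, :, 2].ravel()
    return np.packbits(v[:n_bytes * 8] & 1).tobytes()

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # stand-in cover image
stego = embed(cover, b"secret")
print(extract(stego, 6))                                         # b'secret'
```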
- Published
- 2018
- Full Text
- View/download PDF
41. Social media signal detection using tweets volume, hashtag, and sentiment analysis
- Author
-
Muazzam Maqsood, Farhan Aadil, Faria Nazir, Irfan Mehmood, Mustansar Ali Ghazanfar, and Seungmin Rho
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Gaussian ,Sentiment analysis ,Volume (computing) ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,symbols.namesake ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Key (cryptography) ,symbols ,The Internet ,Detection theory ,Social media ,Data mining ,business ,computer ,Software - Abstract
Social media is a well-known platform for users to create, share, and check new information. The world has become a global village because of the widespread use of the Internet and social media. The data present on Twitter contain information of great importance, and there is a strong need to extract valuable information from this huge amount of data. A key research challenge in this area is to analyze and process these data and detect signals or spikes. Existing work includes sentiment analysis for Twitter, hashtag analysis, and event detection, but spike/signal detection from Twitter remains an open research area. Along this line of research, we propose a signal detection approach using sentiment analysis of Twitter data (tweet volume, top hashtags, and sentiment). In this paper, we propose three algorithms for signal detection in tweet volume, tweet sentiment, and top hashtags: the average moving threshold algorithm, the Gaussian algorithm, and a hybrid algorithm. The hybrid algorithm is a combination of the average moving threshold algorithm and the Gaussian algorithm. The proposed algorithms are tested on real-time data extracted from Twitter and on two large publicly available datasets, the Saudi Aramco dataset and the BP America dataset. Experimental results show that the hybrid algorithm outperforms the Gaussian and average moving threshold algorithms, achieving a precision of 89% on real-time tweet data, 88% on the Saudi Aramco dataset, and 81% on the BP America dataset, with a recall of 100%.
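A minimal sketch of the moving-threshold idea on an hourly tweet-volume series: flag a signal whenever the current value exceeds the trailing mean by a few standard deviations. The window size, multiplier, and synthetic series are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def moving_threshold_signals(series, window=24, k=3.0):
    """Return indices where the value exceeds (trailing mean + k * trailing std)."""
    series = np.asarray(series, dtype=float)
    signals = []
    for t in range(window, len(series)):
        past = series[t - window:t]
        if series[t] > past.mean() + k * past.std():
            signals.append(t)                 # spike detected at time index t
    return signals

# Synthetic hourly tweet volume with one injected spike at index 100.
volume = np.concatenate([np.random.poisson(50, 100), [400], np.random.poisson(50, 20)])
print(moving_threshold_signals(volume))       # expected output: [100]
```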
- Published
- 2018
- Full Text
- View/download PDF
42. Grey wolf optimization based clustering algorithm for vehicular ad-hoc networks
- Author
-
Zahoor-ur Rehman, Jong Weon Lee, Muhammad Fahad, Salabat Khan, Peer Azmat Shah, Khan Muhammad, Farhan Aadil, Irfan Mehmood, Haoxiang Wang, and Jaime Lloret
- Subjects
Routing protocol ,General Computer Science ,Computer science ,Wireless ad hoc network ,Distributed computing ,020206 networking & telecommunications ,Topology (electrical circuits) ,02 engineering and technology ,Social behaviour ,Wireless resources ,Control and Systems Engineering ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Random topology ,Electrical and Electronic Engineering ,Cluster analysis - Abstract
In vehicular ad-hoc networks (VANETs), frequent topology changes occur due to the fast-moving nature of mobile nodes. This random topology creates instability that leads to scalability issues. To overcome this problem, clustering can be performed. Existing approaches for clustering in VANETs generate a large number of cluster heads, which consume scarce wireless resources and degrade performance. In this article, a grey wolf optimization-based clustering algorithm for VANETs is proposed that replicates the social behaviour and hunting mechanism of grey wolves to create efficient clusters. The linearly decreasing control factor of the grey wolf algorithm enforces earlier convergence, which provides an optimized number of clusters. The proposed method is compared with well-known meta-heuristics from the literature, and the results show that it provides optimal outcomes that lead to a robust routing protocol for the clustering of VANETs, which is appropriate for highways and can accomplish quality communication, confirming reliable delivery of information to each vehicle.
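A generic grey wolf optimization sketch showing the three-leader update and the linearly decreasing factor; the VANET-specific fitness, which would score a candidate set of cluster heads by their count and intra-cluster distances, is abstracted into `fitness` (a toy sphere function is used here).

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, iters=100, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))               # initial wolf positions
    for t in range(iters):
        scores = np.apply_along_axis(fitness, 1, X)
        alpha, beta, delta = X[np.argsort(scores)[:3]]      # three best wolves lead the pack
        a = 2 * (1 - t / iters)                             # linearly decreasing factor (2 -> 0)
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            moves.append(leader - A * np.abs(C * leader - X))   # encircling-prey update
        X = np.clip(sum(moves) / 3, lb, ub)                 # average the three guided moves
    return X[np.argmin(np.apply_along_axis(fitness, 1, X))]

print(gwo(lambda x: np.sum(x ** 2), dim=5))                  # toy objective; converges near the origin
```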
- Published
- 2018
- Full Text
- View/download PDF
43. Utilizing text recognition for the defects extraction in sewers CCTV inspection videos
- Author
-
Suhyeon Im, Irfan Mehmood, L. Minh Dang, Syed Ibrahim Hassan, and Hyeonjoon Moon
- Subjects
Maximally stable extremal regions ,General Computer Science ,Channel (digital image) ,Computer science ,business.industry ,Frame (networking) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,General Engineering ,02 engineering and technology ,Video processing ,HSL and HSV ,Grayscale ,Set (abstract data type) ,021105 building & construction ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Line (text file) ,business - Abstract
This paper proposes a novel automated framework for analyzing and tracking sewer inspection closed-circuit television (CCTV) videos. The proposed model mainly supports the off-site examination and quality management process for these videos and enables efficient re-evaluation of CCTV videos to extract sewer condition data. The study discusses a module that is important for any automated analysis and defect detection in CCTV video. It includes two main modules: text recognition and crack extraction. In the first module, multi-frame integration (MFI) is applied to reduce the background complexity and the time and computational requirements needed for video processing. Maximally stable extremal regions (MSER) are then detected on the grayscale and HSV channels to effectively locate all text edges. The saturation color channel is also used to verify the detected text lines and remove false alarms. In the second module, by utilizing the text information on each frame, the operator's actions during the inspection are simulated, which provides valuable clues about the location and severity of the cracks. The proposed methodology was validated using a set of videos provided by the Korea Institute of Construction Technology.
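A minimal sketch of the text-localisation step: detect stable extremal regions on the grayscale frame with OpenCV's MSER and keep their bounding boxes as candidate text areas. The multi-frame integration, HSV processing, and saturation-based verification steps of the paper are omitted, and the random frame and size filters are stand-in assumptions.

```python
import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)   # stand-in for a CCTV frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)                          # stable regions + bounding boxes

# Crude size/aspect filtering as a proxy for character-shaped regions (thresholds assumed).
candidates = [(x, y, w, h) for (x, y, w, h) in bboxes if 5 < w < 100 and 8 < h < 60]
print(f"{len(candidates)} candidate text regions")
```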
- Published
- 2018
- Full Text
- View/download PDF
44. Egocentric visual scene description based on human-object interaction and deep spatial relations among objects
- Author
-
Sanghyun Seo, Gulraiz Khan, Sung Wook Baik, Irfan Mehmood, Muhammad Usman Ghani, Aiman Siddiqi, and Zahoor-ur-Rehman
- Subjects
Interpretation (logic) ,Computer Networks and Communications ,Computer science ,business.industry ,Deep learning ,Natural language generation ,020207 software engineering ,02 engineering and technology ,Object (computer science) ,Convolutional neural network ,Variety (cybernetics) ,Spatial relation ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Sequential minimal optimization ,Computer vision ,Artificial intelligence ,business ,Software - Abstract
Visual scene interpretation has been a major area of research in recent years. Recognition of human-object interaction is a fundamental step towards understanding visual scenes. Videos can be described via a variety of human-object interaction scenarios, such as when both the human and the object are static (static-static), one is static while the other is dynamic (static-dynamic), or both are dynamic (dynamic-dynamic). This paper presents a unified framework for describing these interactions between humans and a variety of objects, using deep learning as the pivot methodology. Human-object interaction is extracted through conventional machine learning techniques, while spatial relations are captured by training a model through a convolutional neural network. We also address the recognition of human posture in detail to provide an egocentric visual description. After extracting the visual features, sequential minimal optimization is employed to train our model. The extracted interaction, spatial relation, and posture information are fed into a natural language generation module, along with the interacting object's label, to generate the scene description. The proposed framework is evaluated on two state-of-the-art datasets, MSCOCO and the MSR3D Daily Activity dataset, achieving accuracies of 78% and 91.16%, respectively.
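A minimal sketch of the final two stages under hypothetical features and labels: an SVM (scikit-learn's SVC, an SMO-style solver) classifies the interaction type from fused visual features, and a simple template turns the prediction into a sentence. This is not the authors' trained model or sentence generator.

```python
import numpy as np
from sklearn.svm import SVC

X = np.random.rand(100, 32)                                # hypothetical fused interaction/posture/spatial features
y = np.random.choice(["drinking", "reading", "typing"], 100)
clf = SVC(kernel="rbf").fit(X, y)                          # SMO-style SVM training

def describe(features, obj_label):
    action = clf.predict([features])[0]
    return f"A person is {action}, interacting with a {obj_label}."

print(describe(np.random.rand(32), "cup"))
```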
- Published
- 2018
- Full Text
- View/download PDF
45. Ensemble-classifiers-assisted detection of cerebral microbleeds in brain MRI
- Author
-
Muazzam Maqsood, Shuihua Wang, Tayyab Ateeq, Irfan Mehmood, Khan Muhammad, Sung Wook Baik, Muhammad Nadeem Majeed, Syed Muhammad Anwar, Zahoor-ur Rehman, and Jong Weon Lee
- Subjects
General Computer Science ,business.industry ,Computer science ,Feature extraction ,Pattern recognition ,02 engineering and technology ,Quadratic classifier ,Support vector machine ,03 medical and health sciences ,0302 clinical medicine ,Control and Systems Engineering ,Ischemic stroke ,0202 electrical engineering, electronic engineering, information engineering ,False positive paradox ,Brain mri ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,030217 neurology & neurosurgery - Abstract
Cerebral microbleeds (CMBs) are considered an essential indicator in the diagnosis of critical cerebrovascular diseases such as ischemic stroke and dementia. Manual detection of CMBs is prone to errors due to their complex morphological nature. In this paper, an efficient method is presented for CMB detection in susceptibility-weighted imaging (SWI) scans. The proposed framework consists of three phases: i) brain extraction, ii) extraction of initial candidates based on threshold- and size-based filtering, and iii) feature extraction and classification of CMBs against other healthy tissue to remove false positives using support vector machine, quadratic discriminant analysis (QDA), and ensemble classifiers. The proposed technique is validated on a dataset of 20 subjects with CMBs, consisting of 14 subjects for training and 6 subjects for testing. The QDA classifier achieved the best sensitivity of 93.7%, with 56 false positives per patient and 5.3 false positives per CMB.
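A minimal sketch of the final classification stage with hypothetical shape/intensity features and labels: a quadratic discriminant analysis classifier separates true CMB candidates from mimics, and sensitivity (recall) is reported. The feature set and data are illustrative, not the authors'.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X = np.random.rand(500, 10)                  # hypothetical size/circularity/intensity statistics per candidate
y = np.random.randint(0, 2, 500)             # 1 = true CMB, 0 = false-positive candidate (hypothetical)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
print("sensitivity:", recall_score(y_te, qda.predict(X_te)))   # recall on the CMB class
```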
- Published
- 2018
- Full Text
- View/download PDF
46. Machine learning-assisted signature and heuristic-based detection of malwares in Android devices
- Author
-
Irfan Mehmood, Peer Azmat Shah, Khalid Mahmood Awan, Sidra Khan, Zahoor-ur Rehman, Jong Weon Lee, Khan Muhammad, Zhihan Lv, and Sung Wook Baik
- Subjects
Reverse engineering ,General Computer Science ,business.industry ,Computer science ,Decision tree ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Machine learning ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Malware ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,Android (operating system) ,business ,computer - Abstract
Malware detection is an important factor in the security of smart devices. However, the currently used signature-based methods cannot accurately detect zero-day attacks and polymorphic viruses. In this context, an efficient hybrid framework is presented for the detection of malware in Android apps. The proposed framework considers both signature- and heuristic-based analysis of Android apps. We reverse engineered the Android apps to extract manifest files and binaries, and employed state-of-the-art machine learning algorithms to efficiently detect malware. For this purpose, a rigorous set of experiments is performed using various classifiers, such as SVM, decision tree, W-J48, and KNN. It has been observed that SVM for binaries and KNN for manifest.xml files are the most suitable options for robustly detecting malware on Android devices. The proposed framework is tested on benchmark datasets, and the results show improved accuracy in malware detection.
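A minimal sketch of the manifest-based branch with hypothetical data: each app is encoded as a binary vector of requested permissions and a KNN classifier (the abstract's preferred choice for manifest features) labels it. The permission list, feature values, and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

permissions = ["INTERNET", "READ_SMS", "SEND_SMS", "READ_CONTACTS", "CAMERA"]  # assumed feature set
# Rows: apps; columns: 1 if the permission appears in the app's AndroidManifest.xml.
X = np.random.randint(0, 2, (300, len(permissions)))
y = np.random.randint(0, 2, 300)              # 1 = malware, 0 = benign (hypothetical labels)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
new_app = np.array([[1, 1, 1, 0, 0]])         # an app requesting INTERNET + SMS permissions
print("malware" if knn.predict(new_app)[0] else "benign")
```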
- Published
- 2018
- Full Text
- View/download PDF
47. An efficient computerized decision support system for the analysis and 3D visualization of brain tumor
- Author
-
Muhammad Shoaib, Arun Kumar Sangaiah, Sung Wook Baik, Syed Inayat Ali Shah, Khan Muhammad, Muhammad Sajjad, and Irfan Mehmood
- Subjects
Decision support system ,Computer Networks and Communications ,Computer science ,business.industry ,Feature extraction ,Brain tumor ,020207 software engineering ,02 engineering and technology ,Machine learning ,computer.software_genre ,medicine.disease ,Visualization ,Support vector machine ,Hardware and Architecture ,Bag-of-words model ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Medical imaging ,medicine ,Brain magnetic resonance imaging ,Segmentation ,Artificial intelligence ,business ,computer ,Software - Abstract
The quality of health services provided by medical centers varies widely, and there is often a large gap from the optimal standard of service depending on the locality of patients (rural or urban environments). This quality gap can have serious health consequences and major implications for a patient's timely and correct treatment. These deficiencies can manifest, for example, as a lack of quality services, misdiagnosis, medication errors, and the unavailability of trained professionals. In medical imaging, MRI analysis assists radiologists and surgeons in developing patient treatment plans. Accurate segmentation of anomalous tissues and their correct 3D visualization play an important role in appropriate treatment. In this context, we aim to develop an intelligent computer-aided diagnostic system focusing on human brain MRI analysis. We present a brain tumor detection, segmentation, and 3D visualization system that provides quality clinical services regardless of geographical location and the level of expertise of medical specialists. In this research, brain magnetic resonance (MR) images are segmented using a semi-automatic and adaptive threshold selection method. After segmentation, the tumor is classified as malignant or benign based on a bag-of-words (BoW)-driven robust support vector machine (SVM) classification model. The BoW feature extraction method is further enhanced via speeded-up robust features (SURF) by incorporating its interest point selection procedure. Finally, 3D visualization of the brain and tumor is achieved using the marching cubes algorithm, which is used for rendering volumetric medical data. The effectiveness of the proposed system is verified on a dataset collected from 30 patients, achieving 99% accuracy. A subjective comparative analysis is also carried out between the proposed method and two state-of-the-art tools, ITK-SNAP and 3D-Doctor. Experimental results indicate that the proposed system performs better than existing systems and assists radiologists in determining the size, shape, and location of a tumor in the human brain.
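A minimal sketch of the 3D visualization step: run the marching cubes algorithm (here via scikit-image) on a binary tumor mask to obtain a renderable surface mesh. The spherical mask is a synthetic stand-in for a segmented tumor volume.

```python
import numpy as np
from skimage import measure

# Synthetic 64^3 volume with a spherical "tumor" mask (stand-in for a real segmentation).
z, y, x = np.mgrid[0:64, 0:64, 0:64]
tumor_mask = (((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 12 ** 2).astype(np.float32)

# Marching cubes extracts the isosurface as vertices + triangular faces.
verts, faces, normals, values = measure.marching_cubes(tumor_mask, level=0.5)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
# verts/faces can be handed to any mesh viewer, e.g. matplotlib's plot_trisurf.
```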
- Published
- 2018
- Full Text
- View/download PDF
48. Lexical paraphrasing and pseudo relevance feedback for biomedical document retrieval
- Author
-
Seungmin Rho, Zahoor ur Rehman, Irfan Mehmood, Muhammad Usman Ghani, Muhammad Wasim, and Muhammad Nabeel Asim
- Subjects
Computer Networks and Communications ,Intersection (set theory) ,business.industry ,Computer science ,Relevance feedback ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Term (time) ,Domain (software engineering) ,Ranking ,Hardware and Architecture ,Noun ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Selection (linguistics) ,Artificial intelligence ,Document retrieval ,business ,computer ,Software ,Natural language processing - Abstract
Term mismatch is a serious problem affecting the performance of information retrieval systems. The problem is more severe in the biomedical domain, where many term variations, abbreviations, and synonyms exist. We present query paraphrasing and various term selection combination techniques to overcome this problem. To perform paraphrasing, we use noun words to generate synonyms from the Metathesaurus. The newly synthesized paraphrases are ranked using statistical information derived from the corpus, and relevant documents are retrieved based on the top-n selected paraphrases. We compare the results with state-of-the-art pseudo relevance feedback-based retrieval techniques. In a quest to enhance the results of the pseudo relevance feedback approach, we introduce two term selection combination techniques, namely Borda count and intersection. Surprisingly, the combination techniques performed worse than single term selection techniques. Within the pseudo relevance feedback approach, the best algorithms are IG, Rocchio, and KLD, which perform 33%, 30%, and 20% better than the other techniques, respectively. However, the performance of the paraphrasing technique is 20% better than that of the pseudo relevance feedback approach.
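A minimal sketch of a Borda-count fusion of two ranked expansion-term lists: each list awards points by rank and the totals give the merged ranking. The term lists are illustrative, not drawn from the paper's experiments.

```python
def borda_fuse(*rankings):
    """Merge ranked term lists: the top term in a list of length n earns n points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for rank, term in enumerate(ranking):
            scores[term] = scores.get(term, 0) + (n - rank)
    return sorted(scores, key=scores.get, reverse=True)

ig_terms  = ["myocardial", "infarction", "cardiac", "troponin"]    # hypothetical IG ranking
kld_terms = ["cardiac", "ischemia", "myocardial", "angina"]        # hypothetical KLD ranking
print(borda_fuse(ig_terms, kld_terms))   # terms ranked highly by both lists rise to the top
```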
- Published
- 2018
- Full Text
- View/download PDF
49. Multi-scale contrast and relative motion-based key frame extraction
- Author
-
Hammad Majeed, Hangbae Chang, Irfan Mehmood, Sung Wook Baik, and Naveed Ejaz
- Subjects
Visual saliency ,Computer science ,lcsh:TK7800-8360 ,Context (language use) ,Key frame extraction ,02 engineering and technology ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Video summary evaluation ,Electrical and Electronic Engineering ,Visual attention model ,business.industry ,Orientation (computer vision) ,Video summarization ,Search engine indexing ,lcsh:Electronics ,020207 software engineering ,Automatic summarization ,Signal Processing ,Pattern recognition (psychology) ,Key (cryptography) ,Key frame ,Fusion mechanism ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Information Systems ,Semantic gap - Abstract
The huge amount of video data available these days requires effective management techniques for storage, indexing, and retrieval. Video summarization, a method for managing video data, provides concise versions of videos for efficient browsing and retrieval. Key frame extraction is a form of video summarization that selects only the most salient frames from a given video. Since fully automatic semantic understanding of video content is not yet possible, most existing works employ low-level index features for extracting key frames. However, the use of low-level features results in the loss of semantic detail, leading to a semantic gap. In this context, saliency-based user attention modeling can be used to bridge this semantic gap. In this paper, a key frame extraction scheme based on a visual attention mechanism is proposed. The proposed scheme builds a static visual attention model based on multi-scale contrast instead of the usual color contrast. The dynamic visual attention model is developed based on novel relative motion intensity and relative motion orientation measures. An efficient fusion scheme for combining the three visual attention values is then proposed, and a flexible technique is used for key frame extraction. The experimental results demonstrate that the proposed mechanism provides excellent results compared with some of the other prominent techniques in the literature.
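A minimal sketch of the last two steps under illustrative assumptions: the three per-frame attention curves are fused with fixed weights, and frames whose fused score is a local maximum above the mean are kept as key frames (a simple selection rule, not the authors' exact fusion or selection mechanism).

```python
import numpy as np

def fuse(static_att, motion_intensity, motion_orientation, w=(0.4, 0.3, 0.3)):
    """Weighted fusion of the three per-frame attention values (weights assumed)."""
    return w[0] * static_att + w[1] * motion_intensity + w[2] * motion_orientation

def key_frames(fused):
    """Keep frames that are local maxima of the fused curve and exceed its mean."""
    thr = fused.mean()
    return [i for i in range(1, len(fused) - 1)
            if fused[i] > thr and fused[i] >= fused[i - 1] and fused[i] >= fused[i + 1]]

rng = np.random.default_rng(1)
fused = fuse(rng.random(200), rng.random(200), rng.random(200))   # stand-in attention curves
print("key frame indices:", key_frames(fused))
```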
- Published
- 2018
- Full Text
- View/download PDF
50. Visual features based boosted classification of weeds for real-time selective herbicide sprayer systems
- Author
-
Khan Muhammad, Deepak Kumar Jain, Melvyn L. Smith, Imran Ahmad, Wakeel Ahmad, Jamil Ahmad, Irfan Mehmood, Haoxiang Wang, and Lyndon N. Smith
- Subjects
General Computer Science ,Sprayer ,Computer science ,business.industry ,010401 analytical chemistry ,Feature extraction ,General Engineering ,Pattern recognition ,02 engineering and technology ,01 natural sciences ,0104 chemical sciences ,Naive Bayes classifier ,Classifier (linguistics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Segmentation ,False positive rate ,Artificial intelligence ,AdaBoost ,business ,Weed - Abstract
Recent years have shown enthusiastic research interest in weed classification for selective herbicide sprayer systems, which help eradicate unwanted plants such as weeds from fields while minimizing the side effects of chemicals on the environment and crops. Two commonly found weed types are monocots (thin leaf) and dicots (broad leaf), which require separate chemical herbicides for eradication. Researchers have used various computer vision-assisted techniques for the eradication of these weeds. However, the changing and unpredictable lighting conditions in fields make the process of weed detection and identification very challenging. Therefore, in this paper, we present an efficient weed classification framework for real-time selective herbicide sprayer systems that exploits boosted visual features of images containing weeds. The proposed method effectively represents the image using local shape and texture features, which are extracted during the leaf growth stage using an efficient method that preserves the discrimination between various weed species. Such an effective representation allows accurate recognition at early growth stages. Furthermore, illumination problems are minimized prior to feature extraction using an adaptive segmentation algorithm. AdaBoost with Naive Bayes as a base classifier discriminates between the two weed species. The proposed method achieves an overall accuracy of 98.40%, with a true positive rate of 0.983 and a false positive rate of 0.0121 on the original dataset, and 94.72% accuracy on the expanded dataset. The execution time of the proposed method is about 35 milliseconds per image, which is lower than that of state-of-the-art methods.
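A minimal sketch of the classification stage with hypothetical shape/texture features: AdaBoost boosting a Gaussian Naive Bayes base learner to separate monocot from dicot weeds. The feature values and labels are illustrative, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.random.rand(400, 12)                       # hypothetical leaf shape/texture descriptors
y = np.random.randint(0, 2, 400)                  # 0 = monocot, 1 = dicot (hypothetical labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Naive Bayes as the boosted base learner; use `base_estimator=` on scikit-learn < 1.2.
model = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```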
- Published
- 2018
- Full Text
- View/download PDF