9 results for "Hazem Abdelkawy"
Search Results
2. MonoCInIS: Camera Independent Monocular 3D Object Detection using Instance Segmentation.
- Author
- Jonas Heylen, Mark De Wolf, Bruno Dawagne, Marc Proesmans, Luc Van Gool, Wim Abbeloos, Hazem Abdelkawy, and Daniel Olmeda Reino
- Published
- 2021
- Full Text
- View/download PDF
3. Towards Semantic Multimodal Emotion Recognition for Enhancing Assistive Services in Ubiquitous Robotics.
- Author
- Naouel Ayari, Hazem Abdelkawy, Abdelghani Chibani, and Yacine Amirat
- Published
- 2017
4. Hybrid Model-Based Emotion Contextual Recognition for Cognitive Assistance Services
- Author
- Abdelghani Chibani, Naouel Ayari, Yacine Amirat, and Hazem Abdelkawy
- Subjects
Knowledge representation and reasoning, Computer science, Emotions, Cognition, Context (language use), Ontology (information science), Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Multilayer perceptron, Pattern recognition (psychology), Upper ontology, Robot, Neural networks, Electrical and Electronic Engineering, Software, Information Systems
- Abstract
Endowing ubiquitous robots with cognitive capabilities for recognizing the emotions, sentiments, affects, and moods of humans in context is an important challenge, one that requires sophisticated and novel approaches to emotion recognition. Most studies explore data-driven pattern recognition techniques, which are generally highly dependent on training data and insufficiently effective for emotion contextual recognition. In this article, a hybrid model-based emotion contextual recognition approach for cognitive assistance services in ubiquitous environments is proposed. The model is based on: 1) a hybrid-level fusion combining a multilayer perceptron (MLP) neural-network model with possibilistic logic and 2) an expressive emotional knowledge representation and reasoning model for recognizing emotions that are not directly observable; this model jointly exploits the emotion upper ontology (EmUO) and the n-ary ontology of events HTemp supported by the NKRL language. To validate the proposed approach, experiments were carried out on a YouTube dataset and in a real-world scenario dedicated to the cognitive assistance of visitors in a smart-devices showroom. Results demonstrated that the proposed multimodal emotion recognition model outperforms all baseline models. The real-world scenario corroborates the effectiveness of the proposed approach for emotion contextual recognition and management and for the creation of emotion-based assistance services.
- Published
- 2022
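The hybrid-level fusion described in entry 4's abstract, an MLP's class scores combined with possibilistic logic, can be sketched in miniature. The function name, the min-conjunction rule, and the renormalisation below are illustrative assumptions of ours, not the paper's actual method:

```python
# Hedged sketch: fuse an MLP's per-emotion scores with context-derived
# possibility degrees via the min-conjunction of possibility theory,
# then renormalise so the best emotion has possibility 1.

def fuse_possibilistic(mlp_scores, context_possibility):
    """Combine MLP confidence scores with contextual possibility degrees.
    Emotions absent from the context are treated as fully possible (1.0)."""
    fused = {e: min(mlp_scores[e], context_possibility.get(e, 1.0))
             for e in mlp_scores}
    top = max(fused.values()) or 1.0  # avoid division by zero
    return {e: v / top for e, v in fused.items()}

mlp_scores = {"joy": 0.7, "anger": 0.2, "neutral": 0.1}  # MLP output
context = {"anger": 0.3}  # context makes anger barely possible
print(fuse_possibilistic(mlp_scores, context))
```

In possibility theory, min acts as conjunction and the best candidate is conventionally renormalised to possibility 1, which is what the sketch mimics.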
5. Leveraging Recent Advances in Deep Learning for Audio-Visual Emotion Recognition
- Author
- Alice Othmani, Hazem Abdelkawy, and Liam Schoneveld
- Subjects
Computer Science - Sound (cs.SD), Computer Science - Machine Learning (cs.LG), Computer Science - Computer Vision and Pattern Recognition (cs.CV), Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS), Computer science, Speech recognition, Feature extraction, Artificial Intelligence, Emotional expression, Affective computing, Facial expression, Deep learning, Recurrent neural network, Signal Processing, Computer Vision and Pattern Recognition, Software, Gesture
- Abstract
Emotional expressions are the behaviors that communicate our emotional state or attitude to others. They are expressed through verbal and non-verbal communication. Complex human behavior can be understood by studying physical features from multiple modalities, mainly facial, vocal, and physical gestures. Recently, spontaneous multimodal emotion recognition has been extensively studied for human behavior analysis. In this paper, we propose a new deep learning-based approach for audio-visual emotion recognition. Our approach leverages recent advances in deep learning such as knowledge distillation and high-performing deep architectures. The deep feature representations of the audio and visual modalities are fused using a model-level fusion strategy. A recurrent neural network is then used to capture the temporal dynamics. Our proposed approach substantially outperforms state-of-the-art approaches in predicting valence on the RECOLA dataset. Moreover, our proposed visual facial expression feature extraction network outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets.
- Comment: 8 pages, 3 figures; Pattern Recognition Letters
- Published
- 2021
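The model-level fusion plus recurrent temporal modelling described in entry 5's abstract can be caricatured in plain Python. The concatenation-based fusion and the exponential-smoothing recurrence below are illustrative stand-ins for the learned deep features and RNN; all names are ours, not the paper's:

```python
# Hedged sketch: model-level fusion of per-frame audio and visual feature
# vectors, followed by a toy recurrent update standing in for an RNN.

def fuse(audio_feat, visual_feat):
    """Model-level fusion: concatenate the two modalities' feature vectors."""
    return audio_feat + visual_feat  # list concatenation

def temporal_smooth(frames, alpha=0.5):
    """Toy recurrence h_t = alpha*h_{t-1} + (1-alpha)*x_t, mimicking how
    an RNN carries context across a sequence of fused frames."""
    h = [0.0] * len(frames[0])
    out = []
    for x in frames:
        h = [alpha * hi + (1 - alpha) * xi for hi, xi in zip(h, x)]
        out.append(h)
    return out

frames = [fuse([1.0], [0.0]), fuse([1.0], [1.0])]  # two fused frames
print(temporal_smooth(frames))
```

A real implementation would replace the smoothing with a trained GRU/LSTM over CNN-extracted features, but the data flow (fuse per frame, then recur over time) is the same.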
6. MonoCInIS: Camera Independent Monocular 3D Object Detection using Instance Segmentation
- Author
- Wim Abbeloos, Marc Proesmans, Daniel Olmeda Reino, Mark De Wolf, Jonas Heylen, Bruno Dawagne, Luc Van Gool, and Hazem Abdelkawy
- Subjects
Computer Science - Computer Vision and Pattern Recognition (cs.CV), Computer Science - Artificial Intelligence (cs.AI), Computer Science - Robotics (cs.RO), Monocular, Pixel, Computer science, Intrinsics, Object detection, Computer vision, Segmentation, Artificial intelligence, Pose
- Abstract
Monocular 3D object detection has recently shown promising results; however, challenging problems remain. One of these is the lack of invariance to different camera intrinsic parameters, which can be observed across different 3D object datasets. Little effort has been made to exploit combinations of heterogeneous 3D object datasets. Contrary to general intuition, we show that more data does not automatically guarantee better performance; rather, methods need a degree of 'camera independence' in order to benefit from large and heterogeneous training data. In this paper we propose a category-level pose estimation method based on instance segmentation, using camera-independent geometric reasoning to cope with the varying camera viewpoints and intrinsics of different datasets. Every pixel of an instance predicts the object dimensions, the 3D object reference points projected into 2D image space and, optionally, the local viewing angle. Camera intrinsics are used only outside the learned network, to lift the predicted 2D reference points to 3D. We surpass camera-independent methods on the challenging KITTI3D benchmark and show the key benefits compared to camera-dependent methods.
- Comment: Accepted to ICCV 2021 Workshop on 3D Object Detection from Images
- Published
- 2021
- Full Text
- View/download PDF
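The "lifting" step in entry 6's abstract, where camera intrinsics are applied outside the network to back-project predicted 2D reference points to 3D, corresponds to the standard pinhole camera model. The sketch below assumes a per-point depth is available; the function name is illustrative, not from the paper:

```python
# Hedged sketch: back-project a pixel to 3D with known camera intrinsics.
# This is the standard pinhole model, kept outside any learned network so
# the network itself stays camera-independent.

def lift_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at the given depth through a pinhole
    camera with focal lengths (fx, fy) and principal point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point always lifts onto the optical axis:
print(lift_to_3d(640.0, 360.0, 10.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0))
# (0.0, 0.0, 10.0)
```

Because the intrinsics enter only here, the same trained network can be reused across datasets with different cameras, which is the point the abstract makes.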
7. Risk factors and management of different types of biliary injuries in blunt abdominal trauma: Single-center retrospective cohort study
- Author
- Talaat Zakareya, Nahla K. Gaballa, Ahmed Oteem, Hesham Abdeldayem, Hazem Omar, Ali Nada, Emad Hamdy Gad, O. Hegazy, Hazem Abdelkawy, and Hazem M Zakaria
- Subjects
Percutaneous, Blunt liver trauma, Biliary injury, ERCP, Bile leak, Endoscopic retrograde cholangiopancreatography, Haemobilia, Retrospective cohort study, General Medicine, Surgery, Abdominal trauma, Blunt trauma, Original Research
- Abstract
Background: Biliary injuries after blunt abdominal trauma are uncommon and difficult to predict early enough for management. The aim of this study is to analyze the risk factors and management of biliary injuries associated with blunt abdominal trauma. Method: Patients with blunt liver trauma between 2009 and May 2019 were included in the study. Patients were divided into two groups for comparison: a liver parenchymal injury group and a traumatic biliary injury (TBI) group. Results: One hundred and eight patients had blunt liver trauma (46 with liver parenchymal injury and 62 with TBI). The TBIs comprised 55 patients with bile leak, 3 with haemobilia, and 4 with late obstructive jaundice. Eight patients with major bile leak and 12 with minor bile leak were managed with a surgical drain or percutaneous pigtail drainage. Nineteen patients (34.5%) with major or minor bile leak underwent successful endoscopic retrograde cholangiopancreatography (ERCP). Sixteen patients (29.1%) underwent surgical repair for bile leak. In multivariate analysis, the possible risk factors for predicting biliary injuries were central liver injuries (P = 0.032), high-grade liver trauma (P = 0.046), elevated serum bilirubin at admission (P = 0.019), and elevated gamma-glutamyl transferase (GGT) at admission (P = 0.017). Conclusion: High-grade liver trauma, central parenchymal laceration, and elevated serum bilirubin and GGT are possible risk factors for predicting TBI. Bile leak after blunt trauma can be treated conservatively, while ERCP is indicated after failure of external drainage.
- Highlights
• Most published series discuss iatrogenic biliary injuries or injuries after penetrating trauma.
• To our knowledge, this is the largest series to discuss biliary injuries with blunt liver trauma.
• We identify possible risk factors for bile duct injury after blunt liver trauma, so it can be diagnosed and treated properly and early, before sepsis and biliary complications.
• We suggest the ideal treatment modality, with proper timing, for each type of biliary injury.
- Published
- 2020
8. Age estimation from faces using deep learning: A comparative analysis
- Author
- Abdenour Hadid, Hazem Abdelkawy, Alice Othmani, and Abdul Rahman Taleb
- Subjects
Comparative analysis, Computer science, Convolutional neural network, Facial recognition system, Deep ageing-patterns learning, Robustness (computer science), Deep learning, Pattern recognition, Knowledge transfer, Cross-domain age estimation, Automatic age estimation, Signal Processing, Computer Vision and Pattern Recognition, Noise (video), Transfer learning, Software
- Abstract
Automatic Age Estimation (AAE) has attracted attention due to the wide variety of possible applications. However, it is a challenging task because of the large variation in facial appearance and several other extrinsic and intrinsic factors. Most approaches proposed in the literature use hand-crafted features to encode ageing patterns. Features learned by Convolutional Neural Networks (CNNs) usually perform better than hand-crafted ones. The main contribution of this paper is an extensive comparative analysis of several frameworks for real AAE based on deep learning architectures. Different well-known CNN architectures are considered and their performances compared. The MORPH, FG-NET, FACES, PubFig, and CASIA-WebFace datasets are used in our experiments. The robustness of the best deep estimator is evaluated under noise, expression changes, "crossing" ethnicity, and "crossing" gender. The experimental results demonstrate the high performance of the popular CNN frameworks against state-of-the-art methods of automatic age estimation. A layer-wise transfer-learning evaluation is performed to study the optimal number of layers to fine-tune for the AAE task. An evaluation of knowledge transfer from the face recognition task to AAE is also performed. We have made our best-performing CNN models publicly available to allow replication of the results and further research on the use of CNNs for AAE from face images.
- Published
- 2020
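The layer-wise transfer-learning evaluation mentioned in entry 8's abstract amounts to sweeping a cut-off k: layers below k keep their pretrained weights, layers above it are fine-tuned. A toy sketch of that sweep follows; the layer names and helper function are illustrative, not the paper's code:

```python
# Hedged sketch: layer-wise fine-tuning plan. For each cut-off k, layers
# with index < k are frozen (pretrained weights reused as-is) and the
# rest are marked trainable; sweeping k locates the best split.

def split_trainable(layers, freeze_up_to):
    """Return (layer_name, is_trainable) pairs for a given cut-off."""
    return [(name, idx >= freeze_up_to) for idx, name in enumerate(layers)]

layers = ["conv1", "conv2", "conv3", "fc1", "fc2"]  # illustrative names
for k in range(len(layers) + 1):
    tuned = [name for name, trainable in split_trainable(layers, k) if trainable]
    print(k, tuned)
```

In a real framework this corresponds to disabling gradient updates for the frozen layers before fine-tuning on the age-estimation data and comparing validation error across values of k.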
9. Living Donor Liver Transplantation for Patients with Pre-existent Portal Vein Thrombosis
- Author
- Nahla K. Gaballa, T. Ibrahim, Emad Hamdy Gad, O. Hegazy, Doha Maher, Rasha Abdelhafiz, Mohammad Taha, H. Soliman, Hazem Abdelkawy, Dina Elazab, Talaat Zakareya, Khaled Abou El-Ella, Mohamed Abbasy, and Hazem M Zakaria
- Subjects
Cirrhosis, Perioperative, Anastomosis, Liver transplantation, Portal vein thrombosis, Surgery, Dysphagia, Percutaneous endoscopic gastrostomy, Living donor liver transplantation
- Abstract
Background: Portal vein thrombosis (PVT) in living donor liver transplantation (LDLT) is a surgical challenge with technical difficulty. The aim of this study was to analyze the operative planning for management of PVT in LDLT and the impact of PVT on outcome compared with patients without PVT. Methods: Between July 2003 and August 2016, 213 patients underwent LDLT. The patients were divided into two groups, with and without PVT, and the preoperative, operative, and postoperative data were analysed. Results: Thirty-six patients (16.9%) had different grades of PVT at the time of liver transplantation (LT): grades I, II, III, and IV in 18 (50%), 14 (38.9%), 3 (8.3%), and 1 patient (2.8%), respectively. PVT was managed by thrombectomy in 31 patients (86%), bypass graft in 2 (5.6%), portal replacement graft in 1 (2.8%), anastomosis with the left renal vein in 1 (2.8%), and anastomosis with a large collateral vein in 1 (2.8%). Overall, postoperative PVT occurred in 10 patients (4.7%), 4 of whom had preoperative PVT. Perioperative mortality in patients with and without PVT was 33.3% and 20.3%, respectively (P = 0.17). The 1-, 3-, 5-, and 7-year survival in patients with PVT was 49.7%, 46.2%, 46.2%, and 46.2%, respectively, versus 65%, 53.7%, 50.8%, and 49% in patients without PVT (P = 0.29). Conclusions: Preoperative PVT may not keep a patient from undergoing successful LT, with outcomes comparable to patients without PVT, especially with partial PVT.
- Published
- 2017
Discovery Service for Jio Institute Digital Library