325,676 results on '"Mohamad IS"'
Search Results
402. Properties, Purification, and Applications of Phosphogypsum: A Comprehensive Review Towards Circular Economy
- Author
-
Awad, Said, Essam, Mohamad, Boukhriss, Aicha, Kamar, Mohamed, and Midani, Mohamad
- Published
- 2024
403. A Review on Teaching and Learning in Decision-Making Post-Pandemic COVID-19
- Author
-
Noor Maizura Mohamad Noor, Mohamad Jazli Shafizan Jaafar, Rosmayati Mohemad, and Noor Azliza Che Mat
- Abstract
Universities around the globe have begun implementing outcome-based education (OBE). As part of this OBE implementation, the curriculum will be revised and evaluated, and the outcomes of the assessments will be reported. Because of the importance of the evaluation method, several lecturers are searching for innovative approaches to assessing the effectiveness of the Program Learning Outcome (PLO) and Course Learning Outcome (CLO). This study examined outcome-based education for decision-making teaching and learning in the context of the COVID-19 pandemic. The pandemic has had effects such as the exhaustion of the healthcare system, disruption of the educational system, and harm to the economy and several other industries. E-learning platforms played a crucial role in helping schools and universities throughout the pandemic by allowing students to continue learning while campuses were closed. The rising need for qualified professionals in education creates a greater need for lifelong learning, while current trends favour the paradigms of social and practice-based learning. As a result of digitization, our methods of communication and education are changing. The teaching and learning that took place during COVID-19 had a substantial impact on outcome-based learning, and we show how COVID-19 affects OBE in a risk decision support system. [For the complete proceedings, see ED655360.]
- Published
- 2023
404. Cybersecurity Pathways Towards CE-Certified Autonomous Forestry Machines
- Author
-
Mohamad, Mazen, Avula, Ramana Reddy, Folkesson, Peter, Kleberger, Pierre, Mirzai, Aria, Skoglund, Martin, and Damschen, Marvin
- Subjects
Computer Science - Software Engineering - Abstract
The increased importance of cybersecurity in autonomous machinery is becoming evident in the forestry domain. Forestry worksites are becoming more complex, involving multiple systems and systems of systems. Hence, there is a need to investigate how to address cybersecurity challenges for autonomous systems of systems in the forestry domain. Using a literature review, standards adapted from similar domains, and collaborative sessions with domain experts, we identify challenges towards CE-certified autonomous forestry machines, focusing on cybersecurity and safety. Furthermore, we discuss the relationship between safety and cybersecurity risk assessment and their relation to AI, highlighting the need for a holistic methodology for their assurance.
- Published
- 2024
405. Thermal Performance of a Liquid-cooling Assisted Thin Wickless Vapor Chamber
- Author
-
Mukhopadhyay, Arani, Pal, Anish, Gukeh, Mohamad Jafari, and Megaridis, Constantine M.
- Subjects
Physics - Applied Physics ,Computer Science - Hardware Architecture ,Electrical Engineering and Systems Science - Systems and Control - Abstract
The ever-increasing power consumption of electronic devices, coupled with the requirement for thinner form factors, calls for the development of efficient heat-spreading components. Vapor chambers (VCs), because of their ability to effectively spread heat over a large area by two-phase heat transfer, seem ideal for such applications. However, creating thin and efficient vapor chambers that work over a wide range of power inputs is a persisting challenge. VCs that use wicks to circulate the phase-changing medium suffer from capillary restrictions, dry-out, clogging, and increased size and weight, and can often be costly. Recent developments in wick-free wettability-patterned vapor chambers replace traditional wicks with laser-fabricated wickless components. An experimental setup allows for fast testing and experimental evaluation of water-charged VCs with liquid-assisted cooling. The sealed chamber can maintain vacuum for long durations and can be used for testing very thin wick-free VCs. This work extends our previous study by decreasing the overall thickness of the wick-free VC down to 3 mm and evaluates its performance. Furthermore, the impact of wettability patterns on VC performance is investigated by carrying out experiments in both non-patterned and patterned VCs. Experiments are first carried out on a wick-free VC with no wettability patterns, comprising an entirely superhydrophilic evaporator coupled with a hydrophobic condenser. Thereafter, wettability patterns that aid the rapid return of water to the heated site on the evaporator and improve condensation on the condenser of the vapor chamber are implemented. The thermal characteristics show that the patterned VCs outperform the non-patterned VCs under all scenarios. The patterned VCs exhibit low thermal resistance independent of fluid charging ratio, withstanding higher power inputs without thermal dry-out., Comment: Presented at IEEE ITherm (Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems) 2023. Orlando, FL, US. Corresponding: cmm@uic.edu
- Published
- 2024
406. Evaluation of Thermal Performance of a Wick-free Vapor Chamber in Power Electronics Cooling
- Author
-
Mukhopadhyay, Arani, Pal, Anish, Bao, Congbo, Gukeh, Mohamad Jafari, Mazumder, Sudip K., and Megaridis, Constantine M.
- Subjects
Electrical Engineering and Systems Science - Systems and Control ,Computer Science - Hardware Architecture ,Physics - Applied Physics - Abstract
Efficient thermal management in high-power electronics cooling can be achieved using phase-change heat transfer devices, such as vapor chambers. Traditional vapor chambers use wicks to transport condensate for efficient thermal exchange and to prevent "dry-out" of the evaporator. However, wicks in vapor chambers present significant design challenges arising from large pressure drops across the wicking material, which slow down condensate transport rates and increase the chances of dry-out. Thicker wicks add to overall thermal resistance, while deterring the development of thinner devices by limiting the total thickness of the vapor chamber. Wickless vapor chambers eliminate the use of metal wicks entirely by incorporating complementary wettability-patterned flat plates on both the evaporator and the condenser side. Such surface modifications enhance fluid transport on the evaporator side, while allowing the chambers to be virtually as thin as imaginable, thereby permitting the design of thermally efficient thin electronic cooling devices. While wick-free vapor chambers have been studied and efficient design strategies have been suggested, we delve into real-life applications of wick-free vapor chambers in forced air cooling of high-power electronics. An experimental setup is developed wherein two Si-based MOSFETs in TO-247-3 packaging, which have high conduction resistance, are connected in parallel and switched at 100 kHz to emulate high-frequency power electronics operation. A rectangular copper wick-free vapor chamber spreads heat laterally over a surface 13 times larger than the heating area. This chamber is cooled externally by a fan that circulates air at room temperature. The present experimental setup extends our previous work on wick-free vapor chambers, while demonstrating the effectiveness of low-cost air cooling in vapor-chamber enhanced high-power electronics applications., Comment: Presented at IEEE ITherm (Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems) 2023, Orlando FL. Corresponding author: cmm@uic.edu
- Published
- 2024
407. Managing Security Evidence in Safety-Critical Organizations
- Author
-
Mohamad, Mazen, Steghöfer, Jan-Philipp, Knauss, Eric, and Scandariato, Riccardo
- Subjects
Computer Science - Software Engineering ,Computer Science - Cryptography and Security - Abstract
With the increasing prevalence of open and connected products, cybersecurity has become a serious issue in safety-critical domains such as the automotive industry. As a result, regulatory bodies have become more stringent in their requirements for cybersecurity, necessitating security assurance for products developed in these domains. In response, companies have implemented new or modified processes to incorporate security into their product development lifecycle, resulting in a large amount of evidence being created to support claims about the achievement of a certain level of security. However, managing evidence is not a trivial task, particularly for complex products and systems. This paper presents a qualitative interview study conducted in six companies on the maturity of managing security evidence in safety-critical organizations. We find that the current maturity of managing security evidence is insufficient for the increasing requirements set by certification authorities and standardization bodies. Organizations currently fail to identify relevant artifacts as security evidence and to manage this evidence on an organizational level. Part of the reason is educational gaps; the other is a lack of processes. The impact of AI on the management of security evidence is still an open question.
- Published
- 2024
408. Adapting Open-Source Large Language Models for Cost-Effective, Expert-Level Clinical Note Generation with On-Policy Reinforcement Learning
- Author
-
Wang, Hanyin, Gao, Chufan, Liu, Bolun, Xu, Qiping, Hussein, Guleid, Labban, Mohamad El, Iheasirim, Kingsley, Korsapati, Hariprasad, Outcalt, Chuck, and Sun, Jimeng
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Proprietary Large Language Models (LLMs) such as GPT-4 and Gemini have demonstrated promising capabilities in clinical text summarization tasks. However, due to patient data privacy concerns and computational costs, many healthcare providers prefer using small, locally-hosted models over external generic LLMs. This study presents a comprehensive domain- and task-specific adaptation process for the open-source LLaMA-2 13 billion parameter model, enabling it to generate high-quality clinical notes from outpatient patient-doctor dialogues. Our process incorporates continued pre-training, supervised fine-tuning, and reinforcement learning from both AI and human feedback. We introduced a new approach, DistillDirect, for performing on-policy reinforcement learning with Gemini 1.0 Pro as the teacher model. Our resulting model, LLaMA-Clinic, can generate clinical notes comparable in quality to those authored by physicians. In a blinded physician reader study, the majority (90.4%) of individual evaluations rated the notes generated by LLaMA-Clinic as "acceptable" or higher across all three criteria: real-world readiness, completeness, and accuracy. In the more challenging "Assessment and Plan" section, LLaMA-Clinic scored higher (4.2/5) in real-world readiness than physician-authored notes (4.1/5). Our cost analysis for inference shows that our LLaMA-Clinic model achieves a 3.75-fold cost reduction compared to an external generic LLM service. Additionally, we highlight key considerations for future clinical note-generation tasks, emphasizing the importance of pre-defining a best-practice note format, rather than relying on LLMs to determine this for clinical practice. We have made our newly created synthetic clinic dialogue-note dataset and the physician feedback dataset publicly available to foster future research.
- Published
- 2024
409. Does carrier localization affect the anomalous Hall effect?
- Author
-
Chowdhury, Prasanta, Numan, Mohamad, Gupta, Shuvankar, Chatterjee, Souvik, Giri, Saurav, and Majumdar, Subham
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
The effect of carrier localization due to electron-electron interaction on the anomalous Hall effect is elusive, and there are contradictory results in the literature. To address the issue, we report a detailed transport study, including Hall measurements, on the $\beta$-Mn-type cubic compound Co$_7$Zn$_7$Mn$_6$ with a chiral crystal structure, which lacks global mirror symmetry. The alloy orders magnetically below $T_c$ = 204 K and is reported to show a spin-glass state at low temperature. The longitudinal resistivity ($\rho_{xx}$) shows a pronounced upturn below $T_{min}$ = 75 K, which is found to be associated with carrier localization due to a quantum interference effect. The upturn in $\rho_{xx}$ shows a $T^{1/2}$ dependence and is practically insensitive to the externally applied magnetic field, which indicates that electron-electron interaction is primarily responsible for the low-$T$ upturn. The studied sample shows a considerable anomalous Hall effect below $T_c$. We find that the localization effect is present in the ordinary Hall coefficient ($R_0$), but we fail to observe any signature of localization in the anomalous Hall resistivity or conductivity. The absence of the localization effect in the anomalous Hall effect in Co$_7$Zn$_7$Mn$_6$ may be due to the large carrier density, and it warrants further theoretical investigation, particularly for systems with broken mirror symmetry., Comment: 9 pages, 5 figures
- Published
- 2024
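For orientation, the two quantities this abstract refers to are commonly written as follows; this is a generic textbook parameterization, not an equation taken from the paper (the coefficient $a$ and the anomalous coefficient $R_s$ are fit parameters):

```latex
% Illustrative only: a common form of a low-temperature resistivity upturn driven by
% 3D electron-electron interaction, and the usual split of the Hall resistivity into
% ordinary (R_0) and anomalous (R_s) parts.
\begin{align}
  \rho_{xx}(T) &\simeq \rho_0 - a\,T^{1/2}, && T \lesssim T_{\min}, \\
  \rho_{xy}(B) &= R_0\,B + R_s\,\mu_0 M(B).
\end{align}
```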
410. Lessons from the Use of Natural Language Inference (NLI) in Requirements Engineering Tasks
- Author
-
Fazelnia, Mohamad, Koscinski, Viktoria, Herzog, Spencer, and Mirakhorli, Mehdi
- Subjects
Computer Science - Software Engineering ,Computer Science - Computation and Language ,Computer Science - Machine Learning - Abstract
We investigate the use of Natural Language Inference (NLI) in automating requirements engineering tasks. In particular, we focus on three tasks: requirements classification, identification of requirements specification defects, and detection of conflicts in stakeholders' requirements. While previous research has demonstrated significant benefit in using NLI as a universal method for a broad spectrum of natural language processing tasks, these advantages have not been investigated within the context of software requirements engineering. Therefore, we design experiments to evaluate the use of NLI in requirements analysis. We compare the performance of NLI with a spectrum of approaches, including prompt-based models, conventional transfer learning, Large Language Models (LLMs)-powered chatbot models, and probabilistic models. Through experiments conducted under various learning settings including conventional learning and zero-shot, we demonstrate conclusively that our NLI method surpasses classical NLP methods as well as other LLMs-based and chatbot models in the analysis of requirements specifications. Additionally, we share lessons learned characterizing the learning settings that make NLI a suitable approach for automating requirements engineering tasks.
- Published
- 2024
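The paper does not publish its implementation; purely as a hedged illustration of how NLI can be repurposed for zero-shot requirements classification (the model choice, label set, and hypothesis template below are assumptions, not the authors'), a minimal sketch with an off-the-shelf NLI model:

```python
from transformers import pipeline

# Zero-shot requirements classification via NLI: each candidate label is turned into a
# hypothesis ("This requirement concerns <label>.") and scored against the requirement
# text, which acts as the premise.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

requirement = "The system shall encrypt all stored user credentials."
labels = ["security", "performance", "usability", "functional behaviour"]  # placeholder taxonomy

result = classifier(
    requirement,
    candidate_labels=labels,
    hypothesis_template="This requirement concerns {}.",
)
print(result["labels"][0], round(result["scores"][0], 3))
```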
411. Joint Soil and Above-Ground Biomass Characterization Using Radars
- Author
-
Jacobs, Luke, Alipour, Mohamad, Watts, Adam, and Soltanaghai, Elahe
- Subjects
Electrical Engineering and Systems Science - Signal Processing - Abstract
Soil moisture sensing through biomass or vegetation canopy has challenged researchers, even those who use SAR sensors with penetration capabilities. This is mainly due to the imposed extra time and phase offsets on Radio Frequency (RF) signals as they travel through the canopy. These offsets depend on the vegetation canopy moisture and height, both of which are typically unknown in agricultural and forest fields. In this paper, we leverage the mobility of an unmanned aerial system (UAS) to collect spatially-diverse radar measurements, enabling the joint estimation of soil moisture, above-ground biomass moisture, and biomass height, all without assuming any calibration steps. We leverage the changes in time-of-flight (ToF) and angle-of-arrival (AoA) measurements of reflected radar signals as the UAS flies above a reflector buried under the soil. We demonstrate the effectiveness of our algorithm by simulating its performance under realistic measurement noises as well as conducting lab experiments with different types of above-ground biomass. Our simulation results conclude that our algorithm is capable of estimating volumetric soil moisture to less than 1% median absolute error (MAE), vegetation height to 11.1cm MAE, and vegetation relative permittivity to 0.32 MAE. Our experimental results demonstrate the effectiveness of the proposed method in practical scenarios for varying biomass moistures and heights.
- Published
- 2024
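To make the "extra time offsets" mentioned in the abstract concrete, a simplified two-layer geometry (this is an illustrative slab model at normal incidence, not the paper's full formulation) gives:

```latex
% Two-way time of flight from a UAS at height H above a canopy of height h (relative
% permittivity eps_v) to a reflector buried at depth d in soil (permittivity eps_s):
\begin{equation}
  \tau \;\approx\; \frac{2}{c}\left( H + h\sqrt{\varepsilon_v} + d\sqrt{\varepsilon_s} \right),
\end{equation}
% so the canopy alone adds an excess delay of 2h(\sqrt{\varepsilon_v}-1)/c relative to free
% space, which is why spatially diverse ToF/AoA measurements are needed to separate
% \varepsilon_v, \varepsilon_s and h without calibration.
```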
412. Beyond Code Generation: An Observational Study of ChatGPT Usage in Software Engineering Practice
- Author
-
Khojah, Ranim, Mohamad, Mazen, Leitner, Philipp, and Neto, Francisco Gomes de Oliveira
- Subjects
Computer Science - Software Engineering ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language ,Computer Science - Human-Computer Interaction ,Computer Science - Machine Learning - Abstract
Large Language Models (LLMs) are frequently discussed in academia and the general public as support tools for virtually any use case that relies on the production of text, including software engineering. Currently there is much debate, but little empirical evidence, regarding the practical usefulness of LLM-based tools such as ChatGPT for engineers in industry. We conduct an observational study of 24 professional software engineers who have been using ChatGPT over a period of one week in their jobs, and qualitatively analyse their dialogues with the chatbot as well as their overall experience (as captured by an exit survey). We find that, rather than expecting ChatGPT to generate ready-to-use software artifacts (e.g., code), practitioners more often use ChatGPT to receive guidance on how to solve their tasks or learn about a topic in more abstract terms. We also propose a theoretical framework for how (i) purpose of the interaction, (ii) internal factors (e.g., the user's personality), and (iii) external factors (e.g., company policy) together shape the experience (in terms of perceived usefulness and trust). We envision that our framework can be used by future research to further the academic discussion on LLM usage by software engineering practitioners, and to serve as a reference point for the design of future empirical LLM research in this domain., Comment: Accepted at the ACM International Conference on the Foundations of Software Engineering (FSE) 2024
- Published
- 2024
413. Empirical stability criteria for 3D hierarchical triple systems I: Circumbinary planets
- Author
-
Georgakarakos, Nikolaos, Eggl, Siegfried, Ali-Dib, Mohamad, and Dobbs-Dixon, Ian
- Subjects
Astrophysics - Earth and Planetary Astrophysics - Abstract
In this work we revisit the problem of the dynamical stability of hierarchical triple systems with applications to circumbinary planetary orbits. We carry out more than 3 × 10^8 numerical simulations of planets between the size of Mercury and the lower fusion boundary (13 Jupiter masses) which revolve around the center of mass of a stellar binary over long timescales. For the first time, three dimensional and eccentric planetary orbits are considered. We explore systems with a variety of binary and planetary mass ratios, binary and planetary eccentricities from 0 to 0.9 and orbital mutual inclinations ranging from 0 to 180 degrees. The simulation time is set to 10^6 planetary orbital periods. We classify the results of those long term numerical integrations into three categories: stable, unstable and mixed. We provide empirical expressions in the form of multidimensional, parameterized fits for the two borders that separate the three dynamical domains. In addition, we train a machine learning model on our data set in order to have an alternative tool of predicting the stability of circumbinary planets. Both the empirical fits and the machine learning model are tested against randomly generated circumbinary systems with very good results regarding the predictions of orbital stability. The empirical formulae are also applied to the Kepler and TESS circumbinary systems, confirming the stability of the planets in these systems. Finally, we present a REST API with a web based application for convenient access of our simulation data set., Comment: Accepted for publication in AJ
- Published
- 2024
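The paper trains a machine-learning model on its simulation catalogue as an alternative to the empirical fits, without specifying the algorithm here. Purely as a hedged sketch of that workflow (the feature layout, the synthetic data, and the choice of gradient boosting are assumptions, not the authors'):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature layout: binary mass ratio, planet/binary mass ratio, binary and
# planetary eccentricities, mutual inclination, and planetary semi-major axis in units
# of the binary separation. Real features/labels would come from the N-body catalogue;
# random data stands in here.
rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 6))
y = (X[:, 5] > 0.5).astype(int)  # toy stand-in labels: 1 = stable, 0 = unstable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```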
414. High-efficiency perovskite-organic blend light-emitting diodes featuring self-assembled monolayers as hole-injecting interlayers
- Author
-
Gedda, Murali, Gkeka, Despoina, Nugraha, Mohamad Insan, Scaccabarozzi, Alberto D., Yengel, Emre, Khan, Jafar I., Hamilton, Iain, Lin, Yuanbao, Deconinck, Marielle, Vaynzof, Yana, Laquai, Frédéric, Bradley, Donal D. C., and Anthopoulos, Thomas D.
- Subjects
Physics - Applied Physics ,Condensed Matter - Materials Science - Abstract
The high photoluminescence efficiency, color purity, extended gamut, and solution processability make low-dimensional hybrid perovskites attractive for light-emitting diode (PeLED) applications. However, controlling the microstructure of these materials to improve the device performance remains challenging. Here, the development of highly efficient green PeLEDs based on blends of the quasi-2D (q2D) perovskite, PEA2Cs4Pb5Br16, and the wide-bandgap organic semiconductor 2,7-dioctyl[1]benzothieno[3,2-b]benzothiophene (C8-BTBT) is reported. The presence of C8-BTBT enables the formation of single-crystal-like q2D PEA2Cs4Pb5Br16 domains that are uniform and highly luminescent. Combining the PEA2Cs4Pb5Br16:C8-BTBT with self-assembled monolayers (SAMs) as hole-injecting layers (HILs) yields green PeLEDs with greatly enhanced performance characteristics, including external quantum efficiency up to 18.6%, current efficiency up to 46.3 cd/A, a luminance of 45,276 cd m^-2, and improved operational stability compared to neat PeLEDs. The enhanced performance originates from multiple synergistic effects, including enhanced hole-injection enabled by the SAM HILs, the single crystal-like quality of the perovskite phase, and the reduced concentration of electronic defects. This work highlights perovskite:organic blends as promising systems for use in LEDs, while the use of SAM HILs creates new opportunities toward simpler and more stable PeLEDs.
- Published
- 2024
415. Test Code Generation for Telecom Software Systems using Two-Stage Generative Model
- Author
-
Nabeel, Mohamad, Nimara, Doumitrou Daniil, and Zanouda, Tahar
- Subjects
Computer Science - Software Engineering ,Computer Science - Computation and Language ,Computer Science - Machine Learning - Abstract
In recent years, the evolution of Telecom towards achieving intelligent, autonomous, and open networks has led to an increasingly complex Telecom Software system, supporting various heterogeneous deployment scenarios, with multi-standard and multi-vendor support. As a result, it becomes a challenge for large-scale Telecom software companies to develop and test software for all deployment scenarios. To address these challenges, we propose a framework for Automated Test Generation for large-scale Telecom Software systems. We begin by generating Test Case Input data for test scenarios observed using a time-series Generative model trained on historical Telecom Network data during field trials. Additionally, the time-series Generative model helps in preserving the privacy of Telecom data. The generated time-series software performance data are then utilized with test descriptions written in natural language to generate Test Script using the Generative Large Language Model. Our comprehensive experiments on public datasets and Telecom datasets obtained from operational Telecom Networks demonstrate that the framework can effectively generate comprehensive test case data input and useful test code., Comment: 6 pages, 5 figures, Accepted at 1st Workshop on The Impact of Large Language Models on 6G Networks - IEEE International Conference on Communications (ICC) 2024
- Published
- 2024
416. Fault Detection in Mobile Networks Using Diffusion Models
- Author
-
Nabeel, Mohamad, Nimara, Doumitrou Daniil, and Zanouda, Tahar
- Subjects
Computer Science - Machine Learning ,Computer Science - Networking and Internet Architecture - Abstract
In today's hyper-connected world, ensuring the reliability of telecom networks becomes increasingly crucial. Telecom networks encompass numerous underlying and intertwined software and hardware components, each providing different functionalities. To ensure the stability of telecom networks, telecom software, and hardware vendors developed several methods to detect any aberrant behavior in telecom networks and enable instant feedback and alerts. These approaches, although powerful, struggle to generalize due to the unsteady nature of the software-intensive embedded system and the complexity and diversity of multi-standard mobile networks. In this paper, we present a system to detect anomalies in telecom networks using a generative AI model. We evaluate several strategies using diffusion models to train the model for anomaly detection using multivariate time-series data. The contributions of this paper are threefold: (i) A proposal of a framework for utilizing diffusion models for time-series anomaly detection in telecom networks, (ii) A proposal of a particular Diffusion model architecture that outperforms other state-of-the-art techniques, (iii) Experiments on a real-world dataset to demonstrate that our model effectively provides explainable results, exposing some of its limitations and suggesting future research avenues to enhance its capabilities further., Comment: 6 pages, 4 figures, Accepted at Sixth International Workshop on Data Driven Intelligence for Networks and Systems (DDINS) - IEEE International Conference on Communications (ICC) 2024
- Published
- 2024
417. RIS-Assisted OTFS Communications: Phase Configuration via Received Energy Maximization
- Author
-
Dinan, Mohamad H. and Farhang, Arman
- Subjects
Computer Science - Information Theory ,Electrical Engineering and Systems Science - Signal Processing - Abstract
In this paper, we explore the integration of two revolutionary technologies, reconfigurable intelligent surfaces (RISs) and orthogonal time frequency space (OTFS) modulation, to enhance high-speed wireless communications. We introduce a novel phase shift design algorithm for RIS-assisted OTFS, optimizing energy reception and channel gain in dynamic environments. The study evaluates the proposed approach in a downlink scenario, demonstrating significant performance improvements compared to benchmark schemes in the literature, particularly in terms of bit error rate (BER). Our results showcase the potential of RIS to enhance the system's performance. Specifically, our proposed phase shift design technique outperforms the benchmark solutions by over 4 dB. Furthermore, even greater gains can be obtained as the number of RIS elements increases., Comment: 6 pages (double column), 5 figures, conference paper
- Published
- 2024
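The paper's design operates on the OTFS delay-Doppler channel; the classical narrowband co-phasing rule it generalizes is shown below as a hedged illustration only (channel model and element count are assumptions, not the paper's setup):

```python
import numpy as np

# Classical narrowband RIS co-phasing: with direct channel h_d, BS->RIS channels h_n and
# RIS->UE channels g_n, the received gain |h_d + sum_n g_n * exp(j*theta_n) * h_n| is
# maximized by aligning every cascaded path with the direct one.
rng = np.random.default_rng(1)
N = 64  # number of RIS elements (arbitrary for illustration)
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

theta = np.angle(h_d) - np.angle(h * g)  # per-element phase shifts
gain_opt = np.abs(h_d + np.sum(g * np.exp(1j * theta) * h))
gain_rand = np.abs(h_d + np.sum(g * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * h))
print(f"co-phased gain {gain_opt:.2f} vs random-phase gain {gain_rand:.2f}")
```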
418. CDAD-Net: Bridging Domain Gaps in Generalized Category Discovery
- Author
-
Rongali, Sai Bhargav, Mehrotra, Sarthak, Jha, Ankit, C, Mohamad Hassan N, Bose, Shirsha, Gupta, Tanisha, Singha, Mainak, and Banerjee, Biplab
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
In Generalized Category Discovery (GCD), we cluster unlabeled samples of known and novel classes, leveraging a training dataset of known classes. A salient challenge arises due to domain shifts between these datasets. To address this, we present a novel setting: Across Domain Generalized Category Discovery (AD-GCD) and bring forth CDAD-NET (Class Discoverer Across Domains) as a remedy. CDAD-NET is architected to synchronize potential known class samples across both the labeled (source) and unlabeled (target) datasets, while emphasizing the distinct categorization of the target data. To facilitate this, we propose an entropy-driven adversarial learning strategy that accounts for the distance distributions of target samples relative to source-domain class prototypes. In parallel, the discriminative nature of the shared space is upheld through a fusion of three metric learning objectives. In the source domain, our focus is on refining the proximity between samples and their affiliated class prototypes, while in the target domain, we integrate a neighborhood-centric contrastive learning mechanism, enriched with an adept neighbor-mining approach. To further accentuate the nuanced feature interrelation among semantically aligned images, we champion the concept of conditional image inpainting, underscoring the premise that semantically analogous images prove more efficacious to the task than their disjointed counterparts. Experimentally, CDAD-NET eclipses existing literature with a performance increment of 8-15% on three AD-GCD benchmarks we present., Comment: Accepted in L3D-IVU, CVPR Workshop, 2024
- Published
- 2024
419. On the Uniqueness and Orbital Stability of Slow and Fast Solitary Wave Solutions of the Benjamin Equation
- Author
-
Abdallah, May, Darwich, Mohamad, and Molinet, Luc
- Subjects
Mathematics - Analysis of PDEs ,Mathematical Physics - Abstract
This paper is devoted to the study of existence and properties of solitary waves of the Benjamin equation. The studied equation includes a parameter $\gamma$ in front of the Benjamin-Ono term. We show the existence, uniqueness, decay and orbital stability of solitary wave solutions obtained as a solution to a certain minimization problem, associated either with high speeds without a sign condition on the parameter $\gamma$ or with low speeds for the appropriate sign.
- Published
- 2024
420. Z-Splat: Z-Axis Gaussian Splatting for Camera-Sonar Fusion
- Author
-
Qu, Ziyuan, Vengurlekar, Omkar, Qadri, Mohamad, Zhang, Kevin, Kaess, Michael, Metzler, Christopher, Jayasuriya, Suren, and Pediredla, Adithya
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Graphics ,Computer Science - Machine Learning - Abstract
Differentiable 3D-Gaussian splatting (GS) is emerging as a prominent technique in computer vision and graphics for reconstructing 3D scenes. GS represents a scene as a set of 3D Gaussians with varying opacities and employs a computationally efficient splatting operation along with analytical derivatives to compute the 3D Gaussian parameters given scene images captured from various viewpoints. Unfortunately, capturing surround view ($360^{\circ}$ viewpoint) images is impossible or impractical in many real-world imaging scenarios, including underwater imaging, rooms inside a building, and autonomous navigation. In these restricted baseline imaging scenarios, the GS algorithm suffers from a well-known 'missing cone' problem, which results in poor reconstruction along the depth axis. In this manuscript, we demonstrate that using transient data (from sonars) allows us to address the missing cone problem by sampling high-frequency data along the depth axis. We extend the Gaussian splatting algorithms for two commonly used sonars and propose fusion algorithms that simultaneously utilize RGB camera data and sonar data. Through simulations, emulations, and hardware experiments across various imaging scenarios, we show that the proposed fusion algorithms lead to significantly better novel view synthesis (5 dB improvement in PSNR) and 3D geometry reconstruction (60% lower Chamfer distance).
- Published
- 2024
421. Integrating AI in NDE: Techniques, Trends, and Further Directions
- Author
-
Pérez, Eduardo, Ardic, Cemil Emre, Çakıroğlu, Ozan, Jacob, Kevin, Kodera, Sayako, Pompa, Luca, Rachid, Mohamad, Wang, Han, Zhou, Yiming, Zimmer, Cyril, Römer, Florian, and Osman, Ahmad
- Subjects
Electrical Engineering and Systems Science - Signal Processing - Abstract
The digital transformation is fundamentally changing our industries, affecting planning, execution as well as monitoring of production processes in a wide range of application fields. With product line-ups becoming more and more versatile and diverse, the necessary inspection and monitoring sparks significant novel requirements on the corresponding Nondestructive Evaluation (NDE) systems. The establishment of increasingly powerful approaches to incorporate Artificial Intelligence (AI) may provide just the needed innovation to solve some of these challenges. In this paper we provide a comprehensive survey about the usage of AI methods in NDE in light of the recent innovations towards NDE 4.0. Since we cannot discuss each NDE modality in one paper, we limit our attention to magnetic methods, ultrasound, thermography, as well as optical inspection. In addition to reviewing recent AI developments in each field, we draw common connections by pointing out NDE-related tasks that have a common underlying mathematical problem and categorizing the state of the art according to the corresponding sub-tasks. In so doing, interdisciplinary connections are drawn that provide a more complete overall picture.
- Published
- 2024
422. Dual-Frequency Radar Wave-Inversion for Sub-Surface Material Characterization
- Author
-
Aziz, Ishfaq, Soltanaghai, Elahe, Watts, Adam, and Alipour, Mohamad
- Subjects
Electrical Engineering and Systems Science - Signal Processing ,Statistics - Applications - Abstract
Moisture estimation of sub-surface soil and the overlying biomass layer is pivotal in precision agriculture and wildfire risk assessment. However, the characterization of layered material is nontrivial due to the radar penetration-resolution tradeoff. Here, a waveform inversion-based method was proposed for predicting the dielectric permittivity (as a moisture proxy) of the bottom soil layer and the top biomass layer from radar signals. Specifically, the use of a combination of a higher and a lower frequency radar compared to a single frequency in predicting the permittivity of both the soil and the overlying layer was investigated in this study. The results show that each layer was best characterized via one of the frequencies. However, for the simultaneous prediction of both layers' permittivity, the most consistent results were achieved by inversion of data from a combination of both frequencies, showing better correlation with in situ permittivity and reduced prediction errors., Comment: There are 5 pages, 5 figures and 1 table. This study has been accepted at 2024 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)
- Published
- 2024
423. Massive MIMO CSI Feedback using Channel Prediction: How to Avoid Machine Learning at UE?
- Author
-
Shehzad, Muhammad Karam, Rose, Luca, and Assaad, Mohamad
- Subjects
Computer Science - Information Theory ,Electrical Engineering and Systems Science - Signal Processing - Abstract
In the literature, machine learning (ML) has been implemented at the base station (BS) and user equipment (UE) to improve the precision of downlink channel state information (CSI). However, ML implementation at the UE can be infeasible for various reasons, such as UE power consumption. Motivated by this issue, we propose a CSI learning mechanism at the BS, called CSILaBS, to avoid ML at the UE. To this end, by exploiting a channel predictor (CP) at the BS, a lightweight predictor function (PF) is considered for feedback evaluation at the UE. CSILaBS reduces over-the-air feedback overhead, improves CSI quality, and lowers the computation cost of the UE. Moreover, in a multiuser environment, we propose various mechanisms to select the feedback by exploiting the PF while aiming to improve CSI accuracy. We also address various ML-based CPs, such as NeuralProphet (NP), an ML-inspired statistical algorithm. Furthermore, inspired by using a statistical model and ML together, we propose a novel hybrid framework composed of a recurrent neural network and NP, which yields better prediction accuracy than the individual models. The performance of CSILaBS is evaluated through an empirical dataset recorded at Nokia Bell-Labs. The outcomes show that ML elimination at the UE can retain performance gains, for example, in precoding quality., Comment: 14 pages, 11 figures
- Published
- 2024
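As a hedged sketch of the predictor-aided feedback idea this abstract describes (this is not Nokia's CSILaBS implementation; the error metric and threshold are assumptions): the BS runs a channel predictor and shares a lightweight predictor function with the UE, which then reports fresh CSI only when the prediction error is too large, cutting over-the-air feedback.

```python
import numpy as np

def feedback_decisions(h_true, h_pred, threshold=0.1):
    """Return a boolean mask of slots in which the UE sends fresh CSI feedback."""
    # normalized prediction error per slot
    err = np.abs(h_true - h_pred) ** 2 / np.maximum(np.abs(h_true) ** 2, 1e-12)
    return err > threshold

rng = np.random.default_rng(0)
h_true = rng.normal(size=100) + 1j * rng.normal(size=100)          # measured CSI at the UE
h_pred = h_true + 0.2 * (rng.normal(size=100) + 1j * rng.normal(size=100))  # BS-side prediction
mask = feedback_decisions(h_true, h_pred)
print(f"UE feeds back in {mask.mean():.0%} of slots instead of 100%")
```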
424. FUELVISION: A Multimodal Data Fusion and Multimodel Ensemble Algorithm for Wildfire Fuels Mapping
- Author
-
Shaik, Riyaaz Uddien, Alipour, Mohamad, Rowell, Eric, Balaji, Bharathan, Watts, Adam, and Taciroglu, Ertugrul
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning ,I.4.9 - Abstract
Accurate assessment of fuel conditions is a prerequisite for fire ignition and behavior prediction, and risk management. The method proposed herein leverages diverse data sources including Landsat-8 optical imagery, Sentinel-1 (C-band) Synthetic Aperture Radar (SAR) imagery, PALSAR (L-band) SAR imagery, and terrain features to capture comprehensive information about fuel types and distributions. An ensemble model was trained to predict landscape-scale fuels such as the 'Scott and Burgan 40' using the as-received Forest Inventory and Analysis (FIA) field survey plot data obtained from the USDA Forest Service. However, this basic approach yielded relatively poor results due to the inadequate amount of training data. Pseudo-labeled and fully synthetic datasets were developed using generative AI approaches to address the limitations of ground truth data availability. These synthetic datasets were used for augmenting the FIA data from California to enhance the robustness and coverage of model training. The use of an ensemble of methods including deep learning neural networks, decision trees, and gradient boosting offered a fuel mapping accuracy of nearly 80%. Through extensive experimentation and evaluation, the effectiveness of the proposed approach was validated for regions of the 2021 Dixie and Caldor fires. Comparative analyses against high-resolution data from the National Agriculture Imagery Program (NAIP) and timber harvest maps affirmed the robustness and reliability of the proposed approach, which is capable of near-real-time fuel mapping., Comment: 40 pages
- Published
- 2024
425. The Journey to Trustworthy AI- Part 1: Pursuit of Pragmatic Frameworks
- Author
-
Nasr-Azadani, Mohamad M and Chatelain, Jean-Luc
- Subjects
Computer Science - Computers and Society ,Computer Science - Artificial Intelligence ,Computer Science - Human-Computer Interaction - Abstract
This paper reviews Trustworthy Artificial Intelligence (TAI) and its various definitions. Considering the principles respected in any society, TAI is often characterized by a few attributes, some of which have led to confusion in regulatory or engineering contexts. We argue against using terms such as Responsible or Ethical AI as substitutes for TAI. To help clarify any confusion, we suggest leaving them behind. Given the subjectivity and complexity inherent in TAI, developing a universal framework is deemed infeasible. Instead, we advocate for approaches centered on addressing key attributes and properties such as fairness, bias, risk, security, explainability, and reliability. We examine the ongoing regulatory landscape, with a focus on initiatives in the EU, China, and the USA. We recognize that differences in AI regulations based on geopolitical and geographical reasons pose an additional challenge for multinational companies. We identify risk as a core factor in AI regulation and TAI. For example, as outlined in the EU-AI Act, organizations must gauge the risk level of their AI products to act accordingly (or risk hefty fines). We compare modalities of TAI implementation and how multiple cross-functional teams are engaged in the overall process. Thus, a brute-force approach to enacting TAI renders its efficiency and agility moot. To address this, we introduce our framework Set-Formalize-Measure-Act (SFMA). Our solution highlights the importance of transforming TAI-aware metrics, drivers of TAI, stakeholders, and business/legal requirements into actual benchmarks or tests. Finally, over-regulation driven by panic over powerful AI models can, in fact, harm TAI too. Based on GitHub user-activity data, in 2023, AI open-source projects rose to the top of projects ranked by contributor count. Enabling innovation in TAI hinges on the independent contributions of the open-source community., Comment: Updates: Fixed typos. Fixed NIST checkmarks in table 1. Added new subsections: copyright (4.6) and risks on webcrawled datasets (5.2.1). Updated figure 3 to show EU-AI Act passing
- Published
- 2024
426. Micro-Raman spectroscopy of graphene defects and tracing the oxidation process caused by UV exposure
- Author
-
Gholipour, Somayeh, Bahreini, Maryam, and Jafarfard, Mohamad Reza
- Subjects
Physics - Optics - Abstract
Raman spectroscopy is one of the widely used methods in the analysis of various samples, including carbon-based materials. This study aimed to identify the number of layers and defects in graphene using micro-Raman spectroscopy. More specifically, the study examined tracing the oxidation process of graphene under UV exposure. Investigation of the effect of the power density of the Raman excitation laser reveals a linear dependence between the ratio I2D/IG and the power density of the excitation laser. Also, the absence of the D peak as the power density increases provides evidence for the non-destructive nature of micro-Raman spectroscopy. The value of I2D/IG, one of the parameters for determining the number of graphene layers, reached 1.39 at the edge, indicating a possible edge fold of single-layer graphene. During the oxidation process, the intensity and position of the D peak increase as a function of exposure time. Alterations in the graphene Raman spectrum, comprising the disappearance of the 2D peak and the appearance of the D peak, trace and confirm the oxidation process of the sample.
- Published
- 2024
427. COVID-19 detection from pulmonary CT scans using a novel EfficientNet with attention mechanism
- Author
-
Farag, Ramy, Upadhyay, Parth, Gao, Yixiang, Demby, Jacket, Montoya, Katherin Garces, Tousi, Seyed Mohamad Ali, Omotara, Gbenga, and DeSouza, Guilherme
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
Manual analysis and diagnosis of COVID-19 through the examination of Computed Tomography (CT) images of the lungs can be time-consuming and error-prone, especially given the high volume of patients and the numerous images per patient. We therefore address the need for automation of this task by developing a new deep-learning-model-based pipeline. Our motivation was sparked by the CVPR Workshop on "Domain Adaptation, Explainability and Fairness in AI for Medical Image Analysis", more specifically, the "COVID-19 Diagnosis Competition (DEF-AI-MIA COV19D)" under the same Workshop. This challenge provides an opportunity to assess our proposed pipeline for COVID-19 detection from CT scan images. The pipeline incorporates the original EfficientNet, but with an added attention mechanism: EfficientNet-AM. Also, unlike traditional pipelines, which relied on a pre-processing step, our pipeline takes the raw selected input images without any such step, except for an image-selection step that simply reduces the number of CT images required for training and/or testing. Moreover, our pipeline is computationally efficient, as, for example, it does not incorporate a decoder for segmenting the lungs. It also does not combine different backbones nor combine an RNN with a backbone, as other pipelines have in the past. Nevertheless, our pipeline still outperforms all approaches presented by other teams in last year's instance of the same challenge, at least based on the validation subset of the competition dataset.
- Published
- 2024
428. Ricci flow-based brain surface covariance descriptors for diagnosing Alzheimer's disease
- Author
-
Ahmadi, Fatemeh, Shiri, Mohamad Ebrahim, Bidabad, Behroz, Sedaghat, Maral, and Memari, Pooran
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
Automated feature extraction from MRI brain scans and diagnosis of Alzheimer's disease are ongoing challenges. With advances in 3D imaging technology, 3D data acquisition is becoming more viable and efficient than its 2D counterpart. Rather than using feature-based vectors, in this paper, for the first time, we suggest a pipeline to extract novel covariance-based descriptors from the cortical surface using Ricci energy optimization. The covariance descriptors are elements of the nonlinear manifold of symmetric positive-definite matrices, thus we focus on using the Gaussian radial basis function to apply manifold-based classification to the 3D shape problem. Applying this novel signature to the analysis of abnormal cortical brain morphometry allows for diagnosing Alzheimer's disease. Experimental studies performed on about two hundred 3D MRI brain models gathered from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the effectiveness of our descriptors in achieving remarkable classification accuracy., Comment: Accepted for publication in Biomedical Signal Processing and Control journal
- Published
- 2024
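For background on the "Gaussian radial basis function" on the SPD manifold mentioned above, one standard construction (not necessarily the paper's exact choice of metric) uses the log-Euclidean distance, which yields a positive-definite kernel usable by kernel classifiers:

```latex
% Covariance descriptors X, Y are symmetric positive-definite (SPD) matrices; \log is the
% matrix logarithm and \sigma a bandwidth parameter.
\begin{align}
  d_{LE}(X, Y) &= \lVert \log X - \log Y \rVert_F, \\
  k(X, Y) &= \exp\!\left( -\frac{d_{LE}(X, Y)^2}{2\sigma^2} \right).
\end{align}
```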
429. Optimal Denial-of-Service Attacks Against Status Updating
- Author
-
Kriouile, Saad, Assaad, Mohamad, Gündüz, Deniz, and Soleymani, Touraj
- Subjects
Computer Science - Information Theory ,Mathematics - Optimization and Control - Abstract
In this paper, we investigate denial-of-service attacks against status updating. The target system is modeled by a Markov chain and an unreliable wireless channel, and the performance of status updating in the target system is measured based on two metrics: age of information and age of incorrect information. Our objective is to devise optimal attack policies that strike a balance between the deterioration of the system's performance and the adversary's energy expenditure. We formulate this optimization problem as a Markov decision process and prove rigorously that the optimal jamming policy is a threshold-based policy under both metrics. In addition, we provide a low-complexity algorithm to obtain the optimal threshold value of the jamming policy. Our numerical results show that the networked system with the age-of-incorrect-information metric is less sensitive to jamming attacks than with the age-of-information metric. Index Terms: age of incorrect information, age of information, cyber-physical systems, status updating, remote monitoring.
- Published
- 2024
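As a rough illustration of the threshold structure this abstract proves optimal for the age-of-information metric (the threshold value, the attack direction, and the channel parameters below are assumptions for illustration, not results from the paper):

```python
import numpy as np

def simulate(threshold, p_success=0.8, p_jam_block=0.9, horizon=100_000, seed=0):
    """Simulate a threshold jamming policy: attack only when the current age exceeds the threshold."""
    rng = np.random.default_rng(seed)
    age, age_sum, jam_count = 1, 0, 0
    for _ in range(horizon):
        jam = age >= threshold
        jam_count += jam
        # an update resets the age only if the channel delivers it and it is not blocked by jamming
        delivered = rng.random() < p_success and not (jam and rng.random() < p_jam_block)
        age = 1 if delivered else age + 1
        age_sum += age
    return age_sum / horizon, jam_count / horizon

for thr in (1, 3, 6):
    mean_age, duty = simulate(thr)
    print(f"threshold={thr}: mean AoI={mean_age:.2f}, jamming duty cycle={duty:.2f}")
```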
430. Machine learning predicts long-term mortality after acute myocardial infarction using systolic time intervals and routinely collected clinical data
- Author
-
Roudini, Bijan, Khajehpiri, Boshra, Moghaddam, Hamid Abrishami, and Forouzanfar, Mohamad
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence ,Electrical Engineering and Systems Science - Signal Processing - Abstract
Precise estimation of cardiac patients' current and future comorbidities is an important factor in prioritizing continuous physiological monitoring and new therapies. ML models have shown satisfactory performance in short-term mortality prediction of patients with heart disease, while their utility in long-term predictions is limited. This study aims to investigate the performance of tree-based ML models on long-term mortality prediction and the effect of two recently introduced biomarkers on long-term mortality. This study utilized publicly available data from CCHIA at the Ministry of Health and Welfare, Taiwan, China. Medical records were used to gather demographic and clinical data, including age, gender, BMI, percutaneous coronary intervention (PCI) status, and comorbidities such as hypertension, dyslipidemia, ST-segment elevation myocardial infarction (STEMI), and non-STEMI. Using medical and demographic records as well as two recently introduced biomarkers, brachial pre-ejection period (bPEP) and brachial ejection time (bET), collected from 139 patients with acute myocardial infarction, we investigated the performance of advanced ensemble tree-based ML algorithms (random forest, AdaBoost, and XGBoost) to predict all-cause mortality within 14 years. The developed ML models achieved significantly better performance compared to the baseline logistic regression (LR) (C-Statistic, 0.80 for random forest, 0.79 for AdaBoost, and 0.78 for XGBoost, vs 0.77 for LR) (P_RF < 0.001, P_AdaBoost < 0.001, P_XGBoost < 0.05). Adding bPEP and bET to our feature set significantly improved the algorithms' performance, leading to an absolute increase in C-Statistic of up to 0.03 (C-Statistic, 0.83 for random forest, 0.82 for AdaBoost, and 0.80 for XGBoost, vs 0.74 for LR) (P_RF < 0.001, P_AdaBoost < 0.001, P_XGBoost < 0.05). This advancement may enable better treatment prioritization for high-risk individuals., Comment: Accepted for publication in "Intelligent Medicine"
- Published
- 2024
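A hedged sketch of the evaluation described above, using synthetic data (the real study uses clinical records from 139 patients; the column names and labels below are placeholders). The C-statistic equals the ROC AUC, so scoring with "roc_auc" reproduces the reported metric:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 139
df = pd.DataFrame({
    "age": rng.integers(40, 90, n),
    "bmi": rng.normal(26, 4, n),
    "stemi": rng.integers(0, 2, n),
    "bPEP": rng.normal(90, 12, n),   # brachial pre-ejection period (ms), placeholder values
    "bET": rng.normal(280, 30, n),   # brachial ejection time (ms), placeholder values
})
y = rng.integers(0, 2, n)            # toy stand-in for 14-year all-cause mortality labels

base_cols = ["age", "bmi", "stemi"]
for cols, name in [(base_cols, "clinical only"), (base_cols + ["bPEP", "bET"], "+ bPEP/bET")]:
    auc = cross_val_score(RandomForestClassifier(random_state=0), df[cols], y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name}: C-statistic ~ {auc:.2f}")
```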
431. The Presence and the State-of-Practice of Software Architects in the Brazilian Industry -- A Survey
- Author
-
Neto, Valdemar Vicente Graciano, Santos, Diana Lorena, França, Andrey Gonçalves, Frantz, Rafael Z., de Oliveira-Jr, Edson, Mohsin, Ahmad, and Kassab, Mohamad
- Subjects
Computer Science - Software Engineering - Abstract
Context: Software architecture strongly impacts software quality. Therefore, the professional assigned to carry out the design, maintenance and evolution of architectures needs certain knowledge and skills in order not to compromise the resulting application. Objective: The aim of this work is to understand the characteristics of companies in Brazil regarding the presence or absence of software architects. Method: This work uses survey research to collect evidence from professionals with the software architect profile, together with descriptive statistics and thematic analysis to analyze the results. Results: The study collected data from 105 professionals distributed across 24 Brazilian states. Results reveal that (i) not all companies have a software architect, (ii) in some cases, other professionals perform the activities of a software architect and (iii) there are companies that, even having a software architecture professional, have other roles also performing the duties of such a professional. Conclusions: Professionals hired as software architects have higher salaries than those hired in other roles that carry out such activity, although many of those other professionals still have duties that are typical of software architects.
- Published
- 2024
432. Observability for Nonlinear Systems: Connecting Variational Dynamics, Lyapunov Exponents, and Empirical Gramians
- Author
-
Kazma, Mohamad H. and Taha, Ahmad F.
- Subjects
Electrical Engineering and Systems Science - Systems and Control - Abstract
Observability quantification is a key problem in dynamic network sciences. While it has been thoroughly studied for linear systems, observability quantification for nonlinear networks is less intuitive and more cumbersome. One common approach to quantify observability for nonlinear systems is via the Empirical Gramian (Empr-Gram) -- a generalized form of the Gramian of linear systems. In this paper, we produce three new results. First, we establish that a variational form of discrete-time autonomous nonlinear systems (computed via perturbing initial conditions) yields a so-called Variational Gramian (Var-Gram) that is equivalent to the classic Empr-Gram; the former being easier to compute than the latter. Via Lyapunov exponents derived from Lyapunov's direct method, the paper's second result derives connections between existing observability measures and Var-Gram. The third result demonstrates the applicability of these new notions for sensor selection/placement in nonlinear systems. Numerical case studies demonstrate these three developments and their merits.
- Published
- 2024
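For reference, the Empirical Gramian computed by "perturbing initial conditions" is commonly defined as follows; this is the standard textbook construction, with notation that may differ from the paper's:

```latex
% Discrete-time system x_{k+1} = f(x_k), y_k = h(x_k). Outputs y_k^{±i} are obtained by
% simulating from the perturbed initial conditions x_0 ± ε e_i along each coordinate direction.
\begin{equation}
  [W_o]_{ij} \;=\; \frac{1}{4\varepsilon^2} \sum_{k=0}^{K}
      \bigl(y_k^{+i} - y_k^{-i}\bigr)^{\!\top} \bigl(y_k^{+j} - y_k^{-j}\bigr).
\end{equation}
% The paper's Variational Gramian is shown to be equivalent to this object while being
% cheaper to compute.
```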
433. Is Open-Source There Yet? A Comparative Study on Commercial and Open-Source LLMs in Their Ability to Label Chest X-Ray Reports
- Author
-
Dorfner, Felix J., Jürgensen, Liv, Donle, Leonhard, Mohamad, Fares Al, Bodenmann, Tobias R., Cleveland, Mason C., Busch, Felix, Adams, Lisa C., Sato, James, Schultz, Thomas, Kim, Albert E., Merkow, Jameson, Bressem, Keno K., and Bridge, Christopher P.
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
Introduction: With the rapid advances in large language models (LLMs), there have been numerous new open source as well as commercial models. While recent publications have explored GPT-4 in its application to extracting information of interest from radiology reports, there has not been a real-world comparison of GPT-4 to different leading open-source models. Materials and Methods: Two different and independent datasets were used. The first dataset consists of 540 chest x-ray reports that were created at the Massachusetts General Hospital between July 2019 and July 2021. The second dataset consists of 500 chest x-ray reports from the ImaGenome dataset. We then compared the commercial models GPT-3.5 Turbo and GPT-4 from OpenAI to the open-source models Mistral-7B, Mixtral-8x7B, Llama2-13B, Llama2-70B, QWEN1.5-72B and CheXbert and CheXpert-labeler in their ability to accurately label the presence of multiple findings in x-ray text reports using different prompting techniques. Results: On the ImaGenome dataset, the best performing open-source model was Llama2-70B with micro F1-scores of 0.972 and 0.970 for zero- and few-shot prompts, respectively. GPT-4 achieved micro F1-scores of 0.975 and 0.984, respectively. On the institutional dataset, the best performing open-source model was QWEN1.5-72B with micro F1-scores of 0.952 and 0.965 for zero- and few-shot prompting, respectively. GPT-4 achieved micro F1-scores of 0.975 and 0.973, respectively. Conclusion: In this paper, we show that while GPT-4 is superior to open-source models in zero-shot report labeling, the implementation of few-shot prompting can bring open-source models on par with GPT-4. This shows that open-source models could be a performant and privacy preserving alternative to GPT-4 for the task of radiology report classification.
- Published
- 2024
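A hedged illustration of the micro F1-score reported above (not the study's code; the finding names are placeholders): each report gets a binary vector over findings, and micro-averaging pools true/false positives across all findings before computing F1.

```python
import numpy as np
from sklearn.metrics import f1_score

findings = ["cardiomegaly", "edema", "pleural effusion", "pneumothorax"]  # placeholder label set
y_true = np.array([[1, 0, 1, 0],
                   [0, 0, 0, 0],
                   [1, 1, 1, 0]])   # reference labels per report
y_pred = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 0]])   # model-extracted labels per report
print("micro F1:", round(f1_score(y_true, y_pred, average="micro"), 3))
```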
434. Modified Gravity Model $f(Q,T)$ and Wormhole Solution
- Author
-
Sadatian, S. Davood and Hosseini, S. Mohamad Reza
- Subjects
General Relativity and Quantum Cosmology ,High Energy Physics - Theory - Abstract
We investigate wormhole solutions using the modified gravity model $f(Q,T)$ with viscosity and aim to find a solution for the existence of wormholes mathematically without violating the energy conditions. We show that there is no need to define a wormhole from exotic matter and analyze the equations with numerical analysis to establish weak energy conditions. In the numerical analysis, we found that the appropriate values of the parameters can maintain the weak energy conditions without the need for exotic matter. Adjusting the parameters of the model can increase or decrease the rate of positive energy density or radial and tangential pressures. According to the numerical analysis conducted in this paper, the weak energy conditions are established in the whole space if $\alpha< 0$, $12.56 < \beta < 25.12$ or $\alpha > 0$, $\beta > 25.12$. The analysis also showed that the supporting matter of the wormhole is near normal matter, indicating that the generalized $f(Q,T)$ model with viscosity has an acceptable parameter space to describe a wormhole without the need for exotic matter., Comment: 14 pages, 8 figures, accepted for publication in Advances in High Energy Physics journal
- Published
- 2024
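For reference, the weak energy conditions checked numerically in this abstract are the standard inequalities on the energy density and the radial/tangential pressures (standard definitions, not equations taken from the paper):

```latex
\begin{equation}
  \rho \ge 0, \qquad \rho + p_r \ge 0, \qquad \rho + p_t \ge 0,
\end{equation}
% so "no exotic matter" corresponds to a parameter region (e.g. the quoted ranges of
% \alpha and \beta) in which all three inequalities hold throughout the wormhole spacetime.
```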
435. Thompson Sampling in Partially Observable Contextual Bandits
- Author
-
Park, Hongju and Faradonbeh, Mohamad Kazem Shirani
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning - Abstract
Contextual bandits constitute a classical framework for decision-making under uncertainty. In this setting, the goal is to learn the arms of highest reward subject to contextual information, while the unknown reward parameters of each arm need to be learned by experimenting that specific arm. Accordingly, a fundamental problem is that of balancing exploration (i.e., pulling different arms to learn their parameters), versus exploitation (i.e., pulling the best arms to gain reward). To study this problem, the existing literature mostly considers perfectly observed contexts. However, the setting of partial context observations remains unexplored to date, despite being theoretically more general and practically more versatile. We study bandit policies for learning to select optimal arms based on the data of observations, which are noisy linear functions of the unobserved context vectors. Our theoretical analysis shows that the Thompson sampling policy successfully balances exploration and exploitation. Specifically, we establish the followings: (i) regret bounds that grow poly-logarithmically with time, (ii) square-root consistency of parameter estimation, and (iii) scaling of the regret with other quantities including dimensions and number of arms. Extensive numerical experiments with both real and synthetic data are presented as well, corroborating the efficacy of Thompson sampling. To establish the results, we introduce novel martingale techniques and concentration inequalities to address partially observed dependent random variables generated from unspecified distributions, and also leverage problem-dependent information to sharpen probabilistic bounds for time-varying suboptimality gaps. These techniques pave the road towards studying other decision-making problems with contextual information as well as partial observations., Comment: 43 pages
- Published
- 2024
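A minimal Python sketch of linear-Gaussian Thompson sampling with noisily observed contexts, in the spirit of entry 435 above; the observation model, priors, noise levels, and the plug-in use of the observation as regressor are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T = 5, 3, 2000

# Unknown reward parameters per arm and a sensing matrix for the
# partially observed context (all values illustrative).
theta_true = rng.normal(size=(n_arms, d))
A = rng.normal(size=(d, d))                     # observation = A @ context + noise

# Gaussian posterior per arm: precision matrix B and vector f, so mean = B^{-1} f
# (standard linear Thompson sampling bookkeeping).
B = np.stack([np.eye(d) for _ in range(n_arms)])
f = np.zeros((n_arms, d))

for t in range(T):
    x = rng.normal(size=d)                      # latent context (never observed)
    y = A @ x + 0.1 * rng.normal(size=d)        # noisy linear observation of it

    # Sample a parameter from each arm's posterior, act greedily on the samples.
    sampled = np.array([
        rng.multivariate_normal(np.linalg.solve(B[a], f[a]), np.linalg.inv(B[a]))
        for a in range(n_arms)
    ])
    arm = int(np.argmax(sampled @ y))

    reward = theta_true[arm] @ x + 0.1 * rng.normal()

    # Bayesian update for the pulled arm, using the observation as the regressor.
    B[arm] += np.outer(y, y)
    f[arm] += reward * y
```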
436. Contrastive Learning for Regression on Hyperspectral Data
- Author
-
Dhaini, Mohamad, Berar, Maxime, Honeine, Paul, and Van Exem, Antonin
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
Contrastive learning has demonstrated great effectiveness in representation learning, especially for image classification tasks. However, there is still a shortage of studies targeting regression tasks, and more specifically applications to hyperspectral data. In this paper, we propose a contrastive learning framework for regression tasks on hyperspectral data. To this end, we provide a collection of transformations relevant for augmenting hyperspectral data, and investigate contrastive learning for regression. Experiments on synthetic and real hyperspectral datasets show that the proposed framework and transformations significantly improve the performance of regression models, achieving better scores than other state-of-the-art transformations., Comment: Accepted in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2024
- Published
- 2024
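A minimal Python (PyTorch) sketch of pairing a contrastive objective with a regression head on spectra, in the spirit of entry 436 above; the spectral augmentations, network sizes, loss weighting, and variable names are illustrative assumptions rather than the authors' framework.

```python
import torch
import torch.nn.functional as F

def augment(spectra):
    """Illustrative hyperspectral augmentations: band-wise noise and random scaling."""
    noise = 0.01 * torch.randn_like(spectra)
    scale = 1.0 + 0.05 * (2 * torch.rand(spectra.size(0), 1) - 1)
    return scale * spectra + noise

def nt_xent(z1, z2, tau=0.1):
    """Normalized-temperature cross-entropy between two augmented views of a batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                  # (B, B) cosine-similarity matrix
    targets = torch.arange(z1.size(0))          # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

encoder = torch.nn.Sequential(torch.nn.Linear(200, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, 64))
head = torch.nn.Linear(64, 1)                   # regression head (e.g., concentration)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

spectra = torch.rand(32, 200)                   # dummy batch of 200-band pixels
target = torch.rand(32, 1)                      # dummy regression targets

z1, z2 = encoder(augment(spectra)), encoder(augment(spectra))
loss = nt_xent(z1, z2) + F.mse_loss(head(z1), target)
opt.zero_grad(); loss.backward(); opt.step()
```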
437. AMEND: A Mixture of Experts Framework for Long-tailed Trajectory Prediction
- Author
-
Mercurius, Ray Coden, Ahmadi, Ehsan, Shabestary, Soheil Mohamad Alizadeh, and Rasouli, Amir
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning ,Computer Science - Robotics - Abstract
Accurate prediction of pedestrians' future motions is critical for intelligent driving systems. Developing models for this task requires rich datasets containing diverse sets of samples. However, existing naturalistic trajectory prediction datasets are generally imbalanced in favor of simpler samples and lack challenging scenarios. Such a long-tail effect causes prediction models to underperform on the tail portion of the data distribution, which contains safety-critical scenarios. Previous work tackles the long-tail problem with techniques such as contrastive learning and class-conditioned hypernetworks. These approaches, however, are not modular and cannot be applied to many machine learning architectures. In this work, we propose a modular, model-agnostic framework for trajectory prediction that leverages a specialized mixture of experts. In our approach, each expert is trained to specialize on a particular part of the data. To produce predictions, we utilise a router network that selects the best expert by generating relative confidence scores. We conduct experiments on common pedestrian trajectory prediction datasets and show that our method improves performance on long-tail scenarios. We further conduct ablation studies to highlight the contribution of the different proposed components.
- Published
- 2024
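A minimal Python (PyTorch) sketch of routing among specialized experts via relative confidence scores, analogous in spirit to entry 437 above; the expert and router architectures, history length, and prediction horizon are illustrative assumptions, not the AMEND design.

```python
import torch
import torch.nn as nn

class Expert(nn.Module):
    """One trajectory predictor, specialized on a slice of the data (illustrative)."""
    def __init__(self, obs_dim=2, hist=8, horizon=12, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim * hist, hidden), nn.ReLU(),
                                 nn.Linear(hidden, horizon * obs_dim))
        self.horizon, self.obs_dim = horizon, obs_dim

    def forward(self, past):                    # past: (B, hist, 2) observed positions
        out = self.net(past.flatten(1))
        return out.view(-1, self.horizon, self.obs_dim)

class Router(nn.Module):
    """Scores each expert for an input; the highest-scoring expert makes the prediction."""
    def __init__(self, n_experts, obs_dim=2, hist=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim * hist, 32), nn.ReLU(),
                                 nn.Linear(32, n_experts))

    def forward(self, past):
        return torch.softmax(self.net(past.flatten(1)), dim=-1)  # relative confidences

experts = nn.ModuleList([Expert() for _ in range(3)])
router = Router(n_experts=3)

past = torch.rand(4, 8, 2)                      # dummy batch of observed tracks
conf = router(past)                             # (B, 3) confidence per expert
best = conf.argmax(dim=-1)                      # pick the most confident expert per sample
preds = torch.stack([experts[int(i)](past[b:b + 1])[0] for b, i in enumerate(best)])
```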
438. Protoplanet collisions: new scaling laws from SPH simulations
- Author
-
Crespi, Samuele, Ali-Dib, Mohamad, and Dobbs-Dixon, Ian
- Subjects
Astrophysics - Earth and Planetary Astrophysics - Abstract
One common approach for handling collisions between protoplanets in simulations of planet formation is to employ analytical scaling laws. The most widely used one was developed by Leinhardt & Stewart (2012) from a catalog of ~180 N-body simulations of rubble-pile collisions. In this work, we use a new catalogue of more than 20,000 SPH simulations to test the validity and predictive capability of the Leinhardt & Stewart (2012) scaling laws. We find that these laws overestimate the fragmentation efficiency in the merging regime and are not able to properly reproduce the collision outcomes in the super-catastrophic regime. In the merging regime, we also notice a significant dependence of the collision outcome, in terms of the largest remnant mass, on the relative mass of the colliding protoplanets. Here, we present a new set of scaling laws that better predict the collision outcome in all regimes and also reproduce the observed dependence on the mass ratio. We compare our new scaling laws against a machine learning approach and obtain similar prediction efficiency., Comment: 8 pages, 4 figures, accepted for publication in A&A
- Published
- 2024
- Full Text
- View/download PDF
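For reference, the widely used largest-remnant relation from Leinhardt & Stewart (2012) that entry 438 tests; this is the standard published "universal law" for the erosive regime, not the revised fits derived in the paper.

```latex
% Largest-remnant mass in the erosive (fragmentation) regime of
% Leinhardt & Stewart (2012), with Q_R the specific impact energy and
% Q*_RD the catastrophic-disruption threshold:
\begin{equation}
  \frac{M_{\mathrm{lr}}}{M_{\mathrm{tot}}}
    = -\frac{1}{2}\left(\frac{Q_R}{Q^{*}_{RD}} - 1\right) + \frac{1}{2},
  \qquad \frac{Q_R}{Q^{*}_{RD}} \lesssim 1.8 ,
\end{equation}
% beyond which the super-catastrophic regime follows a steeper power law.
```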
439. Show Me How It's Done: The Role of Explanations in Fine-Tuning Language Models
- Author
-
Ballout, Mohamad, Krumnack, Ulf, Heidemann, Gunther, and Kuehnberger, Kai-Uwe
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Our research demonstrates the significant benefits of fine-tuning with explanations to enhance the performance of language models. Unlike prompting, which leaves the model's parameters unchanged, fine-tuning allows the model to learn and update its parameters during a training phase. In this study, we fine-tuned language models of various sizes using data that contained explanations of the output rather than merely presenting the answers. We found that even smaller language models with as few as 60 million parameters benefited substantially from this approach. Interestingly, our results indicated that the detailed explanations were more beneficial to smaller models than to larger ones, with the latter gaining nearly the same advantage from any form of explanation, irrespective of its length. Additionally, we demonstrate that the inclusion of explanations enables the models to solve tasks that they could not solve without explanations. Lastly, we argue that despite the challenges of adding explanations, samples that contain explanations not only reduce the volume of data required for training but also promote more effective generalization by the model. In essence, our findings suggest that fine-tuning with explanations significantly bolsters the performance of large language models.
- Published
- 2024
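A minimal Python sketch of how a supervised fine-tuning sample might be augmented with an explanation before the answer, as contrasted with an answer-only target in entry 439 above; the prompt template and field names are illustrative assumptions, not the authors' data format.

```python
def format_example(question: str, answer: str, explanation: str | None = None) -> dict:
    """Build one fine-tuning sample; the target optionally includes a
    step-by-step explanation before the final answer."""
    if explanation:
        target = f"Explanation: {explanation}\nAnswer: {answer}"
    else:
        target = f"Answer: {answer}"
    return {"input": f"Question: {question}", "target": target}

# Answer-only vs. explanation-augmented versions of the same training sample.
plain = format_example("What is 17 + 26?", "43")
explained = format_example(
    "What is 17 + 26?", "43",
    explanation="17 + 26 = 17 + 20 + 6 = 37 + 6 = 43",
)
```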
440. Optimizing Uterine Synchronization Analysis in Pregnancy and Labor through Window Selection and Node Optimization
- Author
-
Dine, Kamil Bader El, Nader, Noujoud, Khalil, Mohamad, and Marque, Catherine
- Subjects
Quantitative Biology - Quantitative Methods ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning ,Electrical Engineering and Systems Science - Signal Processing - Abstract
Preterm labor (PL) has globally become the leading cause of death in children under the age of 5 years. To address this problem, this paper presents a new approach based on analyzing EHG signals, which are recorded on the abdomen of the mother during labor and pregnancy. The EHG signal reflects the electrical activity that induces the mechanical contraction of the myometrium. Because EHGs are known to be non-stationary signals, and because we anticipate connectivity to change during contraction, we applied a windowing approach to real signals to help identify the best windows and the best nodes carrying the most significant data for classification. The suggested pipeline consists of: i) dividing the 16 EHG signals recorded from the abdomen of pregnant women into N windows; ii) computing the connectivity matrices for each window; iii) applying graph-theory-based measures to the connectivity matrices of each window; iv) applying the consensus matrix to each window in order to retrieve the best windows and the best nodes. Following that, several neural-network and machine-learning methods are applied to the best windows and best nodes to classify pregnancy and labor contractions, based on different input parameters (connectivity method alone, connectivity method plus graph parameters, best nodes, all nodes, best windows, all windows). Results showed that the best nodes are nodes 8, 9, 10, 11, and 12, while the best windows are 2, 4, and 5. The classification results obtained using only these best nodes are better than those obtained using all nodes. The results are always better when using the full burst, whatever the chosen nodes. Thus, the windowing approach proved to be an innovative technique that can improve the differentiation between labor and pregnancy EHG signals., Comment: 10 pages, 6 figures
- Published
- 2024
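A minimal Python sketch of the windowing-plus-connectivity step of the pipeline in entry 440 above, using absolute Pearson correlation between the 16 channels as the connectivity measure and node strength as the graph measure; these particular choices (and all sizes) are assumptions for illustration, since the abstract does not fix the connectivity metric.

```python
import numpy as np

def windowed_connectivity(ehg, n_windows):
    """Split a (16, n_samples) multichannel EHG burst into equal windows and
    compute one 16x16 connectivity matrix (absolute Pearson correlation) per window."""
    channels, n_samples = ehg.shape
    length = n_samples // n_windows
    mats = []
    for w in range(n_windows):
        segment = ehg[:, w * length:(w + 1) * length]
        mats.append(np.abs(np.corrcoef(segment)))
    return np.stack(mats)                        # (n_windows, 16, 16)

def node_strength(conn):
    """Simple graph-theory measure per node: sum of connection weights, self-loops removed."""
    conn = conn.copy()
    np.fill_diagonal(conn, 0.0)
    return conn.sum(axis=1)

ehg = np.random.randn(16, 6000)                  # dummy burst: 16 abdominal channels
conn = windowed_connectivity(ehg, n_windows=5)   # one connectivity matrix per window
strengths = np.array([node_strength(c) for c in conn])   # (5, 16) node strengths
```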
441. Opinion models, data, and politics
- Author
-
Gsänger, Matthias, Hösel, Volker, Mohamad-Klotzbach, Christoph, and Müller, Johannes
- Subjects
Physics - Physics and Society ,91D30, 60J70 - Abstract
We investigate the connection between Potts (Curie-Weiss) models and stochastic opinion models from the viewpoint of the Boltzmann distribution and stochastic Glauber dynamics. In particular, we find that the q-voter model can be considered a natural extension of the Zealot model, adapted by Lagrangian parameters. We also discuss weak- and strong-effects continuum limits for the models. We then fit four models (Curie-Weiss, the strong- and weak-effects limits of the q-voter model, and the reinforcement model) to election data from the United States, the United Kingdom, France and Germany. We find that the weak-effects models in particular are able to fit the data (Kolmogorov-Smirnov test), with the weak-effects reinforcement model performing best (AIC). The resulting estimates are interpreted from the perspective of political science, and the importance of such model-based approaches to election data for political science is discussed.
- Published
- 2024
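A minimal Python sketch of stochastic Glauber dynamics for the mean-field (Curie-Weiss) two-state case mentioned in entry 441 above, whose stationary law is the Boltzmann distribution; the parameter values are illustrative, and the q-state extension and zealot terms discussed in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta, h, steps = 200, 1.2, 0.0, 50_000        # agents, inverse temperature, external field

spins = rng.choice([-1, 1], size=N)              # opinions / spins

for _ in range(steps):
    i = rng.integers(N)
    m = (spins.sum() - spins[i]) / (N - 1)       # mean opinion of the other agents
    # Glauber flip probability for agent i in the mean field of the others;
    # the stationary distribution is the Boltzmann law of the Curie-Weiss energy.
    local = beta * (m + h)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * local))
    spins[i] = 1 if rng.random() < p_up else -1

print("final magnetisation:", spins.mean())
```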
442. AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion
- Author
-
Qadri, Mohamad, Zhang, Kevin, Hinduja, Akshay, Kaess, Michael, Pediredla, Adithya, and Metzler, Christopher A.
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring. Treacherous operating conditions, fragile surroundings, and limited navigation control often dictate that submersibles restrict their range of motion and, thus, the baseline over which they can capture measurements. In the context of 3D scene reconstruction, it is well-known that smaller baselines make reconstruction more challenging. Our work develops a physics-based multimodal acoustic-optical neural surface reconstruction framework (AONeuS) capable of effectively integrating high-resolution RGB measurements with low-resolution depth-resolved imaging sonar measurements. By fusing these complementary modalities, our framework can reconstruct accurate high-resolution 3D surfaces from measurements captured over heavily-restricted baselines. Through extensive simulations and in-lab experiments, we demonstrate that AONeuS dramatically outperforms recent RGB-only and sonar-only inverse-differentiable-rendering--based surface reconstruction methods. A website visualizing the results of our paper is located at this address: https://aoneus.github.io/, Comment: SIGGRAPH 2024 (conference track full paper). First two authors contributed equally. Paper website: https://aoneus.github.io/
- Published
- 2024
443. Open RL Benchmark: Comprehensive Tracked Experiments for Reinforcement Learning
- Author
-
Huang, Shengyi, Gallouédec, Quentin, Felten, Florian, Raffin, Antonin, Dossa, Rousslan Fernand Julien, Zhao, Yanxiao, Sullivan, Ryan, Makoviychuk, Viktor, Makoviichuk, Denys, Danesh, Mohamad H., Roumégous, Cyril, Weng, Jiayi, Chen, Chufan, Rahman, Md Masudur, Araújo, João G. M., Quan, Guorui, Tan, Daniel, Klein, Timo, Charakorn, Rujikorn, Towers, Mark, Berthelot, Yann, Mehta, Kinal, Chakraborty, Dipam, KG, Arjun, Charraut, Valentin, Ye, Chang, Liu, Zichen, Alegre, Lucas N., Nikulin, Alexander, Hu, Xiao, Liu, Tianlin, Choi, Jongwook, and Yi, Brent
- Subjects
Computer Science - Machine Learning - Abstract
In many Reinforcement Learning (RL) papers, learning curves are useful indicators to measure the effectiveness of RL algorithms. However, the complete raw data of the learning curves are rarely available. As a result, it is usually necessary to reproduce the experiments from scratch, which can be time-consuming and error-prone. We present Open RL Benchmark, a set of fully tracked RL experiments, including not only the usual data such as episodic return, but also all algorithm-specific and system metrics. Open RL Benchmark is community-driven: anyone can download, use, and contribute to the data. At the time of writing, more than 25,000 runs have been tracked, for a cumulative duration of more than 8 years. Open RL Benchmark covers a wide range of RL libraries and reference implementations. Special care is taken to ensure that each experiment is precisely reproducible by providing not only the full parameters, but also the versions of the dependencies used to generate it. In addition, Open RL Benchmark comes with a command-line interface (CLI) for easily fetching data and generating figures to present the results. In this document, we include two case studies to demonstrate the usefulness of Open RL Benchmark in practice. To the best of our knowledge, Open RL Benchmark is the first RL benchmark of its kind, and the authors hope that it will improve and facilitate the work of researchers in the field., Comment: Under review
- Published
- 2024
444. Wideband, Efficient AlScN-Si Acousto-Optic Modulator in a Commercially Available Silicon Photonics Process
- Author
-
Erdil, Mertcan, Izhar, Deng, Yang, Tang, Zichen, Idjadi, Mohamad Hossein, Ashtiani, Farshid, Aflatouni, Firooz, and Olsson III, Roy
- Subjects
Physics - Optics ,Physics - Applied Physics - Abstract
Acousto-optic integration offers numerous applications, including low-loss microwave signal processing, nonreciprocal light propagation, frequency comb generation, and broadband acousto-optic modulation. State-of-the-art acousto-optic systems are mainly implemented entirely using in-house fabrication processes, which despite excellent performance typically suffer from low yield and are not compatible with mass production through foundry processes. Here, we demonstrate a highly efficient wideband acousto-optic modulator (AOM) implemented on a silicon photonics foundry process, enabling high-yield, low-cost mass production of AOMs alongside other photonic and electronic devices on the same substrate. In the reported structure, a 150 ${\mu}$m long AlScN-based acoustic transducer launches surface acoustic waves (SAW), which modulate the light passing through a silicon optical waveguide. A modulation efficiency of -18.3 dB over a bandwidth of 112 MHz is achieved, which to our knowledge is the highest reported efficiency and bandwidth combination among silicon-based AOMs, resulting in about an order-of-magnitude improvement in the $BW(V_{\pi}L)^{-1}$ figure of merit compared to state-of-the-art CMOS-compatible AOMs. The monolithically integrated acousto-optic platform developed in this work will pave the way for low-cost, miniature microwave filters, true time delays, frequency combs, and other signal processors with the advanced functionality offered by foundry-integrated photonic circuits.
- Published
- 2024
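The figure of merit quoted in entry 444 above reads, in the usual modulator convention, as bandwidth per unit half-wave-voltage-length product; a compact LaTeX statement of that convention (no values from the paper are asserted here):

```latex
% Modulator figure of merit: modulation bandwidth divided by the
% half-wave-voltage-length product (higher is better):
\begin{equation}
  \mathrm{FOM} = \frac{BW}{V_{\pi} L},
\end{equation}
% where V_\pi is the half-wave voltage and L the interaction length.
```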
445. Parameters Affecting Dust Collector Efficiency for Pneumatic Conveying: A Review
- Author
-
Philippe Beaulac, Mohamad Issa, Adrian Ilinca, and Jean Brousseau
- Subjects
fluid mechanics ,pneumatic conveying ,multiphase flows ,dust collector ,industrial control ,energy efficiency ,Technology - Abstract
In a context of energy abundance for industrial applications, industrial systems are operated with minimal attention to the energy actually required to meet the loads imposed on them. As a result, most of them are used at maximal capacity, regardless of the varying operational conditions. First, the paper studies pneumatic conveying systems and thoroughly reviews previously published work. Then, we give an overview of simulations and operating data covering the experimental parameters and their effects on flow characteristics and transport efficiency. Finally, we summarize with a conclusion and some suggestions for further work. The primary goal of this study is to identify the parameters that influence the energy consumption of industrial dust collector systems. It differs from previously published overviews in its focus on wood-particle collection systems. The results will permit a better selection of an appropriate methodology or solution for reducing an industrial system’s power requirements and energy consumption through more precise control. The anticipated benefits concern not only power requirements and energy consumption but also reduced greenhouse gas emissions, an aspect with greater impact in regions that rely on electricity supplied by thermal power stations, especially those that use petrol or coal.
- Published
- 2022
- Full Text
- View/download PDF
446. Correlation Between Pre-Operative Diffusion Tensor Imaging Indices and Post-Operative Outcome in Degenerative Cervical Myelopathy: A Systematic Review and Meta-Analysis.
- Author
-
Mohammadi, Mohammad, Roohollahi, Faramarz, Mahmoudi, Mohamad, Mohammadi, Aynaz, Mohamadi, Mobin, Kankam, Samuel, Ghamari Khameneh, Afshar, Baghdasaryan, Davit, Farahbakhsh, Farzin, Martin, Allan, Harrop, James, and Rahimi-Movaghar, Vafa
- Subjects
clinical outcome ,degenerative cervical myelopathy ,diffusion tensor imaging ,prognosis - Abstract
STUDY DESIGN: Systematic review. OBJECTIVES: The correlation between pre-operative diffusion tensor imaging (DTI) metrics and post-operative clinical outcomes in patients with degenerative cervical myelopathy (DCM) has been widely investigated, with different studies reporting varied findings. We conducted a systematic review to determine the association between DTI metrics and clinical outcomes after surgery. METHODS: We identified relevant articles that investigated the relationship between pre-operative DTI indices and post-operative outcome in DCM patients by searching PubMed/MEDLINE, Web of Science, Scopus, and EMBASE from inception until October 2023. In addition, quantitative synthesis and meta-analyses were performed. RESULTS: FA was significantly correlated with postoperative JOA or mJOA across all age and follow-up subgroups; with the change in JOA or mJOA from the preoperative to the postoperative stage (ΔJOA or ΔmJOA) in subgroups aged 65 and above and in those with a follow-up period of 6 months or more; and with the recovery rate both in all studies pooled together and in the under-65 age bracket. Additionally, a significant correlation was demonstrated between recovery rate and ADC across all age groups. No other significant correlations were found between DTI parameters (MD, AD, and ADC) and post-operative outcomes. CONCLUSION: DTI is a quantitative, noninvasive evaluation tool that correlates with the severity of DCM. However, the current evidence remains inconclusive as to whether DTI metrics are a validated tool for predicting the degree of post-operative recovery, which could potentially be useful in patient selection for surgery.
- Published
- 2024
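A minimal Python sketch of pooling correlation coefficients across studies with a fixed-effect Fisher z-transformation, the standard device behind meta-analyses of correlations such as entry 446 above; the study values and weights are dummy numbers, and the authors' actual model (for example, random effects) may differ.

```python
import numpy as np

def pool_correlations(r, n):
    """Fixed-effect pooled correlation via Fisher's z; r are study correlations,
    n the corresponding sample sizes. Returns the pooled r with a 95% CI."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                      # Fisher z-transform of each study's r
    w = n - 3.0                            # inverse-variance weights, var(z) = 1/(n-3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
    return np.tanh(z_bar), (np.tanh(lo), np.tanh(hi))

# Dummy example: three studies correlating pre-operative FA with post-operative mJOA.
r_pooled, ci = pool_correlations(r=[0.45, 0.60, 0.52], n=[40, 25, 60])
print(f"pooled r = {r_pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```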
447. Diffusion Tensor Imaging in Diagnosing and Evaluating Degenerative Cervical Myelopathy: A Systematic Review and Meta-Analysis.
- Author
-
Mohammadi, Mohammad, Roohollahi, Faramarz, Farahbakhsh, Farzin, Mohammadi, Aynaz, Mortazavi Mamaghani, Elham, Kankam, Samuel, Moarrefdezfouli, Azin, Ghamari Khameneh, Afshar, Mahmoudi, Mohamad, Baghdasaryan, Davit, Martin, Allan, Harrop, James, and Rahimi-Movaghar, Vafa
- Subjects
degenerative cervical myelopathy ,diagnosis ,diffusion tensor imaging ,meta-analysis - Abstract
STUDY DESIGN: Systematic review. OBJECTIVE: Degenerative cervical myelopathy (DCM) is a common spinal cord disorder necessitating surgery. We aim to explore how effectively diffusion tensor imaging (DTI) can distinguish DCM patients from healthy individuals and to assess the relationship between DTI metrics and symptom severity. METHODS: We included studies with adult DCM patients who had not undergone decompressive surgery and that either performed correlation analyses between DTI parameters and severity or compared healthy controls and DCM patients. RESULTS: Fifty-seven studies were included in our meta-analysis. At the level of maximal compression (MC), fractional anisotropy (FA) exhibited lower values in DCM patients, while the apparent diffusion coefficient (ADC), mean diffusivity (MD), and radial diffusivity (RD) were notably higher in the DCM group. Moreover, our investigation into the diagnostic utility of DTI parameters disclosed high sensitivity, specificity, and area-under-the-curve values for FA (0.84, 0.80, and 0.83, respectively) and ADC (0.74, 0.84, and 0.88, respectively). Additionally, we explored the correlation between DTI parameters and myelopathy severity, revealing a significant correlation of FA at the MC level with JOA/mJOA scores (0.53, 95% CI: 0.40 to 0.65). CONCLUSION: Current guidelines for DCM suggest decompressive surgery for both mild and severe cases. However, they lack clear recommendations on which mild DCM patients might benefit from conservative treatment vs immediate surgery. ADC's role here could be pivotal, potentially differentiating between healthy individuals and DCM patients. While it may not correlate with symptom severity, it might predict surgical outcomes, making it a valuable imaging biomarker for clearer management decisions in mild DCM.
- Published
- 2024
448. Incidence and clinical outcomes of perforations during mechanical thrombectomy for medium vessel occlusion in acute ischemic stroke: A retrospective, multicenter, and multinational study.
- Author
-
Dmytriw, Adam, Musmar, Basel, Salim, Hamza, Ghozy, Sherief, Siegler, James, Kobeissi, Hassan, Shaikh, Hamza, Khalife, Jane, Abdalkader, Mohamad, Klein, Piers, Nguyen, Thanh, Heit, Jeremy, Regenhardt, Robert, Cancelliere, Nicole, Bernstock, Joshua, Naamani, Kareem, Amllay, Abdelaziz, Meyer, Lukas, Dusart, Anne, Bellante, Flavio, Forestier, Géraud, Rouchaud, Aymeric, Saleme, Suzana, Mounayer, Charbel, Fiehler, Jens, Kühn, Anna, Puri, Ajit, Dyzmann, Christian, Kan, Peter, Colasurdo, Marco, Marnat, Gaultier, Berge, Jérôme, Barreau, Xavier, Sibon, Igor, Nedelcu, Simona, Henninger, Nils, Marotta, Thomas, Stapleton, Christopher, Rabinov, James, Ota, Takahiro, Dofuku, Shogo, Yeo, Leonard, Tan, Benjamin, Gopinathan, Anil, Martinez-Gutierrez, Juan, Salazar-Marioni, Sergio, Sheth, Sunil, Renieri, Leonardo, Capirossi, Carolina, Mowla, Ashkan, Chervak, Lina, Vagal, Achala, Adeeb, Nimer, Cuellar-Saenz, Hugo, Tjoumakaris, Stavropoula, Jabbour, Pascal, Khandelwal, Priyank, Biswas, Arundhati, Clarençon, Frédéric, Elhorany, Mahmoud, Premat, Kevin, Valente, Iacopo, Pedicelli, Alessandro, Filipe, João, Varela, Ricardo, Quintero-Consuegra, Miguel, Gonzalez, Nestor, Möhlenbruch, Markus, Jesser, Jessica, Costalat, Vincent, Ter Schiphorst, Adrien, Yedavalli, Vivek, Harker, Pablo, Aziz, Yasmin, Gory, Benjamin, Stracke, Christian, Hecker, Constantin, Kadirvel, Ramanathan, Killer-Oberpfalzer, Monika, Griessenauer, Christoph, Thomas, Ajith, Hsieh, Cheng-Yang, Liebeskind, David, Alexandru Radu, Răzvan, Alexandre, Andrea, Tancredi, Illario, Faizy, Tobias, Fahed, Robert, Weyland, Charlotte, Lubicz, Boris, Patel, Aman, Pereira, Vitor, and Guenego, Adrien
- Subjects
AIS ,MT ,MeVo ,Stroke ,mechanical thrombectomy ,perforation ,Humans ,Ischemic Stroke ,Male ,Retrospective Studies ,Female ,Aged ,Middle Aged ,Incidence ,Thrombectomy ,Treatment Outcome ,Aged ,80 and over - Abstract
BACKGROUND: Mechanical thrombectomy (MT) has revolutionized the treatment of acute ischemic stroke (AIS) due to large vessel occlusion (LVO), but its efficacy and safety in medium vessel occlusion (MeVO) remain less explored. This multicenter, retrospective study aims to investigate the incidence and clinical outcomes of vessel perforations (confirmed by extravasation during an angiographic series) during MT for AIS caused by MeVO. METHODS: Data were collected from 37 academic centers across North America, Asia, and Europe between September 2017 and July 2021. A total of 1373 AIS patients with MeVO underwent MT. Baseline characteristics, procedural details, and clinical outcomes were analyzed. RESULTS: The incidence of vessel perforation was 4.8% (66/1373). Notably, our analysis indicates variations in perforation rates across different arterial segments: 8.9% in M3 segments, 4.3% in M2 segments, and 8.3% in A2 segments (p = 0.612). Patients with perforation had significantly worse outcomes, with lower rates of favorable angiographic outcomes (TICI 2c-3: 23% vs 58.9%, p
- Published
- 2024
449. Joint learning of linear time-invariant dynamical systems
- Author
-
Modi, Aditya, Faradonbeh, Mohamad Kazem Shirani, Tewari, Ambuj, and Michailidis, George
- Subjects
Applied Mathematics ,Mathematical Sciences ,Control Engineering ,Mechatronics and Robotics ,Engineering ,Multiple linear systems ,Data sharing ,Finite time identification ,Autoregressive processes ,Joint estimation ,Information and Computing Sciences ,Industrial Engineering & Automation ,Information and computing sciences ,Mathematical sciences - Published
- 2024
450. Different Trajectories of Functional Connectivity Captured with Gamma-Event Coupling and Broadband Measures of Electroencephalography in the Rat Fluid Percussion Injury Model
- Author
-
Fox, Rachel, Santana-Gomez, Cesar, Shamas, Mohamad, Pavade, Aarja, Staba, Richard, and Harris, Neil G
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Neurodegenerative ,Brain Disorders ,Physical Injury - Accidents and Adverse Effects ,2.1 Biological and endogenous factors ,Neurological ,electroencephalography ,network ,resting state ,sleep ,traumatic brain injury - Published
- 2024