1,622 results for "Hybrid Models"
Search Results
2. Modeling of Digital Twins
- Author
-
Gupta, Sunil, Iyer, Ravi S., and Kumar, Sanjeev
- Published
- 2025
- Full Text
- View/download PDF
3. Statistical Methods in Forecasting Water Consumption: A Review of Previous Literature
- Author
-
Mukhlif, Anmar Jabbar, Mustafa, Ayad S., Al-Somaydaii, Jumaa A., Karkush, Mahdi, editor, Choudhury, Deepankar, editor, and Fattah, Mohammed, editor
- Published
- 2025
- Full Text
- View/download PDF
4. A novel interpretable hybrid model for multi-step ahead dissolved oxygen forecasting in the Mississippi River basin.
- Author
-
Ali, Hayder Mohammed, Mohammadi Ghaleni, Mehdi, Moghaddasi, Mahnoosh, and Moradi, Mansour
- Subjects
- KRIGING, WATER quality monitoring, WATERSHEDS, WATER temperature, ECOSYSTEM health, POTASSIUM
- Abstract
Accurate forecasting of dissolved oxygen (DO) levels is vital for river ecosystem health. A novel methodology, MVMD-TSA-GPR, combines Multivariate Variational Mode Decomposition (MVMD), the Tunicate Swarm Algorithm (TSA), and Gaussian Process Regression (GPR) to improve DO level predictions. This study also incorporated a Generalized Additive Model and Regression Bagged Ensemble (RBE) for 1- and 3-month forecasts using monthly data (1974–2023) from 16 water quality parameters across five Mississippi River basin sites. Key predictors identified through cross-correlation include lagged values and parameters like water temperature, discharge, pH, total phosphorus, potassium, and sulfate, which significantly influence DO levels. The MVMD-TSA-GPR model outperformed others, especially at site 5, showing substantial improvements in accuracy with decreased RMSE values across various scenarios. Model ranking via the Taylor Diagram indicated MVMD-TSA-GPR had the highest performance, followed by MVMD-TSA-RBE and others. Notably, the GPR model's RMSE at site 3 decreased from 2.11 to 1.01 (a 52% reduction) for the 1-month forecast, while at site 4 for the 3-month forecast, it dropped from 1.85 to 1.04 (a 44% reduction). The results revealed that the MVMD-TSA-GPR model demonstrated the highest performance for DO (t + 1), achieving R = 0.90, PBIAS = 0.73%, and WI = 0.804, as well as for DO (t + 3), with R = 0.88, PBIAS = 0.54%, and WI = 0.779, at the Lower Mississippi site during the test phase. Additionally, the MVMD-TSA-RBE model excelled for DO (t + 1) in the Missouri River at the Hermann site, achieving R = 0.91, PBIAS = 0.86%, and WI = 0.805 during the test phase. These results underscore the effectiveness of the MVMD-TSA hybrid approach. Interpretative analysis using SHapley Additive exPlanations (SHAP) revealed water temperature, pH, and potassium as key factors affecting DO levels. The speed and accuracy of MVMD-TSA-GPR make it a promising tool for monitoring river water quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
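Editor's note: a minimal sketch of the Gaussian Process Regression stage described in entry 4, using scikit-learn. The MVMD decomposition and TSA hyperparameter tuning from the paper are not reproduced; the lagged predictors and synthetic data below are illustrative assumptions only.
```python
# Hedged sketch: GPR forecasting of dissolved oxygen from lagged values,
# standing in for the full MVMD-TSA-GPR pipeline (MVMD and TSA omitted).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 240                                   # ~20 years of monthly records (synthetic)
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 1, n)
do = 14 - 0.3 * temp + rng.normal(0, 0.3, n)    # DO falls as temperature rises

# Lagged features: DO(t-1), DO(t-2), water temperature(t-1)
X = np.column_stack([do[1:-1], do[:-2], temp[1:-1]])
y = do[2:]                                # 1-step-ahead target, DO(t+1)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X[:-24], y[:-24])                 # hold out the last 24 months

pred, std = gpr.predict(X[-24:], return_std=True)   # mean and uncertainty band
print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[-24:]) ** 2))), 3))
```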
5. AMBER: A Modular Model for Tumor Growth, Vasculature and Radiation Response.
- Author
-
Kunz, Louis V., Bosque, Jesús J., Nikmaneshi, Mohammad, Chamseddine, Ibrahim, Munn, Lance L., Schuemann, Jan, Paganetti, Harald, and Bertolet, Alejandro
- Abstract
Computational models of tumor growth are valuable for simulating the dynamics of cancer progression and treatment responses. In particular, agent-based models (ABMs) tracking individual agents and their interactions are useful for their flexibility and ability to model complex behaviors. However, ABMs have often been confined to small domains or, when scaled up, have neglected crucial aspects like vasculature. Additionally, the integration into tumor ABMs of precise radiation dose calculations using gold-standard Monte Carlo (MC) methods, crucial in contemporary radiotherapy, has been lacking. Here, we introduce AMBER, an Agent-based fraMework for radioBiological Effects in Radiotherapy that computationally models tumor growth and radiation responses. AMBER is based on a voxelized geometry, enabling realistic simulations at relevant pre-clinical scales by tracking temporally discrete states stepwise. Its hybrid approach, combining traditional ABM techniques with continuous spatiotemporal fields of key microenvironmental factors such as oxygen and vascular endothelial growth factor, facilitates the generation of realistic tortuous vascular trees. Moreover, AMBER is integrated with TOPAS, an MC-based particle transport algorithm that simulates heterogeneous radiation doses. The impact of radiation on tumor dynamics considers the microenvironmental factors that alter radiosensitivity, such as oxygen availability, providing a full coupling between the biological and physical aspects. Our results show that simulations with AMBER yield accurate tumor evolution and radiation treatment outcomes, consistent with established volumetric growth laws and radiobiological understanding. Thus, AMBER emerges as a promising tool for replicating essential features of tumor growth and radiation response, offering a modular design for future expansions to incorporate specific biological traits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. TTG-Text: A Graph-Based Text Representation Framework Enhanced by Typical Testors for Improved Classification.
- Author
-
Sánchez-Antonio, Carlos, Valdez-Rodríguez, José E., and Calvo, Hiram
- Subjects
- NATURAL language processing, FEATURE selection, ALGORITHMS, CLASSIFICATION, SYMBOLIC computation
- Abstract
Recent advancements in graph-based text representation, particularly with embedding models and transformers such as BERT, have shown significant potential for enhancing natural language processing (NLP) tasks. However, challenges related to data sparsity and limited interpretability remain, especially when working with small or imbalanced datasets. This paper introduces TTG-Text, a novel framework that strengthens graph-based text representation by integrating typical testors—a symbolic feature selection technique that refines feature importance while reducing dimensionality. Unlike traditional TF-IDF weighting, TTG-Text leverages typical testors to enhance feature relevance within text graphs, resulting in improved model interpretability and performance, particularly for smaller datasets. Our evaluation on a text classification task using a graph convolutional network (GCN) demonstrates that TTG-Text achieves a 95% accuracy rate, surpassing conventional methods and BERT with fewer required training epochs. By combining symbolic algorithms with graph-based models, this hybrid approach offers a more interpretable, efficient, and high-performing solution for complex NLP tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Selecting a Time-Series Model to Predict Drinking Water Extraction in a Semi-Arid Region in Chihuahua, Mexico.
- Author
-
Legarreta-González, Martín Alfredo, Meza-Herrera, César A., Rodríguez-Martínez, Rafael, Loya-González, Darithsa, Chávez-Tiznado, Carlos Servando, Contreras-Villarreal, Viridiana, and Véliz-Deras, Francisco Gerardo
- Abstract
As the effects of global climate change intensify, it is increasingly important to implement more effective water management practices, particularly in arid and semi-arid regions such as Meoqui, Chihuahua, situated in the arid northern center of Mexico. The objective of this study was to identify the optimal time-series model for analyzing the pattern of water extraction volumes and predicting a one-year forecast. It was hypothesized that the volume of water extracted over time could be explained by a statistical time-series model, with the objective of predicting future trends. To assess the pattern of groundwater extraction, three time-series models were evaluated: the seasonal autoregressive integrated moving average (SARIMA), Prophet, and Prophet with extreme gradient boosting (XGBoost). The mean extraction volume for the entire period was 50,935 ± 47,540 m³, with a total of 67,233,578 m³ extracted from all wells. The greatest volume of water extracted has historically been from urban wells, with an average extraction of 55,720 ± 48,865 m³ and a total of 63,520,284 m³. The mean extraction volume for raw water wells was determined to be 20,629 ± 19,767 m³, with a total extraction volume of 3,713,294 m³. The SARIMA(1,1,1)(1,0,0)₁₂ model was identified as the optimal time-series model for general extraction, while a "white noise" ARIMA(0,1,0) model was identified as optimal for raw water and a SARIMA(2,1,1)(2,0,0)₁₂ model for urban wells. These findings serve to reinforce the efficacy of the SARIMA model in forecasting and provide a basis for water resource managers in the region to develop policies that promote sustainable water management. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
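Editor's note: for orientation, the SARIMA(1,1,1)(1,0,0)₁₂ specification named in entry 7 can be fitted in a few lines with statsmodels. The monthly extraction series below is synthetic; only the model orders come from the abstract.
```python
# Sketch: fit the abstract's SARIMA(1,1,1)(1,0,0)_12 and forecast one year ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
idx = pd.date_range("2005-01-01", periods=216, freq="MS")   # 18 years, monthly
seasonal = 5_000 * np.sin(2 * np.pi * idx.month / 12)
y = pd.Series(50_000 + seasonal + rng.normal(0, 2_000, len(idx)),
              index=idx, name="extraction_m3")              # synthetic volumes

res = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 0, 0, 12)).fit(disp=False)

forecast = res.get_forecast(steps=12)       # the paper's one-year horizon
print(forecast.predicted_mean.round(0))
print(forecast.conf_int().head())           # 95% intervals by default
```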
8. An improved hybrid model for shoreline change.
- Author
-
Lakku, Naresh Kumar Goud, Chowdhury, Piyali, and Behera, Manasa Ranjan
- Subjects
COASTAL zone management, SEDIMENT transport, GEOMORPHOLOGY, COASTS, BEACHES, LITTORAL drift, SHORELINES
- Abstract
Predicting the nearshore sediment transport and shifts in coastlines in view of climate change is important for planning and management of coastal infrastructure and requires an accurate prediction of the regional wave climate as well as an in-depth understanding of the complex morphology surrounding the area of interest. Recently, hybrid shoreline evolution models are being used to inform coastal management. These models typically apply the one-line theory to estimate changes in shoreline morphology based on littoral drift gradients calculated from a 2DH coupled wave, flow, and sediment transport model. As per the one-line theory, the calculated littoral drift is uniformly distributed over the active coastal profile. A key challenge facing the application of hybrid models is that they fail to consider complex morphologies when updating the shorelines for several scenarios. This is mainly due to the scarcity of field datasets on beach behavior and nearshore morphological change that extends up to the local depth of closure, leading to assumptions in this value in overall shoreline shift predictions. In this study, we propose an improved hybrid model for shoreline shift predictions in an open sandy beach system impacted by human interventions and changes in wave climate. Three main conclusions are derived from this study. First, the optimal boundary conditions for modeling shoreline evolution need to vary according to local coastal geomorphology and processes. Second, specifying boundary conditions within physically realistic ranges does not guarantee reliable shoreline evolution predictions. Third, hybrid 2D/one-line models have limited applicability in simple planform morphologies where the active beach profile is subject to direct impacts due to wave action and/or human interventions, plausibly due to the one-line theory assumption of a constant time-averaged coastal profile. These findings provide insightful information into the drivers of shoreline evolution around sandy beaches, which have practical implications for advancing the shoreline evolution models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
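Editor's note: the one-line theory invoked in entry 8 is conventionally written as a sediment-conservation equation. The standard textbook form below is given for orientation and is not quoted from the paper.
```latex
% One-line shoreline-change equation (standard form; notation varies by author):
\frac{\partial y}{\partial t} \;=\; -\,\frac{1}{D_B + D_C}\,\frac{\partial Q}{\partial x}
% y   : cross-shore shoreline position
% x   : alongshore coordinate
% Q   : longshore (littoral) sediment transport rate
% D_B : berm height;  D_C : depth of closure bounding the active profile,
%       the quantity whose uncertainty the abstract highlights
```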
9. Hybrid Modeling Techniques for Municipal Solid Waste Forecasting: An Application to OECD Countries.
- Author
-
Chellai, Fatih
- Subjects
- WASTE minimization, WASTE management, WASTE recycling, SOLID waste, STATISTICAL smoothing
- Abstract
Accurate forecasting of municipal solid waste (MSW) generation is critical for effective waste management, given the rising volumes of waste posing environmental and public health challenges. This study investigates the efficacy of hybrid forecasting models in predicting MSW generation trends across Organization for Economic Cooperation and Development (OECD) countries. The empirical analysis utilizes five distinct approaches – ARIMA, Theta model, neural networks, exponential smoothing state space (ETS), and TBATS models. MSW data spanning 1995–2021 for 29 OECD nations are analyzed using the hybrid models and benchmarked against individual ARIMA models. The results demonstrate superior predictive accuracy for the hybrid models across multiple error metrics, capturing complex data patterns and relationships missed by individual models. The forecasts project continued MSW generation growth in most countries but reveal nuanced country-level differences as well. The implications for waste management policies include implementing waste reduction and recycling programs, investing in infrastructure and technology, enhancing public education, implementing pricing incentives, rigorous monitoring and evaluation of practices, and multi-stakeholder collaboration. However, uncertainties related to model selection and data limitations warrant acknowledgment. Overall, this study affirms the value of hybrid forecasting models in providing robust insights to inform evidence-based waste management strategies and transition toward sustainability in the OECD region. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
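Editor's note: the simplest version of the hybrid idea in entry 9 is a forecast combination. The sketch below averages ARIMA and exponential-smoothing forecasts with statsmodels; the equal weights, the Holt-type smoother standing in for ETS, and the synthetic annual MSW series are all assumptions, since the paper's combination scheme is not given here.
```python
# Toy forecast combination: average ARIMA and exponential-smoothing forecasts.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(2)
idx = pd.date_range("1995", periods=27, freq="YS")          # 1995-2021, annual
msw = pd.Series(500 + 8 * np.arange(27) + rng.normal(0, 10, 27),
                index=idx, name="kg_per_capita")            # synthetic MSW series

arima_fc = ARIMA(msw, order=(1, 1, 1)).fit().forecast(steps=5)
ets_fc = ExponentialSmoothing(msw, trend="add").fit().forecast(steps=5)

hybrid_fc = (arima_fc + ets_fc) / 2        # equal-weight combination (assumption)
print(pd.DataFrame({"ARIMA": arima_fc, "ETS": ets_fc, "hybrid": hybrid_fc}).round(1))
```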
10. Enhancing stress detection in wearable IoT devices using federated learning and LSTM based hybrid model.
- Author
-
Mouhni, Naoual, Amalou, Ibtissam, Chakri, Sana, Tourad, Mohamedou Cheikh, Chakraoui, Mohamed, and Abdali, Abdelmounaim
- Subjects
CONVOLUTIONAL neural networks, FEDERATED learning, BLENDED learning, RANDOM forest algorithms, DEEP learning
- Abstract
In the domain of smart health devices, the accurate detection of physical indicator levels plays a crucial role in enhancing safety and well-being. This paper introduces a cross-device federated learning framework using a hybrid deep learning model. Specifically, the paper presents a comprehensive comparison of different combinations of long short-term memory (LSTM), gated recurrent unit (GRU), convolutional neural network (CNN), random forest (RF), and extreme gradient boosting (XGBoost) models, in order to forecast stress levels by utilizing time series information derived from wearable smart gadgets. The LSTM-RF model demonstrated the highest level of accuracy, achieving 93.53% for user 1, 99.40% for user 2, and 97.88% for user 3. Similarly, the LSTM-XGBoost model yielded favorable outcomes, with accuracy rates of 85.88%, 98.55%, and 92.02% for users 1, 2, and 3, respectively, out of 23 users studied. These findings highlight the efficacy of federated learning and the utilization of hybrid models in stress detection. Unlike traditional centralized learning paradigms, the presented federated approach ensures privacy preservation and reduces data transmission requirements by processing data locally on edge devices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
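Editor's note: the LSTM-RF pattern in entry 10 can be sketched as an LSTM window encoder feeding a random forest classifier. The sketch below (Keras + scikit-learn) omits the paper's federated, on-device training loop; the shapes, label counts, and data are invented for illustration.
```python
# Sketch: LSTM encodes each sensor window; a random forest classifies stress.
import numpy as np
from tensorflow import keras
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 60, 4)).astype("float32")   # 600 windows, 60 steps, 4 sensors
y = rng.integers(0, 3, 600)                           # 3 stress levels (synthetic)

inputs = keras.Input(shape=(60, 4))
embedding = keras.layers.LSTM(32)(inputs)             # 32-dim window embedding
outputs = keras.layers.Dense(3, activation="softmax")(embedding)
clf = keras.Model(inputs, outputs)
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
clf.fit(X[:500], y[:500], epochs=3, verbose=0)        # train the encoder end to end

# Reuse the LSTM output as features for the random forest stage
encoder = keras.Model(inputs, embedding)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(encoder.predict(X[:500], verbose=0), y[:500])
print("holdout accuracy:", rf.score(encoder.predict(X[500:], verbose=0), y[500:]))
```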
11. Forecasting Multi-Step Soil Moisture with Three-Phase Hybrid Wavelet-Least Absolute Shrinkage Selection Operator-Long Short-Term Memory Network (moDWT-Lasso-LSTM) Model.
- Author
-
Jayasinghe, W. J. M. Lakmini Prarthana, Deo, Ravinesh C., Raj, Nawin, Ghimire, Sujan, Yaseen, Zaher Mundher, Nguyen-Huy, Thong, and Ghahramani, Afshin
- Subjects
MACHINE learning, ARTIFICIAL intelligence, DISCRETE wavelet transforms, FEATURE selection, INDEPENDENT variables, DEEP learning
- Abstract
To develop agricultural risk management strategies, the early identification of water deficits during the growing cycle is critical. This research proposes a deep learning hybrid approach for multi-step soil moisture forecasting in the Bundaberg region in Queensland, Australia, with predictions made for 1-day, 14-day, and 30-day intervals. The model integrates Geospatial Interactive Online Visualization and Analysis Infrastructure (Giovanni) satellite data with ground observations. Due to the periodicity, transience, and trends in soil moisture of the top layer, the time series datasets were complex. Hence, the Maximum Overlap Discrete Wavelet Transform (moDWT) method was adopted for data decomposition to identify the best correlated wavelet and scaling coefficients of the predictor variables with the target top layer moisture. The proposed 3-phase hybrid moDWT-Lasso-LSTM model used the Least Absolute Shrinkage and Selection Operator (Lasso) method for feature selection. Optimal hyperparameters were identified using the Hyperopt algorithm with the deep learning LSTM method. The proposed model's performance was compared with benchmarked machine learning (ML) models. In total, nine models were developed, including three standalone models (e.g., LSTM), three integrated feature selection models (e.g., Lasso-LSTM), and three hybrid models incorporating wavelet decomposition and feature selection (e.g., moDWT-Lasso-LSTM). Compared to alternative models, the hybrid deep moDWT-Lasso-LSTM produced the superior predictive model across statistical performance metrics. For example, at the 1-day forecast, the moDWT-Lasso-LSTM model exhibits the highest accuracy with the highest R² ≈ 0.92469 and the lowest RMSE ≈ 0.97808, MAE ≈ 0.76623, and SMAPE ≈ 4.39700%, outperforming other models. The moDWT-Lasso-DNN model follows closely, while the Lasso-ANN and Lasso-DNN models show lower accuracy with higher RMSE and MAE values. The ANN and DNN models have the lowest performance, with higher error metrics and lower R² values compared to the deep learning models incorporating moDWT and Lasso techniques. This research emphasizes the utility of the advanced complementary ML model, such as the developed moDWT-Lasso-LSTM 3-phase hybrid model, as a robust data-driven tool for early forecasting of soil moisture. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
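Editor's note: the first two phases of the moDWT-Lasso-LSTM pipeline in entry 11 can be sketched as wavelet decomposition followed by Lasso selection. PyWavelets does not expose MODWT directly, so the closely related stationary wavelet transform (pywt.swt) stands in for it below; that substitution, and the synthetic data, are assumptions.
```python
# Sketch of phases 1-2: wavelet-decompose predictors, let Lasso pick coefficients.
import numpy as np
import pywt
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n = 1024                                  # length must suit a 2-level transform
predictors = {f"p{i}": rng.normal(size=n).cumsum() for i in range(3)}
target = 0.5 * predictors["p0"] + rng.normal(0, 1, n)    # soil-moisture proxy

# Phase 1: 2-level stationary wavelet transform of each predictor
features, names = [], []
for name, series in predictors.items():
    for j, (cA, cD) in enumerate(pywt.swt(series, "db4", level=2)):
        features += [cA, cD]
        names += [f"{name}_A{j}", f"{name}_D{j}"]
X = np.column_stack(features)

# Phase 2: Lasso with built-in CV zeroes out uninformative wavelet features
lasso = LassoCV(cv=5, max_iter=5000).fit(X, target)
print("selected:", [m for m, c in zip(names, lasso.coef_) if abs(c) > 1e-6])
# Phase 3 (not shown): feed the selected series to an LSTM forecaster.
```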
12. Hybrid modeling approach for precise estimation of energy production and consumption based on temperature variations.
- Author
-
Mbasso, Wulfran Fendzi, Molu, Reagan Jean Jacques, Harrison, Ambe, Pushkarna, Mukesh, Kemdoum, Fritz Nguemo, Donfack, Emmanuel Fendzi, Jangir, Pradeep, Tiako, Pierre, and Tuka, Milkias Berhanu
- Subjects
- INDEPENDENT variables, CONSUMPTION (Economics), CLIMATE change, LOW temperatures, HIGH temperatures
- Abstract
This study introduces an advanced mathematical methodology for predicting energy generation and consumption based on temperature variations in regions with diverse climatic conditions and increasing energy demands. Using a comprehensive dataset of monthly energy production, consumption, and temperature readings spanning ten years (2010–2020), we applied polynomial, sinusoidal, and hybrid modeling techniques to capture the non-linear and cyclical relationships between temperature and energy metrics. The hybrid model, which combines sinusoidal and polynomial functions, achieved an accuracy of 79.15% in estimating energy consumption using temperature as a predictor variable. This model effectively captures the seasonal and non-linear consumption patterns, demonstrating a significant improvement over conventional models. In contrast, the polynomial model for energy production, while yielding partial accuracy (R² = 0.65), highlights the need for more advanced techniques to fully capture the temperature-dependent nature of energy production. The results indicate that temperature variations significantly affect energy consumption, with higher temperatures driving increased energy demand for cooling, while lower temperatures affect production efficiency, particularly in systems like hydropower. These findings underscore the necessity for integrating sophisticated models into energy planning to ensure resilience in energy systems amidst climate variability. The study offers critical insights for policymakers to optimize energy generation and distribution in response to changing climatic conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
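Editor's note: the hybrid functional form described in entry 12 (a polynomial in temperature plus a sinusoidal annual term) can be fitted directly with scipy's curve_fit. The exact specification and the monthly data below are illustrative assumptions, not the paper's.
```python
# Sketch: least-squares fit of a polynomial-plus-sinusoid hybrid model.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
months = np.arange(120)                                    # 10 years, monthly
temp = 20 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 120)
consumption = (100 + 1.5 * temp + 0.08 * temp**2
               + 12 * np.sin(2 * np.pi * months / 12 + 0.6)
               + rng.normal(0, 4, 120))                    # synthetic demand

def hybrid(X, a, b, c, d, phi):
    t, m = X                                # temperature and month index
    return a + b * t + c * t**2 + d * np.sin(2 * np.pi * m / 12 + phi)

params, _ = curve_fit(hybrid, (temp, months), consumption, p0=[100, 1, 0, 10, 0])
pred = hybrid((temp, months), *params)
ss_res = np.sum((consumption - pred) ** 2)
ss_tot = np.sum((consumption - consumption.mean()) ** 2)
print("R^2 =", round(1 - ss_res / ss_tot, 3))
```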
13. Early Cervical Cancer Diagnosis with SWIN-Transformer and Convolutional Neural Networks.
- Author
-
Mohammed, Foziya Ahmed, Tune, Kula Kekeba, Mohammed, Juhar Ahmed, Wassu, Tizazu Alemu, and Muhie, Seid
- Subjects
- CONVOLUTIONAL neural networks, TRANSFORMER models, DATA augmentation, IMAGE recognition (Computer vision), EARLY detection of cancer
- Abstract
Introduction: Early diagnosis of cervical cancer at the precancerous stage is critical for effective treatment and improved patient outcomes. Objective: This study aims to explore the use of SWIN Transformer and Convolutional Neural Network (CNN) hybrid models combined with transfer learning to classify precancerous colposcopy images. Methods: Out of 913 images from 200 cases obtained from the Colposcopy Image Bank of the International Agency for Research on Cancer, 898 met quality standards and were classified as normal, precancerous, or cancerous based on colposcopy and histopathological findings. The cases corresponding to the 360 precancerous images, along with an equal number of normal cases, were divided into a 70/30 train–test split. The SWIN Transformer and CNN hybrid model combines the advantages of local feature extraction by CNNs with the global context modeling by SWIN Transformers, resulting in superior classification performance and a more automated process. The hybrid model approach involves enhancing image quality through preprocessing, extracting local features with CNNs, capturing the global context with the SWIN Transformer, integrating these features for classification, and refining the training process by tuning hyperparameters. Results: The trained model achieved the following classification performances on fivefold cross-validation data: a 94% Area Under the Curve (AUC), an 88% F1 score, and 87% accuracy. On two completely independent test sets, which were never seen by the model during training, the model achieved an 80% AUC, a 75% F1 score, and 75% accuracy on the first test set (precancerous vs. normal) and an 82% AUC, a 78% F1 score, and 75% accuracy on the second test set (cancer vs. normal). Conclusions: These high-performance metrics demonstrate the models' effectiveness in distinguishing precancerous from normal colposcopy images, even with modest datasets, limited data augmentation, and the smaller effect size of precancerous images compared to malignant lesions. The findings suggest that these techniques can significantly aid in the early detection of cervical cancer at the precancerous stage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Data‐Driven Approach Using a Hybrid Model for Predicting Oxygen Consumption in Argon Oxygen Decarburization Converter.
- Author
-
Mingming, Li, Xihong, Chen, Dongxu, Liu, Lei, Shao, Wentao, Zhou, and Zongshu, Zou
- Abstract
Accurately controlling oxygen supply in the argon oxygen decarburization (AOD) process is invariably desired for efficient decarburization and reduced alloying element consumption. Herein, a data-driven approach using a hybrid model integrating an oxygen balance mechanism model and a two-layer Stacking ensemble learning model is successfully established for predicting oxygen consumption in an AOD converter. In this hybrid model, the oxygen balance mechanism model is used to calculate the oxygen consumption based on industrial data. Then the model calculation error is compensated using an optimized two-layer Stacking model that is identified as a (random forest (RF) + XGBoost + ridge regression)-RF model by evaluating different hybrid model frameworks and Bayesian optimization. The results show that, in comparison to a conventional prediction model based on the oxygen balance mechanism, the present hybrid model greatly improves the control accuracy of oxygen consumption in AOD industrial production. The hit rate and mean absolute error of the present hybrid model for predicting oxygen consumption are 84.8% and 330 Nm³, respectively, within an absolute oxygen consumption prediction error of ±600 Nm³ (relative error of 3.8%). This data-driven approach using the present hybrid model provides one pathway to efficient oxygen consumption control in the AOD process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
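Editor's note: the two-layer Stacking ensemble identified in entry 14, (RF + XGBoost + ridge) with an RF meta-learner, maps onto scikit-learn's StackingRegressor almost directly. In the sketch below, GradientBoostingRegressor stands in for XGBoost to avoid an extra dependency, and the data are synthetic stand-ins for the oxygen-balance residuals being corrected.
```python
# Sketch: (RF + gradient boosting + ridge) base layer with an RF meta-learner.
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(800, 10))                  # per-heat process features (synthetic)
y = X @ rng.normal(size=10) + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.3, 800)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("ridge", Ridge(alpha=1.0)),
    ],
    final_estimator=RandomForestRegressor(n_estimators=200, random_state=0),
    cv=5,                                       # out-of-fold predictions feed layer 2
)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
stack.fit(Xtr, ytr)
print("holdout R^2:", round(stack.score(Xte, yte), 3))
```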
15. Enhancing Heart Disease Prediction Accuracy by Comparing Classification Models Employing Varied Feature Selection Techniques.
- Author
-
Balliu, Lorena, Zanaj, Blerina, Basha, Gledis, Zanaj, Elma, and Meçe, Elinda Kajo
- Abstract
ML (Machine Learning) is frequently used in health systems to alert physicians in real time. This helps to take preventive measures, such as predicting a future heart attack. This study presents ML combined with various forms of feature selection to identify heart disease. It includes the analysis of different algorithms such as Decision Tree, Logistic Regression, Support Vector Machine, Random Forest, and hybrid models. The results show SVM and RF performing better after applying feature selection for individual ML models. Meanwhile, hybrid cases provide good results if the ensemble is done using a Voting Classifier. Our approach in this paper is based on our study of existing literature and methodologies. We can conclude that, for the dataset used, the Voting Classifier appears to be the most accurate and precise model out of all individual and hybrid classifiers that use feature selection techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
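Editor's note: the winning configuration in entry 15, a voting ensemble over the four named classifiers after feature selection, can be sketched with scikit-learn as below. The stand-in dataset and the k=8 selection are placeholders, not the study's settings.
```python
# Sketch: soft-voting ensemble over the four named classifiers, after selection.
from sklearn.datasets import load_breast_cancer          # stand-in tabular data
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

voter = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("lr", LogisticRegression(max_iter=5000)),
        ("svm", SVC(probability=True, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",                           # average predicted class probabilities
)
pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), voter)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```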
16. Hybrid model approach in data mining.
- Author
-
Bakirarar, Batuhan, Cosgun, Erdal, and Elhan, Atilla Halil
- Subjects
- SUPERVISED learning, MACHINE learning, DATA mining, DATABASES, INDEPENDENT variables
- Abstract
Studies on hybrid data mining approaches have been increasing in recent years. Hybrid data mining is defined as an effective combination of various data mining techniques to use the power of each technique and compensate for each other's weaknesses. The purpose of this study is to present state-of-the-art data mining algorithms and applications and to propose a new hybrid data mining approach for classifying medical data. In addition, the study aimed to calculate performance metrics of data mining methods and to compare these metrics with the metrics obtained from the hybrid model. The study utilized simulated datasets produced on the basis of various scenarios and a hepatitis dataset obtained from the UCI database. Supervised learning algorithms were used. In addition, hybrid models were created by combining these algorithms. In the simulated datasets, it was observed that MCC values increased with a higher sample size and higher correlation between the independent variables. In addition, as the correlation between independent variables increased in imbalanced datasets, a noticeable increase was observed in the performance metrics of the group with the lower sample size. A similar case was observed with the actual datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Hybrid Long Short-Term Memory Wavelet Transform Models for Short-Term Electricity Load Forecasting.
- Author
-
Guenoukpati, Agbassou, Agbessi, Akuété Pierre, Salami, Adekunlé Akim, and Bakpo, Yawo Amen
- Subjects
- STANDARD deviations, ARTIFICIAL neural networks, ELECTRIC networks, ELECTRICAL load, LOAD forecasting (Electric power systems), ELECTRICAL energy
- Abstract
To ensure the constant availability of electrical energy, power companies must consistently maintain a balance between supply and demand. However, electrical load is influenced by a variety of factors, necessitating the development of robust forecasting models. This study seeks to enhance electricity load forecasting by proposing a hybrid model that combines Sorted Coefficient Wavelet Decomposition with Long Short-Term Memory (LSTM) networks. This approach offers significant advantages in reducing algorithmic complexity and effectively processing patterns within the same class of data. Various models, including Stacked LSTM, Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM), and Convolutional Long Short-Term Memory (ConvLSTM), were compared and optimized using grid search with cross-validation on consumption data from Lomé, a city in Togo. The results indicate that the ConvLSTM model outperforms its counterparts based on Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), and correlation coefficient (R²) metrics. The ConvLSTM model was further refined using wavelet decomposition with coefficient sorting, resulting in the WT+ConvLSTM model. This proposed approach significantly narrows the gap between actual and predicted loads, reducing discrepancies from 10–50 MW to 0.5–3 MW. In comparison, the WT+ConvLSTM model surpasses Autoregressive Integrated Moving Average (ARIMA) models and Multilayer Perceptron (MLP) artificial neural networks, achieving a MAPE of 0.485%, an RMSE of 0.61 MW, and an R² of 0.99. This approach demonstrates substantial robustness in electricity load forecasting, aiding stakeholders in the energy sector to make more informed decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Compressive Strength Prediction of Fly Ash-Based Concrete Using Single and Hybrid Machine Learning Models.
- Author
-
Li, Haiyu, Chung, Heungjin, Li, Zhenting, and Li, Weiping
- Subjects
ARTIFICIAL neural networks, MACHINE learning, CONVOLUTIONAL neural networks, TRANSFORMER models, ARTIFICIAL intelligence
- Abstract
The compressive strength of concrete is a crucial parameter in structural design, yet its determination in a laboratory setting is both time-consuming and expensive. The prediction of compressive strength in fly ash-based concrete can be accelerated through the use of machine learning algorithms with artificial intelligence, which can effectively address the problems associated with this process. This paper presents the most innovative model algorithms established based on artificial intelligence technology. These include three single models—a fully connected neural network model (FCNN), a convolutional neural network model (CNN), and a transformer model (TF)—and three hybrid models—FCNN + CNN, TF + FCNN, and TF + CNN. A total of 471 datasets were employed in the experiments, comprising 7 input features: cement (C), fly ash (FA), water (W), superplasticizer (SP), coarse aggregate (CA), fine aggregate (S), and age (D). Six models were subsequently applied to predict the compressive strength (CS) of fly ash-based concrete. Furthermore, the loss function curves, assessment indexes, linear correlation coefficient, and the related literature indexes of each model were employed for comparison. This analysis revealed that the FCNN + CNN model exhibited the highest prediction accuracy, with the following metrics: R² = 0.95, MSE = 14.18, MAE = 2.32, SMAPE = 0.1, and R = 0.973. Additionally, SHAP was utilized to elucidate the significance of the model parameter features. The findings revealed that C and D exerted the most substantial influence on the model prediction outcomes, followed by W and FA. Nevertheless, CA, S, and SP demonstrated comparatively minimal influence. Finally, a GUI interface for predicting compressive strength was developed based on six models and nonlinear functional relationships, and a criterion for minimum strength was derived by comparison and used to optimize a reasonable mixing ratio, thus achieving a fast data-driven interaction that was concise and reliable. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Hybrid RNNs and USE for enhanced sequential sentence classification in biomedical paper abstracts.
- Author
-
Ndama, Oussama, Bensassi, Ismail, and En-Naimi, El Mokhtar
- Subjects
RECURRENT neural networks, DATA mining, LEXICAL access, MEDICAL research, ALGORITHMS
- Abstract
This research evaluates a number of hybrid recurrent neural network (RNN) architectures for classifying sequential sentences in biomedical abstracts. The architectures include long short-term memory (LSTM), bidirectional LSTM (BI-LSTM), gated recurrent unit (GRU), and bidirectional GRU (BIGRU) models, all of which are combined with the universal sentence encoder (USE). The investigation assesses their efficacy in categorizing sentences into predefined classes: background, objective, method, result, and conclusion. Each RNN variant is used with the pre-trained USE as word embeddings to find complex sequential relationships in biomedical text. Results demonstrate the adaptability and effectiveness of these hybrid architectures in discerning diverse sentence functions. This research addresses the need for improved literature comprehension in biomedicine by employing automated sentence classification techniques, highlighting the significance of advanced hybrid algorithms in enhancing text classification methodologies within biomedical research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Hybrid modeling approach for precise estimation of energy production and consumption based on temperature variations
- Author
-
Wulfran Fendzi Mbasso, Reagan Jean Jacques Molu, Ambe Harrison, Mukesh Pushkarna, Fritz Nguemo Kemdoum, Emmanuel Fendzi Donfack, Pradeep Jangir, Pierre Tiako, and Milkias Berhanu Tuka
- Subjects
Energy modeling, Temperature impact, Hybrid models, Polynomial regression, Sinusoidal functions, Energy consumption, Medicine, Science
- Abstract
This study introduces an advanced mathematical methodology for predicting energy generation and consumption based on temperature variations in regions with diverse climatic conditions and increasing energy demands. Using a comprehensive dataset of monthly energy production, consumption, and temperature readings spanning ten years (2010–2020), we applied polynomial, sinusoidal, and hybrid modeling techniques to capture the non-linear and cyclical relationships between temperature and energy metrics. The hybrid model, which combines sinusoidal and polynomial functions, achieved an accuracy of 79.15% in estimating energy consumption using temperature as a predictor variable. This model effectively captures the seasonal and non-linear consumption patterns, demonstrating a significant improvement over conventional models. In contrast, the polynomial model for energy production, while yielding partial accuracy (R² = 0.65), highlights the need for more advanced techniques to fully capture the temperature-dependent nature of energy production. The results indicate that temperature variations significantly affect energy consumption, with higher temperatures driving increased energy demand for cooling, while lower temperatures affect production efficiency, particularly in systems like hydropower. These findings underscore the necessity for integrating sophisticated models into energy planning to ensure resilience in energy systems amidst climate variability. The study offers critical insights for policymakers to optimize energy generation and distribution in response to changing climatic conditions.
- Published
- 2024
- Full Text
- View/download PDF
21. A General Framework for Generating Three-Components Heavy-Tailed Distributions with Application
- Author
-
Patrick Osatohanmwen, Francis O. Oyegue, Sunday M. Ogbonmwan, and William Muhwava
- Subjects
Extreme value theory, Heavy-tailed distribution, Hybrid models, Maximum likelihood estimation, S&P 500 index, Probabilities. Mathematical statistics, QA273-280
- Abstract
The estimation of a certain threshold beyond which an extreme value distribution can be fitted to the tail of a data distribution remains one of the main issues in the theory of statistics of extremes. While standard Peak over Threshold (PoT) approaches determine this threshold graphically, we introduce in this paper a general framework which makes it possible for one to determine this threshold algorithmically by estimating it as a free parameter within a composite distribution. To see how this threshold point arises, we propose a general framework for generating three-component hybrid distributions which meets the needs of data sets with right heavy tails. The approach involves the combination of a distribution which can efficiently model the bulk of the data around the mean with a heavy-tailed distribution meant to model the data observations in the tail, while using another distribution as a link to connect the two. Some special examples of distributions resulting from the general framework are generated and studied. An estimation algorithm based on the maximum likelihood method is proposed for the estimation of the free parameters of the hybrid distributions. Application of the hybrid distributions to the S&P 500 index financial data set is also carried out.
- Published
- 2024
- Full Text
- View/download PDF
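Editor's note: a generic three-component composite density of the kind this framework generates, a bulk component below a first threshold, a link in between, and a heavy tail above, can be written as below; the paper's precise construction and continuity conditions may differ.
```latex
% Generic three-component composite density (illustrative form):
f(x) =
\begin{cases}
  w_1 \, \dfrac{f_b(x)}{F_b(\theta_1)} & x \le \theta_1, \\[6pt]
  w_2 \, \dfrac{f_\ell(x)}{F_\ell(\theta_2) - F_\ell(\theta_1)} & \theta_1 < x \le \theta_2, \\[6pt]
  w_3 \, \dfrac{f_t(x)}{1 - F_t(\theta_2)} & x > \theta_2,
\end{cases}
\qquad w_1 + w_2 + w_3 = 1.
% f_b, f_l, f_t : bulk, link, and heavy-tail densities (F: their CDFs).
% theta_2 is the tail threshold, estimated as a free parameter by maximum
% likelihood together with the weights and component parameters.
```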
22. Machine learning and deep learning models based grid search cross validation for short-term solar irradiance forecasting
- Author
-
Doaa El-Shahat, Ahmed Tolba, Mohamed Abouhawwash, and Mohamed Abdel-Basset
- Subjects
Solar radiation, Deep learning, Machine learning, Hybrid models, XAI, Computer engineering. Computer hardware, TK7885-7895, Information technology, T58.5-58.64, Electronic computers. Computer science, QA75.5-76.95
- Abstract
In late 2023, the United Nations conference on climate change (COP28), which was held in Dubai, encouraged a quick move from fossil fuels to renewable energy. Solar energy is one of the most promising forms of energy that is both sustainable and renewable. Generally, photovoltaic systems transform solar irradiance into electricity. Unfortunately, instability and intermittency in solar radiation can lead to interruptions in electricity production. The accurate forecasting of solar irradiance guarantees sustainable power production even when solar irradiance is not present. Batteries can store solar energy to be used during periods of solar absence. Additionally, deterministic models take into account the specification of technical PV systems and may not be accurate for low solar irradiance. This paper presents a comparative study of the most common Deep Learning (DL) and Machine Learning (ML) algorithms employed for short-term solar irradiance forecasting. The dataset was gathered in Islamabad during a five-year period, from 2015 to 2019, at hourly intervals with accurate meteorological sensors. Furthermore, Grid Search Cross Validation (GSCV) with five folds is introduced to the ML and DL models for optimizing their hyperparameters. Several performance metrics are used to assess the algorithms, such as the Adjusted R² score, Normalized Root Mean Square Error (NRMSE), Mean Absolute Deviation (MAD), Mean Absolute Error (MAE), and Mean Square Error (MSE). The statistical analysis shows that CNN-LSTM outperforms its counterparts of nine well-known DL models with an Adjusted R² score of 0.984. For ML algorithms, gradient boosting regression is an effective forecasting method with an Adjusted R² score of 0.962, beating its rivals of six ML models. Furthermore, SHAP and LIME are examples of explainable Artificial Intelligence (XAI) utilized for understanding the reasons behind the obtained results.
- Published
- 2024
- Full Text
- View/download PDF
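Editor's note: the five-fold grid search the paper applies can be sketched with scikit-learn's GridSearchCV around its best-performing ML model, gradient boosting regression. The parameter grid and synthetic hourly data below are illustrative, not the paper's.
```python
# Sketch: 5-fold GridSearchCV over gradient boosting hyperparameters.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(7)
hour = rng.integers(0, 24, 2000)
temp = rng.normal(25, 5, 2000)
X = np.column_stack([hour, temp])                 # toy meteorological inputs
y = (np.clip(800 * np.sin(np.pi * hour / 24), 0, None)
     + 5 * temp + rng.normal(0, 30, 2000))        # synthetic irradiance (W/m^2)

grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={
        "n_estimators": [100, 300],
        "learning_rate": [0.05, 0.1],
        "max_depth": [2, 3],
    },
    cv=5,                                         # five folds, as in the paper
    scoring="r2",
)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("best CV R^2:", round(grid.best_score_, 3))
```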
23. Physics-informed transfer learning model for fatigue life prediction of IN718 alloy
- Author
-
Baihan Chen, Jianfeng Zhang, Shangcheng Zhou, Guangping Zhang, and Fang Xu
- Subjects
Fatigue life prediction, Transfer learning, Physical information, Hybrid models, Mining engineering. Metallurgy, TN1-997
- Abstract
To address the challenges posed by inadequate data and data utilization in multiple scenarios of fatigue loading, a Physics-informed Transfer Learning (PITL) model has been developed to predict the fatigue life of IN718 superalloy. Strain-controlled low-cycle fatigue tests were carried out at 400 °C with three distinct strain ratios, which were subsequently segmented for individual transfer learning tests. PITL models with significant engineering value were built by integrating transfer learning methodologies rooted in TrAdaBoost with a physics-based model that hinges on the principles of equivalent strain theory. The findings suggest that PITL models exhibit improved accuracy and greater robustness compared to both transfer learning and physics models.
- Published
- 2024
- Full Text
- View/download PDF
24. On the phenomenological modelling of physical phenomena
- Author
-
Jüri Engelbrecht, Kert Tamm, and Tanel Peets
- Subjects
science-driven models, internal variables, phenomenological variables, hybrid models, mathematical modelling, Science
- Abstract
Mathematical modelling of physical phenomena is based on the laws of physics, but for complicated processes, phenomenological models could enhance the descriptive and prescriptive power of the analysis. This paper describes some hybrid models, where in addition to the physics-driven part, some phenomenological variables (based on observations) are added. The internal variables widely used in continuum mechanics for modelling dissipative processes and the phenomenological variables used in modelling neural impulses are described and compared. The appendices describe two models of neural impulses and test problems for two classical cases: the wave equation and the diffusion equation. These test problems demonstrate the usage of phenomenological variables for describing dissipation as well as amplification.
- Published
- 2024
- Full Text
- View/download PDF
25. Hybrid deep learning models with data fusion approach for electricity load forecasting.
- Author
-
Özen, Serkan, Yazıcı, Adnan, and Atalay, Volkan
- Subjects
- CONVOLUTIONAL neural networks, DEEP learning, MULTISENSOR data fusion, ELECTRIC power consumption, BLENDED learning
- Abstract
This study explores the application of deep learning in forecasting electricity consumption. Initially, we assess the performance of standard neural networks, such as convolutional neural networks (CNN) and long short‐term memory (LSTM), along with basic methods like ARIMA and random forest, on a univariate electricity consumption data set. Subsequently, we develop hybrid models for a comprehensive multivariate data set created by merging weather and electricity data. These hybrid models demonstrate superior performance compared to individual models on the univariate data set. Our main contribution is the introduction of a novel hybrid data fusion model. This model integrates a single‐model approach for univariate data, a hybrid model for multivariate data, and a linear regression model that processes the outputs from both. Our hybrid fusion model achieved an RMSE value of 0.0871 on the Chicago data set, outperforming other models such as Random Forest (0.2351), ARIMA (0.2184), CNN (0.1802), LSTM + LSTM (0.1496), and CNN + LSTM (0.1587). Additionally, our model surpassed the performance of our base transformer model. Furthermore, combining the best‐performing transformer model, with a Gaussian Process model resulted in further improvement in performance. The Transformer + Gaussian model achieved an RMSE of 0.0768, compared with 0.0781 for the single transformer model. Similar trends were observed in the Pittsburgh and IHEC data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
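Editor's note: the fusion step described in entry 25, a linear regression over the outputs of the lower-level models, is sketched below. Random forest and gradient boosting stand in for the paper's univariate and multivariate deep models, and the load data are synthetic.
```python
# Sketch: linear-regression fusion over the predictions of two base models.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
X = rng.normal(size=(1000, 6))                         # weather + calendar features
y = X @ rng.normal(size=6) + rng.normal(0, 0.5, 1000)  # electricity-load proxy

Xtr, Xval, Xte = X[:600], X[600:800], X[800:]
ytr, yval, yte = y[:600], y[600:800], y[800:]

m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
m2 = GradientBoostingRegressor(random_state=0).fit(Xtr, ytr)

# Fit the fusion regression on held-out predictions, not training ones,
# so it learns how to weight the models rather than echo their fit.
Z_val = np.column_stack([m1.predict(Xval), m2.predict(Xval)])
fusion = LinearRegression().fit(Z_val, yval)

Z_te = np.column_stack([m1.predict(Xte), m2.predict(Xte)])
rmse = np.sqrt(np.mean((fusion.predict(Z_te) - yte) ** 2))
print("fused test RMSE:", round(rmse, 4), "weights:", fusion.coef_.round(3))
```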
26. Machine learning and deep learning models based grid search cross validation for short-term solar irradiance forecasting.
- Author
-
El-Shahat, Doaa, Tolba, Ahmed, Abouhawwash, Mohamed, and Abdel-Basset, Mohamed
- Subjects
MACHINE learning, SUSTAINABILITY, TECHNICAL specifications, ARTIFICIAL intelligence, STANDARD deviations, DEEP learning
- Abstract
In late 2023, the United Nations conference on climate change (COP28), which was held in Dubai, encouraged a quick move from fossil fuels to renewable energy. Solar energy is one of the most promising forms of energy that is both sustainable and renewable. Generally, photovoltaic systems transform solar irradiance into electricity. Unfortunately, instability and intermittency in solar radiation can lead to interruptions in electricity production. The accurate forecasting of solar irradiance guarantees sustainable power production even when solar irradiance is not present. Batteries can store solar energy to be used during periods of solar absence. Additionally, deterministic models take into account the specification of technical PV systems and may not be accurate for low solar irradiance. This paper presents a comparative study of the most common Deep Learning (DL) and Machine Learning (ML) algorithms employed for short-term solar irradiance forecasting. The dataset was gathered in Islamabad during a five-year period, from 2015 to 2019, at hourly intervals with accurate meteorological sensors. Furthermore, Grid Search Cross Validation (GSCV) with five folds is introduced to the ML and DL models for optimizing their hyperparameters. Several performance metrics are used to assess the algorithms, such as the Adjusted R² score, Normalized Root Mean Square Error (NRMSE), Mean Absolute Deviation (MAD), Mean Absolute Error (MAE), and Mean Square Error (MSE). The statistical analysis shows that CNN-LSTM outperforms its counterparts of nine well-known DL models with an Adjusted R² score of 0.984. For ML algorithms, gradient boosting regression is an effective forecasting method with an Adjusted R² score of 0.962, beating its rivals of six ML models. Furthermore, SHAP and LIME are examples of explainable Artificial Intelligence (XAI) utilized for understanding the reasons behind the obtained results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Cell factory design with advanced metabolic modelling empowered by artificial intelligence.
- Author
-
Lu, Hongzhong, Xiao, Luchi, Liao, Wenbin, Yan, Xuefeng, and Nielsen, Jens
- Subjects
- MACHINE learning, FACTORY design & construction, METABOLIC models, ARTIFICIAL intelligence, BIOLOGICAL models, SYNTHETIC biology
- Abstract
Advances in synthetic biology and artificial intelligence (AI) have provided new opportunities for modern biotechnology. High-performance cell factories, the backbone of industrial biotechnology, are ultimately responsible for determining whether a bio-based product succeeds or fails in the fierce competition with petroleum-based products. To date, one of the greatest challenges in synthetic biology is the creation of high-performance cell factories in a consistent and efficient manner. As so-called white-box models, numerous metabolic network models have been developed and used in computational strain design. Moreover, great progress has been made in AI-powered strain engineering in recent years. Both approaches have advantages and disadvantages. Therefore, the deep integration of AI with metabolic models is crucial for the construction of superior cell factories with higher titres, yields and production rates. The detailed applications of the latest advanced metabolic models and AI in computational strain design are summarized in this review. Additionally, approaches for the deep integration of AI and metabolic models are discussed. It is anticipated that advanced mechanistic metabolic models powered by AI will pave the way for the efficient construction of powerful industrial chassis strains in the coming years.
• Advanced mechanistic metabolic models enhance rational design of cell factories.
• Machine learning models refine reconstruction of functional metabolic models.
• Data-driven AI models provide alternative solutions for strain design in the DBTL cycle.
• Hybrid AI models with biological insights boost precision in cell factory design.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Dealing with Anomalies in Day-Ahead Market Prediction Using Machine Learning Hybrid Model.
- Author
-
Pilot, Karol, Ganczarek-Gamrot, Alicja, and Kania, Krzysztof
- Subjects
- MACHINE learning, ENERGY industries, ELECTRICITY pricing, ELECTRICITY markets, PREDICTION models
- Abstract
Forecasting the electricity market, even in the short term, is a difficult task, due to the nature of this commodity, the lack of storage capacity, and the multiplicity and volatility of factors that influence its price. The sensitivity of the market results in the appearance of anomalies in the market, during which forecasting models often break down. The aim of this paper is to present the possibility of using hybrid machine learning models to forecast the price of electricity, especially when such events occur. It includes the automatic detection of anomalies using three different switch types and two independent forecasting models, one for use during periods of stable markets and the other during periods of anomalies. The results of empirical tests conducted on data from the Polish energy market showed that the proposed solution improves the overall quality of prediction compared to using each model separately and significantly improves the quality of prediction during anomaly periods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
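Editor's note: the switching mechanism described in entry 28 reduces to a detector routing each period to one of two forecasters. The skeleton below uses a z-score detector and trivial forecaster stubs, all simplifying assumptions rather than the paper's switch types.
```python
# Skeleton: anomaly detector routes each period to a stable- or anomaly-model.
import numpy as np

def is_anomaly(window: np.ndarray, z_thresh: float = 3.0) -> bool:
    """Flag a period whose last price deviates strongly from the window mean."""
    mu, sigma = window[:-1].mean(), window[:-1].std() + 1e-9
    return abs(window[-1] - mu) / sigma > z_thresh

def stable_model(window: np.ndarray) -> float:
    return window[-24:].mean()            # placeholder: daily-mean carry-forward

def anomaly_model(window: np.ndarray) -> float:
    return window[-1]                     # placeholder: persistence of the spike

def forecast_next(prices: np.ndarray) -> float:
    window = prices[-168:]                # one week of hourly prices
    model = anomaly_model if is_anomaly(window) else stable_model
    return model(window)

rng = np.random.default_rng(9)
prices = 60 + rng.normal(0, 5, 500)       # synthetic day-ahead prices (EUR/MWh)
prices[-1] += 80                          # inject a price spike
print("routed forecast:", round(forecast_next(prices), 2))
```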
29. A General Framework for Generating Three-Components Heavy-Tailed Distributions with Application.
- Author
-
Osatohanmwen, Patrick, Oyegue, Francis O., Ogbonmwan, Sunday M., and Muhwava, William
- Subjects
DISTRIBUTION (Probability theory), EXTREME value theory, VALUE distribution theory, DATA distribution, PARAMETER estimation
- Abstract
The estimation of a certain threshold beyond which an extreme value distribution can be fitted to the tail of a data distribution remains one of the main issues in the theory of statistics of extremes. While standard Peak over Threshold (PoT) approaches determine this threshold graphically, we introduce in this paper a general framework which makes it possible for one to determine this threshold algorithmically by estimating it as a free parameter within a composite distribution. To see how this threshold point arises, we propose a general framework for generating three-component hybrid distributions which meets the needs of data sets with right heavy tails. The approach involves the combination of a distribution which can efficiently model the bulk of the data around the mean with a heavy-tailed distribution meant to model the data observations in the tail, while using another distribution as a link to connect the two. Some special examples of distributions resulting from the general framework are generated and studied. An estimation algorithm based on the maximum likelihood method is proposed for the estimation of the free parameters of the hybrid distributions. Application of the hybrid distributions to the S&P 500 index financial data set is also carried out. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Inferring current and Last Glacial Maximum distributions are improved by physiology‐relevant climatic variables in cold‐adapted ectotherms.
- Author
-
Guillon, Michaël, Martínez‐Freiría, Fernando, Lucchini, Nahla, Ursenbacher, Sylvain, Surget‐Groba, Yann, Kageyama, Masa, Lagarde, Frédéric, Cubizolle, Hervé, and Lourdais, Olivier
- Subjects
- LAST Glacial Maximum, PHYLOGEOGRAPHY, COLONIZATION (Ecology), VIVIPAROUS lizard, ECOLOGICAL models, COLD-blooded animals, SOLAR temperature
- Abstract
Aim: Ecological niche-based models (ENM) frequently rely on bioclimatic variables (BioV) to reconstruct biogeographic scenarios for species evolution, ignoring mechanistic relations. We tested whether climatic predictors relevant to species' hydric and thermal physiology better approximate distribution patterns and support the location of Pleistocene refugia derived from phylogeographic studies. Location: The Western Palaearctic. Taxon: Vipera berus and Zootoca vivipara, two cold-adapted species. Methods: We used two sets of variables, that is, physiologically meaningful climatic variables (PMV) and BioV, in a multi-algorithm ENM approach, to compare their ability to predict current and Last Glacial Maximum (LGM) species ranges. We estimated current and LGM permafrost extent to address spatially the cold hardiness dissimilarity between both species. Results: PMV explained more accurately the current distribution of these two cold-adapted species and identified the importance of summer temperature and solar radiation, which constrain activity in cold habitats. PMV also provide better insight than BioV predictors into LGM distribution. Notably, by including the permafrost extent, PMV-based models gave a parsimonious putative arrangement and validity of refugia for each clade and subclade in accordance with phylogeographic data. Northern refugia were also identified from 48 to 52° N for V. berus and from 50 to 54° N for Z. vivipara. Main Conclusions: Our hybrid approach based on PMV generated more realistic predictions for both current (biogeographical validation) and past distributions (phylogeographic validation). By combining constraints during the activity period (summer climatic niche) and those inherent to the wintering period (freeze tolerance), we managed to identify glacial refugia in agreement with phylogeographic hypotheses concerning post-glacial routes and colonization scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Optimization of Support Vector Machine with Biological Heuristic Algorithms for Estimation of Daily Reference Evapotranspiration Using Limited Meteorological Data in China.
- Author
-
Guo, Hongtao, Wu, Liance, Wang, Xianlong, Xing, Xuguang, Zhang, Jing, Qing, Shunhao, and Zhao, Xinbo
- Subjects
- METAHEURISTIC algorithms, WATER management, PARTICLE swarm optimization, SUPPORT vector machines, CLIMATIC zones
- Abstract
Precise estimation of daily reference crop evapotranspiration (ET0) is critical for water resource management and agricultural irrigation optimization worldwide. In China, diverse climatic zones pose challenges for accurate ET0 prediction. Here, we evaluate the performance of a support vector machine (SVM) and its hybrid models, PSO-SVM and WOA-SVM, utilizing meteorological data spanning 1960–2020. Our study aims to identify a high-precision, low-input ET0 estimation tool. The findings indicate that the hybrid models, particularly WOA-SVM, demonstrated superior accuracy with R² values ranging from 0.973 to 0.999 and RMSE values between 0.123 and 0.863 mm/d, outperforming the standalone SVM model with R² values of 0.955 to 0.989 and RMSE values of 0.168 to 0.982 mm/d. The standalone SVM model showed relatively lower accuracy with R² values of 0.822 to 0.887 and RMSE values of 0.381 to 1.951 mm/d. Notably, the WOA-SVM model, with R² values of 0.990 to 0.992 and RMSE values of 0.092 to 0.160 mm/d, emerged as the top performer, showcasing the benefits of the whale optimization algorithm in enhancing SVM's predictive capabilities. The PSO-SVM model also presented improved performance, especially in the temperate continental zone (TCZ), subtropical monsoon region (SMZ), and temperate monsoon zone (TMZ), when using limited meteorological data as the input. The study concludes that the WOA-SVM model is a promising tool for high-precision daily ET0 estimation with fewer meteorological parameters across the different climatic zones of China. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
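Editor's note: the PSO-SVM idea in entry 31 is particle swarm optimization searching the SVM's hyperparameters against a cross-validation score. The sketch below hand-rolls a small swarm over log10(C) and log10(gamma) for an SVR to avoid extra dependencies; the ET0-like data, swarm size, and PSO coefficients are illustrative choices, not the study's settings.
```python
# Sketch: tiny particle swarm tuning SVR hyperparameters via CV score.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(10)
tmax = rng.uniform(10, 40, 500)                        # daily max temperature
rh = rng.uniform(20, 90, 500)                          # relative humidity
X = np.column_stack([tmax, rh])
y = 0.15 * tmax - 0.02 * rh + rng.normal(0, 0.2, 500)  # ET0 proxy (mm/d)

def fitness(log_params):
    C, gamma = 10.0 ** log_params                      # search in log10 space
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3, scoring="r2").mean()

n_particles, iters = 10, 15
pos = rng.uniform(-2, 2, (n_particles, 2))             # log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best C, gamma:", (10.0 ** gbest).round(4), "CV R^2:", pbest_val.max().round(3))
```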
32. End‐Point Prediction of Converter Steelmaking Based on Main Process Data.
- Author
-
Kang, Yi, Zhao, Jun‐xue, Li, Bin, Ren, Meng‐meng, Cao, Geng, Yue, Shen, and An, Bei‐qi
- Abstract
In this article, main process data, notably time-series data such as lance position patterns, are analyzed during converter steelmaking, and methodologies for data processing and transformation are proposed. In this study, utilizing both the transformed key time-series and primary static process data, the influence of various process parameters on the end-point parameters of converter steelmaking is analyzed. Furthermore, it establishes predictive models for the end-point content of carbon (C) and phosphorus (P), as well as the end-point temperature. The findings indicate that the end-point carbon content and temperature are primarily influenced by the oxygen flow pattern, lime addition pattern, and key smelting parameters. The end-point phosphorus content is mainly affected by the oxygen flow pattern, limestone addition pattern, and dolomite addition pattern. Regarding the prediction of end-point carbon and phosphorus content and end-point temperature, compared to seven sub-models, the hybrid model demonstrates an average accuracy improvement of 37.88%, 25.03%, and 31.51%, respectively, and the end-point hit rate improves by 18.77%, 19.59%, and 20.41%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
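The abstract compares one hybrid model against seven sub-models. A hedged sketch of a common way to build such a hybrid, averaging heterogeneous sub-models with scikit-learn's VotingRegressor; the features and sub-model choices are hypothetical stand-ins for the paper's transformed process data.

```python
# Sketch: combining several sub-models into one hybrid end-point predictor.
import numpy as np
from sklearn.ensemble import VotingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))   # stand-ins for oxygen flow / lime features
y = 1620 + 12 * X[:, 0] - 8 * X[:, 1] + rng.normal(0, 3, 400)  # temp, degC

sub_models = [("ridge", Ridge()),
              ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
              ("knn", KNeighborsRegressor())]
hybrid = VotingRegressor(sub_models)    # simple average of sub-model outputs

for name, model in sub_models + [("hybrid", hybrid)]:
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:7s} RMSE = {rmse:.2f} degC")
```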
33. Optimization Techniques in Municipal Solid Waste Management: A Systematic Review.
- Author
-
Alshaikh, Ryan and Abdelfatah, Akmal
- Abstract
As a consequence of human activity, waste generation is unavoidable, and its volume and complexity escalate with urbanization, economic progress, and the elevation of living standards in cities. Annually, the world produces about 2.01 billion tons of municipal solid waste, which often lacks environmentally safe management. The importance of solid waste management lies in its role in sustainable development, aimed at reducing the environmental harms from waste creation and disposal. With the expansion of urban populations, waste management systems grow increasingly complex, necessitating more sophisticated optimization strategies. This analysis thoroughly examines the optimization techniques used in solid waste management, assessing their application, benefits, and limitations by using PRISMA 2020. This study, reviewing the literature from 2010 to 2023, divides these techniques into three key areas: waste collection and transportation, waste treatment and disposal, and resource recovery, using tools like mathematical modeling, simulation, and artificial intelligence. It evaluates these strategies against criteria such as cost-efficiency, environmental footprint, energy usage, and social acceptability. Significant progress has been noted in optimizing waste collection and transportation through innovations in routing, bin placement, and the scheduling of vehicles. The paper also explores advancements in waste treatment and disposal, like selecting landfill sites and converting waste to energy, alongside newer methods for resource recovery, including sorting and recycling materials. In conclusion, this review identifies research gaps and suggests directions for future optimization efforts in solid waste management, emphasizing the need for cross-disciplinary collaboration, leveraging new technologies, and adopting tailored approaches to tackle the intricate challenges of managing waste. These insights offer valuable guidance for policymakers, waste management professionals, and researchers involved in crafting sustainable waste strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Clustering techniques performance comparison for predicting the battery state of charge: A hybrid model approach.
- Author
-
Ordás, María Teresa, Blanco, David Yeregui Marcos del, Aveleira-Mata, José, Zayas-Gato, Francisco, Jove, Esteban, Casteleiro-Roca, José-Luis, Quintián, Héctor, Calvo-Rolle, José Luis, and Alaiz-Moreton, Héctor
- Subjects
VOLTAGE ,ELECTRIC currents ,HOUSEHOLD electronics ,STORAGE batteries ,RENEWABLE energy sources - Abstract
Batteries are a fundamental storage component due to their various applications in mobility, renewable energies, and consumer electronics, among others. Regardless of the battery typology, one key variable from a user's perspective is the remaining energy in the battery. It is usually presented as the percentage of remaining energy relative to the total energy that can be stored and is labeled State Of Charge (SOC). This work addresses the development of a hybrid model for a Lithium Iron Phosphate (LiFePO4) power cell, chosen for its broad adoption. The proposed model estimates the SOC as its output, using voltage and electric current as inputs. Four models based on k-Means, Agglomerative Clustering, Gaussian Mixture, and Spectral Clustering techniques have been tested in order to obtain an optimal solution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Fuzzy Cognitive Maps for Analyzing User Satisfaction in Information Services.
- Author
-
Girija, D. K., N., Yogeesh, and M., Rashmi
- Subjects
MACHINE learning ,COGNITIVE maps (Psychology) ,STIMULUS & response (Psychology) ,SATISFACTION ,QUALITY of service - Abstract
In this paper, we develop a Fuzzy Cognitive Map (FCM)-based framework for investigating user satisfaction in information services, focusing on how various influencing factors are connected to the overall service experience. Conventional approaches usually perform poorly under extensive uncertainty or with complex relationships between service quality, response time, usability, and personalization. FCMs provide an appropriate mathematical framework to model causal relationships and simulate the dynamic interplay between these factors. In the study, critical factors are represented as FCM nodes, and causal weights are established between them so that their importance can be quantified. The state of every concept is then updated via matrix–vector operations, iterating with a sigmoid activation function until convergence. A comprehensive case study illustrates a practical application of the FCM framework, highlighting its ability to isolate the key drivers of satisfaction and to suggest avenues for improvement. Next steps include integrating IoT sensors for real-time monitoring, hybrid models with machine learning to improve predictions, and applications in fields ranging from e-commerce to healthcare. This experimentation underscores FCMs' potential in decision-making procedures and offers insight into enhancing the user experience of information services. [ABSTRACT FROM AUTHOR]
- Published
- 2024
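The abstract spells out the FCM update rule: concept states are propagated by a matrix-vector product and a sigmoid until convergence. A minimal sketch of that loop follows, with a hypothetical five-concept map and illustrative weights.

```python
# Sketch: iterative FCM state update x <- sigmoid(W x + x) until convergence.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# Hypothetical concepts: [service quality, response time, usability,
# personalization, user satisfaction]; W[i, j] = influence of concept j on i.
W = np.array([
    [0.0,  0.0, 0.0, 0.0, 0.0],
    [0.0,  0.0, 0.0, 0.0, 0.0],
    [0.3,  0.0, 0.0, 0.0, 0.0],
    [0.0,  0.0, 0.0, 0.0, 0.0],
    [0.6, -0.4, 0.5, 0.3, 0.0],
])
x = np.array([0.8, 0.3, 0.6, 0.5, 0.5])   # initial concept activations

for step in range(100):
    x_new = sigmoid(W @ x + x)            # common variant with self-memory
    if np.max(np.abs(x_new - x)) < 1e-5:  # convergence check
        break
    x = x_new

print(f"converged after {step} iterations; satisfaction = {x[-1]:.3f}")
```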
36. Blood Glucose Prediction from Nutrition Analytics in Type 1 Diabetes: A Review.
- Author
-
Lubasinski, Nicole, Thabit, Hood, Nutter, Paul W., and Harper, Simon
- Abstract
Introduction: Type 1 Diabetes (T1D) affects over 9 million worldwide and necessitates meticulous self-management for blood glucose (BG) control. Utilizing BG prediction technology allows for increased BG control and a reduction in the diabetes burden caused by self-management requirements. This paper reviews BG prediction models in T1D, which include nutritional components. Method: A systematic search, utilizing the PRISMA guidelines, identified articles focusing on BG prediction algorithms for T1D that incorporate nutritional variables. Eligible studies were screened and analyzed for model type, inclusion of additional aspects in the model, prediction horizon, patient population, inputs, and accuracy. Results: The study categorizes 138 blood glucose prediction models into data-driven (54%), physiological (14%), and hybrid (33%) types. Prediction horizons of ≤30 min are used in 36% of models, 31–60 min in 34%, 61–90 min in 11%, 91–120 min in 10%, and >120 min in 9%. Neural networks are the most used data-driven technique (47%), and simple carbohydrate intake is commonly included in models (data-driven: 72%, physiological: 52%, hybrid: 67%). Real or free-living data are predominantly used (83%). Conclusion: The primary goal of blood glucose prediction in T1D is to enable informed decisions and maintain safe BG levels, considering the impact of all nutrients for meal planning and clinical relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. LuckyMera: a modular AI framework for building hybrid NetHack agents.
- Author
-
Quarantiello, Luigi, Marzeddu, Simone, Guzzi, Antonio, and Lomonaco, Vincenzo
- Subjects
- *
ARTIFICIAL intelligence , *REINFORCEMENT learning , *COMPUTATIONAL complexity , *BLENDED learning , *VIDEO games - Abstract
In the last few decades we have witnessed a significant development in Artificial Intelligence (AI) thanks to the availability of a variety of testbeds, mostly based on simulated environments and video games. Among those, roguelike games offer a very good trade-off between environment complexity and computational cost, which makes them perfectly suited to testing AI agents' generalization capabilities. In this work, we present LuckyMera, a flexible, modular, extensible and configurable AI framework built around NetHack, a popular terminal-based, single-player roguelike video game. This library is aimed at simplifying and speeding up the development of AI agents capable of successfully playing the game, offering a high-level interface for designing game strategies. LuckyMera comes with a set of off-the-shelf symbolic and neural modules (called "skills"): these modules can be either hard-coded behaviors or neural Reinforcement Learning approaches, with the possibility of creating compositional hybrid solutions. Additionally, LuckyMera provides utility features to save its experiences in the form of trajectories for further analysis and to use them as datasets for training neural modules, with a direct interface to the NetHack Learning Environment and MiniHack. Through an empirical evaluation we validate our skills implementation and propose a strong baseline agent that can reach state-of-the-art performance in the complete NetHack game. LuckyMera is open-source and available at . [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. A Generalized Model of a Glow Discharge Based on a Trigonometric Basis
- Author
-
О. В., АНДРІЄНКО
- Subjects
- *
GLOW discharges , *ORTHONORMAL basis , *MONTE Carlo method , *HYDRODYNAMICS , *ELECTRODE testing - Abstract
Purpose. To review and classify existing glow discharge models by the method of mathematical description of processes, pressure, gas type, and electrode geometry, and to create a generalized discharge model that determines the influence of each parameter on the discharge characteristics. Methodology. Methods of theoretical analysis of scientific sources were used, together with a mathematical method for describing a generalized discharge model based on an orthonormal basis, in particular a trigonometric basis. Findings. An overview of the glow discharge was conducted, and a classification of models according to their underlying analytical methods was proposed. A vector of parameters common to all models was isolated, and the characteristics of the models were compared in tabular form. A generalized model for the study of gas discharges through modeling with orthonormal bases was proposed; as an example, the formation of a vector of variable parameters in the trigonometric basis is given. Originality. The article offers a generalized discharge model that demonstrates how a change in a specific parameter affects the discharge characteristics. Such a model allows the model parameters to be varied simultaneously, the responses to parameter changes to be accumulated across tests, and, after analysis of the results, the effect of each parameter to be isolated even under joint parameter variation. Practical value. The model supports analysis of the influence of various parameters on the discharge independently of the specific experimental conditions. Representing parameters in an orthonormal basis makes it possible to identify which parameters most strongly affect the stability and efficiency of the discharge, both individually and in combination, and to optimize these parameters for specific conditions. This makes the model a universal tool for researchers and engineers, facilitating the analysis of relationships between parameters and providing a fuller picture of the system's behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
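A short sketch of the core idea in this entry: expanding a (hypothetical) discharge parameter profile in an orthonormal trigonometric basis via discrete inner products. The profile, interval, and mode count are illustrative assumptions.

```python
# Sketch: projecting a parameter profile onto an orthonormal trigonometric
# basis on [0, L]; discrete inner products approximate the continuous ones.
import numpy as np

L, n_pts, n_modes = 1.0, 400, 8
x = np.linspace(0, L, n_pts)
profile = np.exp(-((x - 0.3) / 0.1) ** 2)   # hypothetical parameter profile

# Orthonormal basis on [0, L]: 1/sqrt(L), sqrt(2/L) cos/sin(2 pi k x / L).
basis = [np.full_like(x, 1 / np.sqrt(L))]
for k in range(1, n_modes + 1):
    basis.append(np.sqrt(2 / L) * np.cos(2 * np.pi * k * x / L))
    basis.append(np.sqrt(2 / L) * np.sin(2 * np.pi * k * x / L))

dx = x[1] - x[0]
coeffs = [dx * (profile @ b) for b in basis]          # inner products
reconstruction = sum(c * b for c, b in zip(coeffs, basis))
print("max reconstruction error:",
      float(np.abs(profile - reconstruction).max()))
```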
39. On the phenomenological modelling of physical phenomena.
- Author
-
Engelbrecht, Jüri, Tamm, Kert, and Peets, Tanel
- Subjects
- *
HEAT equation , *PHYSICAL laws , *MATHEMATICAL variables , *PHENOMENOLOGICAL theory (Physics) , *CONTINUUM mechanics - Abstract
Mathematical modelling of physical phenomena is based on the laws of physics, but for complicated processes, phenomenological models could enhance the descriptive and prescriptive power of the analysis. This paper describes some hybrid models, where in addition to the physics-driven part, some phenomenological variables (based on observations) are added. The internal variables widely used in continuum mechanics for modelling dissipative processes and the phenomenological variables used in modelling neural impulses are described and compared. The appendices describe two models of neural impulses and test problems for two classical cases: the wave equation and the diffusion equation. These test problems demonstrate the usage of phenomenological variables for describing dissipation as well as amplification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
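For readers unfamiliar with the internal-variable formalism this abstract refers to, a generic single-internal-variable system in the standard continuum-mechanics style is shown below. The coupling form and coefficients are illustrative assumptions, not the paper's neural-impulse models.

```latex
% Generic illustration: a classical field u coupled to a phenomenological
% internal variable w with first-order relaxation dynamics.
\begin{align}
  u_{tt} &= c^{2} u_{xx} + A\, w, \\
  w_{t}  &= -\frac{1}{\tau}\,\bigl(w - B\, u\bigr).
\end{align}
% For tau -> 0, w tracks B u instantaneously; a finite tau introduces
% memory, yielding dissipation or amplification depending on the signs
% and magnitudes of A and B.
```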
40. Groundwater spring potential mapping: Assessment the contribution of hydrogeological factors.
- Author
-
Zhao, Rui, Fan, Chenchen, Arabameri, Alireza, Santosh, M, Mohammad, Lal, and Mondal, Ismail
- Subjects
- *
HYDROGEOLOGY , *GROUNDWATER , *MACHINE learning , *WATER springs , *RAINFALL , *TOPOGRAPHY - Abstract
Groundwater, a fundamental resource, is not readily accessible in some parts of the world. The present research aimed to obtain precise maps of potential groundwater zones. This study modeled groundwater potential and extracted precise maps using four new advanced hybrid ML models (Dagging-HP, Bagging-HP, AdaBoost-HP, Decorate-HP) and one single model, Hyperpipes (HP), in the Doji Watershed, situated in the eastern part of Golestan province, Iran. Among the selected models, the AdaBoost-HP model is the most efficient, with an AUC-ROC of 0.972, accuracy of 0.922, sensitivity of 0.906, and specificity of 0.938. Collinearity was assessed among the 14 training factors, which are, in descending order of significance: LULC, distance to stream (DtS), topographic wetness index (TWI), HAND, distance to road (DtR), geomorphology, topographic position index (TPI), lithology, drainage density (DD), elevation, slope, rainfall, and clay (%). The AUC-ROC approach was employed to assess model performance, along with accuracy, specificity, and sensitivity. The model revealed that 7.37% of the study area, in the eastern and south-western parts, has very high groundwater potential, whereas 36.8%, in the north-western and south-eastern parts, has very low potential. This assessment indicates that the results obtained are reliable, which encourages applying the method to groundwater potential mapping of other areas of the world and to other hydrogeological investigations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
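A hedged sketch of the boosted-hybrid pattern described here: scikit-learn ships no Hyperpipes learner, so a decision stump stands in as the base estimator. The conditioning factors and labels are synthetic, and the four reported metrics are computed from the confusion matrix.

```python
# Sketch: AdaBoost with a weak base learner, scored with AUC-ROC, accuracy,
# sensitivity, and specificity (the metrics named in the abstract).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 5))            # stand-ins for LCFs (TWI, slope, ...)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 600) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                           n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print("AUC-ROC    :", round(roc_auc_score(y_te, proba), 3))
print("accuracy   :", round((tp + tn) / (tp + tn + fp + fn), 3))
print("sensitivity:", round(tp / (tp + fn), 3))
print("specificity:", round(tn / (tn + fp), 3))
```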
41. Artificial Intelligence for Water Consumption Assessment: State of the Art Review.
- Author
-
Morain, Almando, Ilangovan, Nivedita, Delhom, Christopher, and Anandhi, Aavudai
- Subjects
WATER consumption ,ARTIFICIAL intelligence ,SNOWBALL sampling ,TIME perspective ,RESEARCH personnel ,MACHINE learning ,KNOWLEDGE gap theory - Abstract
In recent decades, demand for freshwater resources has increased the risk of severe water stress. With the growing prevalence of artificial intelligence (AI), many researchers have turned to it as an alternative to linear methods to assess water consumption (WC). Using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework, this study utilized 229 screened publications identified through database searches and snowball sampling. This study introduces novel aspects of AI's role in water consumption assessment by focusing on innovation, application sectors, sustainability, and machine learning applications. It also categorizes existing models, such as standalone and hybrid, based on input, output variables, and time horizons. Additionally, it classifies learnable parameters and performance indexes while discussing AI models' advantages, disadvantages, and challenges. The study translates this information into a guide for selecting AI models for WC assessment. As no one-size-fits-all AI model exists, this study suggests utilizing hybrid AI models as alternatives. These models offer flexibility regarding efficiency, accuracy, interpretability, adaptability, and data requirements. They can address the limitations of individual models, leverage the strengths of different approaches, and provide a better understanding of the relationships between variables. Several knowledge gaps were identified, resulting in suggestions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Growing degree‐days do not explain moth species' distributions at broad scales.
- Author
-
Keefe, Hannah E. and Kharouba, Heather M.
- Subjects
SPECIES distribution ,GROWING season ,CLIMATE change ,MOTHS ,PREDICTION models - Abstract
Growing degree‐days (GDD), an estimate of an organism's growing season length, has been shown to be an important predictor of Lepidopteran species' distributions and could be influencing Lepidopteran range shifts to climate change. Yet, one understudied simplification in this literature is that the same thermal threshold is used in the calculations of GDD for all species instead of a species‐specific threshold. By characterizing the phenological process influenced by climate, a species‐specific estimate of GDD should improve the accuracy of species distribution models (SDMs). To test this hypothesis, we used published, experimentally estimated thermal thresholds and modeled the current geographic distribution of 30 moth species native to North America. We found that the predictive performance of models based on a species‐specific estimate of GDD was indistinguishable from models based on a standard estimate of GDD. This is likely because GDD was not an important predictor of these species' distributions. Our findings suggest that experimentally estimated thermal thresholds may not always scale up to be predictive at broad scales and that more work is needed to leverage the data from lab experiments into SDMs to accurately predict species' range shifts in response to climate change. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
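The quantity under test here, growing degree-days with a species-specific thermal threshold, is a simple accumulation. A minimal sketch using the common (Tmax + Tmin)/2 formulation; the temperature record and thresholds are illustrative.

```python
# Sketch: GDD = sum over days of max(0, (Tmax + Tmin)/2 - Tbase).
import numpy as np

def growing_degree_days(t_max, t_min, t_base):
    """Accumulate daily heat units above a base (threshold) temperature."""
    daily_mean = (np.asarray(t_max) + np.asarray(t_min)) / 2.0
    return np.maximum(0.0, daily_mean - t_base).sum()

# Hypothetical 5-day record (degC): the choice of base shifts GDD markedly.
t_max = [18, 22, 25, 20, 16]
t_min = [6, 9, 12, 8, 4]
for t_base in (5.0, 10.0):   # standard vs. species-specific threshold
    gdd = growing_degree_days(t_max, t_min, t_base)
    print(f"base {t_base:4.1f} degC -> GDD = {gdd:.1f}")
```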
43. Enhancing stock market predictions via hybrid external trend and internal components analysis and long short term memory model
- Author
-
Fatene Dioubi, Negalign Wake Hundera, Huiying Xu, and Xinzhong Zhu
- Subjects
Stock market predictability ,Hybrid models ,ETICA decomposition method ,LSTM model ,Financial forecasting ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
When it comes to financial decision-making, stock market predictability is extremely important, since it offers valuable information that can guide investment strategies, risk management, and overall portfolio allocation. Traditional methods often fail to accurately predict stock prices due to their complexity and inability to handle non-linear and non-stationary patterns in market data. To address these issues, this study introduces an innovative model that combines the External Trend and Internal Components Analysis decomposition method (ETICA) with the Long Short-Term Memory (LSTM) model, aiming to enhance stock market predictions for the S&P 500, NASDAQ, Dow Jones, SSE and SZSE indices. Through rigorous testing across various training data proportions and epoch settings, our findings reveal that the proposed hybrid model outperforms the single LSTM model, delivering significantly lower Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values. This enhanced precision reduces prediction errors, underscoring the model’s robustness and reliability. The superior performance of the ETICA-LSTM model highlights its potential as a powerful financial forecasting tool, promising to transform investment strategies, optimize risk management, and enhance portfolio performance.
- Published
- 2024
- Full Text
- View/download PDF
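ETICA is the authors' decomposition method, so the sketch below substitutes a plain moving-average trend/residual split to show the decompose-then-LSTM pattern on synthetic prices. The window sizes, PyTorch model, and training settings are all assumptions, not the paper's configuration.

```python
# Sketch: decompose a series into trend + residual, then fit an LSTM on the
# residual component. Synthetic prices replace real index data.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(5)
t = np.arange(500, dtype=np.float32)
price = (100 + 0.05 * t + 5 * np.sin(t / 20)
         + rng.normal(0, 0.5, 500)).astype(np.float32)

window = 20
trend = np.convolve(price, np.ones(window) / window, mode="same").astype(np.float32)
residual = price - trend                  # component handed to the LSTM

def make_sequences(series, lookback=30):
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    return torch.from_numpy(X).unsqueeze(-1), torch.from_numpy(series[lookback:])

X, y = make_sequences(residual)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):                   # full-batch training, for brevity
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Forecast = last trend value (naive extrapolation) + predicted residual.
pred_residual = model(X[-1:]).item()
print("next-step forecast:", trend[-1] + pred_residual)
print("training RMSE on residuals:", float(loss.sqrt()))
```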
44. Machine learning based parameter estimation for an adapted finite element model of a blade bearing test bench
- Author
-
Luca Faller, Matthis Graßmann, and Timo Lichtenstein
- Subjects
Wind energy ,Blade bearing ,Machine learning ,Virtual testing ,Digital twin ,Hybrid models ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 ,Computer software ,QA76.75-76.765 - Abstract
Improving the reliability of blade bearings is essential for the safe operation of wind turbines. This challenge can be met with the help of virtual testing and digital-twin-driven condition monitoring. For such approaches, a precise digital representation of the blade bearing and its test bench is an essential prerequisite. However, various factors prevent the capture of all parameters of the blade bearing and the associated test bench. Parameters such as bearing preload, rolling element and raceway dimensions, and bolt preload during assembly vary with each bearing and test bench setup. As these parameters cannot be measured directly, an alternative solution is required. This article presents a methodology to efficiently estimate non-measurable parameters of the test bench using a combination of model-based and data-driven approaches, improving the detail and accuracy of virtual blade bearing testing. The methodology is designed for the fastest possible, most computationally efficient parameter estimation during virtual testing or condition monitoring. It is evaluated using the example of bolt preload on the test bench: by employing a random forest model and the strain gauge measurements attached to the blade bearing, the bolt preload parameters are estimated. The results demonstrate that the accuracy of the digital model of the blade bearing test bench is improved by up to 11% in three out of four test bench setups. This substantial improvement highlights the effectiveness of the proposed methodology in enhancing virtual blade bearing testing and digital-twin-driven condition monitoring.
- Published
- 2024
- Full Text
- View/download PDF
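A compact sketch of the data-driven step described in this entry: a random forest regressor mapping strain-gauge readings to a non-measurable bolt preload. All signals, units, and dimensions are synthetic stand-ins.

```python
# Sketch: estimating a non-measurable parameter (bolt preload) from
# strain-gauge features with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(6)
n = 800
strain = rng.normal(size=(n, 8))                    # 8 hypothetical gauges
preload = 500 + strain @ rng.uniform(5, 20, 8) + rng.normal(0, 5, n)  # kN

X_tr, X_te, y_tr, y_te = train_test_split(strain, preload, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
err = mean_absolute_percentage_error(y_te, rf.predict(X_te))
print(f"hold-out MAPE: {100 * err:.2f} %")
```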
45. Prediction of carbon dioxide emissions from Atlantic Canadian potato fields using advanced hybridized machine learning algorithms – Nexus of field data and modelling
- Author
-
Muhammad Hassan, Khabat Khosravi, Aitazaz A. Farooque, Travis J. Esau, Alaba Boluwade, and Rehan Sadiq
- Subjects
Greenhouse gases ,Soil-climatic variables ,Hybrid models ,Predictive analytics ,Agriculture (General) ,S1-972 ,Agricultural industries ,HD9000-9495 - Abstract
In this study, three novel machine learning algorithms, namely additive regression random forest (AR-RF), iterative classifier optimizer (ICO-AR-RF), and multi-scheme random forest (MS-RF), were explored for predicting the carbon dioxide (CO2) flux rate from three agricultural fields. To build the dataset, 401 samples were collected from two fields in Prince Edward Island (PEI) and 122 samples from New Brunswick (NB), Canada. In addition, soil moisture (SM), temperature (ST), and electrical conductivity (EC) were recorded, alongside eight climatic variables: wind speed (WS), solar radiation (SR), relative humidity (RH), precipitation (PCP), air temperature (AT), dew point (DP), vapour pressure difference (VPD), and reference evapotranspiration (ETo). A greedy stepwise (GS) approach was implemented for feature selection. Finally, different qualitative (scatter plot, line graph, Taylor diagram, box plot, and rug plot) and quantitative (uncertainty analysis, root mean square error (RMSE), percent bias (PBIAS), Nash-Sutcliffe efficiency (NSE), and RMSE-observations standard deviation ratio (RSR)) techniques were used for model evaluation and comparison. Feature selection revealed that DP, AT, SM, and ST are the four most effective variables for CO2 prediction in PEI, while AT, RH, DP, and ST are the most effective in the NB study area. To determine the optimum input scenario, the GS algorithm was applied; a combination of DP, AT, ST, SM, and ETo was best for the PEI study area, while for NB all input variables should be involved. Our analysis confirmed that, for the prediction of CO2 fluxes, the ICO-AR-RF model performed best at both PEI (RMSE=0.70, NSE=0.76, PBIAS=-5.11, RSR=0.48) and NB (RMSE=0.74, NSE=0.75, PBIAS=3.23, RSR=0.50), followed by MS-RF and AR-RF. Uncertainty analysis showed that CO2 prediction is more sensitive to input scenario selection than to model choice in both study areas. The results revealed that climatic variables are more effective for CO2 prediction than soil characteristics, and the developed hybrid model ICO-AR-RF can be a promising tool for decision-makers and beneficial for stakeholders.
- Published
- 2024
- Full Text
- View/download PDF
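The quantitative criteria named in this abstract have standard hydrological definitions. A small sketch computing them from those standard formulas; the observation/prediction pairs are made up.

```python
# Sketch: RMSE, NSE, PBIAS, and RSR from their standard definitions.
import numpy as np

def evaluate(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    nse = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(obs - pred) / np.sum(obs)
    rsr = rmse / obs.std()                 # RMSE over observation std. dev.
    return {"RMSE": rmse, "NSE": nse, "PBIAS": pbias, "RSR": rsr}

obs = [2.1, 2.5, 3.0, 2.8, 3.4, 2.2]       # hypothetical CO2 flux values
pred = [2.0, 2.6, 2.9, 3.0, 3.2, 2.4]
print({k: round(v, 3) for k, v in evaluate(obs, pred).items()})
```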
46. Landslide susceptibility mapping using an integration of different statistical models for the 2015 Nepal earthquake in Tibet
- Author
-
Senwang Huang and Liping Chen
- Subjects
Landslide susceptibility maps ,bivariate models ,multivariate models ,hybrid models ,Environmental technology. Sanitary engineering ,TD1-1066 ,Environmental sciences ,GE1-350 ,Risk in industry. Risk management ,HD61 - Abstract
Landslide susceptibility maps (LSMs) can play a major role in promoting the understanding of future landslides. This paper explores and compares the capability of three state-of-the-art bivariate models, namely the frequency ratio (FR), statistical index (SI), and weights of evidence (WoE), and their ensembles with multivariate logistic regression (LR), for LSM in part of Tibet. Firstly, a landslide inventory map with 829 landslide records is obtained from field surveys and interpretation. Secondly, 15 landslide conditioning factors (LCFs) are considered and prepared from multiple data sources. Subsequently, a multicollinearity analysis is conducted to assess the independence of the factors. Then, the Information Gain Ratio (IGR) method is applied to confirm the predictive ability of the LCFs. Finally, LSMs are constructed with FR, SI, WoE, LR, and their combinations using the 12 preferred LCFs. The performance of the different methods is validated and compared in terms of the area under the receiver operating characteristic curve (AUC) and statistical measures. The results indicate that the hybrid models FR-LR, WoE-LR, and SI-LR achieved higher AUC values than all corresponding single methods, and the ensemble frameworks are well in line with the distribution pattern of historical landslides in the research area. Therefore, the proposed high-performance ensemble frameworks are expected to provide a useful reference for landslide hazard prevention in similar areas.
- Published
- 2024
- Full Text
- View/download PDF
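A minimal sketch of the frequency ratio (FR) bivariate model named here: for each factor class, FR is the proportion of landslide pixels falling in the class divided by the proportion of all pixels in the class. The raster data below are synthetic.

```python
# Sketch: frequency ratio per factor class on a flattened synthetic raster.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
pixels = pd.DataFrame({
    "slope_class": rng.integers(1, 5, size=10_000),   # hypothetical classes
    "landslide": rng.random(10_000) < 0.03,           # inventory flag
})

landslide_share = (pixels.groupby("slope_class")["landslide"].sum()
                   / pixels["landslide"].sum())
pixel_share = pixels.groupby("slope_class").size() / len(pixels)
fr = landslide_share / pixel_share
print(fr.round(3))   # FR > 1 marks classes more prone to landslides
```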
47. An interpretable hybrid framework combining convolution latent vectors with transformer based attention mechanism for rolling element fault detection and classification
- Author
-
Ali Saeed, M. Usman Akram, Muazzam Khattak, and M. Belal Khan
- Subjects
Intelligent fault diagnosis ,Deep learning ,CNN and transformer ,Fault classification ,Hybrid models ,Science (General) ,Q1-390 ,Social sciences (General) ,H1-99 - Abstract
Failure of industrial assets can cause financial, operational, and safety hazards across different industries, so monitoring their condition is crucial for successful and smooth operations. The colossal volume of sensory data generated throughout industrial operations supports real-time condition monitoring of these assets, and leveraging digital technologies to analyze the acquired data creates an ideal environment for applying advanced data-driven machine learning techniques, such as convolutional neural networks (CNNs) and vision transformers (ViTs), to detect and classify faults, enabling accurate prediction and timely maintenance. In this paper, we present a novel hybrid framework that combines the local feature extraction ability of CNNs with the transformer's comprehensive understanding of the global context. The proposed method leverages the weight-sharing properties of CNNs and the transformer's ability to capture spatial relationships in large-scale patterns, making it applicable to datasets of varying sizes. Preprocessing methods such as data augmentation are used to train the model on the Case Western Reserve University (CWRU) dataset in order to increase generalization with computational efficiency. An average fault classification accuracy of 99.62% is accomplished over all three fault classes, with an average time-to-fault detection of 38.4 ms. The MFPT fault dataset is used to further validate the method, with an accuracy of 99.17% for the outer race and 99.26% for the inner race. Moreover, the proposed framework can be modified to accommodate alternative convolutional models.
- Published
- 2024
- Full Text
- View/download PDF
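A hedged PyTorch sketch of the hybrid structure this entry describes: a 1-D CNN front end extracts local features from a vibration window, and a transformer encoder models their global context before classification. The layer sizes and three-class setup are illustrative, not the paper's architecture.

```python
# Sketch: 1-D CNN feature extractor feeding a transformer encoder.
import torch
import torch.nn as nn

class CNNTransformer(nn.Module):
    def __init__(self, n_classes=3, d_model=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # local feature extraction
            nn.Conv1d(1, 32, kernel_size=16, stride=4), nn.ReLU(),
            nn.Conv1d(32, d_model, kernel_size=8, stride=2), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                         # x: (batch, 1, samples)
        feats = self.cnn(x).transpose(1, 2)       # -> (batch, seq, d_model)
        ctx = self.encoder(feats).mean(dim=1)     # global average pooling
        return self.head(ctx)

model = CNNTransformer()
window = torch.randn(8, 1, 1024)                  # 8 raw vibration windows
print(model(window).shape)                        # torch.Size([8, 3])
```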
48. Bioprocess feeding optimization through in silico dynamic experiments and hybrid digital models—a proof of concept
- Author
-
Gianmarco Barberi, Christian Giacopuzzi, and Pierantonio Facco
- Subjects
cell cultures ,hybrid models ,DoDE ,feeding schedule optimization ,artificial neural networks ,Technology ,Chemical technology ,TP1-1185 - Abstract
The development of cell cultures to produce monoclonal antibodies is a multi-step, time-consuming, and labor-intensive procedure which usually lasts several years and requires heavy investment by biopharmaceutical companies. One key aspect of process optimization is improving the feeding strategy. This step is typically performed through design of experiments (DoE) during process development, so as to identify the optimal combinations of factors which maximize the productivity of the cell cultures. However, DoE is not suitable for time-varying factor profiles because it requires a large number of experimental runs, which can last several weeks and cost tens of thousands of dollars. Here, we suggest a methodology to optimize the feeding schedule of mammalian cell cultures by virtualizing part of the experimental campaign on a hybrid digital model of the process, accelerating experimentation and reducing the experimental burden. The proposed methodology couples design of dynamic experiments (DoDE) with a hybrid semi-parametric digital model. In particular, DoDE is used to design optimal experiments with time-varying factor profiles, whose experimental data are then utilized to train the hybrid model; the trained model identifies the optimal time profiles of glucose and glutamine for maximizing the antibody titer in the culture despite the limited number of experiments performed on the process. As a proof of concept, the proposed methodology is applied to a simulated process producing monoclonal antibodies at a 1-L shake flask scale, and the results are compared with an experimental campaign based on DoDE and response surface modeling. The hybrid digital model requires an extremely limited number of experiments (nine) to be accurately trained, making it a promising solution for performing in silico experimental campaigns. The proposed optimization strategy provides a 34.9% increase in the antibody titer with respect to the training data and a 2.8% higher antibody titer than the optimal results of two DoDE-based experimental campaigns comprising different numbers of experiments (9 and 31), achieving a high antibody titer (3,222.8 mg/L), very close to the real process optimum (3,228.8 mg/L).
- Published
- 2024
- Full Text
- View/download PDF
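A minimal sketch of the hybrid semi-parametric structure referred to here: a mechanistic mass balance (the parametric part) closed by a data-driven specific-rate term. The Monod-like surrogate and all constants below are hypothetical; a real workflow would fit the rate model to the DoDE runs.

```python
# Sketch: mechanistic mass balance closed by a data-driven rate surrogate.
import numpy as np
from scipy.integrate import solve_ivp

# Data-driven part: in practice a regression model fitted to experiments;
# a Monod-like form with made-up constants stands in here.
def mu_surrogate(glc, gln):
    return 0.04 * (glc / (0.5 + glc)) * (gln / (0.2 + gln))   # 1/h

def mass_balance(t, state, feed_glc, feed_gln):
    X, glc, gln = state                     # biomass, glucose, glutamine
    mu = mu_surrogate(glc, gln)
    dX = mu * X
    dglc = -2.0 * mu * X + feed_glc         # consumption plus feeding
    dgln = -0.5 * mu * X + feed_gln
    return [dX, dglc, dgln]

sol = solve_ivp(mass_balance, (0, 240), [0.3, 8.0, 4.0],
                args=(0.02, 0.01), dense_output=True)
print("final biomass:", round(sol.y[0, -1], 3))
```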
49. An improved hybrid model for shoreline change
- Author
-
Naresh Kumar Goud Lakku, Piyali Chowdhury, and Manasa Ranjan Behera
- Subjects
shoreline shift modeling ,hybrid models ,depth of closure ,coastal geomorphology ,wave climate ,Science ,General. Including nature conservation, geographical distribution ,QH1-199.5 - Abstract
Predicting the nearshore sediment transport and shifts in coastlines in view of climate change is important for planning and management of coastal infrastructure and requires an accurate prediction of the regional wave climate as well as an in-depth understanding of the complex morphology surrounding the area of interest. Recently, hybrid shoreline evolution models are being used to inform coastal management. These models typically apply the one-line theory to estimate changes in shoreline morphology based on littoral drift gradients calculated from a 2DH coupled wave, flow, and sediment transport model. As per the one-line theory, the calculated littoral drift is uniformly distributed over the active coastal profile. A key challenge facing the application of hybrid models is that they fail to consider complex morphologies when updating the shorelines for several scenarios. This is mainly due to the scarcity of field datasets on beach behavior and nearshore morphological change that extends up to the local depth of closure, leading to assumptions in this value in overall shoreline shift predictions. In this study, we propose an improved hybrid model for shoreline shift predictions in an open sandy beach system impacted by human interventions and changes in wave climate. Three main conclusions are derived from this study. First, the optimal boundary conditions for modeling shoreline evolution need to vary according to local coastal geomorphology and processes. Second, specifying boundary conditions within physically realistic ranges does not guarantee reliable shoreline evolution predictions. Third, hybrid 2D/one-line models have limited applicability in simple planform morphologies where the active beach profile is subject to direct impacts due to wave action and/or human interventions, plausibly due to the one-line theory assumption of a constant time-averaged coastal profile. These findings provide insightful information into the drivers of shoreline evolution around sandy beaches, which have practical implications for advancing the shoreline evolution models.
- Published
- 2024
- Full Text
- View/download PDF
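The one-line theory these hybrid models apply reduces, in its classical Pelnard-Considere form, to a conservation statement over the active profile:

```latex
% One-line shoreline change: cross-shore position y(x, t) responds to
% alongshore gradients in the littoral drift Q(x, t), distributed
% uniformly over the active profile height D_c.
\frac{\partial y}{\partial t} = -\frac{1}{D_c}\,\frac{\partial Q}{\partial x}
```

Here D_c is the active profile height (berm height plus depth of closure), which is exactly the uncertain closure-depth parameter the abstract highlights as a limitation of hybrid 2D/one-line models.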
50. Developing a digital mapping of soil organic carbon on a national scale using Sentinel-2 and hybrid models at varying spatial resolutions
- Author
-
Xiande Ji, Balamuralidhar Purushothaman, R. Venkatesha Prasad, and P.V. Aravind
- Subjects
Soil organic carbon ,Sentinel-2 ,Hybrid models ,Digital soil mapping ,Spatial autocorrelation ,Germany ,Ecology ,QH540-549.5 - Abstract
Mapping the spatial distribution of soil organic carbon (SOC) is crucial for monitoring soil health, understanding ecosystem functions, and contributing to global carbon cycling. However, few studies have directly compared the influence of hybrid and individual models at varying spatial resolutions on SOC prediction at a national scale. In this study, combining remote sensing data, we utilized the LUCAS 2018 soil dataset to evaluate the potential of hybrid models for predicting SOC content at different spatial resolutions in Germany. The hybrid models PLSRK and RFK consisted of partial least squares regression (PLSR) and random forest (RF) models, respectively, each combined with ordinary kriging (OK) of the residuals. Individual PLSR and RF models were used as reference models. All models were applied to estimate SOC content at 10 m, 50 m, 100 m, and 200 m spatial resolutions, with Sentinel-2 bands, band indices, and topographic variables as predictors. The results revealed that the hybrid models predicted SOC content more accurately, with higher explained variance and lower prediction errors than the individual models. The RFK model at a spatial resolution of 100 m was the best-fitting model, with R2 = 0.416, RMSE = 0.545, and RPIQ = 1.647, improving the explained variance by 3.74% over the RF model. The results also showed that hybrid models at a relatively coarse resolution (100 m) were more accurate than those at high spatial resolutions (10 m, 50 m). Sentinel-2 remote sensing data showed significant predictive capability for estimating SOC content. The predicted spatial distribution revealed that high SOC concentrates in the northwest grassland, the central and southwestern mountains, and the Alps in Germany. Our study provides a benchmark SOC map of Germany for monitoring changes resulting from land use and climate impacts, and illustrates the accuracy of hybrid models and the effects of spatial resolution on SOC prediction at a national scale.
- Published
- 2024
- Full Text
- View/download PDF
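A hedged sketch of the RFK hybrid described above: a random forest predicts the covariate-driven trend, and ordinary kriging interpolates its residuals for the spatially autocorrelated part. The pykrige dependency is assumed available, and the coordinates, covariates, and SOC values are synthetic.

```python
# Sketch: regression kriging = RF trend + ordinary kriging of RF residuals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from pykrige.ok import OrdinaryKriging   # assumed installed (pip install pykrige)

rng = np.random.default_rng(8)
n = 300
lon, lat = rng.uniform(6, 15, n), rng.uniform(47, 55, n)   # Germany-ish box
covariates = rng.normal(size=(n, 4))     # stand-ins for Sentinel-2 bands etc.
soc = (20 + covariates @ np.array([3.0, -1.5, 2.0, 0.5])
       + 2 * np.sin(lon) + rng.normal(0, 1, n))            # synthetic SOC

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(covariates, soc)
residuals = soc - rf.predict(covariates)

ok = OrdinaryKriging(lon, lat, residuals, variogram_model="spherical")
res_pred, _ = ok.execute("points", lon[:5], lat[:5])

# Hybrid prediction = trend (RF) + spatially correlated residual (kriging).
hybrid = rf.predict(covariates[:5]) + res_pred
print(np.round(hybrid, 2))
```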