Improving stability and safety in concrete structures against high-energy projectiles: a machine learning perspective
- Author
Qianhui Zhang, Yuzhen Jin, Guangzhi Wang, Qingmei Sun, and Hamzeh Ghorbani
- Subjects
strategic structure, concrete structures, bullet penetration depth, machine learning, construction materials, Technology
- Abstract
Concrete structures are commonly used as secure settlements and strategic shelters due to their inherent strength, durability, and wide availability. Assessing the robustness and integrity of strategic concrete structures against high-energy projectiles is critical for safeguarding vital infrastructure, protecting human life, and advancing global sustainable development. This research focuses on forecasting bullet penetration depth (BPD) using four machine learning (ML) models: Multilayer Perceptron (MLP), Support Vector Machine (SVM), Light Gradient Boosting Machine (LightGBM), and K-Nearest Neighbors (KNN). The dataset consists of 1,020 data points sourced from the National Institute of Standards and Technology (NIST), encompassing parameters such as cement content (Cp), ground granulated blast-furnace slag (GGBFS), fly ash content (FA), water portion (Wp), superplasticizer content (Sp), coarse aggregate content (CA), fine aggregate content (FAA), concrete sample age (t), concrete compressive strength (CCS), gun type (G-type), bullet caliber (B-Cali), bullet weight (Wb), and bullet velocity (Vb). Feature selection revealed that the MLP model, incorporating eight input variables (FA, CA, Sp, GGBFS, Cp, t, FAA, and CCS), provides the most accurate predictions for BPD across the entire dataset. Among the four models compared in this study, KNN demonstrates distinct superiority over the other methods. KNN, a non-parametric ML model used for classification and regression, offers several advantages, including simplicity, no explicit training phase, robustness to noisy data, suitability for large datasets, and interpretability. The results show that KNN outperforms the other models presented in this paper, exhibiting an R2 value of 0.9905 and an RMSE of 0.1811 cm, signifying higher predictive accuracy than the other models. Finally, the error analysis across iterations shows that the final prediction error of the KNN model is lower than that of the SVM, MLP, and LightGBM models.
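The following is a minimal, illustrative sketch of the kind of KNN regression pipeline the abstract describes, not the authors' implementation; the file path, column names, train/test split, and hyperparameters (e.g., n_neighbors=5) are assumptions for demonstration only.

```python
# Illustrative KNN regression baseline for predicting bullet penetration depth (BPD)
# from concrete mix-design and projectile features, evaluated with R2 and RMSE.
# Dataset path and column layout are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score, mean_squared_error

# Eight input variables reported by the abstract's feature-selection step.
features = ["FA", "CA", "Sp", "GGBFS", "Cp", "t", "FAA", "CCS"]

df = pd.read_csv("bpd_dataset.csv")  # placeholder path; target column assumed to be "BPD" (cm)
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["BPD"], test_size=0.2, random_state=42
)

# KNN is distance-based, so features are standardized before fitting.
model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("R2:  ", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```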
- Published
2024