262 results for "Sensitivity analysis"
Search Results
2. Conformal Sensitivity Analysis for Individual Treatment Effects.
- Author
- Yin, Mingzhang, Shi, Claudia, Wang, Yixin, and Blei, David M.
- Subjects
- SENSITIVITY analysis, TREATMENT effectiveness, DECISION making, SCIENTIFIC observation
- Abstract
Estimating an individual treatment effect (ITE) is essential to personalized decision making. However, existing methods for estimating the ITE often rely on unconfoundedness, an assumption that is fundamentally untestable with observed data. To assess the robustness of individual-level causal conclusions to violations of unconfoundedness, this article proposes a method for sensitivity analysis of the ITE, a way to estimate a range of the ITE under unobserved confounding. The method we develop quantifies unmeasured confounding through a marginal sensitivity model, and adapts the framework of conformal inference to estimate an ITE interval at a given confounding strength. In particular, we formulate this sensitivity analysis as a conformal inference problem under distribution shift, and we extend existing methods of covariate-shifted conformal inference to this more general setting. The resulting predictive interval has guaranteed nominal coverage of the ITE and provides this coverage with distribution-free and nonasymptotic guarantees. We evaluate the method on synthetic data and illustrate its application in an observational study. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
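The interval construction described in the abstract above can be illustrated with a toy sketch: a worst-case weighted split-conformal quantile in which each calibration weight may vary within an odds band [1/Γ, Γ], in the spirit of a marginal sensitivity model. All names and the adversarial-upweighting shortcut are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def weighted_quantile(scores, weights, q):
    """Smallest score s such that the normalized weight of {scores <= s} >= q."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cw = np.cumsum(w) / np.sum(w)
    return s[np.searchsorted(cw, q)]

def sensitivity_conformal_bound(cal_scores, gamma, alpha=0.1):
    """Worst-case conformal quantile when each weight may lie in [1/gamma, gamma].
    Putting weight gamma on the largest residuals (and 1/gamma elsewhere)
    inflates the (1 - alpha) weighted quantile -- a conservative half-width."""
    n = len(cal_scores)
    order = np.argsort(cal_scores)
    worst = np.full(n, 1.0 / gamma)
    k = max(1, int(np.ceil(alpha * n)))
    worst[order[-k:]] = gamma          # adversarially upweight large residuals
    return weighted_quantile(cal_scores, worst, 1 - alpha)

rng = np.random.default_rng(0)
scores = np.abs(rng.normal(size=500))                 # |y - yhat| on a calibration split
q1 = sensitivity_conformal_bound(scores, gamma=1.0)   # no unmeasured confounding
q2 = sensitivity_conformal_bound(scores, gamma=2.0)   # confounding strength 2
```

As expected, the interval half-width grows with the assumed confounding strength.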
3. Sensitivity Analyses of Clinical Trial Designs: Selecting Scenarios and Summarizing Operating Characteristics.
- Author
- Han, Larry, Arfè, Andrea, and Trippa, Lorenzo
- Subjects
- EXPERIMENTAL design, SENSITIVITY analysis, MATHEMATICAL optimization
- Abstract
The use of simulation-based sensitivity analyses is fundamental for evaluating and comparing candidate designs of future clinical trials. In this context, sensitivity analyses are especially useful to assess the dependence of important design operating characteristics on various unknown parameters. Typical examples of operating characteristics include the likelihood of detecting treatment effects and the average study duration, which depend on parameters that are unknown until after the onset of the clinical study, such as the distributions of the primary outcomes and patient profiles. Two crucial components of sensitivity analyses are (i) the choice of a set of plausible simulation scenarios and (ii) the list of operating characteristics of interest. We propose a new approach for choosing the set of scenarios to be included in a sensitivity analysis. We maximize a utility criterion that formalizes whether a specific set of sensitivity scenarios is adequate to summarize how the operating characteristics of the trial design vary across plausible values of the unknown parameters. Then, we use optimization techniques to select the best set of simulation scenarios (according to the criteria specified by the investigator) to exemplify the operating characteristics of the trial design. We illustrate our proposal in three trial designs. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
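A minimal sketch of the scenario-selection idea in the abstract above: pick a small set of simulation scenarios whose operating characteristics span the plausible range. The farthest-point greedy rule below is a stand-in for the authors' utility-maximization procedure, and the power curve is hypothetical.

```python
import numpy as np

def greedy_scenarios(oc, k):
    """Greedily pick k scenario indices so the selected set spans the spread of
    operating characteristics: each step adds the scenario farthest (in OC
    space) from the current selection (farthest-point heuristic)."""
    chosen = [int(np.argmin(oc))]          # start from an extreme scenario
    while len(chosen) < k:
        d = np.min(np.abs(oc[:, None] - oc[chosen][None, :]), axis=1)
        chosen.append(int(np.argmax(d)))
    return sorted(chosen)

# toy: power of a trial design across a grid of true effect sizes
effects = np.linspace(0.0, 1.0, 21)
power = 1 / (1 + np.exp(-8 * (effects - 0.5)))   # hypothetical power curve
sel = greedy_scenarios(power, k=4)               # spans low/high/intermediate power
```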
4. Optical solitons, qualitative analysis and multistability response to study the dynamical behaviour of light wave promulgation.
- Author
- Raza, Nauman and Alhussain, Ziyad A.
- Abstract
Our aim is to explore novel optical solitons within the context of an extended higher-order nonlinear Schrödinger equation that governs the behaviour of propagating light waves. Primarily, this research finds abundant types of solitons, such as singular, kink and trigonometric-function solitons, subject to certain constraint conditions. We have utilized the G′/(bG′ + G + a)-expansion method, which has proved effective in retrieving these solitons and presenting them for further analysis. The model's dynamic behaviour is then investigated through bifurcation, quasi-periodic oscillations, chaotic behaviour, and sensitivity. These include methods like phase portrait rendering, time series scrutiny, Lyapunov exponent calculation, and the assessment of multi-stability. Finally, sensitivity analysis is conducted at three distinct initial conditions, revealing that the model displays a high level of sensitivity, with substantial alterations occurring in response to minor changes in the initial conditions. The results of this study are revolutionary, intriguing, and possess crucial theoretical importance in evolution disruptions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Effects of behaviour change on HFMD transmission.
- Author
- Zhang, Tongrui, Zhang, Zhijie, Yu, Zhiyuan, Huang, Qimin, and Gao, Daozhou
- Subjects
- BASIC reproduction number, FOOT & mouth disease, DISEASE eradication, SENSITIVITY analysis
- Abstract
We propose a hand, foot and mouth disease (HFMD) transmission model for children with behaviour change and imperfect quarantine. The symptomatic and quarantined states obey constant behaviour change while others follow variable behaviour change depending on the numbers of new and recent infections. The basic reproduction number $\mathcal{R}_0$ of the model is defined and shown to be a threshold for disease persistence and eradication. Namely, the disease-free equilibrium is globally asymptotically stable if $\mathcal{R}_0 \le 1$, whereas the disease persists and there is a unique endemic equilibrium otherwise. By fitting the model to weekly HFMD data of Shanghai in 2019, the reproduction number is estimated at 2.41. Sensitivity analysis for $\mathcal{R}_0$ shows that avoiding contagious contacts and implementing strict quarantine are essential to lower HFMD persistence. Numerical simulations suggest that strong behaviour change not only reduces the peak size and endemic level dramatically but also impairs the role of asymptomatic transmission. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
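Sensitivity analysis for $\mathcal{R}_0$, as in the abstract above, is commonly done with normalized forward sensitivity indices (elasticities). A sketch, assuming a toy R0 = beta/(gamma + delta) rather than the paper's HFMD model:

```python
def r0(beta, gamma, delta):
    """Toy next-generation R0: transmission rate beta, recovery rate gamma,
    quarantine rate delta (illustrative, not the paper's model)."""
    return beta / (gamma + delta)

def elasticity(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (p / R0) * dR0/dp, computed with
    central differences so no analytic derivative is needed."""
    p = dict(params)
    base = f(**p)
    p[name] = params[name] * (1 + h)
    up = f(**p)
    p[name] = params[name] * (1 - h)
    down = f(**p)
    dfdp = (up - down) / (2 * h * params[name])
    return params[name] * dfdp / base

pars = {"beta": 0.6, "gamma": 0.2, "delta": 0.05}
e_beta = elasticity(r0, pars, "beta")    # analytically +1: R0 is linear in beta
e_gamma = elasticity(r0, pars, "gamma")  # analytically -gamma/(gamma+delta) = -0.8
```

A positive index means increasing the parameter raises $\mathcal{R}_0$; the sign and magnitude rank the parameters by influence.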
6. Genetic evidence supporting a causal role of Janus kinase 2 in prostate cancer: a Mendelian randomization study.
- Author
- Yujia Xi, Rui Wen, Ran Zhang, Qirui Dong, Sijia Hou, and Shengxiao Zhang
- Subjects
- PROSTATE cancer, GENETIC variation, SENSITIVITY analysis, MULTIPLICITY (Mathematics), ANDROGEN receptors, HETEROGENEITY
- Abstract
Background: Janus kinase-2 (JAK2) inhibitors are now being tried in basic research and clinical practice in prostate cancer (PCa). However, the causal relationship between JAK2 and PCa has not been uniformly described. Here, we examined the cause-effect relation between JAK2 and PCa. Methods: Two-sample Mendelian randomization (MR) analysis of genetic variation data of JAK2 and PCa from the IEU OpenGWAS Project was performed by inverse variance weighting, MR-Egger, and weighted median. Cochran's Q heterogeneity test and MR-Egger multiplicity analysis were performed to normalize the MR analysis results to reduce the effect of bias on the results. Results: Five instrumental variables were identified for further MR analysis. Specifically, the inverse variance-weighted (OR: 1.0009, 95% CI: 1.0001-1.0015, p = 0.02) and weighted median (OR: 1.0009, 95% CI: 1.0000-1.0017, p = 0.03) estimates were consistent. Sensitivity analysis showed that there was no heterogeneity (p = 0.448) or horizontal multiplicity (p = 0.770) among the instrumental variables. Conclusions: We found that JAK2 was associated with the development of PCa and was a risk factor for PCa, which might be instructive for the use of JAK2 inhibitors in PCa patients. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
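The inverse variance weighting step described above can be sketched from GWAS summary statistics; the instrument data below are hypothetical, not the paper's SNPs:

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance-weighted Wald estimate from GWAS summary
    statistics: per-SNP ratios beta_out/beta_exp weighted by their (first-order)
    precisions."""
    ratio = beta_out / beta_exp
    w = (beta_exp / se_out) ** 2          # inverse variance of each ratio
    est = np.sum(w * ratio) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# toy summary data for 5 instruments with a true causal effect of 0.05
bx = np.array([0.10, 0.12, 0.08, 0.15, 0.11])            # SNP-exposure effects
by = 0.05 * bx + np.array([0.001, -0.002, 0.001, 0.000, -0.001])
se = np.full(5, 0.01)                                    # SNP-outcome std errors
est, se_est = ivw_estimate(bx, by, se)                   # est near 0.05
```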
7. Sharp Sensitivity Analysis for Inverse Propensity Weighting via Quantile Balancing.
- Author
- Dorn, Jacob and Guo, Kevin
- Subjects
- SENSITIVITY analysis, TREATMENT effectiveness, QUANTILE regression
- Abstract
Inverse propensity weighting (IPW) is a popular method for estimating treatment effects from observational data. However, its correctness relies on the untestable (and frequently implausible) assumption that all confounders have been measured. This article introduces a robust sensitivity analysis for IPW that estimates the range of treatment effects compatible with a given amount of unobserved confounding. The estimated range converges to the narrowest possible interval (under the given assumptions) that must contain the true treatment effect. Our proposal is a refinement of the influential sensitivity analysis by Zhao, Small, and Bhattacharya, which we show gives bounds that are too wide even asymptotically. This analysis is based on new partial identification results for Tan's marginal sensitivity model. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
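The kind of bound described above can be sketched for the Hajek IPW mean under Tan's marginal sensitivity model, where each propensity odds may be off by a factor in [1/λ, λ]. This is a simplified illustration of the general idea, not the sharp quantile-balancing estimator of the paper; because the objective is a ratio of weighted sums, the optimum over the weight box has a threshold structure in the outcome, so scanning thresholds suffices here.

```python
import numpy as np

def msm_bounds(y, e, lam):
    """Range of the Hajek IPW mean of Y among treated units when each true
    inverse-propensity weight 1/e = 1 + odds may have its odds scaled by a
    factor in [1/lam, lam] (marginal sensitivity model)."""
    base_odds = (1 - e) / e
    lo_w = 1 + base_odds / lam            # weight with shrunken odds
    hi_w = 1 + base_odds * lam            # weight with inflated odds
    order = np.argsort(y)
    y, lo_w, hi_w = y[order], lo_w[order], hi_w[order]
    n = len(y)
    vals = []
    for k in range(n + 1):                # threshold: tail of y gets extreme weight
        w_hi_tail = np.concatenate([lo_w[:k], hi_w[k:]])
        vals.append(np.sum(w_hi_tail * y) / np.sum(w_hi_tail))
        w_lo_tail = np.concatenate([hi_w[:k], lo_w[k:]])
        vals.append(np.sum(w_lo_tail * y) / np.sum(w_lo_tail))
    return min(vals), max(vals)

rng = np.random.default_rng(1)
y = rng.normal(size=200)
e = rng.uniform(0.2, 0.8, size=200)       # estimated propensity scores
lo1, hi1 = msm_bounds(y, e, lam=1.0)      # collapses to a point estimate
lo2, hi2 = msm_bounds(y, e, lam=2.0)      # a genuine interval
```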
8. Optimal control analysis of a COVID-19 model.
- Author
- Kifle, Zenebe Shiferaw and Obsu, Legesse Lemecha
- Subjects
- PONTRYAGIN'S minimum principle, COVID-19 pandemic, BASIC reproduction number, COVID-19, INFECTIOUS disease transmission, TRANSMISSION of sound
- Abstract
In this paper, an optimal control model for the transmission dynamics of COVID-19 is investigated. We established important model properties like nonnegativity and boundedness of solutions, and also the region of invariance. Further, an expression for the basic reproduction number is computed and its sensitivity with respect to model parameters is carried out to identify the most sensitive parameter. Based on sensitivity analysis, optimal control strategies were presented to reduce the disease burden and related costs. It is demonstrated that an optimal control does exist and is unique. The characterization of optimal trajectories is analytically studied via Pontryagin's Minimum Principle. Moreover, various simulations were performed to support the analytical results. The simulation results showed that the proposed controls significantly reduce the disease burden compared to the case without controls. Further, they reveal that the applied control strategies are effective throughout the intervention period in reducing COVID-19 in the community. Besides, the simulation results of the optimal control suggested that concurrently applying all control strategies outperforms any other preventive measure in mitigating the spread of COVID-19. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Optimal control analysis of coffee berry borer infestation in the presence of farmer's awareness.
- Author
- Abawari, Mohammedsultan Abaraya, Obsu, Legesse Lemecha, and Melese, Abdisa Shiferaw
- Subjects
- PONTRYAGIN'S minimum principle, BASIC reproduction number, NONLINEAR differential equations, ORDINARY differential equations, EDUCATION of farmers, COFFEE drinks
- Abstract
Coffee is the most critical stimulant beverage in the world and represents a significant source of income in many tropical and subtropical countries. In this paper, a deterministic mathematical model has been formulated to describe the infestation dynamics of the coffee berry borer (CBB) using a system of non-linear ordinary differential equations with farmers' awareness and optimal control. The system has two equilibrium points, namely the CBB-free equilibrium point and the endemic equilibrium point, which exists conditionally. The basic reproduction number, which plays a vital role in mathematical epidemiology, was derived. The qualitative analysis of the model revealed the scenario for both the CBB-free equilibrium and the endemic equilibrium points. The local stability of the equilibria is established via the Jacobian matrix and Routh-Hurwitz criteria, while the global stability of the equilibria is proven by using an appropriate Lyapunov function. A normalized sensitivity analysis has also been performed to observe the impact of different parameters on the basic reproduction number. We extended the proposed model into an optimal control problem by incorporating two control variables: (i) the effort made to reduce the colonizing females through chemicals, traps, and biological control using entomopathogenic fungi such as Beauveria bassiana, which are applied to the surface of the coffee berries and kill the colonizing females of CBB when they drill an entry hole into the coffee berry; and (ii) the effort made to increase farmers' awareness through media campaigns and education. The optimal control strategy is then found by minimizing the number of CBB individuals while considering the cost of implementation. The existence of optimal controls is examined using Pontryagin's minimum principle. Finally, the numerical simulations show agreement with the analytical results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. Exploring the role of fractal-fractional operators in mathematical modelling of corruption.
- Author
- Awadalla, Muath, ur Rahman, Mati, Al-Duais, Fuad S., Al-Bossly, Afrah, Abuasbeh, Kinda, and Arab, Meraa
- Subjects
- FIXED point theory, CORRUPTION, MATHEMATICAL models, TEST validity, MULTIFRACTALS
- Abstract
In the proposed manuscript, we present a novel mathematical model for analysing the persistence of corruption in human communities, based on fractal-fractional concepts and the Mittag-Leffler kernel law. Corruption is considered analogous to a disease that can spread and influence others who are free from corruption. Our model evaluates the equilibrium points of corruption and tests their stability using the corruption reproduction number. We also apply the fixed point theory concept to check for the existence and uniqueness of a solution, in the context of a fractal-fractional operator. Solution stability is verified using the perturbed Ulam-Hyers technique, and an approximate solution is obtained through the use of Lagrangian polynomials. To test the validity of our model, we simulate all compartments at different fractional orders and time durations, providing additional insights into the dynamics of corruption beyond natural orders. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs.
- Author
- Yang, Liuqing, Zhou, Yongdao, Fu, Haoda, Liu, Min-Qian, and Zheng, Wei
- Abstract
Shapley value is originally a concept in econometrics to fairly distribute both gains and costs to players in a coalition game. In the recent decades, its application has been extended to other areas such as marketing, engineering and machine learning. For example, it produces reasonable solutions for problems in sensitivity analysis, local model explanation toward interpretable machine learning, node importance in social networks, attribution models, etc. However, it could be very expensive to compute the Shapley value. Specifically, in a d-player coalition game, calculating a Shapley value requires the evaluation of d! or 2^d marginal contribution values, depending on whether we are taking the permutation or combination formulation of the Shapley value. Hence, it becomes infeasible to calculate the Shapley value when d is reasonably large. A common remedy is to take a random sample of the permutations to surrogate for the complete list of permutations. We find an advanced sampling scheme can be designed to yield much more accurate estimation of the Shapley value than the simple random sampling (SRS). Our sampling scheme is based on combinatorial structures in the field of design of experiments (DOE), particularly the order-of-addition experimental designs for the study of how the orderings of components would affect the output. We show that the obtained estimates are unbiased, and can sometimes deterministically recover the original Shapley value. Both theoretical and simulation results show that our DOE-based sampling scheme outperforms SRS in terms of estimation accuracy. Surprisingly, it is also slightly faster than SRS. Lastly, real data analysis is conducted for the C. elegans nervous system and the 9/11 terrorist network. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
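The simple-random-sampling baseline that the abstract's DOE-based scheme improves upon can be sketched directly from the permutation formulation; the coalition game below is a toy example:

```python
import itertools
import random

def shapley_exact(value, d):
    """Exact Shapley values via the permutation formula: average marginal
    contribution of each player over all d! orderings."""
    phi = [0.0] * d
    perms = list(itertools.permutations(range(d)))
    for perm in perms:
        coalition = set()
        for player in perm:
            before = value(coalition)
            coalition = coalition | {player}
            phi[player] += value(coalition) - before
    return [p / len(perms) for p in phi]

def shapley_srs(value, d, n_samples, seed=0):
    """SRS surrogate: average marginal contributions over randomly drawn
    orderings (the baseline the abstract's DOE designs outperform)."""
    rng = random.Random(seed)
    phi = [0.0] * d
    for _ in range(n_samples):
        perm = list(range(d))
        rng.shuffle(perm)
        coalition = set()
        for player in perm:
            before = value(coalition)
            coalition = coalition | {player}
            phi[player] += value(coalition) - before
    return [p / n_samples for p in phi]

# toy symmetric game: a coalition's value is the square of its size,
# so every player's Shapley value is 16 / 4 = 4
v = lambda s: len(s) ** 2
exact = shapley_exact(v, 4)
approx = shapley_srs(v, 4, 500)
```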
12. Multicriteria decision and sensitivity analysis support for optimal airport site locations in Ordu Province, Turkey.
- Author
- Çolak, H. Ebru, Memişoğlu Baykal, Tuğba, and Genç, Nihal
- Subjects
- DECISION making, SENSITIVITY analysis, AIRPORTS, PROVINCES
- Abstract
In the study carried out in the Ordu province of Turkey, 16 criteria to be used in airport site selection were evaluated through successive processes in the GIS environment. Each criterion was weighted with the AHP method, and a map of suitability for airport site selection was obtained in the GIS environment using these weights. The most suitable place for the airport in Ordu province was detected by evaluating the nine regions determined according to the resulting map. Then, the alternative areas preferred from the most suitable areas were evaluated according to the total scores from the classification intervals under a scenario in which the criterion weights were assumed to be equal. Finally, sensitivity analysis was performed to identify which criteria played an active role in the site selection analysis, thereby testing the sensitivity of the site selection results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
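The AHP weighting step mentioned above is a principal-eigenvector computation with a consistency check. A sketch with a hypothetical 3-criterion comparison matrix (the criteria and judgments are illustrative, not the study's 16 criteria):

```python
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector weights from a pairwise comparison matrix, plus the
    consistency ratio CR = CI / RI (Saaty's random index; RI = 0.58 for n = 3).
    CR < 0.1 is the conventional acceptability threshold."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)               # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)      # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    cr = ci / ri if ri > 0 else 0.0
    return w, cr

# hypothetical pairwise judgments for 3 criteria (e.g. slope vs. cost vs. noise)
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3.0, 1.0, 3.0],
              [1 / 5.0, 1 / 3.0, 1.0]])
w, cr = ahp_weights(A)                     # w sums to 1; cr well below 0.1
```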
13. A Negative Correlation Strategy for Bracketing in Difference-in-Differences.
- Author
- Ye, Ting, Keele, Luke, Hasegawa, Raiden, and Small, Dylan S.
- Abstract
The method of difference-in-differences (DID) is widely used to study the causal effect of policy interventions in observational studies. DID employs a before and after comparison of the treated and control units to remove bias due to time-invariant unmeasured confounders under the parallel trends assumption. Estimates from DID, however, will be biased if the outcomes for the treated and control units evolve differently in the absence of treatment, namely if the parallel trends assumption is violated. We propose a general identification strategy that leverages two groups of control units whose outcomes relative to the treated units exhibit a negative correlation, and achieves partial identification of the average treatment effect for the treated. The identified set is of a union bounds form that involves the minimum and maximum operators, which makes the canonical bootstrap generally inconsistent and naive methods overly conservative. By using the directional inconsistency of the bootstrap distribution, we develop a novel bootstrap method to construct confidence intervals for the identified set and parameter of interest when the identified set is of a union bounds form, and we theoretically establish the asymptotic validity of the proposed method. We develop a simple falsification test and sensitivity analysis. We apply the proposed strategy for bracketing to study whether minimum wage laws affect employment levels. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
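The canonical DID comparison that the abstract's bracketing strategy builds on can be sketched in a few lines; the employment numbers are made up for illustration:

```python
import numpy as np

def did(pre_t, post_t, pre_c, post_c):
    """Canonical difference-in-differences: change among treated units minus
    change among controls, which removes time-invariant confounding under the
    parallel trends assumption."""
    return (np.mean(post_t) - np.mean(pre_t)) - (np.mean(post_c) - np.mean(pre_c))

# toy employment levels (hypothetical numbers, not the minimum-wage data)
pre_t, post_t = np.array([20.0, 22.0, 21.0]), np.array([23.0, 25.0, 24.0])
pre_c, post_c = np.array([30.0, 31.0, 29.0]), np.array([31.0, 32.0, 30.0])
effect = did(pre_t, post_t, pre_c, post_c)   # (24 - 21) - (31 - 30) = 2.0
```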
14. Impact of suspicious streamflow data on the efficiency and parameter estimates of rainfall–runoff models.
- Author
- Thébault, Cyril, Perrin, Charles, Andréassian, Vazken, Thirel, Guillaume, Legrand, Sébastien, and Delaigue, Olivier
- Subjects
- STREAM measurements, RAIN gauges, STREAMFLOW, HYDROLOGIC models, ENGINEERING models, TIME series analysis
- Abstract
Many sources of error in hydroclimatic data can affect hydrological modelling, yet the impact of streamflow data quality is poorly quantified. This work aims to investigate whether inconsistencies found in streamflow time series commonly available for hydrological studies (typically in national streamflow archives) have an impact on the efficiency and the parameter estimates of rainfall–runoff models. Hydroclimatic data were gathered at the hourly time step over the period 1998–2018 for a set of 30 catchments in France. Hydrological modelling was carried out with the lumped conceptual GR5H (standing for modèle du Génie Rural à 5 paramètres Horaire, i.e. Hourly 5-parameter rural engineering model) model. A typology of "realistic" suspicious streamflow was established to set up several error models in order to corrupt the data. Our results suggest that common suspicious streamflow data do not have a strong impact on model efficiency and parameter estimates overall, but may be an important source of instability and lack of robustness when working on a single catchment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Sensitivity to Unobserved Confounding in Studies with Factor-Structured Outcomes.
- Author
- Zheng, Jiajing, Wu, Jiaxi, D'Amour, Alexander, and Franks, Alexander
- Abstract
In this work, we propose an approach for assessing sensitivity to unobserved confounding in studies with multiple outcomes. We demonstrate how prior knowledge unique to the multi-outcome setting can be leveraged to strengthen causal conclusions beyond what can be achieved from analyzing individual outcomes in isolation. We argue that it is often reasonable to make a shared confounding assumption, under which residual dependence amongst outcomes can be used to simplify and sharpen sensitivity analyses. We focus on a class of factor models for which we can bound the causal effects for all outcomes conditional on a single sensitivity parameter that represents the fraction of treatment variance explained by unobserved confounders. We characterize how causal ignorance regions shrink under additional prior assumptions about the presence of null control outcomes, and provide new approaches for quantifying the robustness of causal effect estimates. Finally, we illustrate our sensitivity analysis workflow in practice, in an analysis of both simulated data and a case study with data from the National Health and Nutrition Examination Survey (NHANES). Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Characteristic sensitivity analysis of herringbone gear power-split transmission system.
- Author
- Dong, Jin-cheng, Wang, San-Min, and Kou, Xiao-xi
- Subjects
- TORSIONAL vibration, SENSITIVITY analysis, FREE vibration, MODAL analysis, STRAIN energy
- Abstract
The herringbone gear power-split transmission is widely used in high-speed and heavy-load transmission systems, and its dynamic analysis is becoming increasingly important. The characteristic sensitivity of a gear transmission system indicates the influence of the dynamic parameters on the natural frequency of the gear transmission system, and it functions as an important theoretical reference value in the dynamic design of a herringbone gear power-split transmission system. In this paper, the dynamic equations of a herringbone gear power-split transmission system for a ship are presented, and the natural frequencies and modal shapes are calculated based on the free torsional vibration differential equation. Subsequently, the modal shapes were classified into the coupling vibration model and the branch vibration model. The characteristic sensitivity was studied using modal analysis, the characteristic sensitivity calculation equations for single and double roots of the natural frequency were deduced, and the characteristic sensitivity of the natural frequencies to the mesh stiffness was computed. Finally, the relationship between the modal strain energy and the characteristic sensitivity was analysed, and the physical essence of the characteristic sensitivity was revealed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Sensitivity analysis of error-contaminated time series data under autoregressive models with the application of COVID-19 data.
- Author
- Zhang, Qihuang and Yi, Grace Y.
- Subjects
- TIME series analysis, AUTOREGRESSIVE models, ERRORS-in-variables models, SENSITIVITY analysis, COVID-19
- Abstract
Autoregressive (AR) models are useful in time series analysis. Inferences under such models are distorted in the presence of measurement error, a common feature in applications. In this article, we establish analytical results for quantifying the biases of the parameter estimation in AR models if the measurement error effects are neglected. We consider two measurement error models to describe different data contamination scenarios. We propose an estimating equation approach to estimate the AR model parameters with measurement error effects accounted for. We further discuss forecasting using the proposed method. Our work is inspired by COVID-19 data, which are error-contaminated due to multiple reasons including those related to asymptomatic cases and varying incubation periods. We implement the proposed method by conducting sensitivity analyses and forecasting the fatality rate of COVID-19 over time for the four most populated provinces in Canada. The results suggest that incorporating or not incorporating measurement error effects may yield rather different results for parameter estimation and forecasting. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
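The attenuation bias that the abstract quantifies analytically is easy to reproduce by simulation: fitting a naive AR(1) to an error-contaminated series shrinks the estimated coefficient toward zero. A sketch under an additive classical measurement error assumption:

```python
import numpy as np

def fit_ar1(x):
    """Naive lag-1 least-squares estimate of the AR(1) coefficient,
    ignoring any measurement error in x."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

rng = np.random.default_rng(42)
phi, n = 0.8, 20000
y = np.zeros(n)
for t in range(1, n):                      # latent AR(1) process
    y[t] = phi * y[t - 1] + rng.normal()
x = y + rng.normal(scale=1.0, size=n)      # additive measurement error

phi_clean = fit_ar1(y)                     # close to the true 0.8
phi_noisy = fit_ar1(x)                     # attenuated toward zero
```

Here the attenuation factor is the reliability var(y) / (var(y) + var(error)), so neglecting the error yields roughly 0.74 * 0.8 ≈ 0.59 instead of 0.8.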
18. Bahadur Efficiency of Observational Block Designs.
- Author
- Rosenbaum, Paul R.
- Abstract
To be convincing, an observational or nonrandomized study of causal effects must demonstrate that its conclusions cannot be readily explained by a small unmeasured bias in the way individuals were assigned to treatment or control. The Bahadur relative efficiency of a sensitivity analysis compares the performance of different test statistics or different research designs when sensitivity to unmeasured bias is appraised: better statistics and better designs exhibit insensitivity to larger biases. Bahadur efficiency is relevant here because, unlike Pitman efficiency, it can depict efficiency with an effect that is not trivially small: every trivially small treatment effect is sensitive to trivially small biases. The Bahadur efficiency of a sensitivity analysis has been used by various authors in the simple case of matched pairs, but the technical issues are more complex in the case of blocks larger than pairs, and they are developed here for the first time. Choosing a better test statistic for a block design, or choosing a better block size—larger than pairs—can result in enormous increases in the efficiency of a sensitivity analysis. An adaptive choice of test statistic can ensure the better Bahadur efficiency of two competing statistics. An R package weightedRank implements the methods, contains the example and reproduces its analysis. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Stochastic multi-criteria decision making framework based on SMAA-VIKOR for reservoir flood control operation.
- Author
- Xu, Chengjing, Zhong, Ping-an, Zhu, Feilin, Li, Lingjie, Lu, Qingwen, and Yang, Luhua
- Subjects
- FLOOD control, MULTIPLE criteria decision making, DECISION making, RESERVOIRS, SENSITIVITY analysis, STOCHASTIC models, WATERSHEDS
- Abstract
In reservoir flood control operation, candidate reservoir operation alternatives are evaluated, ranked and selected by a multi-criteria decision making (MCDM) method. However, deterministic methods cannot handle MCDM problems where uncertainties exist in both criteria performance values (PVs) and criteria weights (CWs). To solve the issue, a SMAA-VIKOR model integrating stochastic multicriteria acceptability analysis (SMAA) theory with the Viekriterijumsko Kompromisno Rangiranje (VIKOR) method is established based on Aydogan and Ozmen (2017). Moreover, we conduct a comprehensive evaluation of the model's decision making results, including risk assessment and sensitivity analysis. The case study applied to a flood control system in Dadu River in China indicates that the SMAA-VIKOR model can quickly identify the optimal alternative that meets the subjective preferences of decision makers, and has remarkable advantages compared to the traditional SMAA-2 model and the deterministic VIKOR method, which can provide theoretical support for making a robust reservoir release decision. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
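The deterministic VIKOR method that SMAA-VIKOR randomizes over can be sketched as follows; the alternatives, criteria, and weights are hypothetical, and all criteria are treated as benefit-type for simplicity:

```python
import numpy as np

def vikor(F, w, v=0.5):
    """Deterministic VIKOR: S (group utility), R (individual regret), and the
    compromise index Q (lower is better). Rows of F are alternatives, columns
    are benefit-type criteria; v trades off S against R."""
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    norm = (f_best - F) / (f_best - f_worst)     # normalized regret per criterion
    S = (w * norm).sum(axis=1)
    R = (w * norm).max(axis=1)
    Q = v * (S - S.min()) / (S.max() - S.min()) + \
        (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q

# toy: 4 reservoir release alternatives scored on 3 criteria (hypothetical)
F = np.array([[0.9, 0.6, 0.7],
              [0.7, 0.8, 0.6],
              [0.5, 0.5, 0.9],
              [0.6, 0.7, 0.8]])
w = np.array([0.5, 0.3, 0.2])                    # criteria weights, sum to 1
Q = vikor(F, w)
best = int(np.argmin(Q))                         # compromise alternative
```

SMAA-style analysis would repeat this with criteria performances and weights drawn from distributions, then report how often each alternative ranks first.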
20. Probabilistic mapping and sensitivity assessment of dam-break flood hazard.
- Author
- Rizzo, Carmine, Maranzoni, Andrea, and D'Oria, Marco
- Subjects
- DAM failures, STRUCTURAL failures, FLOOD warning systems, CONCRETE dams, CONCRETE masonry, CONCRETE fatigue, EMERGENCY management
- Abstract
Quantitative assessment of dam-break flood hazard is central to dam emergency action planning and is typically performed deterministically. In structural failure of concrete or masonry dams, the collapse is assumed to be total and instantaneous, with the reservoir level at the spillway crest level. A probabilistic method is here proposed based on a set of dam-break scenarios characterized by different breach widths and reservoir levels in order to provide an appraisal of uncertainties in flood hazard indicators. Each scenario is assigned a weight, defined as a conditional probability given a dam-break event. Probabilistic flood hazard and inundation maps are produced for the case study of the hypothetical collapse of the Mignano dam (River Arda, northern Italy), and a sampling-based global sensitivity analysis is performed. Dam-break flooding was simulated using a two-dimensional hydrodynamic model on a high-resolution mesh. The probabilistic maps inherently provide quantitative information on the uncertainty of dam-break flood hazard. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Sensitivity analysis of unmeasured confounding in causal inference based on exponential tilting and super learner.
- Author
- Zhou, Mi and Yao, Weixin
- Subjects
- CAUSAL inference, SENSITIVITY analysis, NONPARAMETRIC estimation, MACHINE learning, CONFOUNDING variables, LATENT variables
- Abstract
Causal inference under the potential outcome framework relies on the strongly ignorable treatment assumption. This assumption is usually questionable in observational studies, and unmeasured confounding is one of the fundamental challenges in causal inference. To this end, we propose a new sensitivity analysis method to evaluate the impact of the unmeasured confounder by leveraging ideas of doubly robust estimators, the exponential tilt method, and the super learner algorithm. Compared to other existing methods of sensitivity analysis that parameterize the unmeasured confounder as a latent variable in the working models, the exponential tilting method does not impose any restrictions on the structure or models of the unmeasured confounders. In addition, in order to reduce the modeling bias of traditional parametric methods, we propose incorporating the super learner machine learning algorithm to perform nonparametric model estimation and the corresponding sensitivity analysis. Furthermore, most existing sensitivity analysis methods require multivariate sensitivity parameters, which makes their choice difficult and subjective in practice. In comparison, the new method has a univariate sensitivity parameter with a simple interpretation as a log-odds ratio for binary outcomes, which makes its choice and the application of the new sensitivity analysis method very easy for practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Sensitivity Analysis of Parameters of U-Net Model for Semantic Segmentation of Silt Storage Dams from Remote Sensing Images.
- Author
-
Hou, Jingwei, Hou, Bo, Zhu, Moyan, Zhou, Ji, and Tian, Qiong
- Subjects
- *
DEEP learning , *DAMS , *SILT , *SOIL conservation , *SENSITIVITY analysis , *GEOGRAPHIC information systems , *REMOTE sensing - Abstract
Building silt storage dams is an important measure to control soil erosion. Sensitivity analysis of the parameters in a deep learning model is a prerequisite for extracting silt storage dams with high precision from high-resolution remote sensing (RS) images. In this study, watershed features of the Hulu River and Lanni River in the Loess Plateau, China, are extracted using a geographic information system and a digital elevation model. The detection of silt storage dams using the U-Net model considered three high-resolution RS image datasets to evaluate the effect of different input sizes, batch sizes, and sample sizes on the extraction accuracy of silt storage dams. The results show that a large input size, batch size, and sample size can improve the accuracy of silt storage dams extracted by U-Net. U-Net with Dataset 3, an input size of 576 × 576, and a batch size of 4 achieved an overall accuracy of 96.26%, an F1 score of 70.61%, a mean intersection over union of 75.33%, a training time of 485 ms/step, minimal noise and shadow, and clear outlines of silt storage dams. This study provides theoretical and practical decision-making support for the planning, construction, and maintenance of silt storage dams, as well as for the ecological protection and high-quality development of the Yellow River Basin. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Analyzing the Effects of Plasma Treatment Process Parameters on Fading of Cotton Fabrics Dyed with Two-Color Mix Dyes Using Bayesian Regulated Neural Networks (BRNNs).
- Author
-
Liu, Senbiao, Liu, Yaohui Keane, Lo, Kwan-Yu Chris, and Kan, Chi-Wai
- Subjects
- *
NATURAL dyes & dyeing , *COTTON textiles , *COTTON , *PLASMA materials processing , *COLORIMETRY , *WATER purification , *DYES & dyeing - Abstract
This study used Bayesian Regulated Neural Networks (BRNN) with 10-fold cross-validation to accurately forecast fading effects of plasma treatment on cotton fabrics for a given set of parameters. By training six independent BRNN models, a reduction in model complexity and an enhancement in generalizability to unknown datasets were achieved. The input comprises plasma treatment parameters and color measurements of the cotton fabric before fading, while the output comprises color measurements after fading. The plasma treatment parameters included color depth, air (oxygen) concentration, water content and treatment time. Color measurements included CIE L*a*b*C*h and K/S values. Furthermore, 162 datasets derived from two-color mixed-dye cotton fabrics were utilized for training and testing. The outcomes revealed superior prediction performance of the BRNN compared to the Levenberg-Marquardt Neural Networks, with R2 values approaching 1 and 82.35% to 94.12% of the sample predictions lying within the acceptable color difference range. Through global sensitivity analysis, the impact of treatment parameters on fading effects was quantified, providing a scientific basis for parameter adjustment. This study not only elucidated the mechanism of plasma treatment-induced fading but also offers effective prediction tools for the intelligent and digital development of the fashion clothing fading domain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Selection of Knitted Fabrics Using a Hybrid BBWM-PFTOPSIS Method.
- Author
-
Ye, Jing and Chen, Ting-Yu
- Subjects
- *
AMBIGUITY , *TOPSIS method , *MULTIPLE criteria decision making , *FUZZY sets , *TEXTILES , *SENSITIVITY analysis - Abstract
Selecting the best knitted fabric with various comfort properties is considered a complicated multi-criteria decision-making (MCDM) issue that involves ambiguity and vagueness. In such scenarios, Pythagorean fuzzy sets (PFSs) provide an effective tool for addressing uncertainty and ambiguity in MCDM problems that contain human subjective evaluations and judgments. First, this research identifies the factors affecting the comfort of knitted fabrics as the evaluation criteria. Second, the Bayesian best-worst method (BBWM) is preferred because it requires fewer pairwise comparisons and obtains highly reliable results from a probabilistic perspective for determining the criteria weights. Furthermore, due to its logical computation approach and ease of operation, the technique for order preference by similarity to ideal solution (TOPSIS) is commonly utilized for addressing MCDM problems. Therefore, this research proposes an innovative MCDM framework that combines the BBWM technique with Pythagorean fuzzy TOPSIS (PFTOPSIS). The BBWM determines the criteria weights, and the weighted sine similarity-based PFTOPSIS is utilized to rank alternatives. The proposed BBWM-PFTOPSIS approach was employed to solve a real-world case. Moreover, this article conducts a sensitivity analysis and three comparative analyses to reveal the efficiency and reliability of the BBWM-PFTOPSIS approach. The ranking results establish the viability and effectiveness of BBWM-PFTOPSIS. [ABSTRACT FROM AUTHOR]
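For orientation, the TOPSIS recipe underlying the proposed framework can be sketched in its classical crisp form. The decision matrix and weights below are invented; the article's PFTOPSIS additionally works with Pythagorean fuzzy numbers and a weighted sine similarity measure rather than plain Euclidean distances:

```python
import numpy as np

# Minimal crisp TOPSIS sketch.
# Rows: alternatives (fabrics); columns: benefit-type comfort criteria.
X = np.array([[7.0, 9.0, 6.0],
              [8.0, 6.0, 7.0],
              [6.0, 8.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])                # criteria weights (e.g. from BBWM)

V = (X / np.linalg.norm(X, axis=0)) * w      # vector-normalize, then weight
ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal/anti-ideal (benefit criteria)
d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)          # higher = closer to the ideal
ranking = np.argsort(-closeness)             # best alternative first
print(closeness, ranking)
```

A sensitivity analysis in this setting typically perturbs the weight vector `w` and checks whether `ranking` stays stable, which is what the rank-reversal checks in MCDM studies probe.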
- Published
- 2023
- Full Text
- View/download PDF
25. Determination of Quality Value of Cotton Fiber Using Integrated Best-Worst Method-Revised Analytic Hierarchy Process.
- Author
-
Mitra, Ashis and Majumdar, Abhijit
- Subjects
- *
ANALYTIC hierarchy process , *COTTON fibers , *COTTON quality - Abstract
Selection of cotton fibers in terms of their quality value has created a domain of emerging interest among researchers. In this study, the newly developed Best-Worst Method (BWM) was integrated with the Revised Analytic Hierarchy Process (RAHP) to rank cotton fiber lots on the basis of six apposite fiber properties, namely fiber bundle tenacity, elongation, micronaire, upper half mean length, uniformity index, and short fiber index. The ranking performance of this integrated approach closely resembles that of other multi-criteria decision-making (MCDM) approaches. No occurrence of rank reversal during the sensitivity analyses corroborates the stability and robustness of the BWM-RAHP method. The uniqueness of the present study lies in the fact that this is the maiden application of the vector-based BWM approach, which uses fewer pairwise comparisons than other variants of MCDM, to a cotton fiber grading problem. The RAHP adds value to the decision model by overcoming the problem of ranking inconsistency. Rank correlations between the ranking based on the quality value of cotton and those based on yarn tenacity are also encouraging, further bolstering the efficacy of the BWM-RAHP method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. A comparative analysis of surface roughness in robot spray painting using nano paint by Taguchi – fuzzy logic-neural network methods.
- Author
-
Sethuramalingam, Prabhu, Kiran, J. R. V. Sai, Uma, M., and Thushar, T.
- Subjects
- *
SPRAY painting , *EXPERT systems , *SURFACE roughness , *FUZZY expert systems , *SURFACE analysis , *PAINT , *ROBOTS - Abstract
A Taguchi L9 orthogonal array is used to conduct IRB1410 robot spray-painting experiments investigating the surface characteristics of Cold Rolled Close Annealed (CRCA) steel workpieces. The main spray-painting parameters considered are the distance (D), pressure (P), and speed (S) of the robot. The nanopaint is prepared through an ultra-sonication process for robot spray-painting applications. To optimise the robot process parameters and achieve the required accuracy, a fuzzy expert system is integrated with the Taguchi analysis. The developed truth-valued fuzzy logic model assesses the surface roughness produced by robot spraying of nanopaint containing Carbon Nanotubes (CNTs). ANOVA is used to identify the noteworthy factors influencing surface roughness in the robot painting process, and the findings are validated using an empirical regression model. Based on this model, the influence of the robot painting process parameters on surface roughness is analysed using the sensitivity analysis method. Further, a Neural Network model is used to predict the thickness variation of the nano-painted workpiece, and the results are compared with experimental values. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Performance analysis and sensitivity of system parameters on the performance of trilateral-cycle power generator systems.
- Author
-
Ajimotokan, H. A., Ajao, K. R., Rabiu, A. B., Yahaya, T., Nasir, A., Adegun, I. K., and Popoola, O. T.
- Subjects
- *
SENSITIVITY analysis , *WASTE heat , *THERMODYNAMIC cycles , *THERMAL efficiency , *HEAT recovery , *THERMOCYCLING - Abstract
Though the trilateral cycle (TLC) is a promising heat recovery-to-power cycle, its application has not been widely accepted or commercialised due to some thermodynamic feasibility concerns. This study examined the performance and the sensitivity of the system parameters on the thermodynamic performance of TLC power generator systems for waste heat recovery-to-power generation. Thermodynamic models of the simple, recuperative, reheat and regenerative TLCs were established. Performance analysis and sensitivity of the system parameters on the cycles' performance were conducted at an expander inlet temperature of 453 to 473 K, an expander pressure of 2 to 3 MPa and an expander isentropic efficiency of 50% to 100%. The expander inlet temperature, pressure, and isentropic efficiency have significant effects on the thermodynamic efficiencies and net work output of the cycles. At a cycle high temperature of 473 K, the thermal efficiencies of the cycles increase from 20.13% to 21.97%, 23.29% to 23.91%, 20.62% to 22.07% and 20.66% to 22.9% for the simple, recuperative, reheat and regenerative TLCs, respectively. Their corresponding net power outputs varied from 131.6 to 134.1 kW, 145.9 to 152.2 kW, 113.8 to 124.1 kW and 124.9 to 130.5 kW, respectively. The cycles' thermodynamic performance increased with an increase in expander inlet temperature, expander pressure and expander isentropic efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Surface spectral irradiance and irradiance partitioning in a complex mountain environment: understanding location-dependent topographic effects in satellite imagery.
- Author
-
Bishop, Michael P., Young, Brennan W., and Colby, Jeffrey D.
- Subjects
- *
REMOTE-sensing images , *SPECTRAL irradiance , *REMOTE sensing , *RESEARCH personnel , *SENSITIVITY analysis , *COMPUTER simulation - Abstract
Remote sensing of mountain environments is extremely difficult due to atmosphere-topography-landcover couplings that govern the irradiant fluxes. Researchers have not adequately accounted for topographic effects in terms of irradiance components over the landscape. Consequently, we evaluated surface spectral-irradiance components and irradiance partitioning with respect to anisotropic-reflectance correction (ARC). Our work focuses on addressing issues of scale, parameter sensitivity analysis and irradiance partitioning to provide new insights into understanding the complex nature of topographic modulation of the radiation-transfer cascade. We accomplish this by characterizing irradiance components in numerical simulations that account for topographic effects and couplings. Our results reveal atmosphere-topographic couplings associated with irradiance components that are not spatially coincident. Furthermore, we found that commonly utilized adjacent-terrain irradiance parameterization schemes produce different results compared to a more comprehensive parameterization scheme. Parameter sensitivity analysis revealed that the exclusion of topographic effects also produces different results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Assessing the accuracy of sensitivity analysis: an application for a cellular automata model of Bogota's urban wetland changes.
- Author
-
Cuellar, Yenny and Perez, Liliana
- Subjects
- *
CELLULAR automata , *WETLANDS , *SENSITIVITY analysis , *CELL analysis , *LAND cover , *NEURAL circuitry - Abstract
This study analyzes the outcomes of Cellular Automata (CA) with different neighborhood sizes and spatial resolution configurations on the performance of the Future Land Use Simulation (FLUS) model. The analysis is executed using three analogic images to extract the land use/land cover in Bogota, Colombia, for three years: 1998, 2004, and 2010. The FLUS model includes an Artificial Neural Network model, which was used to calculate the relationships between the land uses and the associated drivers and to estimate the probability of occurrence of each land use. Whenever a CA is used to model and simulate, sensitivity analysis (SA) becomes a crucial step in CA modeling to better understand the influence of parameter changes on the simulation outcomes. Therefore, the SA is conducted by varying the neighborhood sizes between 3 × 3, 5 × 5, and 7 × 7 at 5- and 30-meter spatial resolutions. In addition, cross-classification maps, the Area Under the Curve (AUC) of the Total Operating Characteristic, landscape metrics, the figure of merit, Fuzzy Kappa, and disagreement metrics were calculated to assess how well the model performed. High AUC values and low disagreement results show that, in general, the model performed well, and the accuracy of the outputs improves with a 3 × 3 neighborhood size and a 5-meter spatial resolution. This study provides a broad assessment approach covering the different methods that must be considered to evaluate the sensitivity of CA models in the simulation of the spatial-temporal evolution of urban wetlands. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. An intrinsic vulnerability approach to assess an overburden alluvial aquifer exposure to sinkhole-prone area; results from a Central Iran case study.
- Author
-
Taheri, Kamal, Missimer, Thomas M., Bayatvarkeshi, Maryam, Mahmoudi Sivand, Siamak, Fathi, Saadi, Toranjian, Amin, and Dehghan Manshadi, Behrouz
- Subjects
- *
AQUIFER pollution , *AQUIFERS , *HYDRAULIC conductivity , *SINKHOLES , *KARST , *SENSITIVITY analysis - Abstract
DRASTIC is a model widely used to assess the pollution potential of aquifers worldwide. Despite various criticisms of the model and of its optimizations, territorial or geological conditions sometimes still favor this modeling approach because of its simplicity. The existence of sinkholes in an alluvial aquifer overlying a karst aquifer is a clear example of the need to change or optimize the DRASTIC model to improve the accuracy of prediction. In this study, the DRASTIC model was modified by adding two factors: the distance to a sinkhole and the sinkhole catchment. The modified DRASTIC model was then used to evaluate the pollution potential of the Abarkouh aquifer in Yazd province, Central Iran. The intrinsic vulnerability index of the Abarkouh aquifer was calculated by the weighted-sum method in GIS. The resulting numerical values ranged between 62 and 176 for the full study area. Without the two added layers of Sin-DRASTIC, the generic DRASTIC index map showed summed values ranging between 54 and 130. The final Sin-DRASTIC map was divided into four classes: very low (<106; 722 km2, 78%), low (107-133; 186 km2, 20%), low to moderate (133-160; 15.3 km2, 1.7%), and moderate (161-176; 1.7 km2, 0.3%) vulnerability. Both maps showed high Area Under the Curve values, at 76% and 78% for the generic DRASTIC and Sin-DRASTIC, respectively. However, the area of the aquifer containing the sinkholes showed low and very low contamination-vulnerability zones on the generic DRASTIC map. The sensitivity analysis showed that the aquifer vulnerability to pollution is mainly controlled by the impact of the vadose zone and the sinkhole catchment factor. Therefore, it is concluded that for geologic settings containing sinkholes that are open or contain sediment of higher vertical hydraulic conductivity than the general matrix of the unconfined aquifer, the Sin-DRASTIC method yields a superior prediction.
Sin-DRASTIC is a modified version of the generic DRASTIC method for use in regions where sinkholes occur. The Sin-DRASTIC model was applied to assess the intrinsic vulnerability of the Abarkouh aquifer, Central Iran. The high AUC values of 76% and 78% for the generic DRASTIC and Sin-DRASTIC, respectively, show the accuracy of the results. In the Sin-DRASTIC map, the sinkhole areas showed a high potential for groundwater contamination. [ABSTRACT FROM AUTHOR]
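The weighted-sum index computation at the core of (Sin-)DRASTIC can be sketched as follows. The layer names, ratings, weights, and class thresholds here are hypothetical illustrations, not the study's calibrated values:

```python
import numpy as np

# Illustrative weighted-sum vulnerability index in the DRASTIC spirit.
# Each raster layer holds ratings (1-10); weights are layer importances.
layers = {
    "depth_to_water":    (5, np.array([[3, 5], [7, 9]])),
    "net_recharge":      (4, np.array([[4, 4], [6, 8]])),
    "vadose_zone":       (5, np.array([[2, 6], [6, 10]])),
    "sinkhole_distance": (5, np.array([[1, 3], [8, 10]])),  # added Sin-DRASTIC-style factor
}

index = sum(w * r for w, r in layers.values())   # per-cell weighted sum
# Classify into vulnerability classes with simple illustrative thresholds:
classes = np.digitize(index, bins=[60, 100, 140])  # 0 = very low ... 3 = highest
print(index)
print(classes)
```

In a GIS this is exactly a per-cell raster overlay: each factor map is rated, multiplied by its weight, and the layers are summed before thresholding into vulnerability classes.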
- Published
- 2023
- Full Text
- View/download PDF
31. Surrogate-based optimization approach for capacitated hub location problem with uncertainty.
- Author
-
Junghyun Kim and Changyun Chung
- Subjects
- *
SURROGATE-based optimization , *NETWORK hubs , *NETWORK operating system , *LOCATION problems (Programming) , *RESEARCH questions , *STARTUP costs - Abstract
CJ Logistics has started to consider opening a single new hub facility to expand its current transportation network. This naturally leads to the research question: where should the new hub facility be located in South Korea to minimize the total transportation cost of the network system operated by the company? This research aims to answer the question by proposing a surrogate-based optimization approach. In addition to finding an optimal location for the new hub facility, this research performs a sensitivity analysis to study the correlation between hub capacity (i.e., the source of uncertainty) and transportation cost. The results indicate that (1) the total transportation cost after establishing the new hub facility at the optimal location is reduced by approximately 14% compared to the current transportation network and (2) the currently operated hub facility located in Daejeon has the greatest influence on total transportation cost, while the existing hub facilities located in Cheongwon, Yongin, and Gunpo have little impact on total transportation cost after the construction of the new hub facility. It is expected that the outcome of this research will help the company systematically manage its transportation network when the new hub facility is constructed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Predicting the compressive strength of cellulose nanofibers reinforced concrete using regression machine learning models.
- Author
-
Anwar, Aftab, Yang Wenyi, Li Jing, Wang Yanwei, Bo Sun, Ameen, Muhammad, Shah, Ismail, Li Chunsheng, Ul Mustafa, Zia, and Muhammad, Yaseen
- Subjects
- *
MACHINE learning , *COMPRESSIVE strength , *SUPERVISED learning , *CELLULOSE , *RANDOM forest algorithms , *REINFORCED concrete , *NANOFIBERS - Abstract
Cellulose nanofibers (CNFs) are newly introduced plant-based materials in the construction industry that support sustainable development. The use of artificial intelligence (AI) techniques, especially machine learning (ML) models, has helped economize civil engineering. This research aims to determine the compressive strength of CNF-reinforced concrete using supervised regression machine learning techniques, so that the material can be analyzed before being adopted. To achieve this task, the following machine learning models were implemented: Random Forest (RF), Linear Regression (LR), Support Vector Regressor (SVR), Gradient Boosting Regressor (GBR), Ada Boosting Regressor (ABR), K-Neighbor Regressor (KNN), Bagging Regressor (BR), XG Boost Regressor (XGBR), Decision Tree (DT), and Pruned Decision Tree (PDT). An experimental dataset containing 695 data points was prepared and split into two subsets (training dataset = 70%, testing dataset = 30%) for the evaluation of the ML models. There were seven independent input variables: cement (kg/m³), water (kg/m³), CNFs (kg/m³), superplasticizer (kg/m³), fine aggregate (kg/m³), coarse aggregate (kg/m³), and age (days), and one dependent output variable: the compressive strength fc of CNF-reinforced concrete (MPa). The following metrics were employed to gauge model performance: R², MAPE, MAE, MSE, and RMSE. The findings indicated that seven out of ten models (RF, BR, XGBR, DT, GBR, ABR, and KNN) had firm capability to predict the compressive strength of CNF concrete (R² > 0.72, MAPE ≈ 0.1, and MAE ≈ 5), meeting the standard of an R² value greater than 0.60 with low error metrics. According to the sensitivity analysis of the Random Forest model, water and cement were the factors with the biggest effects on the prediction of CNF-reinforced concrete strength, while the variable with the smallest effect was coarse aggregate.
It was concluded that the RF, BR, and DT were the best-performing predictive models. [ABSTRACT FROM AUTHOR]
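The evaluation metrics reported above (R², MAE, MAPE, RMSE) can be computed directly from predicted and measured strengths; the values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical predicted vs. measured compressive strengths (MPa).
y_true = np.array([30.0, 42.0, 55.0, 61.0, 48.0])
y_pred = np.array([32.0, 40.0, 57.0, 58.0, 50.0])

resid = y_true - y_pred
mae = np.mean(np.abs(resid))                     # mean absolute error
rmse = np.sqrt(np.mean(resid ** 2))              # root mean square error
mape = np.mean(np.abs(resid) / y_true)           # mean absolute percentage error
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(mae, rmse, mape, r2)
```

An R² close to 1 together with small MAE/RMSE/MAPE is the acceptance pattern the abstract describes for its seven well-performing models.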
- Published
- 2023
- Full Text
- View/download PDF
33. Estimation of daily reference evapotranspiration from limited climatic variables in coastal regions.
- Author
-
Vosoughifar, Hamidreza, Khoshkam, Helaleh, Bateni, Sayed M., Jun, Changhyun, Xu, Tongren, Band, Shahab S., and Neale, Christopher M. U.
- Subjects
- *
EVAPOTRANSPIRATION , *GENE expression , *GENETIC programming , *SENSITIVITY analysis , *SPLINES - Abstract
Generalized multi-adaptive regression splines (MARS) and genetic expression programming (GEP)-based equations were developed to estimate Reference Evapotranspiration (ETo) in coastal regions. Following existing regression-based ETo retrieval equations, five climatic data configurations were used to train, validate, and test the MARS and GEP models (hereafter called MARS1–MARS5 and GEP1–GEP5). The performances of the MARS and GEP models with each of the five input configurations were assessed. The generalized MARS1–MARS5 and GEP1–GEP5 models could estimate ETo accurately in regions other than their training region. In addition, MARS1 performed better than MARS2–MARS5. Similarly, GEP1 outperformed GEP2–GEP5, implying that input configuration 1 contains the most important information about ETo. The results also show that MARS can estimate ETo more accurately than GEP. The findings indicate that MARS1–MARS5 and GEP1–GEP5 improved ETo values compared with the corresponding traditional equations. Finally, sensitivity analyses were conducted to evaluate the impact of each input variable on ETo. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Mathematical modelling of echinococcosis in human, dogs and sheep with intervention.
- Author
-
Getachew Bitew, Birhan, Munganga, Justin Manango W., and Shitu Hassan, Adamu
- Subjects
- *
ECHINOCOCCOSIS , *ECHINOCOCCUS granulosus , *SHEEP , *MATHEMATICAL models , *DOGS , *SENSITIVITY analysis - Abstract
In this study, a model for the spread of cyst echinococcosis with interventions is formulated. The disease-free and endemic equilibrium points of the model are calculated. The control reproduction number Rc for the model is derived, and the global dynamics are established by the values of Rc. The disease-free equilibrium is globally asymptotically stable if and only if Rc < 1. For Rc > 1, using Volterra–Lyapunov stable matrices, it is proven that the endemic equilibrium is globally asymptotically stable. Sensitivity analysis to identify the most influential parameters in the dynamics of CE is carried out. To establish the long-term behaviour of the disease, numerical simulations are performed. The impact of control strategies is investigated. It is shown that, whenever vaccination of sheep is carried out solely or in combination with cleaning or disinfecting of the environment, cyst echinococcosis can be wiped out. [ABSTRACT FROM AUTHOR]
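The threshold role of the control reproduction number can be illustrated with a deliberately simplified single-host SIS caricature (not the article's multi-host human-dog-sheep model), where Rc = β/γ and infection dies out for Rc < 1:

```python
def simulate(beta, gamma, i0=0.01, steps=5000, dt=0.01):
    """Forward-Euler integration of di/dt = beta*i*(1-i) - gamma*i,
    a toy SIS model used only to illustrate the Rc threshold."""
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1 - i) - gamma * i)
    return i

low = simulate(beta=0.5, gamma=1.0)   # Rc = 0.5 -> infection goes extinct
high = simulate(beta=2.0, gamma=1.0)  # Rc = 2.0 -> endemic level 1 - 1/Rc = 0.5
print(low, high)
```

This is the qualitative behaviour the stability results formalize: below the threshold the disease-free equilibrium attracts all trajectories; above it an endemic equilibrium does.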
- Published
- 2022
- Full Text
- View/download PDF
35. Dynamic analysis and optimal control of a class of SISP respiratory diseases.
- Author
-
Shi, Lei and Qi, Longxing
- Subjects
- *
RESPIRATORY diseases , *DUST removal , *CONCENTRATION functions , *SENSITIVITY analysis , *FACTOR analysis , *GLOBAL analysis (Mathematics) - Abstract
In this paper, the realistic setting in which susceptible individuals become patients directly after inhaling a certain amount of PM2.5 is taken into account. The concentration response function of PM2.5 is introduced, and the SISP respiratory disease model is proposed. Qualitative theoretical analysis proves that the existence, local stability and global stability of the equilibria are all related to the daily emission P0 of PM2.5 and the PM2.5 pathogenic threshold K. Based on the sensitivity factor analysis and time-varying sensitivity analysis of parameters with respect to the number of patients, it is found that the conversion rate β and the inhalation rate η have the largest positive correlations, while the cure rate γ of infected persons has the greatest negative correlation with the number of patients. The control strategy formulated from the optimal control analysis is as follows. The first step is to improve the clearance rate of PM2.5 by reducing PM2.5 emissions and increasing the intensity of dust removal; such removal work must be maintained over the long term. The second step is to improve the cure rate of patients through timely treatment. After that, people should be reminded to wear masks and go out less, so as to reduce the rate at which susceptible people become patients. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. Sensitivity analysis for active electromagnetic field manipulation in free space.
- Author
-
Qi, Chaoxian, Egarguin, Neil Jerome A., Zeng, Shubin, Onofrei, Daniel, and Chen, Jiefu
- Subjects
- *
ELECTROMAGNETIC fields , *SENSITIVITY analysis , *SINGULAR value decomposition , *INVERSE problems , *INTEGRAL representations - Abstract
This paper presents a detailed sensitivity analysis of the active manipulation scheme for electromagnetic (EM) fields in free space. The active EM fields control strategy is designed to construct surface current sources (electric and/or magnetic) that can manipulate the EM fields in prescribed exterior regions. The active EM field control is formulated as an inverse source problem. We follow the numerical strategies in our previous works, which employ the Debye potential representation and integral equation representation in the forward modelling. We consider two regularization approaches to the inverse problem to approximate a current source, namely the truncated singular value decomposition (TSVD) and the Tikhonov regularization with the Morozov discrepancy principle. Moreover, we discuss the sensitivity of the active scheme (concerning power budget, control accuracy, and quality factor) as a function of the frequency, the distance between the control region and the source, the mutual distance between the control regions, and the size of the control region. The numerical simulations demonstrate some challenges and limitations of the active EM field control scheme. [ABSTRACT FROM AUTHOR]
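A minimal numerical sketch of the two regularization approaches on a generic ill-posed linear system Ax = b (a stand-in for the discretized inverse source problem, not the paper's EM formulation; the matrix, truncation tolerance, and Tikhonov parameter are all illustrative):

```python
import numpy as np

# Build a near rank-deficient system A x = b to mimic ill-posedness.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A[:, -1] = A[:, 0] + 1e-10 * rng.standard_normal(8)  # two nearly dependent columns
x_true = rng.standard_normal(8)
b = A @ x_true

# Truncated SVD (TSVD): keep only the well-conditioned singular modes.
U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-6 * s[0]))                 # truncation level
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Tikhonov regularization: x = (A^T A + lam I)^{-1} A^T b
lam = 1e-3
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)
print(np.linalg.norm(A @ x_tsvd - b), np.linalg.norm(A @ x_tik - b))
```

In practice the truncation level (or the Tikhonov parameter, e.g. via the Morozov discrepancy principle) trades control accuracy against the power budget of the reconstructed source, which is exactly the sensitivity trade-off the paper examines.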
- Published
- 2022
- Full Text
- View/download PDF
37. Sensitivity analysis of a Venturi shaped structure for cross-flow turbines.
- Author
-
Gabl, Roman, Burchell, Joseph, Hill, Mark, and Ingram, David M.
- Subjects
- *
CROSS-flow (Aerodynamics) , *TURBINES , *SENSITIVITY analysis , *RENEWABLE energy sources , *STATIC pressure , *FLUID flow - Abstract
Tidal energy is one of the world's most predictable renewable energy sources and therefore holds great potential as a valuable building block for the decarbonisation of electricity production. This paper focuses on a Venturi-shaped duct structure (shroud) that accelerates the flow speed at a vertical-axis tidal turbine by utilising the low static pressure created at the exit of the shroud. This concept is known as a Davidson Hill Venturi (DHV) turbine. By constructing the nozzle and diffusor from hydrofoils, initial demonstrations indicate increased system efficiency. However, owing to the potential number of geometric and structural hydrofoil variations, only a general description of the location of the hydrofoils is provided, in order to facilitate modelling while allowing future geometric variations to be devised. The investigations focus on the influence of the nozzle and diffusor sections as the main geometry variations, identifying the length component in the orthogonal direction as the dominant parameter. Modelling multiple combinations of these variables makes clear that higher fluid velocities result in larger forces, which must be supported by the device's structure. Small adjustments to the reference geometry's hydrofoil placement and spacing improved the fluid flow. Thus, taking this slight alteration to the geometry as the paper's main outcome, a further 3D simulation study, including turbine interaction and rotation, is to be completed to fully characterise the system's benefits. The insights gained from this work will allow a reduction in computational costs for the detailed optimisation and for studying the adaptation of the concept to a wide range of (environmental) boundary conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Discharge coefficient prediction of canal radial gate using neurocomputing models: an investigation of free and submerged flow scenarios.
- Author
-
Tao, Hai, Jamei, Mehdi, Ahmadianfar, Iman, Khedher, Khaled Mohamed, Farooque, Aitazaz Ahsan, and Yaseen, Zaher Mundher
- Subjects
- *
DISCHARGE coefficient , *MEAN square algorithms , *RADIAL distribution function , *MACHINE learning , *STATISTICAL correlation , *STANDARD deviations , *ORIFICE plates (Fluid dynamics) - Abstract
In the current study, three machine learning (ML) models, i.e. Gaussian process regression (GPR), generalized regression neural network (GRNN), and multigene genetic programming (MGGP), were developed for predicting the discharge coefficient (Cd) of a radial gate under two different flow conditions, i.e. free and submerged. The flow and geometry input variables for modeling the Cd were determined based on statistical correlations. We also performed a sensitivity analysis of the input variables for the Cd. The modeling results indicated that the developed ML models attained acceptable predictive performance; however, the prediction accuracy of the models was better under the free flow condition. In quantitative terms, the minimum root mean square error (RMSE) value was 0.010 using the GPR model and 0.019 using the MGGP model for the submerged and free flow conditions, respectively. The sensitivity analysis evidenced that the ratio of the gate opening height to the depth of water in the upstream (W:Yo) was the influential variable for the Cd under the free flow condition, whereas the ratio of the depth of water in the upstream to the depth of water in the downstream (Yo:YT) was the influential variable for the Cd under the submerged flow condition. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. Application of machine learning approaches in the analysis of mass absorption cross-section of black carbon aerosols: Aerosol composition dependencies and sensitivity analyses.
- Author
-
May, Andrew A. and Li, Hanyang
- Subjects
- *
CARBON-black , *AEROSOLS , *SENSITIVITY analysis , *MACHINE learning , *CARBONACEOUS aerosols , *CHEMICAL models , *SUPPORT vector machines - Abstract
Physics-based models typically require an in-depth understanding of a phenomenon and assumptions about the underlying process(es), which are often hard to obtain in practice, whereas data-driven machine learning models learn the structure and patterns in the training data without any prior theoretical assumptions and then use inference to develop useful predictions. A novel machine learning-based algorithm has been previously developed for the prediction of the black carbon mass absorption cross-section (MACBC) and applied to a variety of different atmospheric environments. In contrast to light scattering theories, which require assumptions about the underlying physics, this algorithm uses time-series data of aerosol properties to estimate the temporally varying MACBC at 870 nm. Here, we analyze our algorithm and discuss the influence of aerosol optical properties (such as Ångström exponents and single scattering albedo) and chemical composition on the model outputs and the associated accuracy. Additionally, we conduct sensitivity analyses on our models to understand how the predictions change in response to different sets of input variables. Our support vector machine (SVM) regression model is the least sensitive to variations in the input variables, although all models tend to exhibit degraded accuracy when scattering Ångström exponents are less than one. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
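A one-at-a-time sensitivity check of the kind described above can be sketched as follows: train an SVM regression model, perturb each input in turn, and record the mean change in the prediction. The feature names (scattering Ångström exponent, single scattering albedo) follow the abstract, but the data, the toy MACBC relation, and the perturbation size are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical predictors: scattering Angstrom exponent (SAE) and
# single scattering albedo (SSA); toy MAC_BC target with noise.
sae = rng.uniform(0.2, 2.5, 300)
ssa = rng.uniform(0.70, 0.99, 300)
X = np.column_stack([sae, ssa])
mac = 5.0 + 2.0 * ssa - 0.5 * sae + rng.normal(0.0, 0.1, 300)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
model.fit(X, mac)

# One-at-a-time sensitivity: shift each input by 10% of its range and
# measure the mean absolute change in the predicted MAC_BC.
base = model.predict(X)
sens = []
for j, span in enumerate(X.max(axis=0) - X.min(axis=0)):
    Xp = X.copy()
    Xp[:, j] += 0.1 * span
    sens.append(float(np.mean(np.abs(model.predict(Xp) - base))))
print(sens)
```

A model that is "least sensitive to variations in the input variables", as the abstract puts it, would show uniformly small entries in `sens`.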
40. Exploring the potential for parameter transfer from daily to hourly time step in the HYPE model for Sweden.
- Author
-
Fuentes-Andino, Diana, Hundecha, Yeshewatesfa, Lindström, Göran, and Olsson, Jonas
- Subjects
- *
FLOOD forecasting , *HYDROLOGIC models , *SENSITIVITY analysis , *RUNOFF , *MODELS & modelmaking - Abstract
Catchments with response times shorter than a day require simulations at sub-daily resolution for flood forecasting. Long-term hydrological data with sub-daily time resolution are lacking for many catchments, and it is therefore worthwhile to investigate the potential of using hydrological model parameters calibrated at a daily temporal resolution to simulate sub-daily runoff. This study investigates the potential of directly transferring the parameters from the one-day Hydrological Predictions for the Environment (HYPE) model for Sweden to a one-hour time step version. A sensitivity analysis and calibration of the one-hour model were also performed to explore the possibility of achieving improved model performance through additive and multiplicative scaling of the parameters. Directly transferring the parameters led to a negligible loss in performance for all 147 catchments evaluated here. Scaling of the parameters did not lead to improvement for slow-responding catchments, while the increase in performance was negligible for catchments with sub-daily response time. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Sensitivity Analysis via the Proportion of Unmeasured Confounding.
- Author
-
Bonvini, Matteo and Kennedy, Edward H.
- Subjects
- *
SENSITIVITY analysis , *CARDIAC catheterization , *TREATMENT effectiveness , *SCIENTIFIC observation , *ONLINE databases - Abstract
In observational studies, identification of average treatment effects (ATEs) is generally achieved by assuming that the correct set of confounders has been measured and properly included in the relevant models. Because this assumption is both strong and untestable, a sensitivity analysis should be performed. Common approaches include modeling the bias directly or varying the propensity scores to probe the effects of a potential unmeasured confounder. In this article, we take a novel approach whereby the sensitivity parameter is the "proportion of unmeasured confounding": the proportion of units for whom the treatment is not as good as randomized even after conditioning on the observed covariates. We consider different assumptions on the probability of a unit being unconfounded. In each case, we derive sharp bounds on the average treatment effect as a function of the sensitivity parameter and propose nonparametric estimators that allow flexible covariate adjustment. We also introduce a one-number summary of a study's robustness to the number of confounded units. Finally, we explore finite-sample properties via simulation, and apply the methods to an observational database used to assess the effects of right heart catheterization. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Sensitivity analysis of stress distribution in bicycle frame.
- Author
-
Gautam, Ravi Shankar
- Subjects
- *
STRAINS & stresses (Mechanics) , *STRESS concentration , *CYCLING , *SENSITIVITY analysis , *BICYCLES - Abstract
This paper presents a sensitivity analysis of the stress behaviour in the elements of a diamond-structure bicycle frame with respect to the design parameters. To achieve this goal, the stress on each element of the diamond frame has been derived as an analytical expression in terms of the design parameters by imposing the static equilibrium condition on a 2D representation of the frame. The stresses on all the elements and the reactionary forces on the frame are linear combinations of the two external loads on the frame, with coefficients given by trigonometric expressions. The stresses on all elements and the reactionary forces share a common term which appears as a denominator and is termed the Inter-element Interaction Common Factor (IICF). The analytical expressions for the stresses on the elements have the potential to provide insights into the set of values of the design parameters for optimization of the bicycle frame with respect to various criteria. Such insights, in addition to being valuable for designers using computer-aided engineering software such as ANSYS, are also helpful for comparative analysis among various design criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
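The key observation in the abstract above, that member stresses are linear combinations of the external loads with trigonometric coefficients, can be illustrated on a much simpler structure. The two-bar truss below is a hypothetical stand-in, not the paper's diamond frame: node equilibrium gives a linear system whose solution scales linearly with the applied load.

```python
import numpy as np

def member_forces(th1, th2, Fx, Fy):
    """Axial forces N1, N2 in two pin-jointed bars meeting at a loaded
    node, from static equilibrium A @ N + F = 0. th1, th2 are the bar
    angles from horizontal; (Fx, Fy) is the external load at the node."""
    A = np.array([[np.cos(th1), np.cos(th2)],
                  [np.sin(th1), np.sin(th2)]])
    return np.linalg.solve(A, -np.array([Fx, Fy]))

# Doubling the external load doubles every member force, because the
# equilibrium equations are linear in the loads (as the abstract notes).
N = member_forces(np.pi / 4, 3 * np.pi / 4, 0.0, -1000.0)
N2 = member_forces(np.pi / 4, 3 * np.pi / 4, 0.0, -2000.0)
print(N, N2)
```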
43. The new Neyman type A generalized odd log-logistic-G-family with cure fraction.
- Author
-
Vigas, Valdemiro P., Ortega, Edwin M. M., Cordeiro, Gauss M., Suzuki, Adriano K., and Silva, Giovana O.
- Subjects
- *
MONTE Carlo method , *STANDARD deviations , *STOMACH tumors , *LOGISTIC regression analysis , *HAZARD function (Statistics) , *SURVIVAL analysis (Biometry) - Abstract
This work proposes a new family of survival models called the odd log-logistic generalized Neyman type A long-term family. We consider different activation schemes in which the number of factors M follows the Neyman type A distribution and the time of occurrence of an event follows the odd log-logistic generalized family. The parameters are estimated by classical and Bayesian methods. We investigate the mean estimates, biases, and root mean square errors under different activation schemes using Monte Carlo simulations. Residual analysis via the frequentist approach is used to verify the model assumptions. We illustrate the applicability of the proposed model for patients with gastric adenocarcinoma; these data were chosen because the disease is responsible for most cases of stomach tumors. The estimated cured proportion of patients under chemoradiotherapy is higher than that of patients undergoing only surgery. The estimated hazard function for the chemoradiotherapy level tends to decrease as time increases. More information about the data is given in the application section. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Review of Bayesian selection methods for categorical predictors using JAGS.
- Author
-
Jreich, Rana, Hatte, Christine, and Parent, Eric
- Subjects
- *
BAYESIAN field theory , *SENSITIVITY analysis - Abstract
The formulation of variable selection has been widely developed in the Bayesian literature by linking a random binary indicator to each variable. This Bayesian inference has the advantage of stochastically exploring the set of possible sub-models, whatever their dimension. Bayesian selection approaches appropriate for categorical predictors are generally beyond the scope of the standard Bayesian selection of regressors in the linear model, since all levels of a categorical variable should be handled jointly in the selection procedure. For categorical covariates, new strategies have been developed to detect the effect of grouped covariates rather than the single effect of a quantitative regressor. In this paper, we review three Bayesian selection methods for categorical predictors: Bayesian Group Lasso with Spike and Slab priors, Bayesian Sparse Group Selection and Bayesian Effect Fusion using model-based clustering. The motivation behind this paper is to provide detailed information about the implementation of these three Bayesian selection methods, appropriate for categorical predictors, using the JAGS software. Selection performance and sensitivity analysis of the hyperparameter tuning for the prior specifications are assessed under various simulated scenarios. JAGS helps users implement these three Bayesian selection methods for more complex model structures, such as hierarchical models with latent layers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Investigating the impact of calibration timescales on streamflow simulation, parameter sensitivity and model performance for Indian catchments.
- Author
-
Setti, Sridhara, Barik, Kamal Kumar, Merz, Bruno, Agarwal, Ankit, and Rathinasamy, Maheswaran
- Abstract
Hydrological model calibration is a quintessential step in model development, and the time scale of calibration depends on the application. However, the implications of the choice of time scale of calibration have not been explored extensively. Here, we evaluate the effect of the time scale of calibration on model sensitivity, best parameter ranges, and predictive uncertainty for three river basins using the Soil and Water Assessment Tool (SWAT) model. Multiple models were set up for three different catchments in southern India. Our results showed that the sensitivity of the parameters, the best parameter ranges, and the model performance are conditioned on the time scale of calibration. The models calibrated at coarser time scales marginally outperformed the models calibrated at a fine time scale in terms of Nash-Sutcliffe efficiency and percentage bias. Transfer of parameters across scales (both from coarse to fine and from fine to coarse) has a general tendency to worsen the model performance in all three catchments, with few exceptions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
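The two performance metrics named in the abstract above, Nash-Sutcliffe efficiency and percentage bias, are standard and easy to state in code. This is a minimal sketch with made-up observed/simulated values; sign conventions for percent bias vary between tools, and the one used here (positive = underestimation) is an assumption.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    Equals 1.0 for a perfect simulation; can be arbitrarily negative."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this sign convention, positive values mean
    the simulation underestimates the observed totals."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [10.0, 12.0, 9.0, 14.0, 11.0]   # hypothetical streamflow series
print(nse(obs, obs))    # perfect simulation -> 1.0
print(pbias(obs, obs))  # -> 0.0
```

Comparing these metrics across calibration time scales, as the study does, is then a matter of evaluating them on the same observation series against each model's simulated series.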
46. Power Spectrum Sensitivity Analysis of the Global Mean Surface Temperature Fluctuations Simulated in a Two-Box Stochastic Energy Balance Model.
- Author
-
SOLDATENKO, SERGEI A. and COLMAN, ROBERT A.
- Abstract
Climate models used in theoretical studies and long-term projections of climate change should be able to reproduce essential features of the Earth's climate system, including natural global-scale variability on timescales from years to decades. It is notable, then, that models simulate a very wide range of unforced "natural" variability, for example showing a 2.5-fold range in the standard deviation of decadal surface temperature. The temporal fluctuations of the global mean surface temperature (GMST), used as one of the main indicators of climate variability, are usefully characterized by their power spectral density, which represents the distribution of temperature variance in the frequency domain. We applied a randomly forced two-box energy balance model (EBM), with parameters corresponding to the Coupled Model Intercomparison Project Phase 5 (CMIP5) models, to estimate the influence of such crucial aspects of the climate system as feedbacks, thermal inertia and deep ocean heat uptake on the power spectra of the GMST fluctuations (climate variability). These sensitivities can provide clues that allow us to better understand the reasons for the very wide range of climate variability derived from CMIP5 models. It is found that the influence of variations (uncertainties) in the EBM parameters on the power spectra of the GMST fluctuations strongly depends on the periods (frequencies) of these fluctuations. In particular, the effect of variations in the feedback parameter significantly increases with increasing periods of GMST oscillations, while the influence of uncertainties in the climate thermal inertia parameter (the effective heat capacity of the "atmosphere-mixed ocean layer" system) shows the opposite behaviour. Variations in the deep-ocean heat uptake parameter tangibly affect GMST fluctuations on decadal and inter-decadal time scales, whereas the uncertainty in the deep-ocean heat capacity parameter is minor for GMST fluctuations over all time scales. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
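A randomly forced two-box EBM of the kind described above can be sketched with a simple Euler step and a periodogram of the resulting GMST series. All parameter values below are illustrative placeholders, not the paper's CMIP5-derived estimates, and the forcing is plain white noise.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(2)

# Illustrative parameter values only (not taken from the paper):
lam, gam = 1.2, 0.7   # feedback, deep-ocean exchange (W m^-2 K^-1)
C, Cd = 8.0, 100.0    # mixed-layer / deep-ocean heat capacities (W yr m^-2 K^-1)
dt = 0.1              # time step (years)
n = 10_000            # 1000 years of simulation

T = np.zeros(n)       # surface (mixed-layer) temperature anomaly
Td = np.zeros(n)      # deep-ocean temperature anomaly
for i in range(1, n):
    F = 0.5 * rng.normal() / np.sqrt(dt)  # white-noise radiative forcing
    T[i] = T[i-1] + dt / C * (F - lam * T[i-1] - gam * (T[i-1] - Td[i-1]))
    Td[i] = Td[i-1] + dt / Cd * gam * (T[i-1] - Td[i-1])

# Power spectral density of the simulated GMST fluctuations
freq, psd = periodogram(T, fs=1.0 / dt)
```

Re-running the simulation with perturbed values of `lam`, `gam`, `C` or `Cd` and comparing `psd` across frequency bands mirrors the sensitivity analysis the abstract describes.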
47. An appropriate unstructured kinetic model describing the batch fermentation growth of Debaryomyces hansenii for xylitol production using hydrolysis of oil palm empty fruit bunch.
- Author
-
Kasbawati, Kasbawati, Mardawati, Efri, Samsir, Rusni, Suhartini, Sri, and Kalondeng, Anisa
- Subjects
- *
OIL palm , *XYLITOL , *FERMENTATION , *XYLOSE , *FRUIT , *HYDROLYSIS , *CELL growth - Abstract
This study aimed to determine an appropriate unstructured kinetic model describing the growth of Debaryomyces hansenii with hydrolysed oil palm empty fruit bunches as the substrate source. We tested well-known cell growth kinetic models for approximating the xylitol fermentation data, modeling the extracellular process with the biomass as a variable, xylose as the substrate and xylitol as the product. The appropriate kinetic model was determined by fitting the models to the experimental data; the fitting process involved an optimization procedure that minimizes the least-squared error between the model solution and the experimental data using a gradient-based method. We found that the unstructured model with the Monod specific growth rate best described the dynamics of the batch xylitol fermentation data. Several factors were highlighted as key influences on the growth of the yeast cells; one of them is the maximum growth rate of the yeast cells, which had a high sensitivity to the maximum production of new cells and of xylitol as the main product. As the Monod model gave the best approximation, this implies that inhibitory effects of the substrate and product played an important role in controlling the growth of the yeast cells. There was a certain maximum growth rate for the yeast cells that generates maximum production of new cells and xylitol within a relatively short fermentation time. This result indicates that the growth rate parameter plays an important role in adjusting the fermentation process for optimality purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
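The unstructured batch model with Monod kinetics described above can be sketched as a three-state ODE system for biomass, xylose and xylitol. The parameter values below are hypothetical placeholders, not the paper's fitted estimates.

```python
from scipy.integrate import solve_ivp

# Hypothetical parameters for illustration (not the paper's estimates):
mu_max, Ks = 0.25, 5.0   # max specific growth rate (1/h), half-saturation (g/L)
Yxs, Yps = 0.4, 0.5      # biomass/substrate and product/substrate yields

def batch(t, y):
    """Unstructured batch model: biomass X, xylose S, xylitol P."""
    X, S, P = y
    mu = mu_max * S / (Ks + S)   # Monod specific growth rate
    dX = mu * X                  # cell growth
    dS = -dX / Yxs               # substrate consumption
    dP = Yps * (-dS)             # product formation
    return [dX, dS, dP]

# 48 h batch starting from X = 0.1, S = 30, P = 0 (g/L)
sol = solve_ivp(batch, (0.0, 48.0), [0.1, 30.0, 0.0])
X_end, S_end, P_end = sol.y[:, -1]
print(X_end, S_end, P_end)
```

Fitting `mu_max`, `Ks` and the yields by least squares against measured time courses, as the study does, would wrap this integration inside a gradient-based optimizer.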
48. Prediction of bedload transport rate using a block combined network structure.
- Author
-
Hosseini, Seyed Abbas, Abbaszadeh Shahri, Abbas, and Asheghi, Reza
- Subjects
- *
BED load , *ARTIFICIAL neural networks , *GENETIC algorithms - Abstract
Modularity, as a system of separate and independent sub-tasks, is an appropriate way to improve the performance of artificial neural network (ANN) models of hydrological processes. Using this approach, a block combined neural network (BCNN) structure incorporating a genetic algorithm (GA) and an additional decision block is suggested in this study. The optimum topology of the embedded networks in each block was detected using a vector-based method subject to different internal characteristics. The model was then applied to 879 bedload datasets, considering velocity, discharge, mean grain size, slope, and depth as model inputs, over streams in Idaho, USA. The correct classification rate of the bedload predicted using the BCNN (89.77%) showed superior accuracy compared to other ANNs and to empirical models. The computed error metrics and confusion matrices also demonstrated the outstanding progress of the BCNN relative to other models. We show that the BCNN, as a new method with an appropriate accuracy level, could effectively be adopted for bedload prediction purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. Analysing of Tuberculosis in Turkey through SIR, SEIR and BSEIR Mathematical Models.
- Author
-
Ucakan, Yasin, Gulen, Seda, and Koklu, Kevser
- Subjects
- *
MATHEMATICAL models , *TUBERCULOSIS , *COMMUNICABLE diseases , *LEAST squares , *SENSITIVITY analysis - Abstract
Since mathematical models play a key role in investigating the dynamics of infectious diseases, many mathematical models for these diseases have been developed. This paper aims to determine the dynamics of tuberculosis (TB) in Turkey, how much it will affect the future, and the impact of vaccine therapy on the disease. For this purpose, three mathematical models from the literature (SIR, SEIR and BSEIR) are considered for the case of Turkey. The model parameters are obtained from TB data reported from 2005 to 2015 using the least squares method. The results revealed that the basic reproduction ratio for all three models is less than 1. Moreover, the stability analysis of the models and the sensitivity analysis of the model parameters are presented and discussed. Finally, the accuracy of the results for all three models is compared and the effect of the vaccination rate is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
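The simplest of the three models named above, the SIR model, can be sketched as follows. The rates here are illustrative placeholders, not the values fitted to the Turkish TB data, and for this basic SIR formulation the basic reproduction number is beta/gamma.

```python
from scipy.integrate import solve_ivp

# Illustrative rates only (the paper fits its parameters by least squares)
beta, gamma = 0.3, 0.2   # transmission and recovery rates (1/day)
R0 = beta / gamma        # basic reproduction number for this SIR model

def sir(t, y):
    """Classic SIR dynamics on population fractions S, I, R."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

# Fractions of the population: S(0)=0.999, I(0)=0.001, R(0)=0
sol = solve_ivp(sir, (0.0, 365.0), [0.999, 0.001, 0.0], max_step=1.0)
print(f"R0 = {R0:.2f}, final susceptible fraction = {sol.y[0, -1]:.3f}")
```

With R0 above one, as here, an epidemic takes off; the paper's finding that R0 is below one for all three fitted models corresponds to the infection dying out instead.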
50. Modelling homosexual and heterosexual transmissions of hepatitis B virus in China.
- Author
-
Lu, Min, Shu, Yaqin, Huang, Jicai, Ruan, Shigui, Zhang, Xinan, and Zou, Lan
- Subjects
- *
HEPATITIS B virus , *BASIC reproduction number , *HETEROSEXUALS , *HEPATITIS B - Abstract
Studies have shown that sexual transmission, both heterosexual and homosexual, is one of the main routes of HBV infection. Based on this fact, we propose a mathematical model to study the sexual transmission of HBV among adults by classifying adults into men and women and considering both same-sex and opposite-sex transmission of HBV in adults. Firstly, we calculate the basic reproduction number R0 and the disease-free equilibrium point E0. Secondly, by analysing the sensitivity of R0 with respect to the model parameters, we find that the infection rate among people who have same-sex partners, the frequency of homosexual contact and the immunity rate of adults play important roles in the transmission of HBV. Moreover, we use our model to fit the reported data in China and forecast the trend of hepatitis B. Our results demonstrate that popularizing basic knowledge of HBV among residents, advocating a healthy and reasonable sexual life style, reducing the number of adult carriers, and increasing the immunization rate of adults are effective measures to prevent and control hepatitis B. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
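The parameter sensitivity of R0 described in the abstract above is commonly quantified with the normalized forward sensitivity index (elasticity), S_p = (p / R0) * dR0/dp. The sketch below approximates it by central finite differences; the R0 expression and parameter values are a toy stand-in, not the paper's HBV model.

```python
def R0(beta, c, gamma):
    """Toy R0 = beta * c / gamma (infection rate x contact frequency
    / recovery rate); a placeholder, not the paper's expression."""
    return beta * c / gamma

def elasticity(f, params, name, h=1e-6):
    """Normalized sensitivity (p / f) * df/dp via central differences."""
    up, dn = dict(params), dict(params)
    up[name] = params[name] * (1 + h)
    dn[name] = params[name] * (1 - h)
    return (f(**up) - f(**dn)) / (2 * h * f(**params))

pars = {"beta": 0.05, "c": 4.0, "gamma": 0.1}
print({k: round(elasticity(R0, pars, k), 3) for k in pars})
```

For this multiplicative toy R0 the elasticities are +1 for `beta` and `c` and -1 for `gamma`: a 1% increase in the recovery rate decreases R0 by about 1%, which is the kind of ranking the study uses to identify its most influential parameters.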