Search Results (411 results)
2. Autoregressive Random Forests: Machine Learning and Lag Selection for Financial Research
- Author
- Polyzos, Efstathios and Siriopoulos, Costas
- Published
- 2024
- Full Text
- View/download PDF
3. The Devil is in the Details: On the Robust Determinants of Development Aid in G5 Sahel Countries
- Author
- Bayale, Nimonka and Kouassi, Brigitte Kanga
- Published
- 2022
- Full Text
- View/download PDF
4. Model Error (or Ambiguity) and Its Estimation, with Particular Application to Loss Reserving.
- Author
- Taylor, Greg and McGuire, Gráinne
- Subjects
- INSURANCE reserves, ADMISSIBLE sets, AMBIGUITY
- Abstract
This paper is concerned with the estimation of forecast error, particularly in relation to insurance loss reserving. Forecast error is generally regarded as consisting of three components, namely parameter, process and model errors. The first two of these components, and their estimation, are well understood, but less so model error. Model error itself is considered in two parts: one part that is capable of estimation from past data (internal model error), and another part that is not (external model error). Attention is focused here on internal model error. Estimation of this error component is approached by means of Bayesian model averaging, using the Bayesian interpretation of the LASSO. This is used to generate a set of admissible models, each with its prior probability and likelihood of observed data. A posterior on the model set, conditional on the data, may then be calculated. An estimate of model error (for a loss reserve estimate) is obtained as the variance of the loss reserve according to this posterior. The population of models entering materially into the support of the posterior may turn out to be "thinner" than desired, and bootstrapping of the LASSO is used to increase this population. This also provides the bonus of an estimate of parameter error. It turns out that the estimates of parameter and model errors are entangled, and dissociation of them is at least difficult, and possibly not even meaningful. These matters are discussed. The majority of the discussion applies to forecasting generally, but numerical illustration of the concepts is given in relation to insurance data and the problem of insurance loss reserving. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
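The averaging step this abstract describes, in which each admissible model carries a prior probability and a likelihood of the observed data, and the internal model error is the variance of the loss reserve under the resulting posterior, can be sketched in a few lines. This is an illustrative sketch with made-up numbers, not the authors' implementation:

```python
import numpy as np

# Illustrative sketch of the averaging step described in the abstract:
# a set of admissible models, each with a prior probability and a
# likelihood of the observed data, yields a posterior on the model set;
# the model-error estimate is the posterior variance of the loss reserve.
# All numbers are hypothetical.
prior = np.array([0.25, 0.25, 0.25, 0.25])            # prior model probabilities
log_lik = np.array([-102.3, -101.7, -104.9, -103.2])  # log p(data | M_k)
reserve = np.array([10.2, 11.1, 9.8, 10.6])           # reserve estimate under each model

# Posterior on the model set: p(M_k | data) ∝ p(data | M_k) p(M_k)
log_post = log_lik + np.log(prior)
post = np.exp(log_post - log_post.max())              # subtract max for numerical stability
post /= post.sum()

bma_reserve = np.sum(post * reserve)                  # posterior-mean reserve
model_error_var = np.sum(post * (reserve - bma_reserve) ** 2)  # internal model error
```

In the paper, the model set itself is generated via the Bayesian interpretation of the LASSO (and enlarged by bootstrapping); here the four candidate models and their likelihoods are simply assumed.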
5. Bayesian ensemble methods for predicting ground deformation due to tunnelling with sparse monitoring data.
- Author
- Zilong Zhang, Tingting Zhang, Xiaozhou Li, and Daniel Dias
- Subjects
- DEFORMATIONS (Mechanics), TUNNEL design & construction, BAYESIAN analysis, STRUCTURAL health monitoring, PREDICTION models
- Abstract
Numerous analytical models have been developed to predict ground deformations induced by tunnelling, a critical issue in tunnel engineering. However, the accuracy of these predictions is often limited by errors and uncertainties resulting from model selection and parameter fitting, given the paucity of monitoring data in field settings. This paper proposes a novel approach to estimating tunnelling-induced ground deformations by applying Bayesian model averaging to several representative prediction models. By accounting for both model and parameter uncertainties, this approach enables more realistic predictions of ground deformations than individual models. Specifically, our results indicate that the Gonzalez-Sagaseta model outperforms other models in predicting ground surface settlements, while the Loganathan-Poulos model is most suitable for predicting subsurface vertical and horizontal deformations. Importantly, our analysis reveals that when monitoring data are sparse, model uncertainties may contribute up to 78.7% of the total uncertainties. Thus, obtaining sufficient data for parameter fitting is crucial for accurate predictions. The proposed method offers a more realistic and efficient prediction of tunnelling-induced ground deformations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Bayesian model averaging for river flow prediction.
- Author
- Darwen, Paul J.
- Subjects
- BAYESIAN analysis, STREAMFLOW, ARITHMETIC mean, FLOODS, PREDICTION theory
- Abstract
This paper explores the practical benefits of Bayesian model averaging for a problem with limited data, namely the future flow of five intermittent rivers. This problem is a useful proxy for many others, as the limited amount of data only allows tuning of small, simple models. Bayesian model averaging is theoretically a good way to cope with these difficulties, but it has not been widely used on this and similar problems. This paper uses real-world data to illustrate why. Bayesian model averaging can indeed give a better prediction, but only if the amount of data is small: if the data is so limited that it agrees with a wide range of different models (instead of being consistent with only a few near-identical models), then the weighted votes of those diverse models in Bayesian model averaging will (on average) give a better prediction than the single best model. In contrast, plenty of data can fit only one or a few very similar models; since they will vote the same way, Bayesian model averaging will give no practical improvement. Even with limited data that agrees with a range of models, the improvement is not very large, but it is the direction of the improvement that stands out as a help for forecasting. Working around these caveats lets us better predict river floods and similar problems with limited data. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
7. A Novel Search Strategy for Complex Agent Networks.
- Author
- ARXIDEN, Ablimit, Shanxia WANG, and Huiwen DENG
- Subjects
- MOBILE agent systems, SEARCH algorithms
- Abstract
To address search in a Complex Agent Network (CAN), this paper proposes a Based-on-Mobile-Agent (BMA) search strategy. The method transmits query requests through intelligent and random movements by allocating a certain number of Agents in the CAN to solve the search problem in the complex agent network. We also propose and solve key technologies of the BMA search mechanism: the BMA search algorithm description and a BDI (Belief, Desire, Intention)-based Agent search model. Finally, we discuss experimental software for the algorithms, specify the features and experimental environments of the simulation software, and devise parameter settings relevant to the experiment. We test the search-algorithm performance indices SSR, LC and RPL of two classic algorithms and of the BMA algorithm discussed in this paper, and analyze the SSR, LC and RPL of BMA. The basic performance, extensibility and adaptability of the proposed BMA algorithm are also assessed. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
8. Data-driven ensemble model for probabilistic prediction of debris-flow volume using Bayesian model averaging.
- Author
- Tian, Mi, Fan, Hao, Xiong, Zimin, and Li, Lihua
- Abstract
Accurate and reliable predictions of debris-flow volume are a necessary prerequisite for potential hazard delineation and risk assessment of debris flows. Various theoretical, empirical, and machine learning methods have been proposed to estimate debris-flow volume. However, current methods generally provide point-value deterministic predictions and are limited in assessing the predictive uncertainties associated with the observation data, model parameters, and model structures. This paper proposes a data-driven ensemble model to probabilistically forecast debris-flow volume using multiple deterministic machine learning methods and Bayesian model averaging (BMA). Rainfall-induced debris flows in Taiwan were selected as an illustrative example to evaluate the feasibility of the proposed approach. First, the debris-flow datasets are preprocessed with principal component analysis (PCA) to select input variables. Then, four data-driven models are applied to provide deterministic estimates for ensemble forecasts. Finally, BMA combines the deterministic predictions of the multiple data-driven models to generate probabilistic forecasts. The performances of the individual data-driven models and the BMA ensemble forecast are evaluated and compared. Results show that the proposed BMA ensemble model outperforms the single models in predicting debris-flow volume in terms of effectiveness and robustness. Ensemble models with good performance can combine the strengths of different models to improve prediction accuracy. Weighting only good members may not achieve the best performance for both calibration and validation periods. The performance of different combinations of data-driven models is closely related to the observation data and the prediction accuracy of each model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
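The BMA combination step this abstract describes, turning several deterministic point predictions into one probabilistic forecast, can be sketched as follows. This is a minimal illustration on synthetic data, with simple likelihood-based weights standing in for the paper's actual weight-estimation procedure (in practice often fitted with an EM algorithm):

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): combine the point
# predictions of several data-driven models into a probabilistic forecast.
# Each model's weight is its normalized Gaussian likelihood on a held-out
# calibration set; the BMA predictive mean and variance then follow the
# standard mixture formulas. All data below are synthetic.
rng = np.random.default_rng(0)
y_cal = rng.normal(10.0, 2.0, size=50)  # observed volumes (calibration set)
preds_cal = y_cal + rng.normal(0.0, [[1.0], [2.0], [3.0]], size=(3, 50))  # 3 models

# Per-model error variance and Gaussian log-likelihood on the calibration set
sigma2 = ((preds_cal - y_cal) ** 2).mean(axis=1)
loglik = -0.5 * ((preds_cal - y_cal) ** 2 / sigma2[:, None]
                 + np.log(2 * np.pi * sigma2[:, None])).sum(axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()  # BMA weights, one per model

new_preds = np.array([12.1, 11.8, 12.6])  # the 3 models' forecasts for a new event
bma_mean = np.sum(w * new_preds)          # mixture mean
bma_var = np.sum(w * (sigma2 + (new_preds - bma_mean) ** 2))  # mixture variance
```

The mixture variance decomposes into within-model variance (`sigma2`) and between-model spread, which is how a BMA ensemble reports uncertainty that no single deterministic model can.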
9. Forecasting Bitcoin Futures: A Lasso-BMA Two-Step Predictor Selection for Investment and Hedging Strategies.
- Author
- Huang, Weige and Gao, Xiang
- Subjects
- BITCOIN, HEDGING (Finance), BAYESIAN analysis, MARKET volatility
- Abstract
After Bitcoin futures were introduced by the Chicago Mercantile Exchange in December 2017, their trading volume has stayed in an uptrend due to speculation, though the scale is still small compared to other traditional futures. As increasing trading indicates more attention and the presence of institutional traders, there exists a need for reliable return and variance forecasts of Bitcoin futures contracts. Therefore, this paper first applies LASSO to pick out best-fitting predictors by shrinking the dimension of a universe of potential determinants sourced from intraday Bitcoin spot trades and daily futures variables. Then, a second round of predictor selection is conducted via Bayesian model averaging so that the modeling uncertainty can be mitigated. We find that factors standing out from this two-step procedure possess a strong predictive power for Bitcoin futures return and volatility in different time horizons. It is further demonstrated that the investment and hedging strategies established based on our forecasts perform well in out-of-sample validations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. TOTAL FACTOR PRODUCTIVITY AND ITS DETERMINANTS IN THE EUROPEAN UNION.
- Author
- Čekmeová, Petra
- Subjects
- INDUSTRIAL productivity, ECONOMIC development, ECONOMIC history
- Abstract
No abstract is available for this record. The article appears in the Journal of International Relations / Medzinarodne Vztahy (University of Economics in Bratislava, Faculty of International Relations); refer to the original published version for the full abstract.
- Published
- 2016
11. On the robust drivers of public debt in Africa: Fresh evidence from Bayesian model averaging approach.
- Author
- Nagou, Madow, Bayale, Nimonka, and Kouassi, Brigitte Kanga
- Subjects
- POLITICAL stability, INTERNATIONAL economic assistance, MOMENTS method (Statistics), FOREIGN exchange rates, POLITICAL systems, PUBLIC debts
- Abstract
While economic theory suggests a wide range of potential drivers of public debt, there is little consensus regarding the most relevant ones. This paper analyzes the determinants of public debt in Africa by adopting a Bayesian Model Averaging (BMA) approach applied to data for 51 African countries spanning the period 1990–2018. Our results suggest that, among the set of twenty-seven (27) regressors considered, those reflecting international financial and institutional conditions as well as internal economic prospects tend to receive high posterior inclusion probabilities. The study then explores the effect of these regressors on public debt by employing the fixed effects (FE) and system Generalized Method of Moments (GMM) estimators. The results reveal that foreign aid, fiscal deficit, trade openness, military expenditure, interest and exchange rates, debt service, domestic credit, the government stability index, political regime type and socio-economic crises are the main and robust drivers of public debt accumulation in African countries. These findings are robust to changes in the model specification, the inclusion of socio-economic crises and regional heterogeneities. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Bayesian Model Averaging to Account for Model Uncertainty in Estimates of a Vaccine's Effectiveness.
- Author
- Oliveira, Carlos R, Shapiro, Eugene D, and Weinberger, Daniel M
- Subjects
- VACCINE effectiveness, LYME disease, PARAMETER estimation, CONFIDENCE intervals, MEDICAL records, CONFOUNDING variables
- Abstract
Purpose: Vaccine effectiveness (VE) studies are often conducted after the introduction of new vaccines to ensure they provide protection in real-world settings. Control of confounding is often needed during the analyses, which is most efficiently done through multivariable modeling. When many confounders are being considered, it can be challenging to know which variables need to be included in the final model. We propose an intuitive Bayesian model averaging (BMA) framework for this task. Patients and Methods: Data were used from a matched case–control study that aimed to assess the effectiveness of the Lyme vaccine post-licensure. Cases were residents of Connecticut, 15–70 years of age, with confirmed Lyme disease. Up to 2 healthy controls were matched to each case subject by age. All participants were interviewed, and medical records were reviewed to ascertain immunization history and evaluate potential confounders. BMA was used to systematically search for potential models and calculate the weighted-average VE estimate from the top subset of models. The performance of BMA was compared to three traditional single-best-model selection methods: two-stage selection, stepwise elimination, and the leaps and bounds algorithm. Results: The analysis included 358 cases and 554 matched controls. VE ranged between 56% and 73%, and 95% confidence intervals crossed zero in < 5% of all candidate models. Averaging across the top 15 models, the BMA VE was 69% (95% CI: 18–88%). The two-stage, stepwise, and leaps and bounds algorithms yielded VE of 71% (95% CI: 21–90%), 73% (95% CI: 26–90%), and 74% (95% CI: 27–91%), respectively. Conclusion: This paper highlights how the BMA framework can be used to generate transparent and robust estimates of VE. The BMA-derived VE and confidence intervals were similar to those estimated using traditional methods. However, by incorporating model uncertainty into the parameter estimation, BMA can lend additional rigor and credibility to a well-designed study. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
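The model-averaged effect estimate described above can be illustrated with a small sketch. The VE values and BIC scores below are hypothetical, and the BIC-based weight approximation is a common stand-in for posterior model probabilities, not necessarily the exact scheme the authors used:

```python
import math

# Hypothetical sketch of averaging an effect estimate over a top subset of
# candidate models. Posterior model weights are approximated from BIC via
# w_k ∝ exp(-(BIC_k - BIC_min) / 2). All VE and BIC values are made up.
models = [
    {"ve": 0.71, "bic": 412.3},
    {"ve": 0.69, "bic": 411.8},
    {"ve": 0.66, "bic": 414.0},
]
min_bic = min(m["bic"] for m in models)
weights = [math.exp(-(m["bic"] - min_bic) / 2) for m in models]
total = sum(weights)
weights = [w / total for w in weights]  # normalized posterior model weights

# Weighted-average VE across the candidate models
bma_ve = sum(w * m["ve"] for w, m in zip(weights, models))
```

The averaged estimate necessarily lies between the smallest and largest single-model VE values, while the spread across models feeds the wider BMA confidence interval the abstract reports.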
13. Inference and model determination for temperature-driven non-linear ecological models.
- Author
- Kondakis, Marios, Demiris, Nikolaos, Ntzoufras, Ioannis, and Papanikolaou, Nikos E.
- Subjects
- ECOLOGICAL models, TEMPERATURE effect, ARTHROPODA
- Abstract
This paper is concerned with a contemporary Bayesian approach to the effect of temperature on developmental rates. We develop statistical methods using recent computational tools to model four commonly used ecological non-linear mathematical curves that describe arthropods' developmental rates. Such models address the effect of temperature fluctuations on the developmental rate of arthropods. In addition to the widely used Gaussian distributional assumption, we also explore Inverse Gamma-based alternatives, which naturally accommodate adaptive variance fluctuation with temperature. Moreover, to overcome the associated parameter indeterminacy in the case of no development, we suggest the zero-inflated Inverse Gamma model. The ecological models are compared graphically via posterior predictive plots and quantitatively via marginal likelihood estimates and information criteria. Inference is performed using the Stan software, and we investigate the statistical and computational efficiency of its Hamiltonian Monte Carlo and Variational Inference methods. We also explore model uncertainty and employ a Bayesian model averaging framework for robust estimation of the key ecological parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Structural Compressed Panel VAR with Stochastic Volatility: A Robust Bayesian Model Averaging Procedure.
- Author
- Pacifico, Antonio
- Subjects
- STRUCTURAL panels, MARKOV processes, REGRESSION analysis, VOLATILITY (Securities), MARKET volatility, FORECASTING
- Abstract
This paper improves on the existing literature on the shrinkage of high-dimensional model and parameter spaces through Bayesian priors and Markov chain algorithms. A hierarchical semiparametric Bayes approach is developed to overcome the limitations and misspecification involved in compressed regression models. Methodologically, a multicountry large structural Panel Vector Autoregression is compressed through robust model averaging to select the best subset across all possible combinations of predictors, where "robust" stands for the use of mixtures of proper conjugate priors. Concerning dynamic analysis, volatility changes and conditional density forecasts are addressed, ensuring accurate predictive performance and capability. Empirical and simulated experiments are developed to highlight and discuss the functioning of the estimation procedure and its forecasting accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Combining dimensionality reduction methods with neural networks for realized volatility forecasting
- Author
- Bucci, Andrea, He, Lidan, and Liu, Zhi
- Published
- 2023
- Full Text
- View/download PDF
16. Forecasting in dynamic factor models using Bayesian model averaging.
- Author
- Koop, Gary and Potter, Simon
- Subjects
- DIFFUSION indexes, MARKOV processes, MONTE Carlo method, GROSS domestic product, ECONOMETRICS
- Abstract
This paper considers the problem of forecasting in dynamic factor models using Bayesian model averaging. Theoretical justifications for averaging across models, as opposed to selecting a single model, are given. Practical methods for implementing Bayesian model averaging with factor models are described. These methods involve algorithms which simulate from the space defined by all possible models. We discuss how these simulation algorithms can also be used to select the model with the highest marginal likelihood (or highest value of an information criterion) in an efficient manner. We apply these methods to the problem of forecasting GDP and inflation using quarterly U.S. data on 162 time series. For both GDP and inflation, we find that the models which contain factors do out-forecast an AR(p), but only by a relatively small amount and only at short horizons. We attribute these findings to the presence of structural instability and the fact that lags of the dependent variable seem to contain most of the information relevant for forecasting. Relative to the small forecasting gains provided by including factors, the gains provided by using Bayesian model averaging over forecasting methods based on a single model are appreciable. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
17. Modeling the spread of COVID‐19 in New York City.
- Author
- Olmo, Jose and Sanso-Navarro, Marcos
- Subjects
- COVID-19, COVID-19 pandemic, POISSON regression, ZIP codes, SOCIOECONOMIC factors
- Abstract
This paper proposes an ensemble predictor for the weekly increase in the number of confirmed COVID-19 cases in the city of New York at the zip-code level. Within a Bayesian model averaging framework, the baseline is a Poisson regression for count data. The set of covariates includes autoregressive terms, spatial effects, and demographic and socioeconomic variables. Our results for the second wave of the coronavirus pandemic show that these regressors become more significant for predicting the number of new confirmed cases as the pandemic unfolds. Both pointwise and interval forecasts exhibit strong predictive ability in-sample and out-of-sample. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. Response of Growing Season Gross Primary Production to El Niño in Different Phases of the Pacific Decadal Oscillation over Eastern China Based on Bayesian Model Averaging.
- Author
- Li, Yueyue, Dan, Li, Peng, Jing, Wang, Junbang, Yang, Fuqiang, Gao, Dongdong, Yang, Xiujing, and Yu, Qiang
- Subjects
- GROWING season, OSCILLATIONS, MULTISCALE modeling, CARBON cycle
- Abstract
No abstract is available for this record. The article appears in Advances in Atmospheric Sciences (Springer Nature); refer to the original published version for the full abstract.
- Published
- 2021
- Full Text
- View/download PDF
19. Factor Endowments, Economic Integration, Round-Tripping, and Inward FDI: Evidence from the Baltic Economies.
- Author
- Cieslik, Andrzej and Gurshev, Oleg
- Subjects
- ENDOWMENTS, FOREIGN investments, BILATERAL trade, EUROPEAN integration, COMMERCIAL treaties, FREE trade
- Abstract
This paper studies the location choice of foreign multinational firms in the Baltic economies of Estonia, Latvia, and Lithuania using a knowledge-and-physical-capital model over 2004–2017. We use the Bayesian model averaging estimation method to investigate a set of possible factors that drive inward FDI. Our analysis demonstrates that factor endowments play a dominant role in driving vertical foreign direct investment, while external market barriers generate "tariff-jumping" FDI. Our analysis also quantifies the effects of round-trip FDI, European integration, and external bilateral free trade agreements on inward FDI in the Baltics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
20. When Does Monetary Policy Sway House Prices? A Meta-Analysis
- Author
- Ehrenbergerova, Dominika, Bajzik, Josef, and Havranek, Tomas
- Published
- 2023
- Full Text
- View/download PDF
21. Time stability of the impact of institutions on economic growth and real convergence of the EU countries: implications from the hidden Markov models analysis.
- Author
- Bernardelli, Michał, Próchniak, Mariusz, and Witkowski, Bartosz
- Subjects
- HIDDEN Markov models, ECONOMIC liberty, MARKOV processes, ECONOMIC convergence, ECONOMIC expansion, ECONOMIC impact, INSTITUTIONAL environment
- Abstract
Research background: It is not straightforward to identify the role of institutions in economic growth. Areas of uncertainty include nonlinearities, time stability, transmission channels, and institutional complementarities. The research problem tackled in this paper is the analysis of the time stability of the relationship between institutions, economic growth and real economic convergence. Purpose of the article: The article aims to verify whether the impact of the institutional environment on GDP dynamics was stable over time or differed across subperiods. The analysis covers the EU28 countries and the 1995-2019 period. Methods: We use regression equations with time dummies and interactions to assess the stability of the impact of institutions on economic growth. The analysis is based on partially overlapping observations. The models are estimated with Blundell and Bond's system GMM estimator. The results are then averaged with the Bayesian Model Averaging (BMA) approach. Structural breaks are identified on the basis of Hidden Markov Models (HMM). Findings & value added: The value added of the study is threefold. First, we use the HMM approach to find structural breaks. Second, the BMA method is applied to assess the robustness of the outcomes. Third, we show the potential of HMM in foresighting. The regression estimates indicate that good institutions, reflected in a greater scope of economic freedom and better governance, lead to higher economic growth in the EU countries. However, the impact of institutions on economic growth was not stable over time. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
22. Methods to account for uncertainties in exposure assessment in studies of environmental exposures
- Author
- Wu, You, Hoffman, F. Owen, Apostoaei, A. Iulian, Kwon, Deukwoo, Thomas, Brian A., Glass, Racquel, and Zablotska, Lydia B.
- Published
- 2019
- Full Text
- View/download PDF
23. A Model Selection Approach for Variable Selection with Censored Data.
- Author
- Eugenia Castellanos, María, García-Donato, Gonzalo, and Cabras, Stefano
- Subjects
- REGRESSION analysis, DYNAMIC models, METHODOLOGY, BAYESIAN analysis, HYPOTHESIS
- Abstract
We consider the variable selection problem when the response is subject to censoring. A main particularity of this context is that the information content of sampled units varies depending on the censoring times. Our approach is based on model selection, where all 2^k possible models are entertained, and we adopt an objective Bayesian perspective in which the choice of prior distributions is a delicate issue, given the well-known sensitivity of Bayes factors to these prior inputs. We show that borrowing priors from the 'uncensored' literature may lead to unsatisfactory results, as this default procedure implicitly assumes a uniform contribution of all units independently of their censoring times. In this paper, we develop specific methodology based on a generalization of the g-priors that explicitly addresses the particularities of survival problems, and we argue that it behaves comparatively better than standard approaches on the basis of criteria specific to variable selection problems (e.g., predictive matching), in the particular case of the accelerated failure time model with lognormal errors. We apply the methodology to a recent large epidemiological study of breast cancer survival rates in Castellón, a province of Spain. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. Comparison of the BMA and EMOS statistical methods for probabilistic quantitative precipitation forecasting.
- Author
- Javanshiri, Zohreh, Fathi, Maede, and Mohammadi, Seyedeh Atefeh
- Subjects
- PRECIPITATION forecasting, METEOROLOGICAL research, WEATHER forecasting, QUANTITATIVE research, NUMERICAL weather forecasting, FORECASTING
- Abstract
The main approach to probabilistic weather forecasting has been the use of ensemble forecasting. In ensemble forecasting, the probability information is generally derived from several numerical model runs, with perturbation of the initial conditions, physical schemes or dynamic core of the numerical weather prediction (NWP) models. However, ensemble forecasting usually tends to be under-dispersive. Statistical post-processing has therefore become an essential component of any ensemble prediction system aiming to improve the quality of numerical weather forecasts, as it seeks to generate calibrated and sharp predictive distributions of future weather quantities. Different versions of the ensemble model output statistics (EMOS) and Bayesian model averaging (BMA) post-processing methods are used in the present paper to calibrate 24, 48 and 72 hr forecasts of 24 hr accumulated precipitation. The ensemble employs the Weather Research and Forecasting (WRF) model with eight different configurations, run over Iran for six months (September 2015-February 2016). The results reveal that the BMA and EMOS censored, shifted gamma (EMOS-CSG) techniques are substantially successful at improving the raw WRF ensemble forecasts; however, each approach improves different aspects of forecast quality. The BMA method is more accurate, skilful and reliable than the EMOS-CSG method, but has poorer discrimination. Moreover, it has better resolution in predicting the probability of high-precipitation events than the EMOS-CSG method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. Applying Bayesian Model Averaging for Quantile Estimation in Accelerated Life Tests.
- Author
- Yu, I-Tang and Chang, Che-Lun
- Subjects
- BAYESIAN analysis, MATHEMATICAL models, INTEGRATED circuits, PARAMETER estimation, ACCELERATED life testing, DISTRIBUTION (Probability theory), DATA modeling, UNCERTAINTY (Information theory), MAXIMUM likelihood statistics
- Abstract
In an accelerated life test, inferences on extreme quantiles of the lifetime distribution at the use condition are obtained via extrapolation in two directions: in time and in stress levels. Extrapolation is known to depend strongly on the working model, and ignoring model uncertainty can result in over-confidence. This paper explores the use of Bayesian model averaging for estimating quantiles in an accelerated life test. Two of the most commonly used lifetime regression models, the lognormal and Weibull log-location-scale regression models, are considered as candidate models. To illustrate, we analyse complementary metal-oxide semiconductor integrated circuit data. We also construct a simulation study to compare the performance of the Bayesian model averaging s-credibility intervals with other existing interval estimators. The simulation study shows that, for estimating extreme quantiles, both the standard Bayesian and the maximum likelihood approaches can lead to an over-confident result, whereas Bayesian model averaging provides an s-credibility interval with a wider average length but a more accurate coverage probability. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
26. Bayesian Model Averaging With Fixed and Flexible Priors: Theory, Concepts, and Calibration Experiments for Rainfall‐Runoff Modeling.
- Author
- Samadi, S., Pourreza-Bilondi, M., Wilson, C. A. M. E., and Hitchcock, D. B.
- Subjects
- STREAMFLOW, PROBABILITY density function, GAUSSIAN distribution, ALGORITHMS, COASTAL plains, CALIBRATION, GAMMA distributions
- Abstract
This paper introduces for the first time the concept of Bayesian model averaging (BMA) with multiple prior structures, for rainfall‐runoff modeling applications. The original BMA model proposed by Raftery et al. (2005, https://doi.org.10.1175/MWR2906.1) assumes that the prior probability density function (pdf) is adequately described by a mixture of Gamma and Gaussian distributions. Here we discuss the advantages of using BMA with fixed and flexible prior distributions. Uniform, Binomial, Binomial‐Beta, Benchmark, and Global Empirical Bayes priors along with Informative Prior Inclusion and Combined Prior Probabilities were applied to calibrate daily streamflow records of a coastal plain watershed in the southeast United States. Various specifications for Zellner's g prior including Hyper, Fixed, and Empirical Bayes Local (EBL) g priors were also employed to account for the sensitivity of BMA and derive the conditional pdf of each constituent ensemble member. These priors were examined using the simulation results of conceptual and semidistributed rainfall‐runoff models. The hydrologic simulations were first coupled with a new sensitivity analysis model and a parameter uncertainty algorithm to assess the sensitivity and uncertainty associated with each model. BMA was then used to subsequently combine the simulations of the posterior pdf of each constituent hydrological model. Analysis suggests that a BMA based on combined fixed and flexible priors provides a coherent mechanism and promising results for calculating a weighted posterior probability compared to individual model calibration. Furthermore, the probability of Uniform and Informative Prior Inclusion priors received significantly lower predictive error, whereas more uncertainty resulted from a fixed g prior (i.e., EBL). 
Plain Language Summary: This study presents a two‐step procedure that includes model calibration of a range of hydrological models using the DREAM(zs) algorithm, followed by ensemble prediction of streamflow using Bayesian model averaging (BMA) with various prior structures. The hydrological modeling simulations were first coupled with a new sensitivity analysis model and a parameter uncertainty algorithm to assess the sensitivity and uncertainty associated with each hydrologic model simulation. BMA was then used to combine the simulations on the most important parts of the posterior probabilities of each constituent hydrological model. Analysis suggests a BMA with fixed and flexible priors provides a coherent mechanism and promising results for calculating a weighted posterior probability compared to individual model calibration. The hierarchy of prior distributions used in this study increased the flexibility of BMA fitting for daily streamflow simulation and reduced the dependence of posterior and predictive uncertainty (including model probabilities) on prior assumptions of hydrological modeling simulation. Key Points: Bayesian model averaging with fixed and flexible prior structures was applied to combine the posterior probability distributions of four hydrological models. Custom prior inclusion and uniform priors induced a much sharper posterior median. Putting a prior on both θ and g makes the analysis naturally adaptive and avoids the information paradox. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
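The core combination step the abstract above describes, weighting each member model's conditional pdf by its posterior model probability, can be sketched as follows. This is a minimal illustration assuming Gaussian member pdfs and hypothetical weights, not the paper's DREAM(zs)-calibrated hydrological setup; all numbers are invented for demonstration.

```python
import numpy as np

def bma_predictive_mixture(member_means, member_sds, weights, y_grid):
    """Evaluate the BMA predictive pdf as a posterior-weighted mixture of the
    member models' conditional pdfs (taken as Gaussian here for simplicity)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # posterior model probabilities
    pdf = np.zeros_like(y_grid, dtype=float)
    for m, s, w in zip(member_means, member_sds, weights):
        pdf += w * np.exp(-0.5 * ((y_grid - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return pdf

# Three hypothetical rainfall-runoff models forecasting the same daily flow:
grid = np.linspace(0.0, 20.0, 401)
mix = bma_predictive_mixture([8.0, 10.0, 12.0], [1.5, 2.0, 1.0], [0.2, 0.5, 0.3], grid)
# The mixture is itself a proper pdf; its Riemann sum over the grid is close to 1.
total = mix.sum() * (grid[1] - grid[0])
```

Because the BMA predictive distribution is a mixture, its variance decomposes into between-model spread plus within-model spread, which is why it is typically wider, and more honest about uncertainty, than any single calibrated model.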
27. A Replication of "Sorting through Global Corruption Determinants: Institutions and Education Matter—Not Culture" (World Development 2018).
- Author
-
Goel, Rajeev K. and Saunoris, James W.
- Subjects
CORRUPTION ,MATTER ,LITERACY ,EDUCATION - Abstract
An interesting recent paper by Jetter and Parmeter (World Development 2018) (J-P) provides a useful robustness analysis of the significant determinants of cross-country corruption and identifies institutions and literacy as key influences, while a nation's culture is shown not to matter. Whereas J-P consider a range of potential influences on corruption, their findings are based on a single measure of corruption. This note considers two additional widely used measures of corruption and tests the robustness of J-P's findings. Results support a number of J-P's findings but also reveal the sensitivity of their main conclusions to sample size, and show that the influence of culture is not insignificant. This suggests appropriate caution be used in framing policies, especially when they are based on a single measure of variables that are plagued by measurement issues. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. A Comparison of Variables Selection Methods and their Sequential Application: A Case Study of the Bankruptcy of Polish Companies.
- Author
-
Zanka, Mikhail
- Subjects
BANKRUPTCY ,BANKING industry ,FORECASTING ,LOGISTIC regression analysis ,CASE studies - Abstract
Research background: Even though many new techniques have been developed in recent decades, there is still a lack of studies comparing the performance of variable selection methods. Bankruptcy prediction is an excellent example of a conservative research field with a tendency to use classical approaches. Although the results of studies in this field are directly applied in banks and other financial institutions, the variables selected for these models can be biased by the author's preference for one technique. Purpose: This work aims to compare different variable selection approaches and introduce a new methodology of sequential variable selection that can be applied when a low-dimensional model is preferred. Research methodology: This study has been conducted on Polish companies' insolvency data from the period 2007–2013. The risk has been modeled with logistic regression; hence variables have been selected with approaches suitable for linear models. Results: The one-step methods did not lead to sufficient dimensionality reduction, while the sequential approach provided compact models that kept a high performance level. This method also allowed us to identify the main financial determinants of insolvency for the studied companies, which are the volume of total assets and the ratio of profit to total assets. Novelty: This paper compares different variable selection methods and demonstrates the effectiveness of their sequential application for dimensionality reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. What drives business cycle synchronization? BMA results from the European Union.
- Author
-
Beck, Krzysztof
- Subjects
BUSINESS cycles ,CUSTOMS unions ,MONETARY unions ,SYNCHRONIZATION ,RISK sharing - Abstract
The last twenty years have produced a large body of inconsistent results on the determinants of business cycle synchronization (BCS). Researchers have usually focused their attention on a limited set of possible determinants, not accounting for model uncertainty. For these reasons, Bayesian Model Averaging has been applied in this paper to a dataset with 43 potential determinants of BCS for the EU. There is strong evidence to claim that migration, exchange rate variability, similarity of production structures, TFP shocks, similarity in exchange rate policy, intra-industry trade, risk sharing, and capital mobility are robust determinants of BCS. Some well-established determinants such as bilateral trade, monetary policy similarity, gravity variables, and participation in a monetary union and free trade area have turned out to be fragile. The structure of trade is more important for BCS than its magnitude, as intra-industry trade and structural similarity take explanatory power away from bilateral trade. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. Adaptive sampling for Bayesian variable selection.
- Author
-
Nott, David J. and Kohn, Robert
- Subjects
ADAPTIVE sampling (Statistics) ,BAYESIAN analysis ,MARKOV processes ,PROPOSAL writing in research ,DATA analysis ,EDUCATION - Abstract
Our paper proposes adaptive Monte Carlo sampling schemes for Bayesian variable selection in linear regression that improve on standard Markov chain methods. We do so by considering Metropolis–Hastings proposals that make use of accumulated information about the posterior distribution obtained during sampling. Adaptation needs to be done carefully to ensure that sampling is from the correct ergodic distribution. We give conditions for the validity of an adaptive sampling scheme in this problem, and for simulating from a distribution on a finite state space in general, and suggest a class of adaptive proposal densities which uses best linear prediction to approximate the Gibbs sampler. Our sampling scheme is computationally much faster per iteration than the Gibbs sampler, and when this is taken into account the efficiency gains when using our sampling scheme compared to alternative approaches are substantial in terms of precision of estimation of posterior quantities of interest for a given amount of computation time. We compare our method with other sampling schemes for examples involving both real and simulated data. The methodology developed in the paper can be extended to variable selection in more general problems. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
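As a point of reference for the adaptive schemes the paper above proposes, a plain (non-adaptive) random-flip Metropolis sampler over the inclusion indicators can be sketched as below, scoring models with Zellner's g-prior marginal likelihood under a uniform prior over models. The data, g value, and chain settings are hypothetical, and the paper's actual contribution, proposals built from accumulated posterior information that approximate the Gibbs sampler, is deliberately omitted from this baseline sketch.

```python
import numpy as np

def log_marginal(Xc, yc, gamma, g=100.0):
    """Log marginal likelihood of the model indexed by inclusion vector 'gamma'
    under Zellner's g-prior (predictors and response pre-centered), up to an
    additive constant shared by all models."""
    n, q = len(yc), int(gamma.sum())
    if q == 0:
        r2 = 0.0
    else:
        Xg = Xc[:, gamma.astype(bool)]
        beta, *_ = np.linalg.lstsq(Xg, yc, rcond=None)
        r2 = 1.0 - np.sum((yc - Xg @ beta) ** 2) / np.sum(yc ** 2)
    return 0.5 * (n - 1 - q) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))

# Hypothetical data: only the first two of six predictors matter.
rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + 2.0 * X[:, 1] + rng.standard_normal(n)
Xc, yc = X - X.mean(axis=0), y - y.mean()

# Random-flip Metropolis over the 2^p model space (uniform model prior).
gamma = np.zeros(p, dtype=int)
cur = log_marginal(Xc, yc, gamma)
iters, burn = 4000, 1000
counts = np.zeros(p)
for t in range(iters):
    j = rng.integers(p)
    prop = gamma.copy()
    prop[j] ^= 1                                  # flip one inclusion indicator
    new = log_marginal(Xc, yc, prop)
    if np.log(rng.uniform()) < new - cur:         # Metropolis accept/reject
        gamma, cur = prop, new
    if t >= burn:
        counts += gamma
incl = counts / (iters - burn)                    # posterior inclusion probabilities
```

The adaptive schemes in the paper replace the uniform single-flip proposal with proposals informed by the chain's history, which is where the care about preserving the correct ergodic distribution comes in.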
31. Explainable AI-based innovative hybrid ensemble model for intrusion detection.
- Author
-
Ahmed, Usman, Jiangbin, Zheng, Almogren, Ahmad, Khan, Sheharyar, Sadiq, Muhammad Tariq, Altameem, Ayman, and Rehman, Ateeq Ur
- Abstract
Cybersecurity threats have become more sophisticated, demanding advanced detection mechanisms to cope with the exponential growth in digital data and network services. Intrusion Detection Systems (IDSs) are crucial in identifying illegitimate access or anomalous behaviour within computer network systems, thereby protecting sensitive information. Traditional IDS approaches often struggle with high false positive rates and with the ability to adapt to emerging attack patterns. This work presents a novel Hybrid Adaptive Ensemble for Intrusion Detection (HAEnID), an innovative and powerful method to enhance intrusion detection beyond conventional techniques. HAEnID is composed of a multi-layered ensemble, which consists of a Stacking Ensemble (SEM), Bayesian Model Averaging (BMA), and a Conditional Ensemble method (CEM). HAEnID combines the best of these three ensemble techniques to maximize detection performance while considerably reducing false alarms. A key feature of HAEnID is an adaptive mechanism that allows ensemble components to change over time as network traffic patterns vary and new threats appear. This way, HAEnID provides adequate protection as attack vectors change. Furthermore, the model is made more interpretable and explainable using Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). The proposed ensemble model for intrusion detection on CIC-IDS 2017 achieves excellent accuracy (97–98%), demonstrating effectiveness and consistency across various configurations. Feature selection further enhances performance, with BMA-M (20) reaching 98.79% accuracy. These results highlight the potential of the ensemble model for accurate and reliable intrusion detection, making it a state-of-the-art choice for accuracy and explainability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Bayesian Model Averaging and Regularized Regression as Methods for Data-Driven Model Exploration, with Practical Considerations.
- Author
-
Han, Hyemin
- Subjects
TEACHER researchers ,RESEARCH personnel ,QUANTITATIVE research ,PREDICTION models ,EXPERTISE - Abstract
Methodological experts suggest that psychological and educational researchers should employ appropriate methods for data-driven model exploration, such as Bayesian Model Averaging and regularized regression, instead of conventional hypothesis-driven testing, if they want to explore the best prediction model. I intend to discuss practical considerations regarding data-driven methods for end-user researchers without sufficient expertise in quantitative methods. I tested three data-driven methods, i.e., Bayesian Model Averaging, LASSO as a form of regularized regression, and stepwise regression, with datasets in psychology and education. I compared their performance in terms of cross-validity indicating robustness against overfitting across different conditions. I employed functionalities widely available via R with default settings to provide information relevant to end users without advanced statistical knowledge. The results demonstrated that LASSO showed the best performance and Bayesian Model Averaging outperformed stepwise regression when there were many candidate predictors to explore. Based on these findings, I discussed appropriately using the data-driven model exploration methods across different situations from laypeople's perspectives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Translating Uncertain Sea Level Projections Into Infrastructure Impacts Using a Bayesian Framework.
- Author
-
Moftakhari, Hamed, AghaKouchak, Amir, Sanders, Brett F., Matthew, Richard A., and Mazdiyasni, Omid
- Abstract
Climate change may affect ocean-driven coastal flooding regimes by both raising the mean sea level (msl) and altering ocean-atmosphere interactions. For reliable projections of coastal flood risk, information provided by different climate models must be considered in addition to associated uncertainties. In this paper, we propose a framework to project future coastal water levels and quantify the resulting flooding hazard to infrastructure. We use Bayesian Model Averaging to generate a weighted ensemble of storm surge predictions from eight climate models for two coastal counties in California. The resulting ensembles combined with msl projections, and predicted astronomical tides are then used to quantify changes in the likelihood of road flooding under representative concentration pathways 4.5 and 8.5 in the near-future (1998-2063) and mid-future (2018-2083). The results show that road flooding rates will be significantly higher in the near-future and mid-future compared to the recent past (1950-2015) if adaptation measures are not implemented. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
34. Bootstrap Approximation of Model Selection Probabilities for Multimodel Inference Frameworks.
- Author
-
Dajles, Andres and Cavanaugh, Joseph
- Subjects
STATISTICAL bootstrapping ,AKAIKE information criterion ,PROBABILITY theory ,SELECTION bias (Statistics) ,DISTRIBUTION (Probability theory) - Abstract
Most statistical modeling applications involve the consideration of a candidate collection of models based on various sets of explanatory variables. The candidate models may also differ in terms of the structural formulations for the systematic component and the posited probability distributions for the random component. A common practice is to use an information criterion to select a model from the collection that provides an optimal balance between fidelity to the data and parsimony. The analyst then typically proceeds as if the chosen model were the only model ever considered. However, such a practice fails to account for the variability inherent in the model selection process, which can lead to inappropriate inferential results and conclusions. In recent years, inferential methods have been proposed for multimodel frameworks that attempt to provide an appropriate accounting of modeling uncertainty. In the frequentist paradigm, such methods should ideally involve model selection probabilities, i.e., the relative frequencies of selection for each candidate model based on repeated sampling. Model selection probabilities can be conveniently approximated through bootstrapping. When the Akaike information criterion is employed, Akaike weights are also commonly used as a surrogate for selection probabilities. In this work, we show that the conventional bootstrap approach for approximating model selection probabilities is impacted by bias. We propose a simple correction to adjust for this bias. We also argue that Akaike weights do not provide adequate approximations for selection probabilities, although they do provide a crude gauge of model plausibility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
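Both quantities the abstract above contrasts, bootstrap selection frequencies and Akaike weights, can be sketched in a few lines. The data-generating process, candidate set, and settings below are hypothetical, and this computes the uncorrected bootstrap frequency that the paper shows to be biased; the authors' proposed bias correction is not reproduced here.

```python
import numpy as np

def aic_linear(X, y):
    """AIC for a Gaussian linear model fit by least squares."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (X.shape[1] + 1)  # +1 for the error variance

rng = np.random.default_rng(1)
n = 80
x1, x2 = rng.standard_normal((2, n))
y = 1.0 + 0.8 * x1 + rng.standard_normal(n)   # x2 is spurious by construction

# Candidate design matrices, each built from a vector of case indices:
candidates = {
    "const": lambda i: np.ones((len(i), 1)),
    "x1": lambda i: np.column_stack([np.ones(len(i)), x1[i]]),
    "x1+x2": lambda i: np.column_stack([np.ones(len(i)), x1[i], x2[i]]),
}

# Bootstrap approximation of model selection probabilities:
B, wins = 500, dict.fromkeys(candidates, 0)
for _ in range(B):
    i = rng.integers(n, size=n)               # resample cases with replacement
    aics = {m: aic_linear(f(i), y[i]) for m, f in candidates.items()}
    wins[min(aics, key=aics.get)] += 1        # tally the AIC-selected model
sel_prob = {m: w / B for m, w in wins.items()}

# Akaike weights on the full data, for comparison:
idx = np.arange(n)
full = {m: aic_linear(f(idx), y) for m, f in candidates.items()}
best = min(full.values())
raw = {m: np.exp(-(a - best) / 2) for m, a in full.items()}
akw = {m: v / sum(raw.values()) for m, v in raw.items()}
```

The contrast the paper draws is visible here: `sel_prob` estimates how often each model would actually win the selection under resampling, while `akw` is a single-sample transformation of AIC differences and need not agree with those frequencies.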
35. Risk Identification of Mountain Torrent Hazard Using Machine Learning and Bayesian Model Averaging Techniques.
- Author
-
Chu, Ya, Song, Weifeng, and Chen, Dongbin
- Subjects
MACHINE learning ,HAZARD mitigation ,FLOOD warning systems ,EMERGENCY management ,IDENTIFICATION ,RANDOM forest algorithms ,ECONOMIC security - Abstract
Frequent mountain torrent disasters have caused significant losses of human life and property and restricted the economic and social development of mountain areas. Therefore, accurate identification of mountain torrent hazards is crucial for disaster prevention and reduction. In this study, based on historical mountain torrent hazards, a mountain torrent hazard prediction model was established using Bayesian Model Averaging (BMA) and three classic machine learning algorithms (gradient-boosted decision tree (GBDT), backpropagation neural network (BP), and random forest (RF)). The mountain torrent hazard condition factors used in modeling were distance to river, elevation, precipitation, slope, gross domestic product (GDP), population, and land use type. Based on the proposed BMA model, flood risk maps were produced using GIS. The results demonstrated that the BMA model significantly improved upon the accuracy and stability of single models in identifying mountain torrent hazards. The F1-values (which jointly reflect Precision and Recall) of the BMA model under three sets of test samples at different locations were 3.31–24.61% higher than those of single models. The risk assessment results showed that high-risk areas were mainly concentrated in the northern border and southern valleys of Yuanyang County, China. In addition, the feature importance analysis demonstrated that distance to river and elevation were the most important factors affecting mountain torrent hazards. The construction of projects in mountainous areas should be as far away from rivers and low-lying areas as possible. The results of this study can provide a scientific basis for improving the identification methods of mountain torrent hazards and assisting decision-makers in the implementation of appropriate measures for mountain torrent hazard prevention and reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Stock price index analysis of four OPEC members: a Bayesian approach.
- Author
-
Hatamerad, Saman, Asgharpur, Hossain, Adrangi, Bahram, and Haghighat, Jafar
- Abstract
This study examines the relationship between macroeconomic variables and stock price indices of four prominent OPEC oil-exporting members. Bayesian model averaging (BMA) and regularized linear regression (RLR) are employed to address uncertainties arising from different estimation models and variable selection. Jointness is utilized to determine the nature of relationships among variable pairs. The case study spans macroeconomic variables and stock prices from 1996 to 2018. BMA findings reveal a strong positive association between stock price indices and both consumer price index (CPI) and broad money growth in each analyzed OPEC country. Additionally, the study suggests a weak negative correlation between OPEC oil prices and the stock price index. RLR results align with BMA analysis, offering insights valuable for policymakers and international wealth managers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Modeling the spread of COVID‐19 in New York City
- Author
-
Marcos Sanso-Navarro and Jose Olmo
- Subjects
Bayesian model averaging ,COVID‐19 ,Bayesian inference ,Poisson regression ,Count data ,Covariate ,Autoregressive model ,spatial effects ,prediction models ,Predictive modelling ,Interval (mathematics) ,Pointwise ,Statistics ,Mathematics ,Geography, Planning and Development ,Environmental Science (miscellaneous) ,JEL: C11 ,C25 ,I15 ,J10 ,R10 - Abstract
This paper proposes an ensemble predictor for the weekly increase in the number of confirmed COVID-19 cases in the city of New York at zip code level. Within a Bayesian model averaging framework, the baseline is a Poisson regression for count data. The set of covariates includes autoregressive terms, spatial effects, and demographic and socioeconomic variables. Our results for the second wave of the coronavirus pandemic show that these regressors become more significant for predicting the number of new confirmed cases as the pandemic unfolds. Both pointwise and interval forecasts exhibit strong predictive ability in-sample and out-of-sample.
- Published
- 2021
38. Can specific policy indicators identify reform priorities?
- Author
-
Kraay, Aart and Tawara, Norikazu
- Subjects
ECONOMIC policy ,ECONOMIC reform ,BAYESIAN analysis ,INSTITUTIONAL environment ,GOVERNMENT regulation ,POLICY sciences ,ECONOMIC development - Abstract
Several detailed cross-country datasets measuring specific policy indicators relevant to business regulation and government integrity have been developed in recent years. The promise of these indicators is that they can be used to identify specific reforms that policymakers and aid donors can target in their efforts to improve the regulatory and institutional environment. Doing so, however, requires evidence on the partial effects of the many specific policy choices reflected in such datasets. In this paper we use Bayesian model averaging (BMA) to document the cross-country partial correlations between detailed policy indicators and several measures of regulatory and institutional outcomes. We find major instability in the set of policy indicators identified by BMA as important partial correlates of similar outcomes: specific policy indicators that matter for one outcome are, on average, not important correlates of other closely-related outcomes. This finding illustrates the difficulties in using highly-specific policy indicators to identify reform priorities using cross-country data. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
39. Involving Stakeholders in Building Integrated Fisheries Models Using Bayesian Methods.
- Author
-
Haapasaari, Päivi, Mäntyniemi, Samu, and Kuikka, Sakari
- Subjects
BALTIC herring ,FISHERIES ,BAYESIAN analysis ,FISH population measurement ,FISHERY management ,FISHERY resource measurement - Abstract
A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem of the herring fishery and elucidate what kind of causalities the different views involve. The paper combines these two tasks to assess the suitability of the methodological choices to participatory modeling in terms of both a modeling tool and participation mode. The paper also assesses the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology provides a flexible tool that can be adapted to different kinds of needs and challenges of participatory modeling. The ability of the approach to deal with small data sets makes it cost-effective in participatory contexts. However, the BMA methodology used in modeling the biological uncertainties is so complex that it needs further development before it can be introduced to wider use in participatory contexts. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
40. Real-Time Prediction With U.K. Monetary Aggregates in the Presence of Model Uncertainty.
- Author
-
Garratt, Anthony, Koop, Gary, Mise, Emi, and Vahey, Shaun P.
- Subjects
MONETARY policy ,PRICE inflation ,MONEY ,ECONOMIC development ,REAL-time computing ,VECTOR analysis - Abstract
A popular account for the demise of the U.K.'s monetary targeting regime in the 1980s blames the fluctuating predictive relationships between broad money and inflation and real output growth. Yet ex post policy analysis based on heavily revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily revised data. We consider a large set of recursively estimated vector autoregressive (VAR) and vector error correction models (VECM). These models differ in terms of lag length and the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data as heavily revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the U.K.'s monetary targeting regime. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
41. The Effects of Monetary Policy on Unemployment Dynamics under Model Uncertainty: Evidence from the United States and the Euro Area.
- Author
-
Altavilla, Carlo and Ciccarelli, Matteo
- Subjects
MONETARY policy ,UNEMPLOYMENT & economics ,TAYLOR'S rule - Abstract
This paper explores the role that the imperfect knowledge of the structure of the economy plays in the uncertainty surrounding the effects of rule-based monetary policy on unemployment dynamics in the euro area and the United States. We employ a Bayesian model averaging procedure on a wide range of models which differ in several dimensions to account for the uncertainty that the policymaker faces when setting the monetary policy and evaluating its effect on real economy. We find evidence of a high degree of dispersion across models in both policy rule parameters and impulse response functions. Moreover, monetary policy shocks have very similar recessionary effects on the two economies with a different role played by the participation rate in the transmission mechanism. Finally, we show that a policymaker who does not take model uncertainty into account and selects the results on the basis of a single model may come to misleading conclusions not only about the transmission mechanism, but also about the differences between the euro area and the United States, which are on average essentially small. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
42. Re-Examining the Consumption–Wealth Relationship: The Role of Model Uncertainty.
- Author
-
KOOP, GARY, POTTER, SIMON M., and STRACHAN, RODNEY W.
- Subjects
ECONOMIC models ,CONSUMPTION (Economics) ,ASSETS (Accounting) ,INCOME ,EXOGENEITY (Econometrics) - Abstract
This paper discusses the consumption–wealth relationship. We use data on consumption, assets, and labor income and a vector error correction framework. This framework defines a set of models that differ in the number of co-integrating vectors, the form of deterministic components and lag length. Further models can be defined through parametric restrictions and, in particular, interest centers on a weak exogeneity restriction that says that the co-integrating residuals do not affect consumption and income directly. Key results in previous work relate to the roles of permanent and transitory shocks in driving wealth and how consumption responds to these shocks. We investigate the robustness of these results to model uncertainty and argue for the use of Bayesian model averaging. We find that there is a large degree of model uncertainty. Whether this uncertainty has important empirical implications depends on the researcher's attitude toward the theory used to motivate a co-integrating relationship between consumption, assets and income. If we work with models consistent with this theory and impose the weak exogeneity restriction, we find precisely estimated results that show that permanent shocks have only a small role in driving assets and that the predominant transitory shocks have little effect on consumption. These findings are consistent with the previous literature. However, if we work with a broader set of models and let the data speak, we find that the exact magnitude of the role of permanent shocks is hard to estimate precisely. Thus, although some support exists for the view that their role is small, we cannot rule out the possibility that they have a substantive role to play. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
43. Bootstrap model averaging in time series studies of particulate matter air pollution and mortality.
- Author
-
Martin, Michael A. and Roberts, Steven
- Subjects
MORTALITY ,EMISSION standards ,AIR quality ,PUBLIC health ,STATISTICAL bootstrapping ,BAYESIAN analysis - Abstract
The consensus from time series studies that have investigated the mortality effects of particulate matter air pollution (PM) is that increases in PM are associated with increases in daily mortality. However, recently concerns have been raised that the observed positive association between PM and mortality may be an artefact of model selection due to multiple hypothesis testing. This problem arises when a number of models are investigated, but only the “best” model is reported and all subsequent inference is based on this model, ignoring the model selection process. In this paper, we introduce the use of the bootstrap as a means of addressing the problems of model selection in PM mortality time series studies. Using the bootstrap to perform inference about the effect of PM on mortality is a process based on a set of models rather than on a single model. It is shown that using the bootstrap to overcome the problems of model selection is competitive with the existing methodology of Bayesian model averaging. Journal of Exposure Science and Environmental Epidemiology (2006) 16, 242–250. doi:10.1038/sj.jea.7500454; published online 17 August 2005 [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
44. Improved Medium- and Long-Term Runoff Forecasting Using a Multimodel Approach in the Yellow River Headwaters Region Based on Large-Scale and Local-Scale Climate Information.
- Author
-
Haibo Chu, Jiahua Wei, Jiaye Li, Zhen Qiao, and Jiongwei Cao
- Subjects
MEASUREMENT of runoff ,RUNOFF ,MATHEMATICAL models of hydrodynamics ,MATHEMATICAL models of forecasting ,MULTIMODAL user interfaces ,PREDICTION models - Abstract
Medium- and long-term runoff forecasting is essential for hydropower generation and coordinated water resources regulation in the Yellow River headwaters region. Climate change has a great impact on runoff within basins, and incorporating different climate information into runoff forecasting can assist in creating longer lead times in planning periods. In this paper, a multimodel approach was developed to further improve the accuracy and reliability of runoff forecasting by fully considering large-scale and local-scale climatic factors. First, with four large-scale atmospheric oscillations, sea surface temperature, precipitation, and temperature as the predictors, multiple linear regression (MLR), radial basis function neural network (RBFNN), and support vector regression (SVR) models were built. Next, a Bayesian model averaging (BMA)-based multimodel was developed using weighted MLR, RBFNN, and SVR models, and the performance of the BMA-based multimodel was compared to those of the MLR, RBFNN, and SVR models. Finally, the high-runoff performance of these four models was further analyzed to prove the effectiveness of each model. The BMA-based multimodel outperformed the other models overall, as well as in high-runoff forecasting. The results also revealed that the performance of the forecasting models with multiple climatic factors was generally superior to that of models without climatic factors. The BMA-based multimodel with climatic factors not only provides a promising, reliable method for medium- and long-term runoff forecasting, but also facilitates uncertainty estimation under different confidence intervals. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
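The weighted combination at the heart of a BMA-based multimodel like the one above can be sketched in a few lines. The member forecasts and posterior model weights below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical forecasts from three member models (MLR, RBFNN, SVR)
# for the same lead time; values and weights are illustrative only.
forecasts = np.array([120.0, 135.0, 128.0])   # runoff forecasts (m^3/s)
weights   = np.array([0.25, 0.45, 0.30])      # BMA posterior model weights, sum to 1

# BMA point forecast: the weight-averaged member forecast
bma_mean = np.dot(weights, forecasts)

# Between-model variance, the spread of members around the BMA mean,
# which is one ingredient of the BMA predictive uncertainty
between_var = np.dot(weights, (forecasts - bma_mean) ** 2)

print(round(bma_mean, 2))     # 129.15
print(round(between_var, 4))  # 36.7275
```

The full BMA predictive variance adds each member's own predictive variance to this between-model term, which is what supports the confidence-interval estimation mentioned in the abstract.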
45. Integrating Hydrological and Machine Learning Models for Enhanced Streamflow Forecasting via Bayesian Model Averaging in a Hydro-Dominant Power System.
- Author
-
Torres, Francisca Lanai Ribeiro, Lima, Luana Medeiros Marangon, Reboita, Michelle Simões, de Queiroz, Anderson Rodrigo, and Lima, José Wanderley Marangon
- Subjects
MACHINE learning ,STREAMFLOW ,FORECASTING ,DECISION making ,DISTRIBUTION (Probability theory) ,WATER demand management - Abstract
Streamflow forecasting plays a crucial role in the operational planning of hydro-dominant power systems, providing valuable insights into future water inflows to reservoirs and hydropower plants. It relies on complex mathematical models, which, despite their sophistication, face various uncertainties affecting their performance. These uncertainties can significantly influence both short-term and long-term operational planning in hydropower systems. To mitigate these effects, this study introduces a novel Bayesian model averaging (BMA) framework to improve the accuracy of streamflow forecasts in real hydro-dominant power systems. Designed to serve as an operational tool, the proposed framework incorporates predictive uncertainty into the forecasting process, enhancing the robustness and reliability of predictions. BMA statistically combines multiple models based on their posterior probability distributions, producing forecasts from the weighted averages of predictions. This approach updates weights periodically using recent historical data of forecasted and measured streamflows. Tested on inflows to 139 reservoirs and hydropower plants in Brazil, the proposed BMA framework proved to be more skillful than individual models, showing improvements in forecasting accuracy, especially in the South and Southeast regions of Brazil. This method offers a more reliable tool for streamflow prediction, enhancing decision making in hydropower system operations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
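The periodic re-weighting from recent forecasted and measured streamflows described above can be sketched as follows. The function name, the Gaussian error assumption, and all numbers are assumptions for illustration, not the paper's code:

```python
import numpy as np

def update_bma_weights(prior_w, forecasts, observed, sigma=1.0):
    """Posterior weights proportional to prior times the Gaussian
    likelihood of recent observations under each member model."""
    prior_w = np.asarray(prior_w, dtype=float)
    errors = np.asarray(forecasts) - np.asarray(observed)  # (models, periods)
    loglik = -0.5 * np.sum((errors / sigma) ** 2, axis=1)
    w = prior_w * np.exp(loglik - loglik.max())  # subtract max for stability
    return w / w.sum()

# Two member models, three recent periods of forecast/observation pairs
f = [[10.0, 11.0, 9.5],    # model A forecasts
     [12.0, 13.0, 12.5]]   # model B forecasts
obs = [10.2, 11.1, 9.8]
w = update_bma_weights([0.5, 0.5], f, obs, sigma=1.0)
print(w)  # model A, with the smaller recent errors, gets nearly all the weight
```

Re-running this update on a rolling window of recent history is one simple way to realize the periodic weight refresh the abstract describes.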
46. Bank loans recovery rate in commercial banks: A case study of non-financial corporations
- Author
-
Natalia Nehrebecka
- Subjects
recovery rate ,regulatory requirements ,reserves ,quantile regression ,Bayesian model averaging ,Economic theory. Demography ,HB1-3840 - Abstract
The empirical literature on credit risk is mainly based on modelling the probability of default, omitting the modelling of the loss given default. This paper aims to predict recovery rates using the rarely applied nonparametric methods of Bayesian Model Averaging and Quantile Regression, developed on the basis of individual prudential monthly panel data covering 2007–2018. The models were created on financial and behavioural data that capture the history of the enterprise's credit relationship with financial institutions. Two approaches are presented in the paper: Point in Time (PIT) and Through-the-Cycle (TTC). A comparison of Quantile Regression, which gives a comprehensive view of the entire probability distribution of losses, with the alternatives reveals advantages when evaluating downturn and expected credit losses. A correct estimation of the LGD parameter affects the appropriate amounts of held reserves, which is crucial for the proper functioning of the bank and for not exposing it to the risk of insolvency should such losses occur.
- Published
- 2019
- Full Text
- View/download PDF
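What lets quantile regression target the entire loss distribution, as the abstract emphasizes, is the pinball (quantile) loss it minimizes. A minimal sketch, with illustrative recovery-rate values that are not from the paper's data:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile-regression loss: asymmetric absolute error that is
    minimized (over constant predictors) by the tau-quantile of y."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

y = np.array([0.2, 0.5, 0.9])  # illustrative recovery rates

# tau = 0.5 gives half the mean absolute error, so the optimal
# constant predictor is the median; tau = 0.05 would target the
# severe-loss tail relevant for downturn LGD.
print(pinball_loss(y, np.full(3, 0.5), tau=0.5))
```

Fitting a separate regression for each tau traces out the conditional distribution of recovery rates, which is the "comprehensive view" the comparison in the abstract refers to.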
47. Bayesian Compressed Vector Autoregression for Financial Time-Series Analysis and Forecasting
- Author
-
Paponpat Taveeapiradeecharoen, Kosin Chamnongthai, and Nattapol Aunsri
- Subjects
Bayesian methods ,compression algorithms ,finance ,autoregressive processes ,forecasting ,Bayesian model averaging ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Advanced time series models have been intensively developed and used to make predictions on financial data such as foreign exchange (forex) data. In this paper, we implement the random compression method to reduce large-dimensional forex data into a much smaller matrix form. Then, Bayesian inference on the vector autoregression is used to obtain all parameters of interest. Subsequently, the models are able to perform out-of-sample prediction up to 14 days ahead. For the empirical work, 30 forex pairs are used in this paper. The results show that Bayesian compressed vector autoregression (BCVAR) and time-varying BCVAR (TVP-BCVAR) deliver excellent forecasts on the AUD-JPY, CAD-CHF, CAD-JPY, EUR-DKK, EUR-MXN, and EUR-TRY forex datasets in terms of mean square forecasting error, outperforming the traditional Bayesian autoregression benchmark.
- Published
- 2019
- Full Text
- View/download PDF
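The random-compression idea above, projecting a large lag matrix onto a much smaller dimension before fitting the autoregression, can be sketched as follows. Dimensions, data, and the flat-prior shortcut are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for a large lag matrix X (T x K): 60 lagged
# predictors compressed to m = 8 before estimation.
T, K, m = 200, 60, 8
X = rng.standard_normal((T, K))
beta_true = rng.standard_normal(K) * 0.1
y = X @ beta_true + 0.1 * rng.standard_normal(T)

Phi = rng.standard_normal((m, K)) / np.sqrt(m)  # random projection matrix
Z = X @ Phi.T                                   # compressed regressors (T x m)

# Least-squares fit in the compressed space; with a flat prior the
# Bayesian posterior mean coincides with this estimate.
gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_hat = Z @ gamma
print(Z.shape)  # (200, 8)
```

In practice the compressed model is much cheaper to estimate than the full VAR, and averaging over several random projections (as in the compressed-regression literature) hedges against an unlucky single draw of Phi.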
48. ASSESSING THE DETERMINANTS OF ECONOMIC GROWTH IN GHANA.
- Author
-
Bonga-Bonga, Lumengo and Ahiakpor, Ferdinand
- Subjects
- *
ECONOMIC development , *SUSTAINABILITY , *MONETARY policy , *ECONOMIC policy ,GHANAIAN economy, 1979- - Abstract
African economies count among the fastest-growing economies in the world. In particular, West African economies grew by an average of 5.7% in 2013 and 6% in 2014, despite the battle with the Ebola virus. Ghana has been at the forefront of this growth with an average economic growth of 8% in the period 2001-2014 (AfDB 2015). The challenge faced by African countries, such as Ghana, is to maintain this high economic growth rate in order to finance a number of developmental projects and curb the rampant poverty prevalent in a number of African countries. The sustainability of Ghana's growth path and the economic policy that ensued necessitate a sound knowledge of the drivers and determinants of its economic growth. This paper contributes to the literature on economic growth in Ghana, as a case study of a fast-growing economy in Africa, by applying BMA analysis to determine the drivers of the country's economic growth. To the best of our knowledge, no study has identified the drivers of economic growth in Ghana using the BMA approach. The BMA technique provides an advantage over single-model techniques by combining and averaging different existing models and theories to determine the drivers of economic growth. Yearly data ranging from 1970 to 2012 were collected from the World Bank Development Indicators (WDI 2012) and International Monetary Fund (IMF) statistics. There are 22 variables used in the model estimation, including GDP per capita as the dependent variable. Explanatory variables were selected by taking into account their likelihood of determining economic growth in Ghana. Moreover, these variables were selected by accounting for the particularities of the Ghanaian economy, such as the existence of a dual economy and its reliance on natural resources for export revenues.
Using the 50% threshold suggested by Raftery (1995) for the selection of relevant variables that drive economic growth in Ghana, the empirical results show that variables such as population density, crop production, inflation rate, labor force, current account balance and population growth are the important drivers of economic growth in Ghana. The paper makes the following recommendations in light of these results. Firstly, inflation targeting should remain the anchor of monetary policy in Ghana. Secondly, the government of Ghana should develop policies and strategies that enable the crowding-in of the private sector. Finally, the government of Ghana should promote crop production in both the commercial and subsistence sectors. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
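The 50% threshold from Raftery (1995) used above operates on posterior inclusion probabilities: the total posterior weight of all models containing a given variable. A toy sketch with three simulated regressors and BIC-approximated model posteriors; everything here is invented for illustration, not the paper's 22-variable setup:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)

# Simulated data: only x0 genuinely drives y.
n = 120
X = rng.standard_normal((n, 3))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * rng.standard_normal(n)

def bic(cols):
    """BIC of the OLS regression of y on an intercept plus X[:, cols]."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n)

# Enumerate all 2^3 models and approximate their posteriors from BIC.
models = [c for r in range(4) for c in itertools.combinations(range(3), r)]
bics = np.array([bic(m) for m in models])
post = np.exp(-0.5 * (bics - bics.min()))
post /= post.sum()

# Posterior inclusion probability: total weight of models containing j.
pip = [sum(p for m, p in zip(models, post) if j in m) for j in range(3)]
print(pip)  # x0's inclusion probability should dominate
```

Variables whose inclusion probability exceeds 0.5 are flagged as robust drivers; with many candidate regressors, full enumeration is replaced by MCMC model composition or similar search schemes.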
49. Electricity Price Forecasting by Averaging Dynamic Factor Models.
- Author
-
Alonso, Andrés M., Bastos, Guadalupe, and García-Martos, Carolina
- Subjects
ELECTRIC rates ,PRICING ,FORECASTING methodology ,DYNAMIC models ,SIMULATION methods & models - Abstract
In the context of the liberalization of electricity markets, forecasting prices is essential. With this aim, research has evolved to model the particularities of electricity prices. In particular, dynamic factor models have been quite successful in the task, both in the short and long run. However, specifying a single model for the unobserved factors is difficult, and it cannot be guaranteed that such a model exists. In this paper, model averaging is employed to overcome this difficulty, with the expectation that electricity prices would be better forecast by a combination of models for the factors than by a single model. Although our procedure is applicable in other markets, it is illustrated with an application to forecasting spot prices of the Iberian Market, MIBEL (The Iberian Electricity Market). Three combinations of forecasts are successful in providing improved results for alternative forecasting horizons. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
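The forecast-combination step described above can be sketched with one common scheme, inverse-MSE weighting (an assumption for illustration; the paper evaluates three combinations of its own, and these prices and error figures are made up):

```python
import numpy as np

def combine(forecasts, mse):
    """Combine member forecasts with weights inversely proportional
    to each model's in-sample mean squared error."""
    w = 1.0 / np.asarray(mse)
    w = w / w.sum()
    return w @ np.asarray(forecasts)

# Spot-price forecasts (EUR/MWh) from three candidate factor models,
# with their historical MSEs; all numbers are illustrative.
prices = [52.1, 49.8, 51.0]
mses   = [4.0, 2.0, 8.0]
print(round(combine(prices, mses), 3))  # 50.629
```

The better-performing second model receives half the total weight, pulling the combined forecast toward it; an equal-weight average and a regression-based combination are the other schemes commonly tried in this setting.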
50. A Two-Step System for Dynamic Panel dealing with Endogeneity Issues and Causal Relationships.
- Author
-
PACIFICO, Antonio and DE GIOVANNI, Livia
- Subjects
DYNAMICAL systems ,GENERALIZED method of moments ,PANEL analysis ,DISEASE risk factors ,LABOR market - Abstract
The paper presents a computational method implementing a standard Dynamic Panel Data model with Generalized Method of Moments estimators to deal with endogeneity issues, arising from omitted factors and unobserved heterogeneity, and with causal relationships in large and long panel databases. The methodology, named Two-step System Dynamic Panel Data, combines a first-step Bayesian procedure for selecting potential candidate predictors in a static linear regression model with a frequentist second-step procedure for estimating the parameters of a dynamic linear panel data model. An empirical example on the effects of obesity and socioeconomic factors on labor market outcomes across Italian regions is performed. Potential prevention policies and strategies to address key behavioral and disease risk factors affecting labor market outcomes and the social environment are also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2021