5,627 results for "Point estimation"
Search Results
202. A General Method for Calibrating Stochastic Radio Channel Models with Kernels
- Author
-
François-Xavier Briol, Troels Pedersen, Ayush Bharti, Department of Computer Science, University College London, Aalborg University, and Aalto University
- Subjects
Signal Processing (eess.SP) ,maximum mean discrepancy ,Computer science ,Posterior probability ,Radio channel modeling ,approximate Bayesian computation ,Stochastic processes ,Frequency measurement ,Machine learning ,Maximum mean discrepancy (MMD) ,FOS: Electrical engineering, electronic engineering, information engineering ,Point estimation ,Electrical and Electronic Engineering ,Electrical Engineering and Systems Science - Signal Processing ,Cluster analysis ,Channel models ,Data models ,Estimator ,Kernel methods ,Computational modeling ,Kernel ,Calibration ,Approximate Bayesian computation (ABC) ,Probability distribution ,Likelihood-free inference ,Approximate Bayesian computation ,Likelihood function ,Algorithm ,Multipath propagation - Abstract
Calibrating stochastic radio channel models to new measurement data is challenging when the likelihood function is intractable. The standard approach to this problem involves sophisticated algorithms for extraction and clustering of multipath components, following which point estimates of the model parameters can be obtained using specialized estimators. We propose a likelihood-free calibration method using approximate Bayesian computation. The method is based on the maximum mean discrepancy, which is a notion of distance between probability distributions. Our method not only bypasses the need to implement any high-resolution or clustering algorithm, but is also automatic in that it does not require any additional input or manual pre-processing from the user. It also has the advantage of returning an entire posterior distribution on the value of the parameters, rather than a simple point estimate. We evaluate the performance of the proposed method by fitting two different stochastic channel models, namely the Saleh-Valenzuela model and the propagation graph model, to both simulated and measured data. The proposed method estimates the parameters of both models accurately in simulations, as well as when applied to 60 GHz indoor measurement data.
- Published
- 2022
- Full Text
- View/download PDF
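The calibration method in entry 202 scores candidate model parameters by the maximum mean discrepancy (MMD) between simulated and measured data. Below is a minimal sketch of a biased MMD² estimate with a Gaussian kernel; the kernel choice, bandwidth, and data are illustrative assumptions, not details from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and y
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2_biased(x, y, bandwidth=1.0):
    # Biased estimate of squared MMD between samples x ~ P and y ~ Q
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

# Toy usage: compare data simulated at a candidate parameter against
# "measured" data
rng = np.random.default_rng(0)
simulated = rng.normal(0.0, 1.0, size=(200, 1))
measured = rng.normal(0.5, 1.0, size=(200, 1))
print(mmd2_biased(simulated, measured))
```

Within an ABC loop, parameter proposals whose simulated output yields a small MMD to the measured data would be kept or up-weighted, producing the approximate posterior the abstract describes.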
203. A New Method For Point Estimating Parameters Of Simple Regression
- Author
-
Boris Nikolaevich Kazakov and Andrei Vyacheslavovich Mikheev
- Subjects
simple regression ,point estimation ,method of least squares ,two-exponential luminescence decay ,boiling point of water ,electrical resistivity ,Bloch–Gruneisen function ,Applied mathematics. Quantitative methods ,T57-57.97 ,Mathematics ,QA1-939 - Abstract
A new method is described for finding the parameters of a univariate regression model: the greatest cosine method. Implementation of the method involves dividing the regression model parameters into two groups. The first group, the parameters responsible for the angle between the experimental data vector and the regression model vector, is determined by maximizing the cosine of the angle between these vectors. The second group includes the scale factor, which is determined by "straightening" the relationship between the experimental data vector and the regression model vector. The interrelation of the greatest cosine method with the method of least squares is examined. The efficiency of the method is illustrated by examples.
- Published
- 2014
- Full Text
- View/download PDF
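Entry 203's greatest cosine method splits the parameters into an "angle" group, fitted by maximizing the cosine between the data vector and the model vector, and a scale factor fitted afterwards. A toy sketch under assumptions follows: a hypothetical one-parameter exponential model, with the scale factor recovered by a least-squares projection, which may differ from the paper's "straightening" construction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Hypothetical data from a one-parameter decay model y = a * exp(-k * x)
x = np.linspace(0.0, 5.0, 50)
y = 3.0 * np.exp(-0.8 * x) + rng.normal(0.0, 0.05, x.size)

def cosine(k):
    # Cosine of the angle between the data vector and the model vector
    f = np.exp(-k * x)
    return (y @ f) / (np.linalg.norm(y) * np.linalg.norm(f))

# Step 1: the "angle" parameter k maximizes the cosine
k_hat = minimize_scalar(lambda k: -cosine(k), bounds=(0.01, 5.0),
                        method="bounded").x

# Step 2: the scale factor, here recovered by projecting y onto the model vector
f = np.exp(-k_hat * x)
a_hat = (y @ f) / (f @ f)
print(k_hat, a_hat)   # close to 0.8 and 3.0
```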
204. Estimation and testing procedures for the reliability functions of a general class of distributions.
- Author
-
Chaturvedi, Ajit and Kumari, Taruna
- Subjects
- *
DISTRIBUTION (Probability theory) , *STATISTICAL reliability , *FIX-point estimation , *MONTE Carlo method , *MINIMUM variance estimation - Abstract
We consider here the general class of distributions proposed by Sankaran and Gupta (2005), focusing on two measures of reliability, R(t) = P(X > t) and P = P(X > Y). We develop point estimation for R(t) and P, deriving uniformly minimum variance unbiased estimators (UMVUEs). We then derive testing procedures for hypotheses related to different parametric functions. Finally, we compare the results using the Monte Carlo simulation method and illustrate the procedure using a real data set. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
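The two reliability measures in entry 204, R(t) = P(X > t) and P = P(X > Y), can be approximated by plain Monte Carlo, which is also how such estimators are typically benchmarked. A toy sketch with exponential lifetimes (the distributions and parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical lifetimes: X ~ Exponential(rate 0.5), Y ~ Exponential(rate 1.0)
x = rng.exponential(scale=2.0, size=100_000)
y = rng.exponential(scale=1.0, size=100_000)

t = 1.5
R_t = (x > t).mean()   # R(t) = P(X > t); exact value exp(-0.75) ~= 0.4724
P = (x > y).mean()     # P = P(X > Y) = 1.0 / (0.5 + 1.0) ~= 0.6667 here
print(R_t, P)
```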
205. Point estimation in adaptive enrichment designs.
- Author
-
Kunzmann, Kevin, Benner, Laura, and Kieser, Meinhard
- Abstract
Adaptive enrichment designs are an attractive option for clinical trials that aim at demonstrating efficacy of therapies, which may show different benefit for the full patient population and a prespecified subgroup. In these designs, based on interim data, either the subgroup or the full population is selected for further exploration. When selection is based on efficacy data, this introduces bias to the commonly used maximum likelihood estimator. For the situation of two-stage designs with a single prespecified subgroup, we present six alternative estimators and investigate their performance in a simulation study. The most consistent reduction of bias over the range of scenarios considered was achieved by a method combining the uniformly minimum variance conditionally unbiased estimator with a conditional moment estimator. Application of the methods is illustrated by a clinical trial example. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
206. Estimation and testing procedures for the reliability functions of a family of lifetime distributions based on records.
- Author
-
Chaturvedi, Ajit and Malhotra, Ananya
- Abstract
A family of lifetime distributions is considered, along with two measures of reliability, R(t) = P(X > t) and P = P(X > Y). Point estimation and testing procedures are developed for R(t) and P based on records. Two types of point estimators are developed: uniformly minimum variance unbiased estimators and maximum likelihood estimators. A comparative study of the different methods of estimation is carried out through simulation studies. Testing procedures are developed for hypotheses related to different parametric functions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
207. Progressively Type-I interval censored competing risks data for the proportional hazards family.
- Author
-
Ahmadi, K., Yousefzadeh, F., and Rezaei, M.
- Subjects
- *
PROPORTIONAL hazards models , *ESTIMATION theory , *PREDICTION models , *MAXIMUM likelihood statistics , *APPROXIMATION theory - Abstract
In this article, we consider some problems of estimation and prediction when progressively Type-I interval censored competing risks data come from the proportional hazards family. The maximum likelihood estimators of the unknown parameters are obtained. Based on gamma priors, Lindley's approximation and importance sampling methods are applied to obtain Bayesian estimators under squared error and linear-exponential loss functions. Several classical and Bayesian point predictors of censored units are provided. Acceptance sampling plans based on given producer's and consumer's risks are also considered. Finally, a Monte Carlo simulation study is carried out to evaluate the performance of the different methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
208. Application of Enhanced Point Estimators to a Sample of In Vivo CT-derived Facial Soft Tissue Thicknesses.
- Author
-
Parks, Connie L., Kyllonen, Kelsey M., and Monson, Keith L.
- Subjects
- *
FACIAL anatomy , *TISSUE physiology , *FACIAL reconstruction (Anthropology) , *FIX-point estimation , *COMPUTED tomography - Abstract
Facial approximations based on facial soft tissue depth measurement tables often utilize the arithmetic mean as a central tendency estimator. Stephan et al. (J Forensic Sci 2013;58:1439) suggest that the shorth and 75-shormax statistics are better suited to describe the central tendency of non-normal soft tissue depth data, while also accommodating normal distributions. The shorth, 75-shormax, arithmetic mean, and other central tendency estimators were evaluated using a CT-derived facial soft tissue depth dataset. Differences between arithmetic mean and shorth mean for the tissue depths examined ranged from 0 mm to +2.3 mm (average 0.6 mm). Differences between the arithmetic mean plus one standard deviation (to approximate the same data points covered by the 75-shormax) and 75-shormax values ranged from −0.8 mm to +0.7 mm (average 0.2 mm). The results of this research suggest that few practical differences exist across the central tendency point estimators for the evaluated soft tissue depth dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
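The shorth statistic compared in entry 208 is the mean of the shortest interval containing at least half of the observations, which makes it robust to the right skew typical of soft tissue depth data. A sketch follows; the half-sample size convention and the example depths are assumptions for illustration.

```python
import numpy as np

def shorth_mean(values):
    # Mean of the "shorth": the shortest interval containing at least
    # half of the data (half-sample size convention is an assumption here)
    s = np.sort(np.asarray(values, dtype=float))
    n = s.size
    h = (n + 1) // 2
    widths = s[h - 1:] - s[: n - h + 1]   # width of every contiguous half-sample
    i = int(np.argmin(widths))            # start of the shortest one
    return s[i : i + h].mean()

# Hypothetical soft tissue depths in mm; the right-skewed tail pulls the
# arithmetic mean above the shorth mean, as in the study's comparisons
depths = [3.1, 3.4, 3.5, 3.6, 3.8, 4.0, 5.5, 6.2]
print(shorth_mean(depths), np.mean(depths))   # 3.575 vs 4.1375
```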
209. Markov-modulated multivariate linear regression.
- Author
-
ANDRONOV, ALEXANDER
- Subjects
- *
MARKOV processes , *PARAMETER estimation , *REGRESSION analysis - Abstract
The article concerns parameter estimation for the Markov-modulated multivariate linear regression model. It is supposed that the parameters of the linear regression depend on the states of a random environment. The latter is described as a continuous-time homogeneous irreducible Markov chain with known parameters. A procedure for estimating the regression parameters is established. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
210. To understand a meta‐analysis, best read the fine print
- Author
-
Kevin L. Greason
- Subjects
Pulmonary and Respiratory Medicine ,business.industry ,media_common.quotation_subject ,education ,Odds ratio ,Confidence interval ,Meta-Analysis as Topic ,Fine print ,Reading (process) ,Meta-analysis ,Statistics ,Humans ,Medicine ,Surgery ,Quality (business) ,p-value ,Point estimation ,Cardiology and Cardiovascular Medicine ,business ,media_common - Abstract
The results of a meta-analysis are more than just the reported odds ratio, 95% confidence interval (CI), and p value. Of equal importance is the fine print of the study, which should include an assessment of the risk of bias, the certainty of the evidence, and the heterogeneity in the individual point estimates and CIs. These areas all influence the quality of the data in the analysis. Reading and understanding the fine print is important.
- Published
- 2021
- Full Text
- View/download PDF
211. Modified information criterion for regular change point models based on confidence distribution
- Author
-
Wei Ning and Suthakaran Ratnasingam
- Subjects
Statistics and Probability ,Estimation ,Large class ,010501 environmental sciences ,01 natural sciences ,010104 statistics & probability ,Statistics ,Confidence distribution ,Test statistic ,Point (geometry) ,Point estimation ,0101 mathematics ,Statistics, Probability and Uncertainty ,0105 earth and related environmental sciences ,General Environmental Science ,Parametric statistics ,Weibull distribution ,Mathematics - Abstract
In this article, we propose procedures based on the modified information criterion and the confidence distribution for detecting and estimating changes in a three-parameter Weibull distribution. Corresponding asymptotic results for the test statistic associated with the detection procedure are established. Moreover, instead of only providing point estimates of change locations, the proposed estimation procedure provides confidence sets for change locations at a given significance level through the confidence distribution. In general, the proposed procedures are valid for a large class of parametric distributions provided the Wald conditions and certain regularity conditions are satisfied. Simulations are conducted to investigate the performance of the proposed method in terms of power, coverage probabilities and average lengths of confidence sets for a three-parameter Weibull distribution. Comparisons are also made with other existing methods to indicate the advantages of the proposed method. Rainfall data are used to illustrate the application of the proposed method.
- Published
- 2021
- Full Text
- View/download PDF
212. Maximum likelihood estimation of the change point in stationary state of auto regressive moving average (ARMA) models, using SVD-based smoothing
- Author
-
Raza Sheikhrabori and Majid Aminnayeri
- Subjects
Statistics and Probability ,021103 operations research ,Series (mathematics) ,0211 other engineering and technologies ,02 engineering and technology ,01 natural sciences ,010104 statistics & probability ,Autoregressive model ,Moving average ,Singular value decomposition ,Econometrics ,Stock market ,Point (geometry) ,Point estimation ,0101 mathematics ,Smoothing ,Mathematics - Abstract
The change point estimation concept is usually useful in time series models. This concept helps to decrease the decision making or production costs by monitoring the stock market and production lin...
- Published
- 2021
- Full Text
- View/download PDF
213. Small‐sample inference for cluster‐based outcome‐dependent sampling schemes in resource‐limited settings: Investigating low birthweight in Rwanda
- Author
-
Bethany Hedt-Gauthier, Claudia Rivera-Rodriguez, Sara M. Sauer, and Sebastien Haneuse
- Subjects
Statistics and Probability ,Sample (statistics) ,Marginal model ,01 natural sciences ,Article ,General Biochemistry, Genetics and Molecular Biology ,010104 statistics & probability ,03 medical and health sciences ,Bias ,Risk Factors ,Statistics ,Birth Weight ,Humans ,Computer Simulation ,Point estimation ,0101 mathematics ,030304 developmental biology ,Mathematics ,0303 health sciences ,General Immunology and Microbiology ,Applied Mathematics ,Inverse probability weighting ,Infant, Newborn ,Rwanda ,Sampling (statistics) ,Estimator ,General Medicine ,Delta method ,Standard error ,General Agricultural and Biological Sciences - Abstract
The neonatal mortality rate in Rwanda remains above the United Nations Sustainable Development Goal 3 target of 12 deaths per 1,000 live births. As part of a larger effort to reduce preventable neonatal deaths in the country, we conducted a study to examine risk factors for low birthweight. The data was collected via a cost-efficient cluster-based outcome-dependent sampling scheme wherein clusters of individuals (health centers) were selected on the basis of, in part, the outcome rate of the individuals. For a given dataset collected via a cluster-based outcome-dependent sampling scheme, estimation for a marginal model may proceed via inverse-probability-weighted generalized estimating equations, where the cluster-specific weights are the inverse probability of the health center's inclusion in the sample. In this paper, we provide a detailed treatment of the asymptotic properties of this estimator, together with an explicit expression for the asymptotic variance and a corresponding estimator. Furthermore, motivated by the study we conducted in Rwanda, we propose a number of small-sample bias corrections to both the point estimates and the standard error estimates. Through simulation, we show that applying these corrections when the number of clusters is small generally reduces the bias in the point estimates, and results in closer to nominal coverage. The proposed methods are applied to data from 18 health centers and 1 district hospital in Rwanda.
- Published
- 2021
- Full Text
- View/download PDF
214. Quantifying Effect Sizes in Randomised and Controlled Trials: A Review
- Author
-
Patrick O Erah, Kehinde A. Ganiyu, and Shakirat O. Bello
- Subjects
Measure (data warehouse) ,Computer science ,business.industry ,05 social sciences ,050301 education ,Publication bias ,Confidence interval ,Clinical trial ,Correlation ,03 medical and health sciences ,0302 clinical medicine ,Software ,Meta-analysis ,Statistics ,030212 general & internal medicine ,Point estimation ,business ,0503 education - Abstract
Meta-analysis aggregates quantitative outcomes from multiple scientific studies to produce comparable effect sizes. The resultant integration of useful information leads to a statistical estimate with higher power and a more reliable point estimate than the measure derived from any individual study. Effect sizes are usually estimated using mean differences between the outcomes of treatment and control groups in experimental studies. Although various software packages exist for the calculations in meta-analysis, understanding how the calculations are done can be useful to many researchers, particularly where the values reported in the literature are not in a form accepted by the software available to the researcher. For this paper, a search was conducted online, primarily using Google and PubMed, to retrieve relevant articles on the different methods of calculating effect sizes and the associated confidence intervals, effect size correlations, p values and I²; how to evaluate heterogeneity and publication bias is also presented.
- Published
- 2021
- Full Text
- View/download PDF
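As a concrete instance of the mean-difference effect sizes reviewed in entry 214, the sketch below computes Cohen's d with an approximate 95% confidence interval, using the standard pooled-standard-deviation and large-sample standard-error formulas; the data are simulated for illustration.

```python
import numpy as np

def cohens_d(treatment, control):
    # Standardized mean difference with an approximate 95% CI
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    n1, n2 = t.size, c.size
    pooled_sd = np.sqrt(((n1 - 1) * t.var(ddof=1) + (n2 - 1) * c.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (t.mean() - c.mean()) / pooled_sd
    # Large-sample standard error of d
    se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Toy usage with simulated trial arms (true standardized difference = 0.5)
rng = np.random.default_rng(7)
d, ci = cohens_d(rng.normal(1.0, 2.0, 40), rng.normal(0.0, 2.0, 40))
print(d, ci)
```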
215. WattScale
- Author
-
Srinivasan Iyengar, Benjamin Weil, Stephen Lee, David Irwin, and Prashant Shenoy
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,education.field_of_study ,Ground truth ,Operations research ,Computer science ,020209 energy ,Population ,Mode (statistics) ,020207 software engineering ,02 engineering and technology ,General Medicine ,Machine Learning (cs.LG) ,Footprint ,Computer Science - Computers and Society ,Computers and Society (cs.CY) ,0202 electrical engineering, electronic engineering, information engineering ,Point estimation ,Inefficiency ,education ,Building envelope ,Efficient energy use - Abstract
Buildings consume over 40% of the total energy in modern societies, and improving their energy efficiency can significantly reduce our energy footprint. In this paper, we present WattScale, a data-driven approach to identify the least energy-efficient buildings from a large population of buildings in a city or a region. Unlike previous methods such as least-squares that use point estimates, WattScale uses Bayesian inference to capture the stochasticity in the daily energy usage by estimating the distribution of parameters that affect a building. Further, it compares them with similar homes in a given population. WattScale also incorporates a fault detection algorithm to identify the underlying causes of energy inefficiency. We validate our approach using ground truth data from different geographical locations, which showcases its applicability in various settings. WattScale has two execution modes, (i) individual and (ii) region-based, which we highlight using two case studies. For the individual execution mode, we present results from a city containing more than 10,000 buildings and show that more than half of the buildings are inefficient in one way or another, indicating significant potential for energy improvement measures. Additionally, we provide probable causes of inefficiency and find that 41%, 23.73%, and 0.51% of homes have a poor building envelope, heating faults, and cooling system faults, respectively. For the region-based execution mode, we show that WattScale can be extended to millions of homes in the US due to the recent availability of representative energy datasets. This paper appeared in ACM Transactions on Data Science.
- Published
- 2021
- Full Text
- View/download PDF
216. Change point estimation in regression model with response missing at random
- Author
-
Hong-Bing Zhou and Han-Ying Liang
- Subjects
Statistics and Probability ,Statistics::Theory ,021103 operations research ,0211 other engineering and technologies ,Asymptotic distribution ,Estimator ,Regression analysis ,02 engineering and technology ,Missing data ,01 natural sciences ,Nonparametric regression ,010104 statistics & probability ,Compound Poisson process ,Kernel smoother ,Statistics::Methodology ,Applied mathematics ,Point estimation ,0101 mathematics ,Mathematics - Abstract
Based on the approach of left and right kernel smoothing with unilateral kernel function, we, in this paper, define estimators of change point and jump size in nonparametric regression model with r...
- Published
- 2021
- Full Text
- View/download PDF
217. Parametric uncertainty analysis on hydrodynamic coefficients in groundwater numerical models using Monte Carlo method and RPEM
- Author
-
Saman Javadi, Abbas Roozbahani, Kourosh Mohammadi, and Maryam Sadat Kahe
- Subjects
Economics and Econometrics ,geography ,geography.geographical_feature_category ,Geography, Planning and Development ,Monte Carlo method ,0211 other engineering and technologies ,Aquifer ,Soil science ,02 engineering and technology ,010501 environmental sciences ,Management, Monitoring, Policy and Law ,01 natural sciences ,Hydraulic conductivity ,Environmental science ,021108 energy ,Point estimation ,Groundwater model ,Uncertainty analysis ,Groundwater ,0105 earth and related environmental sciences ,Parametric statistics - Abstract
Groundwater resources are the only source of water in many arid and semi-arid regions, and managing these resources is essential for sustainable development. However, many factors influence the accuracy of the results in groundwater modeling. In this research, the uncertainty of two important groundwater model parameters (hydraulic conductivity and specific yield) was considered as the main source of uncertainty in estimating the water level in an unconfined aquifer in Iran. For this purpose, a simple method called the Rosenblueth Point Estimate Method (RPEM) was used to assess parametric uncertainty in groundwater modeling, and its performance was compared with the Monte Carlo method, a very complicated and time-consuming alternative. Based on the calibrated values of hydraulic conductivity and specific yield, several uncertainty intervals were considered. The results showed that the optimum interval for hydraulic conductivity was a 40% increase to 30% decrease of the calibrated values in both the Monte Carlo and RPEM methods; for specific yield, it was a 200% increase to 90% decrease of the calibrated values. According to the evaluation indices, RPEM performed better than the Monte Carlo method for both hydraulic conductivity and specific yield, with 43% and 17% higher index values, respectively. These results can be used in groundwater management and future prediction of groundwater levels.
- Published
- 2021
- Full Text
- View/download PDF
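The Rosenblueth Point Estimate Method used in entry 217 replaces random sampling with a small deterministic design: for n independent, symmetrically distributed inputs, the model is evaluated at the 2^n points mu_i ± sigma_i, each weighted 2^-n. A sketch for the uncorrelated symmetric case follows; the response function and numbers are hypothetical, not the study's groundwater model.

```python
import numpy as np
from itertools import product

def rosenblueth_pem(g, means, stds):
    # Rosenblueth's two-point estimate method for independent, symmetrically
    # distributed inputs: evaluate g at the 2^n points mu_i +/- sigma_i,
    # each with weight 2^-n, then form the approximate mean and std of g(X)
    n = len(means)
    outputs = np.array([
        g([m + s * e for m, s, e in zip(means, stds, signs)])
        for signs in product((-1.0, 1.0), repeat=n)
    ])
    mean_y = outputs.mean()
    std_y = np.sqrt((outputs ** 2).mean() - mean_y ** 2)
    return mean_y, std_y

# Hypothetical head response as a function of hydraulic conductivity K and
# specific yield Sy (both the function and the numbers are illustrative)
head = lambda p: 100.0 / (p[0] * p[1])
print(rosenblueth_pem(head, means=[10.0, 0.2], stds=[3.0, 0.05]))
```

This is why RPEM is so much cheaper than Monte Carlo in this setting: two uncertain parameters require only four model runs per uncertainty interval.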
218. A Pedagogical Note on Asymptotic Normality of a Two-Sample Approximate Pivot for Comparing Means
- Author
-
Nitis Mukhopadhyay
- Subjects
021103 operations research ,Location test ,0211 other engineering and technologies ,Asymptotic distribution ,02 engineering and technology ,General Medicine ,01 natural sciences ,Behrens–Fisher problem ,010104 statistics & probability ,Distribution (mathematics) ,Applied mathematics ,Point estimation ,0101 mathematics ,Standard normal table ,Slutsky's theorem ,Mathematics ,Central limit theorem - Abstract
A two-sample pivot for comparing the means from independent populations is well known. For large sample sizes, the distribution of the pivot is routinely approximated by a standard normal distribution. The question is about the thinking process that may guide one to rationalize invoking the asymptotic theory. In this pedagogical piece, we put forward soft statistical arguments to make users feel more at ease by suitably indexing the sample sizes from a practical standpoint that would allow a valid interpretation and understanding of pointwise convergence of the pivot's cumulative distribution function (c.d.f.) to the c.d.f. of a standard normal random variable.
- Published
- 2021
- Full Text
- View/download PDF
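For reference, the two-sample approximate pivot discussed in entry 218 has, in common textbook notation (our reconstruction, not the author's exact display), the form

```latex
T_{m,n}
  = \frac{(\bar{X}_m - \bar{Y}_n) - (\mu_1 - \mu_2)}
         {\sqrt{S_{1,m}^{2}/m + S_{2,n}^{2}/n}}
  \;\xrightarrow{d}\; N(0,1)
  \qquad \text{as } m, n \to \infty,
```

where the convergence follows from the central limit theorem applied to each sample mean together with Slutsky's theorem, since each sample variance converges in probability to its population counterpart; this pointwise convergence of the pivot's c.d.f. is exactly what the note interprets.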
219. Study on Agile Story Point Estimation Techniques and Challenges
- Author
-
Manmohan Sharma and Ravi Kiran Mallidi
- Subjects
business.industry ,Computer science ,Point estimation ,Software engineering ,business ,Agile software development - Published
- 2021
- Full Text
- View/download PDF
220. Testing Moderation in Business and Psychological Studies with Latent Moderated Structural Equations
- Author
-
Rebecca S. Lau, Gordon W. Cheung, Helena D. Cooper-Thomas, and Linda C. Wang
- Subjects
business.industry ,05 social sciences ,050109 social psychology ,Latent variable ,Moderation ,Machine learning ,computer.software_genre ,Interaction ,General Business, Management and Accounting ,Structural equation modeling ,Regression ,0502 economics and business ,Linear regression ,0501 psychology and cognitive sciences ,Industrial and organizational psychology ,Point estimation ,Artificial intelligence ,Business and International Management ,Psychology ,business ,computer ,050203 business & management ,General Psychology ,Applied Psychology - Abstract
Most organizational researchers understand the detrimental effects of measurement errors in testing relationships among latent variables and hence adopt structural equation modeling (SEM) to control for measurement errors. Nonetheless, many of them revert to regression-based approaches, such as moderated multiple regression (MMR), when testing for moderating and other nonlinear effects. The predominance of MMR is likely due to the limited evidence showing the superiority of latent interaction approaches over regression-based approaches combined with the previous complicated procedures for testing latent interactions. In this teaching note, we first briefly explain the latent moderated structural equations (LMS) approach, which estimates latent interaction effects while controlling for measurement errors. Then we explain the reliability-corrected single-indicator LMS (RCSLMS) approach to testing latent interactions with summated scales and correcting for measurement errors, yielding results which approximate those from LMS. Next, we report simulation results illustrating that LMS and RCSLMS outperform MMR in terms of accuracy of point estimates and confidence intervals for interaction effects under various conditions. Then, we show how LMS and RCSLMS can be implemented with Mplus, providing an example-based tutorial to demonstrate a 4-step procedure for testing a range of latent interactions, as well as the decisions at each step. Finally, we conclude with answers to some frequently asked questions when testing latent interactions. As supplementary files to support researchers, we provide a narrated PowerPoint presentation, all Mplus syntax and output files, data sets for numerical examples, and Excel files for conducting the loglikelihood values difference test and plotting the latent interaction effects.
- Published
- 2021
- Full Text
- View/download PDF
221. Stochastic finite element method based on point estimate and Karhunen–Loève expansion
- Author
-
Zhipeng Lai, Wangbao Zhou, Ping Xiang, Lizhong Jiang, Xiang Liu, and Yulin Feng
- Subjects
Karhunen–Loève theorem ,Stochastic field ,Discretization ,Mechanical Engineering ,Random response ,Structure (category theory) ,02 engineering and technology ,01 natural sciences ,Finite element method ,020303 mechanical engineering & transports ,0203 mechanical engineering ,0103 physical sciences ,Applied mathematics ,Point estimation ,010301 acoustics ,Stochastic finite element method ,Mathematics - Abstract
The present study proposes a new stochastic finite element method. The Karhunen–Loève expansion is utilized to discretize the stochastic field, while the point estimate method is applied for calculating the random response of the structure. Two illustrative examples, including finite element models with one-dimensional and two-dimensional stochastic fields, are investigated to demonstrate the accuracy and efficiency of the proposed method. Furthermore, two classical finite element analysis methods are used to validate the results. It is proved that the proposed method can model both the one-dimensional and the two-dimensional stochastic finite element problems accurately and efficiently.
- Published
- 2021
- Full Text
- View/download PDF
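The discretized form of the Karhunen–Loève expansion used in entry 221 reduces a correlated random field to a few independent standard normal weights. A sketch for a one-dimensional field with exponential covariance is below; the grid, covariance kernel, and truncation order are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1-D stochastic field on [0, 1] with exponential covariance
n, corr_len, sigma = 200, 0.3, 1.0
x = np.linspace(0.0, 1.0, n)
cov = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete Karhunen-Loeve expansion: eigendecompose the covariance matrix and
# keep the leading modes (clipping guards against tiny negative eigenvalues)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals = np.clip(eigvals[order], 0.0, None)
eigvecs = eigvecs[:, order]

m = 10                                   # truncation order
rng = np.random.default_rng(3)
xi = rng.standard_normal(m)              # independent standard normal weights
realization = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)  # zero-mean sample

# In a stochastic FEM, `realization` would enter a finite element model as a
# spatially varying material property; the point estimate method would then
# pick deterministic xi-points instead of random draws.
```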
222. Does social participation improve cognitive abilities of the elderly?
- Author
-
Shu Cai
- Subjects
Economics and Econometrics ,Social activity ,Cognitive score ,05 social sciences ,Cognition ,Community service ,Social engagement ,Standard deviation ,Developmental psychology ,Variation (linguistics) ,0502 economics and business ,Point estimation ,Effects of sleep deprivation on cognitive performance ,050207 economics ,Psychology ,050205 econometrics ,Demography ,Social policy - Abstract
This paper examines the effect of social participation on cognitive performance using data from a longitudinal survey of Chinese elderly. It addresses the problem of endogenous participation by exploiting variation in changes of social participation that is driven by changes in community services for social activities. The results show that participating in social activity has significantly positive impacts on the cognitive function of the elderly. The point estimates indicate that cognitive scores increase by 29% of a standard deviation as a result of engaging in social activity. I also investigate the heterogeneity and potential mechanisms of the effect.
- Published
- 2021
- Full Text
- View/download PDF
223. Bayesian Constrained Energy Minimization for Hyperspectral Target Detection
- Author
-
Zhenwei Shi, Rui Zhao, Xinzhong Zhu, Jing Zhang, and Ning Zhang
- Subjects
Atmospheric Science ,Pixel ,Computer science ,business.industry ,QC801-809 ,Feature extraction ,Posterior probability ,Bayesian probability ,Geophysics. Cosmic physics ,Hyperspectral imaging ,Pattern recognition ,distributional estimate ,Bayesian ,Dirichlet distribution ,Object detection ,Ocean engineering ,symbols.namesake ,symbols ,Point estimation ,Artificial intelligence ,Computers in Earth Sciences ,hyperspectral target detection (HTD) ,business ,TC1501-1800 - Abstract
For better performance of hyperspectral target detectors, the prior target spectrum is expected to be accurate and consistent with the target spectrum in the hyperspectral image to be detected. The existing hyperspectral target detection algorithms usually assume that the prior target spectrum is highly reliable. However, the label obtained is not always precise in practice, and pixels of the same object may have quite different spectra. Since it is hard to acquire a highly reliable prior target spectrum in some application scenarios, we propose a Bayesian constrained energy minimization (B-CEM) method for hyperspectral target detection. Instead of the point estimation of the target spectrum, we infer the posterior distribution of the true target spectrum based on the prior target spectrum. Specifically, considering the characteristics of hyperspectral image and target detection task, we adopt the Dirichlet distribution to approximate the true target spectrum. Experimental results on three datasets demonstrate the effectiveness of the proposed B-CEM when the known target spectrum is noisy or inconsistent with the true target spectrum. The necessity to approximate the true target spectrum is also proved. Generally, the distributional estimate achieves better performance than using the known target spectrum directly.
- Published
- 2021
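Entry 223 builds on the classical constrained energy minimization (CEM) detector, whose filter w = R⁻¹d / (dᵀR⁻¹d) fixes the response to the prior target spectrum d at one while minimizing the average output energy. The sketch below implements this classical baseline, not the paper's Bayesian variant (which replaces the fixed d with a Dirichlet-based distributional estimate); the data shapes and names are illustrative.

```python
import numpy as np

def cem_detector(X, d):
    # Classical CEM: filter w = R^{-1} d / (d^T R^{-1} d), where R is the
    # sample correlation matrix; the output w^T x equals 1 on the target d
    R = X.T @ X / X.shape[0]
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)
    return X @ w

# Toy usage: random "image" of 1000 pixels x 50 bands, hypothetical target d
rng = np.random.default_rng(0)
X = rng.random((1000, 50))
d = rng.random(50)
scores = cem_detector(X, d)   # one detection score per pixel
```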
224. Common ownership and competition in product markets
- Author
-
Shawn Thomas, Andrew Koch, and Marios A. Panayides
- Subjects
040101 forestry ,Economics and Econometrics ,Industry classification ,050208 finance ,Product market ,Strategy and Management ,05 social sciences ,Common ownership ,04 agricultural and veterinary sciences ,Microeconomics ,Competition (economics) ,Specification ,Accounting ,0502 economics and business ,Economics ,0401 agriculture, forestry, and fisheries ,Profitability index ,Point estimation ,Proxy (statistics) ,Finance - Abstract
We investigate the relation between common institutional ownership of the firms in an industry and product market competition. We find that common ownership is neither robustly positively related with industry profitability or output prices nor is it robustly negatively related with measures of nonprice competition, as would be expected if common ownership reduces competition. This conclusion holds regardless of industry classification choice, common ownership measure, profitability measure, nonprice competition proxy, or model specification. Our point estimates are close to zero with tight bounds, rejecting even modestly sized economic effects. We conclude that antitrust restrictions seeking to limit intra-industry common ownership are not currently warranted.
- Published
- 2021
- Full Text
- View/download PDF
225. Limit Distribution of the φ-Divergence Based Change Point Estimator
- Author
-
Peter N. Mwita, Mwelu Susan, Charity Wamwea, and Anthony Waititu
- Subjects
Stochastic process ,Homogeneity (statistics) ,Applied mathematics ,Estimator ,Asymptotic distribution ,Point estimation ,Brownian bridge ,Divergence (statistics) ,Asymptotic theory (statistics) ,Mathematics - Abstract
The assumption of stationarity is too restrictive, especially for long time series. This paper studies the change point problem through a change point estimator based on the φ-divergence, which provides a rich set of distance-like measures between pairs of distributions. The change point problem is considered in the following sub-fields: divergence estimation, testing for homogeneity between two samples, and estimating the time of change. The asymptotic distribution of the change point estimator is obtained as the limiting distribution of a stochastic process within given bounds, using asymptotic likelihood theory. The distribution is found to converge to that of a standardized Brownian bridge process.
- Published
- 2021
- Full Text
- View/download PDF
226. BDNNSurv: Bayesian Deep Neural Networks for Survival Analysis Using Pseudo Values
- Author
-
Lili Zhao and Dai Feng
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Artificial neural network ,business.industry ,Computer science ,Deep learning ,Bayesian probability ,Python (programming language) ,Machine learning ,computer.software_genre ,Machine Learning (cs.LG) ,Methodology (stat.ME) ,Code (cryptography) ,Point (geometry) ,Artificial intelligence ,Point estimation ,business ,computer ,Statistics - Methodology ,Survival analysis ,computer.programming_language - Abstract
There has been increasing interest in modeling survival data using deep learning methods in medical research. In this paper, we propose a Bayesian hierarchical deep neural network model for the modeling and prediction of survival data. Compared with previously studied methods, the new proposal can provide not only a point estimate of survival probability but also a quantification of the corresponding uncertainty, which can be of crucial importance in predictive modeling and subsequent decision making. The favorable statistical properties of the point and uncertainty estimates were demonstrated by simulation studies and real data analysis. Python code implementing the proposed approach is provided.
- Published
- 2021
- Full Text
- View/download PDF
227. Stochastic Gradient-Based Distributed Bayesian Estimation in Cooperative Sensor Networks
- Author
-
Priyadip Ray, Hao Chen, Anton Yen, Jose Cadena, Deepak Rajan, Ryan Goldhahn, and Braden Soper
- Subjects
Bayes estimator ,Distributed database ,Computer science ,Distributed computing ,020206 networking & telecommunications ,02 engineering and technology ,Bayesian inference ,Distributed algorithm ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Point estimation ,Electrical and Electronic Engineering ,Langevin dynamics ,Wireless sensor network - Abstract
Distributed Bayesian inference provides a full quantification of uncertainty offering numerous advantages over point estimates that autonomous sensor networks are able to exploit. However, fully-decentralized Bayesian inference often requires large communication overheads and low network latency, resources that are not typically available in practical applications. In this paper, we propose a decentralized Bayesian inference approach based on stochastic gradient Langevin dynamics, which produces full posterior distributions at each of the nodes with significantly lower communication overhead. We provide analytical results on convergence of the proposed distributed algorithm to the centralized posterior, under typical network constraints. We also provide extensive simulation results to demonstrate the validity of the proposed approach.
- Published
- 2021
- Full Text
- View/download PDF
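The sampler behind entry 227 is stochastic gradient Langevin dynamics, whose single-node update perturbs a gradient step on the log-posterior with Gaussian noise scaled to the step size. A toy sketch follows; the distributed, communication-limited scheme of the paper is not reproduced, and the one-dimensional standard normal target is purely illustrative.

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    # One Langevin update: theta + (eps/2) * grad log p(theta | data) + N(0, eps)
    noise = rng.standard_normal(theta.shape) * np.sqrt(step_size)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise

# Toy usage: sample a 1-D standard normal "posterior", where grad log p(t) = -t
rng = np.random.default_rng(0)
theta = np.zeros(1)
samples = []
for _ in range(5000):
    theta = sgld_step(theta, lambda t: -t, 1e-2, rng)
    samples.append(theta[0])
print(np.mean(samples), np.std(samples))   # roughly 0 and 1
```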
228. Convergence of a Randomised Change Point Estimator in GARCH Models
- Author
-
Joseph Mung’atu, George Awiakye-Marfo, and Patrick Weke
- Subjects
Pseudolikelihood ,Exchange rate ,Autoregressive conditional heteroskedasticity ,Econometrics ,Asymptotic distribution ,Estimator ,CUSUM ,Point estimation ,Brownian bridge ,Mathematics - Abstract
In this paper, the randomised pseudolikelihood ratio change point estimator for GARCH models in [1] is employed, and its limiting distribution is derived as the supremum of a standard Brownian bridge. Data analysis to validate the estimator is carried out using United States dollar (USD)-Ghanaian cedi (GHS) daily exchange rate data. The randomised estimator is able to detect and estimate a single change in the variance structure of the data and provides a reference point for historic data analysis.
- Published
- 2021
- Full Text
- View/download PDF
229. Probabilistic small-signal stability analysis of power systems based on Hermite polynomial approximation
- Author
-
Tabrizchi, Ali Mohammad and Rezaei, Mohammad Mahdi
- Published
- 2021
- Full Text
- View/download PDF
230. Sonification of reference markers for auditory graphs: effects on non-visual point estimation tasks
- Author
-
Oussama Metatla, Nick Bryan-Kinns, Tony Stockman, and Fiore Martin
- Subjects
Sonification ,Point estimation ,Auditory graphs ,Non-visual interaction ,Reference markers ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Research has suggested that adding contextual information such as reference markers to data sonification can improve interaction with auditory graphs. This paper presents the results of an experiment that contributes to quantifying and analysing the extent of such benefits for an integral part of interacting with graphed data: point estimation tasks. We examine three pitch-based sonification mappings, pitch-only, one-reference, and multiple-references, that we designed to provide information about distance from an origin. We assess the effects of these sonifications on users' performance when completing point estimation tasks in a between-subject experimental design against visual and speech control conditions. Results showed that the addition of reference tones increases users' accuracy, with a trade-off in task completion times, and that the multiple-references mapping is particularly effective when dealing with points positioned at the midrange of a given axis.
- Published
- 2016
- Full Text
- View/download PDF
231. Determinants of Citation in Epidemiological Studies on Phthalates
- Author
-
Bram Duyx, Lex M. Bouter, Maurice P. Zeegers, Gerard M H Swaen, Miriam J.E. Urlings, Research integrity, Epidemiology and Data Science, APH - Methodology, Maastricht University Office, RS: FHML Studio Europa Maastricht, RS: NUTRIM - R3 - Respiratory & Age-related Health, Genetica & Celbiologie, RS: FPN NPPP I, Section Neuropsychology, Complexe Genetica, and RS: CAPHRI - R5 - Optimising Patient Care
- Subjects
Male ,medicine.medical_specialty ,Health (social science) ,IMPACT ,BIOMARKERS ,Phthalic Acids ,INFANTS ,010501 environmental sciences ,Logistic regression ,01 natural sciences ,03 medical and health sciences ,Human health ,Questionable research practice ,0302 clinical medicine ,SDG 17 - Partnerships for the Goals ,Phthalates ,Citation analysis ,STATISTICALLY SIGNIFICANT ,Management of Technology and Innovation ,Epidemiology ,medicine ,Humans ,030212 general & internal medicine ,Point estimation ,REPRODUCTIVE HORMONES ,EXPOSURE ,0105 earth and related environmental sciences ,MEHP ,Original Research/Scholarship ,Actuarial science ,Impact factor ,Health Policy ,Random effects model ,Issues, ethics and legal aspects ,Epidemiologic Studies ,TRIALS ,BIAS ,Network analysis ,Journal Impact Factor ,Citation bias ,Psychology ,Citation - Abstract
Citing of previous publications is an important factor in knowledge development. Because of the great number of publications available, only a selection of studies gets cited, for varying reasons. If the selection of citations is associated with study outcome, this is called citation bias. We study the determinants of citation in a broader sense, including, for example, study design, journal impact factor and the funding source of the publication. As a case study, we assess which factors drive citation in the human literature on phthalates, specifically the metabolite mono(2-ethylhexyl) phthalate (MEHP). A systematic literature search identified all relevant publications on human health effects of MEHP. Data on potential determinants of citation were extracted in duplicate. Specialized software was used to create a citation network, including all potential citation pathways. Random-effects logistic regression was used to assess whether these determinants influence the likelihood of citation. 112 publications on MEHP were identified, with 5,684 potential citation pathways, of which 551 were actual citations. Reporting of a harmful point estimate, journal impact factor, authority of the author, a male corresponding author, research performed in North America, and self-citation were positively associated with the likelihood of being cited. In the literature on MEHP, citation is mostly driven by a number of factors that are not related to study outcome. Although the identified determinants do not necessarily give strong indications of bias, they show selective use of the published literature for a variety of reasons.
- Published
- 2020
- Full Text
- View/download PDF
232. Considering the uncertainty of hydrothermal wind and solar-based DG
- Author
-
Zhu Yishun, Yang Bo, Pan Jun, Chuangxin Guo, Nitish Chopra, Muhammad Suhail Shaikh, Muhammad Mohsin Ansari, and Huang Xurui
- Subjects
Mathematical optimization ,Computer science ,020209 energy ,02 engineering and technology ,Solar power & Wind power plant ,01 natural sciences ,Wind speed ,010305 fluids & plasmas ,Thermal ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Point estimation ,Beta distribution ,Butterfly optimization algorithm ,Solar power ,Weibull distribution ,business.industry ,Fossil fuel ,General Engineering ,Point estimate method ,Engineering (General). Civil engineering (General) ,Renewable energy ,Work (electrical) ,Hydro ,TA1-2040 ,business - Abstract
Recently, researchers have focused widely on renewable energy resources, such as wind power and solar units, to reduce the use of fossil fuels, the main source of environmental pollution. One of the main challenges with wind and solar generation is the uncertainty arising from the stochastic nature of wind speeds and solar radiation. This uncertainty must therefore be accounted for when assessing planning strategies for distribution systems so that they perform reliably. To model the uncertainty associated with wind and solar power, the point estimate method (PEM) is utilized, with Weibull and Beta distributions used to handle the uncertain input variables. The main objective of the present work is to minimize the generation cost of the system using the butterfly optimization algorithm (BOA). Four test systems are considered, and the proposed strategy is found to provide better solutions in terms of execution time and average cost-effectiveness. The simulation results show that as the penetration of renewable energy sources increases, the generation cost decreases. The results obtained with the butterfly optimization algorithm are compared with another well-known method, along with the resulting distribution of generation cost.
- Published
- 2020
233. A broader class of modified two-stage minimum risk point estimation procedures for a normal mean
- Author
-
Jun Hu and Yan Zhuang
- Subjects
Statistics and Probability ,021103 operations research ,Mean squared error ,0211 other engineering and technologies ,Minimum risk ,Sampling (statistics) ,02 engineering and technology ,01 natural sciences ,Class (biology) ,Absolute deviation ,010104 statistics & probability ,Double sampling ,Modeling and Simulation ,Statistics ,Point estimation ,Stage (hydrology) ,0101 mathematics ,Mathematics - Abstract
In this paper, we design an innovative and general class of modified two-stage sampling schemes to enhance double sampling and modified double sampling procedures. Under the squared error loss plus...
- Published
- 2020
- Full Text
- View/download PDF
234. Estimating publication bias in <scp>meta‐analyses</scp> of <scp>peer‐reviewed</scp> studies: A <scp>meta‐meta‐analysis</scp> across disciplines and journal tiers
- Author
-
Maya B. Mathur and Tyler J. VanderWeele
- Subjects
Percentile ,Databases, Factual ,media_common.quotation_subject ,Publication bias ,01 natural sciences ,Article ,Confidence interval ,Education ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Optimism ,Meta-analysis ,Statistical significance ,Statistics ,030212 general & internal medicine ,Point estimation ,0101 mathematics ,Psychology ,Publication Bias ,media_common - Abstract
Selective publication and reporting in individual papers compromise the scientific record, but are meta-analyses as compromised as their constituent studies? We systematically sampled 63 meta-analyses (each comprising at least 40 studies) in PLoS One, top medical journals, top psychology journals, and Metalab, an online, open-data database of developmental psychology meta-analyses. We empirically estimated publication bias in each, including only the peer-reviewed studies. Across all meta-analyses, we estimated that “statistically significant” results in the expected direction were only 1.17 times more likely to be published than “nonsignificant” results or those in the unexpected direction (95% CI: [0.93, 1.47]), with a confidence interval substantially overlapping the null. Comparable estimates were 0.83 for meta-analyses in PLoS One, 1.02 for top medical journals, 1.54 for top psychology journals, and 4.70 for Metalab. The severity of publication bias did differ across individual meta-analyses; in a small minority (10%; 95% CI: [2%, 21%]), publication bias appeared to favor “significant” results in the expected direction by more than threefold. We estimated that for 89% of meta-analyses, the amount of publication bias that would be required to attenuate the point estimate to the null exceeded the amount of publication bias estimated to be actually present in the vast majority of meta-analyses from the relevant scientific discipline (exceeding the 95th percentile of publication bias). Study-level measures (“statistical significance” with a point estimate in the expected direction and point estimate size) did not indicate more publication bias in higher-tier versus lower-tier journals, nor in the earliest studies published on a topic versus later studies. Overall, we conclude that the mere act of performing a meta-analysis with a large number of studies (at least 40) and that includes non-headline results may largely mitigate publication bias in meta-analyses, suggesting optimism about the validity of meta-analytic results.
- Published
- 2020
- Full Text
- View/download PDF
235. A Practical Method for Rapid Assessment of Reliability in Deep Excavation Projects
- Author
-
Arefeh Arabaninezhad and Ali Fakher
- Subjects
Computer science ,Process (engineering) ,Monte Carlo method ,0211 other engineering and technologies ,020101 civil engineering ,Excavation ,02 engineering and technology ,Geotechnical Engineering and Engineering Geology ,Civil engineering ,Displacement (vector) ,Field (computer science) ,0201 civil engineering ,Set (abstract data type) ,Point estimation ,Reliability (statistics) ,021101 geological & geomatics engineering ,Civil and Structural Engineering - Abstract
Many reliability analysis methods require a complicated mathematical process or access to comprehensive datasets. Such shortcomings limit their application to geotechnical problems. It is therefore advantageous to develop simpler reliability analysis methods that can be employed in the design of geotechnical structures. The current study suggests a simple framework for quick reliability analysis of deep excavation projects in urban areas, a common geotechnical problem. To investigate the feasibility of the presented method, five case studies were considered, with the horizontal displacement at the crest of the excavation taken as the main system response. For verification purposes, the results were compared to results from the random set, point estimate and Monte Carlo methods, which are also used for reliability analysis of geotechnical problems. All case studies were recognized as projects of high importance and were monitored during the excavation process. The field observations confirmed that the estimated probabilities of excessive deformations were reasonable for all cases. Comparison of the modeling results and field measurements suggests the feasibility of the presented method for evaluating the reliability of deep urban excavations and estimating the horizontal displacement at the crest of excavations.
- Published
- 2020
- Full Text
- View/download PDF
236. Elicitation complexity of statistical properties
- Author
-
Rafael M. Frongillo and Ian A. Kash
- Subjects
Statistics and Probability ,021103 operations research ,Applied Mathematics ,General Mathematics ,Financial risk ,0211 other engineering and technologies ,02 engineering and technology ,01 natural sciences ,Agricultural and Biological Sciences (miscellaneous) ,010104 statistics & probability ,Bayes' theorem ,General theory ,Entropy (information theory) ,Empirical risk minimization ,Point estimation ,0101 mathematics ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,Mathematical economics ,Expected loss ,Mathematics - Abstract
A property, or statistical functional, is said to be elicitable if it minimizes the expected loss for some loss function. The study of which properties are elicitable sheds light on the capabilities and limitations of point estimation and empirical risk minimization. While recent work has sought to identify which properties are elicitable, here we investigate a more nuanced question: how many dimensions are required to indirectly elicit a given property? This number is called the elicitation complexity of the property. We lay the foundation for a general theory of elicitation complexity, which includes several basic results on how elicitation complexity behaves and the complexity of standard properties of interest. Building on this foundation, our main result gives tight complexity bounds for the broad class of Bayes risks. We apply these results to several properties of interest, including variance, entropy, norms and several classes of financial risk measures. The article concludes with a discussion and open questions.
- Published
- 2020
- Full Text
- View/download PDF
237. Testing-optimal kernel choice in HAR inference
- Author
-
Jingjing Yang and Yixiao Sun
- Subjects
Economics and Econometrics ,Heteroscedasticity ,Applied Mathematics ,05 social sciences ,Autocorrelation ,Inference ,Variance (accounting) ,01 natural sciences ,010104 statistics & probability ,Quadratic equation ,Kernel (statistics) ,0502 economics and business ,Applied mathematics ,Point estimation ,0101 mathematics ,050205 econometrics ,Type I and type II errors ,Mathematics - Abstract
The paper investigates the optimal kernel choice in heteroskedasticity and autocorrelation robust tests based on the fixed-b asymptotics. In parallel with the optimality of the quadratic spectral kernel under the asymptotic mean squared error criterion for the point estimator of the long-run variance, as considered in Andrews (1991), we show that the optimality of the quadratic spectral kernel continues to hold under the testing-oriented criterion of Sun, Phillips and Jin (2008), which takes a weighted average of the probabilities of type I and type II errors of the fixed-b asymptotic test.
- Published
- 2020
- Full Text
- View/download PDF
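The quadratic spectral kernel shown to be testing-optimal in entry 237 has the closed form k(x) = 25/(12π²x²)·(sin(6πx/5)/(6πx/5) − cos(6πx/5)), with k(0) = 1. Below is a sketch of the kernel and a plain kernel-based long-run variance estimate built from it; the bandwidth is fixed by hand for illustration rather than chosen by any optimality rule.

```python
import numpy as np

def qs_kernel(x):
    # Quadratic spectral kernel of Andrews (1991); k(0) = 1 by continuity
    x = np.asarray(x, dtype=float)
    z = 6.0 * np.pi * x / 5.0
    with np.errstate(divide="ignore", invalid="ignore"):
        k = 25.0 / (12.0 * np.pi ** 2 * x ** 2) * (np.sin(z) / z - np.cos(z))
    return np.where(x == 0.0, 1.0, k)

def lrv_estimate(u, bandwidth):
    # Kernel (HAR) estimate of the long-run variance of a scalar series
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = u.size
    lrv = u @ u / n                      # lag-0 autocovariance
    for j in range(1, n):
        gamma_j = u[j:] @ u[:-j] / n     # lag-j autocovariance
        lrv += 2.0 * qs_kernel(j / bandwidth) * gamma_j
    return lrv

# Toy usage: AR(1) with coefficient 0.5; true long-run variance = 1/(1-0.5)^2 = 4
rng = np.random.default_rng(0)
u = np.zeros(500)
for t in range(1, 500):
    u[t] = 0.5 * u[t - 1] + rng.standard_normal()
print(lrv_estimate(u, bandwidth=10.0))
```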
238. How and how much does expert error matter? Implications for quantitative peace research
- Author
-
Kyle L. Marquardt
- Subjects
021110 strategic, defence & security studies ,Observational error ,Sociology and Political Science ,Computer science ,05 social sciences ,0211 other engineering and technologies ,Library science ,02 engineering and technology ,Structural equation modeling ,0506 political science ,Robustness (computer science) ,Research council ,Item response theory ,Political Science and International Relations ,Econometrics ,050602 political science & public administration ,Multiple Imputation Technique ,Point estimation ,Sociology ,Scale (map) ,Safety Research ,Reliability (statistics) - Abstract
Expert-coded datasets provide scholars with otherwise unavailable cross-national longitudinal data on important concepts. However, expert coders vary in their reliability and scale perception, potentially resulting in substantial measurement error; this variation may correlate with outcomes of interest, biasing results in analyses that use these data. This latter concern is particularly acute for key concepts in peace research. In this article, I describe potential sources of expert error, focusing on the measurement of identity-based discrimination. I then use expert-coded data on identity-based discrimination to examine: 1) the implications of measurement error for quantitative analyses that use expert-coded data, and 2) the degree to which different techniques for aggregating these data ameliorate these issues. To do so, I simulate data with different forms and levels of expert error and regress conflict onset on different aggregations of these data. These analyses yield two important results. First, almost all aggregations show a positive relationship between identity-based discrimination and conflict onset consistently across simulations, in line with the assumed true relationship between the concept and outcome. Second, different aggregation techniques vary in their substantive robustness beyond directionality. A structural equation model provides the most consistently robust estimates, while both the point estimates from an Item Response Theory (IRT) model and the average over expert codings provide similar and relatively robust estimates in most simulations. The median over expert codings and a naive multiple imputation technique yield the least robust estimates.
- Published
- 2020
- Full Text
- View/download PDF
239. Probabilistic Weighted Copula Regression Model With Adaptive Sample Selection Strategy for Complex Industrial Processes
- Author
-
Yang Zhou, Shaojun Li, and Xiang Ren
- Subjects
Computer science ,020208 electrical & electronic engineering ,Copula (linguistics) ,Probabilistic logic ,Probability density function ,Regression analysis ,02 engineering and technology ,Bivariate analysis ,computer.software_genre ,Computer Science Applications ,Vine copula ,Distribution function ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Sample space ,Point estimation ,Data mining ,Electrical and Electronic Engineering ,computer ,Information Systems - Abstract
With complicated structures and complex physicochemical reactions, most industrial processes are intrinsically characterized by high dimensionality, nonlinearity, non-Gaussianity, and self-correlation. In this article, a novel probabilistic generative model, called weighted copula regression (WCR), is developed for complex processes. This method employs a probabilistic graphical tool (vine copula) to flexibly handle the underlying patterns via the factorization of complex dependence structures into multiple bivariate copulas. Monte Carlo estimation is incorporated to establish a fast and reliable computing framework. To avoid overinterpretation of the industrial data, an adaptive sample selection strategy is proposed to explore the underlying distribution of the sample space and to select the most "informative" samples. By considering the copula weights that carry crucial information on the local data, the probabilistic WCR method can provide fast point estimates with prediction uncertainty for every predicted sample. The proposed WCR method is compared with six state-of-the-art methods, and its efficiency is validated using a numerical example and the ethylene cracking furnace process.
- Published
- 2020
- Full Text
- View/download PDF
240. EFFICIENT MAXIMUM POWER POINT ESTIMATION MONITORING OF PHOTOVOLTAIC USING FEED FORWARD NEURAL NETWORK
- Author
-
Mochammad Ari Bagus Nugroho, Anang Tjahjono, Mentari Putri Jati, Novie Ayub Windarko, and Hasnira Hasnira
- Subjects
Maximum power principle ,Computer science ,Control theory ,Photovoltaic system ,Feedforward neural network ,Point estimation - Abstract
The use of solar panels will continue to grow in the future. One characteristic of a solar panel is its I-V curve, which can be used to analyze the panel's output power. Knowing the I-V curve makes it possible to perform Maximum Power Point Estimation (MPPE) for the power the panel can deliver. Information about the estimated maximum power of a solar panel is important for determining loading capacity as well as for preserving the service life of the equipment used. A Feed Forward Neural Network with the Back Propagation algorithm (FFBP) is shown to provide MPPE values for the solar panel output. The ANN inputs are the panel's voltage and current, while its output is the estimated power. The MPPE simulation yields an average error of 0.04 points between the actual power (MPP) and the estimated power (MPPE).
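A hedged sketch of the setup the abstract outlines: train a small feedforward network to map a measured (V, I) operating point to the panel's maximum power. The simplified I-V model, parameter ranges, and network size below are assumptions for illustration, not the authors' configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Simplified PV model (assumed): I = Isc * (1 - exp((V - Voc) / a)).
def iv_curve(isc, voc, a, n=50):
    v = np.linspace(0, voc, n)
    i = isc * (1 - np.exp((v - voc) / a))
    return v, np.clip(i, 0, None)

X, y = [], []
for _ in range(300):
    isc, voc, a = rng.uniform(4, 6), rng.uniform(18, 22), rng.uniform(1, 3)
    v, i = iv_curve(isc, voc, a)
    k = rng.integers(0, len(v))
    X.append([v[k], i[k]])        # features: one sampled (V, I) operating point
    y.append(np.max(v * i))       # target: the curve's true maximum power

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(np.array(X), np.array(y))
print("Estimated MPP for (V=15 V, I=4.5 A):", net.predict([[15.0, 4.5]])[0])
```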
- Published
- 2020
- Full Text
- View/download PDF
241. RGB-D-based gaze point estimation via multi-column CNNs and facial landmarks global optimization
- Author
-
Zhang Ziheng, Shenghua Gao, and Dongze Lian
- Subjects
business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,Gaze ,Computer graphics ,Margin (machine learning) ,Face (geometry) ,0202 electrical engineering, electronic engineering, information engineering ,RGB color model ,020201 artificial intelligence & image processing ,Point (geometry) ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Point estimation ,business ,Global optimization ,Software - Abstract
In this work, we utilize a multi-column CNN framework to estimate the gaze point of a person sitting in front of a display from an RGB-D image of the person. Given that gaze points are determined by head pose, eyeball pose, and 3D eye position, we propose to infer the three components separately and then integrate them for gaze point estimation. The captured depth images, however, usually contain noise and black holes, which prevent us from acquiring reliable head pose and 3D eye position estimates. Therefore, we propose to refine the raw depths of 68 facial keypoints by first estimating their relative depths from RGB face images; these, along with the captured raw depths, are then used to solve for the absolute depths of all facial keypoints through global optimization. The refined depths provide reliable estimates of both head pose and 3D eye position. Given that existing publicly available RGB-D gaze tracking datasets are small, we also build a new dataset for training and validating our method. To the best of our knowledge, it is the largest RGB-D gaze tracking dataset in terms of the number of participants. Comprehensive experiments demonstrate that our method outperforms existing methods by a large margin on both our dataset and the Eyediap dataset.
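The depth-refinement step can be pictured with a toy stand-in for the paper's global optimization: given predicted relative keypoint depths and raw sensor depths with holes, solve a small least-squares problem for a scale and offset and fill every keypoint. All magnitudes below are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: 68 relative keypoint depths (from an RGB predictor) and
# raw sensor depths, some missing ("black holes", coded as NaN).
rel = rng.normal(0, 5, 68)                        # relative depths (mm)
true_abs = 600.0 + rel                            # ground truth for checking
raw = true_abs + rng.normal(0, 3, 68)             # noisy sensor readings
raw[rng.choice(68, size=15, replace=False)] = np.nan   # simulated holes

# Global refinement: find scale s and offset t so s * rel + t best matches
# the valid raw depths, then fill in every keypoint.
m = ~np.isnan(raw)
A = np.column_stack([rel[m], np.ones(m.sum())])
(s, t), *_ = np.linalg.lstsq(A, raw[m], rcond=None)
refined = s * rel + t

rmse = np.sqrt(np.mean((refined - true_abs) ** 2))
print(f"s={s:.3f}, t={t:.1f}, RMSE={rmse:.2f} mm")
```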
- Published
- 2020
- Full Text
- View/download PDF
242. Concussion Risk Between Individual Football Players: Survival Analysis of Recurrent Events and Non-events
- Author
-
Thomas W. McAllister, Alok S. Shah, Kenneth L. Cameron, Steven P. Broglio, Megan N. Houston, Steven J. Svoboda, Alison Brooks, Jaroslaw Harezlak, Michael McCrea, Jason P. Mihalik, Eamon T. Campolettano, Steven Rowson, Larry D. Riggen, Brian D. Stemper, and Stefan M. Duma
- Subjects
medicine.medical_specialty ,Cumulative distribution function ,0206 medical engineering ,Biomedical Engineering ,Psychological intervention ,02 engineering and technology ,Football ,medicine.disease ,020601 biomedical engineering ,Variable (computer science) ,Physical medicine and rehabilitation ,Concussion ,medicine ,Model risk ,Point estimation ,Psychology ,Survival analysis - Abstract
Concussion tolerance and head impact exposure are highly variable among football players. Recent findings highlight that head impact data analyses need to be performed at the subject level. In this paper, we describe a method of characterizing concussion risk between individuals using a new survival analysis technique developed with real-world head impact data in mind. Our approach addresses the limitations and challenges seen in previous risk analyses of football head impact data. Specifically, this demonstrative analysis appropriately models risk for a combination of left-censored recurrent events (concussions) and right-censored recurrent non-events (head impacts without concussion). Furthermore, the analysis accounts for uneven impact sampling between players. In brief, we propose using the Consistent Threshold method to develop subject-specific risk curves and then determine average risk point estimates between subjects at injurious magnitude values. We describe an approach for selecting an optimal cumulative distribution function to model risk between subjects by minimizing injury prediction error. We illustrate that small differences in distribution fit can result in large predictive errors. Given the vast amounts of on-field data researchers are collecting across sports, this approach can be applied to develop population-specific risk curves that can ultimately inform interventions that reduce concussion incidence.
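To make the last point concrete (small differences in distribution fit can produce large predictive errors), here is a minimal sketch of fitting one candidate risk CDF, a Weibull, to averaged between-subject risk point estimates; the magnitude/risk pairs are invented for illustration, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed averaged risk point estimates at concussive magnitudes
# (peak linear head acceleration, in g).
mags = np.array([55.0, 70.0, 85.0, 100.0, 115.0])
risk = np.array([0.02, 0.08, 0.20, 0.45, 0.70])

# Candidate risk model: Weibull CDF, F(x) = 1 - exp(-(x/scale)^shape).
def weibull_cdf(x, shape, scale):
    return 1.0 - np.exp(-(x / scale) ** shape)

(shape, scale), _ = curve_fit(weibull_cdf, mags, risk, p0=[3.0, 100.0])
print(f"shape={shape:.2f}, scale={scale:.1f} g")
print("Predicted risk at 90 g:", weibull_cdf(90.0, shape, scale))
```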
- Published
- 2020
- Full Text
- View/download PDF
243. Group IV humpback whales: their status from aerial and landbased surveys off Western Australia, 2005
- Author
-
Charles G. M. Paxton, John Bannister, and Sharon L. Hedley
- Subjects
Estimation ,biology ,Aerial survey ,Whale ,Aquatic Science ,Water depth ,Geography ,Abundance (ecology) ,biology.animal ,Animal Science and Zoology ,Physical geography ,Point estimation ,Transect ,Southern Hemisphere ,Ecology, Evolution, Behavior and Systematics - Abstract
Single platform aerial line transect and land-based surveys of Southern Hemisphere Group IV humpback whales were undertaken to provide absolute abundance estimates of animals migrating northward along the western Australian coast during June–August 2005. The aerial survey was designed to cover the whole period of northward migration but the resulting estimates from that survey alone could only, at best, provide relative abundance estimates as it was not possible to estimate g(0), the detection probability along the trackline, from the data. Owing to logistical constraints, the land-based survey was only possible for a much shorter period (two weeks during the expected peak of the migration in mid-July). This paper proposes three methods that utilise these complementary data in different ways to attempt to obtain absolute abundance estimates. The aerial line transect data were used to estimate relative whale density (for each day), allowing absolute abundance from the land-based survey to be estimated for the short period of its duration. In turn, the land-based survey allowed estimation of g(0) for the aerial survey. Absolute estimates of abundance for the aerial survey were obtained by combining the g(0) estimate with the relative density estimates, summing over the appropriate number of days. The most reliable estimate of northward migrating whales passing the land station for the period of the land-based survey only was 4,700 (95% CI 2,700–14,000). The most reliable estimate for the number of whales passing through the aerial survey region for the duration of that survey (55 days from June through to August) was 10,300 (95% CI 6,700–24,500). This is a conservative estimate because the duration of the aerial survey was almost certainly shorter than the period of the migration. Extrapolation beyond the end of this survey was considered unreliable, but abundance from the estimated start of the migration to the end of the survey (87 days from mid-April to August) was estimated to be 12,800 (95% CI 7,500–44,600). The estimated number of whales depends crucially on the assumed migration and period of migration. Results for different migration parameters are also presented. The point estimates of abundance, whilst higher than those from a previous survey in 1999 (when adjusted for survey duration), are not significantly so. The peak of the whales' distribution was found at c. 90 m water depth.
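The estimator logic (correct the aerial relative densities by the land-derived g(0) and sum over survey days) reduces to simple arithmetic; a toy numerical sketch with assumed values, not the paper's data:

```python
import numpy as np

# All numbers assumed for illustration.
g0 = 0.6                          # trackline detection probability (land survey)
rel_density = np.full(55, 112.0)  # daily relative abundance (aerial survey)

abs_abundance = np.sum(rel_density / g0)   # correct each day, then sum over days
print(f"estimated whales over the 55-day survey: {abs_abundance:,.0f}")
```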
- Published
- 2020
- Full Text
- View/download PDF
244. What Matters for In-House Tax Planning: Tax Function Power and Status
- Author
-
Matthew Ege, John R. Robinson, and Bradford F. Hepfer
- Subjects
Economics and Econometrics ,050208 finance ,Public economics ,media_common.quotation_subject ,05 social sciences ,Rank (computer programming) ,Relative power ,050201 accounting ,Power (social and political) ,Shock (economics) ,Accounting ,0502 economics and business ,Economics ,Social hierarchy ,Tax planning ,Point estimation ,Function (engineering) ,Finance ,media_common - Abstract
Social hierarchy theory predicts that the power and status of an organizational function have a first-order effect on the function's ability to influence outcomes. We find that the rank of the title of the top tax executive is positively associated with tax planning after controlling for treatment effects. Our inferences remain when (1) using changes in the size of the c-suite as a shock to the relative power and status of the tax function, and (2) examining promotions and demotions in title rank. Point estimates suggest that tax function power and status are up to 2.6 times as important as tax planning resources, up to 4.0 times as important as tax function-specific expertise, and, more often than not, more important than manager fixed effects. Overall, results suggest that the power and status of the tax function is often what matters most in determining tax outcomes. JEL Classifications: H25; L22; M41.
- Published
- 2020
- Full Text
- View/download PDF
245. A Bayesian Inference Framework for Procedural Material Parameter Estimation
- Author
-
Shuang Zhao, Miloš Hašan, Yu Guo, and Ling-Qi Yan
- Subjects
FOS: Computer and information sciences ,Flexibility (engineering) ,Estimation theory ,Continuous modelling ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,020207 software engineering ,Markov chain Monte Carlo ,Sample (statistics) ,02 engineering and technology ,Bayesian inference ,Computer Graphics and Computer-Aided Design ,Graphics (cs.GR) ,Range (mathematics) ,symbols.namesake ,Computer Science - Graphics ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Point estimation ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. We explore the inverse rendering problem of procedural material parameter estimation from photographs, presenting a unified view of the problem in a Bayesian framework. In addition to computing point estimates of the parameters by optimization, our framework uses a Markov Chain Monte Carlo approach to sample the space of plausible material parameters, providing a collection of plausible matches that a user can choose from, and efficiently handling both discrete and continuous model parameters. To demonstrate the effectiveness of our framework, we fit procedural models of a range of materials (wall plaster, leather, wood, anisotropic brushed metals, and layered metallic paints) to both synthetic and real target images.
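A minimal sketch of the MCMC component: random-walk Metropolis sampling of a one-parameter posterior for a stand-in forward model, returning a distribution of plausible parameters rather than a single point estimate. The forward model, prior, and noise scale are all assumed:

```python
import numpy as np

rng = np.random.default_rng(4)

def render(theta):
    return np.sin(3 * theta) + theta        # stand-in forward model ("renderer")

target = render(0.8) + 0.05                 # observed summary, slightly perturbed

def log_post(theta):
    if not (0.0 < theta < 2.0):             # uniform prior on (0, 2)
        return -np.inf
    return -0.5 * ((render(theta) - target) / 0.1) ** 2

theta, samples = 1.0, []
lp = log_post(theta)
for _ in range(20000):                      # random-walk Metropolis
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])             # discard burn-in
print(f"posterior mean = {post.mean():.3f}, 95% interval = "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```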
- Published
- 2020
- Full Text
- View/download PDF
246. Purely sequential estimation problems for the mean of a normal population by sampling in groups under permutations within each group and illustrations
- Author
-
Zhe Wang and Nitis Mukhopadhyay
- Subjects
Statistics and Probability ,Sequential estimation ,Group (mathematics) ,Normal population ,Sampling (statistics) ,020206 networking & telecommunications ,02 engineering and technology ,Variance (accounting) ,01 natural sciences ,Confidence interval ,Combinatorics ,010104 statistics & probability ,Permutation ,Modeling and Simulation ,0202 electrical engineering, electronic engineering, information engineering ,Point estimation ,0101 mathematics ,Mathematics - Abstract
Purely sequential estimation for unknown mean (μ) in a normal population having an unknown variance (σ²) when observations are gathered in groups has been recently discussed in Mukhopadhyay an...
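For orientation, here is a generic textbook purely sequential stopping rule for a fixed-width confidence interval of a normal mean, sampling one observation at a time (not the paper's group-sampling variant with permutations):

```python
import numpy as np

rng = np.random.default_rng(5)

# Target: a 95% CI for the mean with fixed half-width d, variance unknown.
# Keep sampling until n >= (z * s_n / d)^2, with s_n the running sample SD.
d, zval = 0.2, 1.96
mu_true, sigma_true = 10.0, 2.0            # unknown to the procedure

data = list(rng.normal(mu_true, sigma_true, 10))      # pilot sample of 10
while len(data) < (zval * np.std(data, ddof=1) / d) ** 2:
    data.append(rng.normal(mu_true, sigma_true))      # sample one more

n, xbar = len(data), np.mean(data)
print(f"stopped at n = {n}; interval = ({xbar - d:.2f}, {xbar + d:.2f})")
```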
- Published
- 2020
- Full Text
- View/download PDF
247. Information about COVID-19 among Selected Population of Eastern Nepal: A Descriptive Cross-sectional Study
- Author
-
Abhisekh Bhattarai, Kumud Chapagain, Gajendra Prasad Rauniyar, and Rais Pokharel
- Subjects
medicine.medical_specialty ,Health Knowledge, Attitudes, Practice ,Coronavirus disease 2019 (COVID-19) ,Cross-sectional study ,Population ,Social Welfare ,Disease ,information ,Nepal ,eastern Nepal ,Surveys and Questionnaires ,medicine ,Humans ,Point estimation ,education ,News media ,education.field_of_study ,lcsh:R5-920 ,business.industry ,SARS-CoV-2 ,COVID-19 ,General Medicine ,Confidence interval ,Cross-Sectional Studies ,Family medicine ,Original Article ,business ,lcsh:Medicine (General) - Abstract
Introduction: The rapid spread of COVID-19 has become a major concern worldwide. Strong adherence to preventive measures can help to break the chain of the spread of coronavirus. We conducted this study to find out the extent of information the general people of Eastern Nepal have regarding COVID-19 and their attitude and practice towards preventing its spread. Methods: A descriptive cross-sectional online study on knowledge, attitude, and practice related to COVID-19 was done among the people of Eastern Nepal from May 1st to May 15th after obtaining ethical clearance from the ethical review board (ERB) (ref no. 319/2020 P). A 20-item survey instrument was adapted using WHO course materials on emerging COVID-19. A convenience sampling method was used. Data were collected and entered in the Statistical Package for the Social Sciences version 11.5. Point estimate at 95% Confidence Interval was calculated along with frequency and proportion for binary data. Results: Among 1069 respondents, 958 (89.61%) answered the COVID-19 related knowledge questionnaire correctly; 487 (93.11%) were health professionals and 471 (86.26%) non-health professionals. Preventive measures were strictly followed by 1044 (97.66%) participants. A wrong perception about the disease was present in 390 (36.48%). The health ministry website, 356 (33.30%), followed by news media, 309 (29%), was the major source of information among the people. Conclusions: Knowledge regarding COVID-19 among the selected population of eastern Nepal is satisfactory and similar to that found in other studies. However, people still have misperceptions regarding the disease and do not strictly follow the preventive measures.
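The "point estimate at 95% confidence interval" computation for a binary outcome can be reproduced from the reported counts (958 correct of 1069 respondents) with a standard Wald interval; a quick sketch:

```python
import numpy as np

# Reported counts: 958 of 1069 respondents answered correctly.
k, n = 958, 1069
p = k / n
se = np.sqrt(p * (1 - p) / n)              # Wald standard error
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"p = {p:.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")
```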
- Published
- 2020
248. Reliability Range Through Upgraded Operation with Trapezoidal Fuzzy Number
- Author
-
Pooja Dhiman and Amit Kumar
- Subjects
Fault tree analysis ,Mathematics::General Mathematics ,Logic ,Computer science ,Applied Mathematics ,Interval estimation ,Fuzzy control system ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Theoretical Computer Science ,Reliability engineering ,Artificial Intelligence ,Control and Systems Engineering ,Range (statistics) ,Fuzzy number ,Point estimation ,Reliability (statistics) ,Information Systems - Abstract
This work presents an application of upgraded arithmetic operations on trapezoidal fuzzy numbers to extend reliability from point estimation to interval estimation. A realistic example of an acc...
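A minimal sketch of the underlying idea, using standard textbook trapezoidal fuzzy-number operations (the paper's "upgraded" operations may differ): represent each component reliability as a trapezoidal fuzzy number and propagate it through a series system to obtain a reliability range instead of a point estimate. All values below are assumed:

```python
from dataclasses import dataclass

@dataclass
class TrapFN:
    """Trapezoidal fuzzy number with support [a, d] and core [b, c]."""
    a: float
    b: float
    c: float
    d: float

    def __mul__(self, other):
        # Standard approximation for positive-support trapezoidal numbers.
        return TrapFN(self.a * other.a, self.b * other.b,
                      self.c * other.c, self.d * other.d)

# Component reliabilities as fuzzy numbers rather than point estimates.
r1 = TrapFN(0.90, 0.93, 0.95, 0.97)
r2 = TrapFN(0.85, 0.88, 0.90, 0.93)

series = r1 * r2   # series system: reliabilities multiply
print(f"reliability range: support [{series.a:.3f}, {series.d:.3f}], "
      f"core [{series.b:.3f}, {series.c:.3f}]")
```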
- Published
- 2020
- Full Text
- View/download PDF
249. Change point detection in Cox proportional hazards mixture cure model
- Author
-
Bing Wang, Jialiang Li, and Xiaoguang Wang
- Subjects
Statistics and Probability ,Models, Statistical ,Epidemiology ,Proportional hazards model ,Estimator ,01 natural sciences ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Standard error ,Health Information Management ,Research Design ,Consistency (statistics) ,Statistics ,Expectation–maximization algorithm ,Covariate ,Humans ,Computer Simulation ,030212 general & internal medicine ,Point estimation ,0101 mathematics ,Algorithms ,Change detection ,Proportional Hazards Models ,Mathematics - Abstract
The mixture cure model has been widely applied to survival data in which a fraction of the observations never experience the event of interest, despite long-term follow-up. In this paper, we study the Cox proportional hazards mixture cure model, where the covariate effects on the distribution of uncured subjects' failure time may jump when a covariate exceeds a change point. Nonparametric maximum likelihood estimation is used to obtain the semiparametric estimates. We employ a two-step computational procedure involving the Expectation-Maximization algorithm to implement the estimation. The consistency, convergence rate, and asymptotic distributions of the estimators are carefully established under technical conditions, and we show that the change point estimator is n-consistent. The m out of n bootstrap and the Louis algorithm are used to obtain the standard errors of the estimated change point and the other regression parameter estimates, respectively. We also contribute a test procedure to check the existence of the change point. The finite sample performance of the proposed method is demonstrated via simulation studies and real data examples.
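The change-point idea can be illustrated with a toy parametric stand-in (exponential hazards with a profile-likelihood grid search), not the paper's semiparametric Cox mixture cure estimator; all parameters are assumed:

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed data-generating process: exponential hazard whose covariate effect
# jumps once covariate z crosses an unknown change point zeta.
n, zeta_true = 2000, 0.5
z = rng.uniform(0, 1, n)
rate = np.exp(0.2 + 1.5 * (z > zeta_true))
t = rng.exponential(1.0 / rate)

def profile_loglik(zeta):
    # Exponential log-likelihood with the rate profiled out in each regime.
    ll = 0.0
    for m in (z <= zeta, z > zeta):
        if not m.any():
            return -np.inf
        lam = m.sum() / t[m].sum()               # regime-wise MLE of the rate
        ll += m.sum() * np.log(lam) - lam * t[m].sum()
    return ll

grid = np.linspace(0.05, 0.95, 181)
zeta_hat = grid[np.argmax([profile_loglik(g) for g in grid])]
print(f"estimated change point: {zeta_hat:.3f} (true value {zeta_true})")
```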
- Published
- 2020
- Full Text
- View/download PDF
250. Comparing Kaplan‐Meier curves with the probability of agreement
- Author
-
Lu Lu and Nathaniel T. Stevens
- Subjects
Statistics and Probability ,Kaplan-Meier Estimate ,Epidemiology ,Uncertainty ,Nonparametric statistics ,Reproducibility of Results ,Contrast (statistics) ,Survival Analysis ,01 natural sciences ,Censoring (statistics) ,Confidence interval ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Similarity (network science) ,030220 oncology & carcinogenesis ,Statistics ,Humans ,Computer Simulation ,Point estimation ,0101 mathematics ,Probability ,Statistical hypothesis testing ,Mathematics - Abstract
The probability of agreement has been used as an effective strategy for quantifying the similarity between the reliability of two populations. In contrast to hypothesis-testing approaches based on P-values, the probability of agreement provides a more realistic assessment of similarity by emphasizing practically important differences. In this article, we propose using the probability of agreement to evaluate the similarity of two Kaplan-Meier curves, which estimate the survival functions in two populations. This article extends the probability of agreement paradigm to right-censored data and explores three different methods of quantifying uncertainty in the probability of agreement estimate. The first approach provides a convenient assessment based on large-sample normal theory (LSNT), while the other two are nonparametric alternatives based on ordinary and fractional random-weight bootstrap (FRWB) techniques. All methods are illustrated with examples for which comparing the survival curves of related populations is of interest, and the efficacy of the methods is also evaluated through simulation studies. Based on these simulations we recommend point estimation using the proposed LSNT calculation and confidence interval estimation via the FRWB approach. We also provide a Shiny app that facilitates an automated implementation of the methodology.
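A minimal sketch of the bootstrap route to a probability of agreement: estimate Kaplan-Meier curves for two assumed right-censored samples, resample with the ordinary bootstrap (the FRWB variant would reweight subjects instead), and record how often the curves differ by at most a practical margin delta; all data and constants are assumed:

```python
import numpy as np

rng = np.random.default_rng(7)

def km(times, events, grid):
    """Kaplan-Meier survival estimate, evaluated on a time grid."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    surv, s, at_risk, j = np.ones_like(grid), 1.0, len(t), 0
    for i, g in enumerate(grid):
        while j < len(t) and t[j] <= g:
            if e[j]:
                s *= 1 - 1 / at_risk   # step down at each observed event
            at_risk -= 1
            j += 1
        surv[i] = s
    return surv

def sample(scale, n):
    x = rng.exponential(scale, n)      # event times
    c = rng.exponential(25.0, n)       # censoring times
    return np.minimum(x, c), (x <= c)  # observed time, event indicator

t1, e1 = sample(10.0, 150)             # two assumed populations
t2, e2 = sample(11.0, 150)

grid, delta, B = np.linspace(1, 20, 40), 0.1, 500
agree = np.zeros(len(grid))
for _ in range(B):                     # ordinary bootstrap over subjects
    i1 = rng.integers(0, len(t1), len(t1))
    i2 = rng.integers(0, len(t2), len(t2))
    agree += np.abs(km(t1[i1], e1[i1], grid) - km(t2[i2], e2[i2], grid)) <= delta

print("probability of agreement at t = 10:",
      (agree / B)[np.argmin(np.abs(grid - 10))])
```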
- Published
- 2020
- Full Text
- View/download PDF