1,372 results for "Hypothesis test"
Search Results
2. A Secure Image Watermarking Framework with Statistical Guarantees via Adversarial Attacks on Secret Key Networks
- Author
-
Chen, Feiyu, Lin, Wei, Liu, Ziquan, Chan, Antoni B., Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Parameter estimation and hypothesis tests in logistic model for complex correlated data
- Author
-
Mou, Keyi, Li, Zhiming, and Cheng, Jinlong
- Published
- 2025
- Full Text
- View/download PDF
4. A formal goodness-of-fit test for spatial binary Markov random field models.
- Author
-
Biswas, Eva, Kaplan, Andee, Kaiser, Mark S, and Nordman, Daniel J
- Subjects
-
MARKOV random fields, CONDITIONAL probability, GOODNESS-of-fit tests, ENVIRONMENTAL sciences, NEIGHBORHOODS
- Abstract
Binary spatial observations arise in environmental and ecological studies, where Markov random field (MRF) models are often applied. Despite the prevalence and the long history of MRF models for spatial binary data, appropriate model diagnostics have remained an unresolved issue in practice. A complicating factor is that such models involve neighborhood specifications, which are difficult to assess for binary data. To address this, we propose a formal goodness-of-fit (GOF) test for diagnosing an MRF model for spatial binary values. The test statistic involves a type of conditional Moran's I based on the fitted conditional probabilities, which can detect departures in model form, including neighborhood structure. Numerical studies show that the GOF test can perform well in detecting deviations from a null model, with a focus on neighborhoods as a difficult issue. We illustrate the spatial test with an application to Besag's historical endive data as well as the breeding pattern of grasshopper sparrows across Iowa. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
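The test statistic in the entry above involves a type of conditional Moran's I. As background only (not the authors' conditional variant), the classic Moran's I statistic for spatial autocorrelation can be sketched as follows; the function and variable names are illustrative:

```python
def morans_i(values, weights):
    """Moran's I spatial autocorrelation:
    I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where W is the sum of all weights and m the mean of the values."""
    n = len(values)
    m = sum(values) / n
    dev = [v - m for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    W = sum(sum(row) for row in weights)
    return (n / W) * num / den

# Four sites on a path graph, adjacent sites weighted 1:
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(morans_i([1, 1, -1, -1], w))  # clustered values -> positive
print(morans_i([1, -1, 1, -1], w))  # alternating values -> negative
```

Positive values indicate clustering of like values among neighbors and negative values indicate alternation, which is the kind of departure from a fitted null model that a GOF statistic of this family can pick up.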
5. Empirical likelihood based heteroscedasticity diagnostics for varying coefficient partially nonlinear models.
- Author
-
Wang, Cuiping, Zhou, Xiaoshuang, and Zhao, Peixin
- Subjects
INFERENTIAL statistics, EMPIRICAL research, DATA analysis, HYPOTHESIS
- Abstract
Diagnosing heteroscedasticity of the error variance is essential before performing further statistical inference. This paper is concerned with statistical diagnostics for the varying coefficient partially nonlinear model. We propose a novel diagnostic approach for heteroscedasticity of the error variance in this model by combining it with the empirical likelihood method. Under some mild conditions, a nonparametric version of the Wilks theorem is obtained. Furthermore, simulation studies and a real data analysis are implemented to evaluate the performance of the proposed approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Empirical likelihood based heteroscedasticity diagnostics for varying coefficient partially nonlinear models
- Author
-
Cuiping Wang, Xiaoshuang Zhou, and Peixin Zhao
- Subjects
empirical likelihood, heteroscedasticity diagnostics, hypothesis test, varying coefficient partially nonlinear model, Mathematics, QA1-939
- Abstract
Diagnosing heteroscedasticity of the error variance is essential before performing further statistical inference. This paper is concerned with statistical diagnostics for the varying coefficient partially nonlinear model. We propose a novel diagnostic approach for heteroscedasticity of the error variance in this model by combining it with the empirical likelihood method. Under some mild conditions, a nonparametric version of the Wilks theorem is obtained. Furthermore, simulation studies and a real data analysis are implemented to evaluate the performance of the proposed approaches.
- Published
- 2024
- Full Text
- View/download PDF
7. A statistical test suite for random and pseudorandom number generators for cryptographic applications
- Author
-
Rukhin, Andrew
- Subjects
Hypothesis test, P-value, Random number generator, Statistical tests
- Abstract
This paper discusses some aspects of selecting and testing random and pseudorandom number generators. The outputs of such generators may be used in many cryptographic applications, such as the generation of key material. Generators suitable for use in cryptographic applications may need to meet stronger requirements than for other applications. In particular, their outputs must be unpredictable in the absence of knowledge of the inputs. Some criteria for characterizing and selecting appropriate generators are discussed in this document. The subject of statistical testing and its relation to cryptanalysis is also discussed, and some recommended statistical tests are provided. These tests may be useful as a first step in determining whether or not a generator is suitable for a particular cryptographic application. However, no set of statistical tests can absolutely certify a generator as appropriate for usage in a particular application, i.e., statistical testing cannot serve as a substitute for cryptanalysis. The design and cryptanalysis of generators is outside the scope of this paper.
- Published
- 2000
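The test suite described in the entry above reduces a bit sequence to p-values via many individual statistical tests. As an illustration of the general pattern, a minimal sketch of the simplest member of such a suite, the frequency (monobit) test, follows; the function name is ours:

```python
import math

def monobit_pvalue(bits):
    """Frequency (monobit) test: under H0 (i.i.d. fair bits) the
    normalized +/-1 partial sum is approximately standard normal."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # map {0, 1} -> {-1, +1} and sum
    s_obs = abs(s) / math.sqrt(n)           # normalized partial sum
    return math.erfc(s_obs / math.sqrt(2))  # two-sided normal tail probability

print(monobit_pvalue([0, 1] * 500))  # perfectly balanced sequence -> 1.0
print(monobit_pvalue([1] * 100))     # heavily biased sequence -> ~0
```

A generator would be rejected by this single test if the p-value falls below a chosen significance level, but, as the abstract stresses, passing any number of such tests is necessary rather than sufficient for cryptographic suitability.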
8. Testing common degree-correction parameters of multilayer networks.
- Author
-
Yuan, Mingao and Yao, Qianqian
- Abstract
A graph (or network) is a mathematical structure that has been widely used to model relational data. As real-world systems become more complex, multilayer (or multiple) networks are employed to represent diverse patterns of relationships among the objects in these systems. One active research problem in multilayer network analysis is to study the common properties of the networks. In this paper, we study whether multilayer networks share the same degree-correction parameters, which is a special case of the widely studied common invariant subspace problem. We first attempt to answer this question by means of hypothesis testing. The null hypothesis states that the multilayer networks share the same degree-correction parameters; under the alternative hypothesis, there exist at least two networks that have different degree-correction parameters. We propose a weighted degree difference test, derive the limiting distribution of the test statistic, and provide an analytical analysis of the power. A simulation study shows that the proposed test has satisfactory performance, and a real data application is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
9. Agroclimatic Modeling of the Drought Trend in the State of Sinaloa, Mexico, Using the PDO–AMO Indices.
- Author
-
Cárdenas, Omar Llanes, González, Gabriel E. González, Gastélum, Rosa D. Estrella, Serrano, Luz A. García, and Galaviz, Román E. Parra
- Subjects
-
ATLANTIC multidecadal oscillation, EVIDENCE gaps, METEOROLOGICAL stations, IRRIGATION scheduling, PRINCIPAL components analysis, DROUGHT management
- Abstract
The goal was to model the trend of meteorological droughts (MeD) using the Pacific decadal oscillation (PDO) and Atlantic multidecadal oscillation (AMO) indices. PDO-AMO series were obtained from the National Oceanic and Atmospheric Administration. For 12 weather stations in the state of Sinaloa (1981-2017), the agricultural standardized precipitation index (aSPI) and reconnaissance drought index (RDI) were calculated. The linear (SLT) and non-parametric (SNT) significant trends of the aSPI and RDI were calculated. A principal component analysis was applied to SLT-SNT and the first observed principal component (Z_PC-1o) was extracted. The first calculated principal component was modeled through a linear regression of Z_PC-1c (dependent variable) on PDO-AMO (independent variables). The correlation between Z_PC-1o and Z_PC-1c (0.522) and the linear trend of Z_PC-1c (0.501) were significantly different from zero. This study addresses a research gap not otherwise explored to date in Sinaloa: modeling the trend in MeD through aSPI-RDI and PDO-AMO. The model can be used to help schedule agricultural irrigation at the most productive times. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
10. Tests of covariate effects under finite Gaussian mixture regression models.
- Author
-
Gan, Chong, Chen, Jiahua, and Feng, Zeny
- Subjects
-
FALSE positive error, CLUSTER analysis (Statistics), REGRESSION analysis, ERROR rates, BATS, GAUSSIAN mixture models
- Abstract
The mixture of regression models is widely used to cluster subjects from a suspected heterogeneous population in which the relationship between response and covariates differs across unobserved subpopulations. In such applications, statistical evidence pertaining to the significance of a hypothesis is important yet often missing to substantiate the findings. In this case, one may wish to test hypotheses regarding the effect of a covariate, such as its overall significance. If confirmed, a further test of whether its effects differ across subpopulations might be performed. This paper is motivated by the analysis of a Chiroptera dataset, in which we are interested in knowing how forearm length development of bat species is influenced by precipitation within their habitats and living regions, using a finite Gaussian mixture regression (GMR) model. Since precipitation may have different effects on the evolutionary development of the forearm across the underlying subpopulations among bat species worldwide, we propose several testing procedures for hypotheses regarding the effect of precipitation on forearm length under finite GMR models. In addition to the real analysis of the Chiroptera data, through simulation studies we examine the performance of these testing procedures in terms of their type I error rate, power, and, consequently, the accuracy of clustering analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Change-point detection of the Kumaraswamy skew-t distribution based on modified information criterion.
- Author
-
Wang, Jun and Ning, Wei
- Subjects
-
ASYMPTOTIC distribution, HYPOTHESIS, CHANGE-point problems
- Abstract
In this paper, we study the change-point problem of the Kumaraswamy skew-t distribution. An approach based on the modified information criterion is proposed to detect the changes of the parameters of this distribution. Simulations have been conducted to investigate the performance of the proposed method. The proposed method is applied to real data to illustrate the detection procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Simulation-Based, Finite-Sample Inference for Privatized Data.
- Author
-
Awan, Jordan and Wang, Zhanyu
- Subjects
-
FALSE positive error, PRIVATE investigators, PRIVACY, STATISTICS, HYPOTHESIS
- Abstract
Privacy protection methods, such as differentially private mechanisms, introduce noise into resulting statistics, which often produces complex and intractable sampling distributions. In this article, we propose a simulation-based "repro sample" approach to produce statistically valid confidence intervals and hypothesis tests, which builds on the work of Xie and Wang. We show that this methodology is applicable to a wide variety of private inference problems, appropriately accounts for biases introduced by privacy mechanisms (such as by clamping), and improves over other state-of-the-art inference methods such as the parametric bootstrap in terms of the coverage and Type I error of the private inference. We also develop significant improvements and extensions of the repro sample methodology for general models (not necessarily related to privacy), including (a) modifying the procedure to ensure guaranteed coverage and Type I errors, even accounting for Monte Carlo error, and (b) proposing efficient numerical algorithms to implement the confidence intervals and p-values. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Unsupervised methods for size and shape.
- Author
-
Honório, Jerfson Bruno do Nascimento and Amaral, Getúlio José Amorim do
- Subjects
-
K-means clustering, HOMINIDS, VERTEBRAE, SKULL, ALGORITHMS
- Abstract
The aim of this article is to propose unsupervised classification methods for size-and-shape considering two-dimensional images (planar shapes). We present new methods based on hypothesis testing and the K-means algorithm. We also propose combinations of algorithms using ensemble methods: bagging and boosting. We consider simulated data in three scenarios in order to evaluate the performance of the proposed methods. The numerical results indicate that the performance of the algorithms improves when the centroid sizes change. In addition, bagging-based algorithms outperform their basic versions. Moreover, two real-world datasets have been considered: great ape skull and mice vertebrae references. These datasets have different configurations, such as multiple reference points and variability. The bagged K-means and boosted K-means methods achieved the best performance on the datasets. Lastly, considering the synthetic and real data, the bagged K-means proved to be the best method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Number of Repetitions in Re‐Randomization Tests.
- Author
-
Zhang, Yilong, Zhao, Yujie, Wang, Bingjun, and Luo, Yiwen
- Subjects
-
MONTE Carlo method, NUMERICAL analysis, INFERENTIAL statistics, SAMPLE size (Statistics), CLINICAL trials
- Abstract
In covariate-adaptive or response-adaptive randomization, the treatment assignment and outcome can be correlated. In this situation, the re-randomization test is a straightforward and attractive method to provide valid statistical inferences. In this paper, we investigate the number of repetitions in such tests. This is motivated by a group sequential design in clinical trials, where the nominal significance bound can be very small at an interim analysis. Accordingly, re-randomization tests lead to a very large number of required repetitions, which may be computationally intractable. To reduce the number of repetitions, we propose an adaptive procedure and compare it with multiple approaches under predefined criteria. Monte Carlo simulations are conducted to show the performance of the different approaches in a limited sample size. We also suggest strategies to reduce total computation time and provide practical guidance in preparing, executing, and reporting before and after data are unblinded at an interim analysis, so one can complete the computation within a reasonable time frame. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
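A re-randomization test of the kind discussed above can be sketched with a plain Monte Carlo loop. This toy version (the names and the difference-in-means statistic are our choices, not the paper's adaptive procedure) also shows why the repetition count matters: the smallest attainable p-value is 1/(n_rep + 1), so a tiny interim significance bound forces a huge number of repetitions.

```python
import random

def rerandomization_pvalue(outcomes, treat, n_rep=10000, seed=0):
    """Monte Carlo re-randomization test for a difference in means:
    re-draw the treatment assignment n_rep times and count statistics
    at least as extreme as the observed one."""
    rng = random.Random(seed)

    def stat(assign):
        t = [y for y, a in zip(outcomes, assign) if a]
        c = [y for y, a in zip(outcomes, assign) if not a]
        return abs(sum(t) / len(t) - sum(c) / len(c))

    observed = stat(treat)
    hits = 0
    for _ in range(n_rep):
        shuffled = treat[:]        # permute the original assignment
        rng.shuffle(shuffled)
        if stat(shuffled) >= observed:
            hits += 1
    # add-one correction keeps the test valid at finite n_rep; the
    # returned value can never drop below 1 / (n_rep + 1)
    return (hits + 1) / (n_rep + 1)
```

For example, testing at a nominal bound of 1e-5 requires well over 10^5 repetitions under this scheme, which is the computational burden the paper aims to reduce.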
15. Testing for Sufficient Follow‐Up in Censored Survival Data by Using Extremes.
- Author
-
Xie, Ping, Escobar‐Bach, Mikael, and Van Keilegom, Ingrid
- Abstract
In survival analysis, it often happens that some individuals, referred to as cured individuals, never experience the event of interest. When analyzing time‐to‐event data with a cure fraction, it is crucial to check the assumption of "sufficient follow‐up," which means that the right extreme of the censoring time distribution is larger than that of the survival time distribution for the noncured individuals. However, the available methods to test this assumption are limited in the literature. In this article, we study the problem of testing whether follow‐up is sufficient for light‐tailed distributions and develop a simple novel test. The proposed test statistic compares an estimator of the noncure proportion under sufficient follow‐up to one without the assumption of sufficient follow‐up. A bootstrap procedure is employed to approximate the critical values of the test. We also carry out extensive simulations to evaluate the finite sample performance of the test and illustrate the practical use with applications to leukemia and breast cancer data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Beyond Pearson's Correlation: Modern Nonparametric Independence Tests for Psychological Research.
- Author
-
Karch, Julian D., Perez-Alonso, Andres F., and Bergsma, Wicher P.
- Subjects
-
PSYCHOLOGICAL tests, RANK correlation (Statistics), PSYCHOLOGICAL research, STATISTICAL correlation, PRIOR learning
- Abstract
When examining whether two continuous variables are associated, tests based on Pearson's, Kendall's, and Spearman's correlation coefficients are typically used. This paper explores modern nonparametric independence tests as an alternative, which, unlike traditional tests, have the ability to potentially detect any type of relationship. In addition to existing modern nonparametric independence tests, we developed and considered two novel variants of existing tests, most notably the Heller-Heller-Gorfine-Pearson (HHG-Pearson) test. We conducted a simulation study to compare traditional independence tests, such as Pearson's correlation, and the modern nonparametric independence tests in situations commonly encountered in psychological research. As expected, no test had the highest power across all relationships. However, the distance correlation and the HHG-Pearson tests were found to have substantially greater power than all traditional tests for many relationships and only slightly less power in the worst case. A similar pattern was found in favor of the HHG-Pearson test compared to the distance correlation test. However, given that distance correlation performed better for linear relationships and is more widely accepted, we suggest considering its use in place of or in addition to traditional methods when there is no prior knowledge of the relationship type, as is often the case in psychological research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
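Of the modern tests compared in the entry above, the distance correlation is the most self-contained to illustrate. A minimal O(n²) sketch follows (names ours); unlike Pearson's r, its population version is zero if and only if the variables are independent:

```python
def distance_correlation(x, y):
    """Sample distance correlation: double-center the pairwise distance
    matrices of x and y, then correlate them."""
    n = len(x)

    def centered(v):
        d = [[abs(a - b) for b in v] for a in v]      # pairwise distances
        row = [sum(r) / n for r in d]
        grand = sum(row) / n
        return [[d[i][j] - row[i] - row[j] + grand    # double centering
                 for j in range(n)] for i in range(n)]

    A, B = centered(x), centered(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n**2
    dvar_x = sum(a * a for r in A for a in r) / n**2
    dvar_y = sum(b * b for r in B for b in r) / n**2
    prod = dvar_x * dvar_y
    return (dcov2 / prod ** 0.5) ** 0.5 if prod > 0 else 0.0
```

For y = x the statistic is 1; for a relationship like y = x² on a symmetric range, where Pearson's r is exactly zero, it remains well above zero, which is the kind of power gain the abstract reports.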
17. Functional quantile regression with missing data in reproducing kernel Hilbert space.
- Author
-
Yu, Xiao-Ge and Liang, Han-Ying
- Subjects
-
ASYMPTOTIC normality, ASYMPTOTIC distribution, HILBERT space, NULL hypothesis, MISSING data (Statistics), QUANTILE regression
- Abstract
We, in this article, focus on functional partially linear quantile regression where the observations are missing at random, allowing the response, the covariates, or both to be missing simultaneously. Estimation of the unknown function is based on the reproducing kernel method. Under suitable assumptions, we discuss consistency with rates for the estimators, and establish asymptotic normality of the estimator of the parameter. At the same time, we study hypothesis testing for the parameter, and prove asymptotic distributions of restricted estimators of the parameter and of the test statistic under the null hypothesis and local alternative hypotheses, respectively. We also study variable selection for the linear part of the model. Finite sample performance of the proposed methods is analyzed via simulations and real data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Group sequential hypothesis tests with variable group sizes: Optimal design and performance evaluation.
- Author
-
Novikov, Andrey
- Subjects
-
ALGORITHMS, ERROR probability, PROGRAMMING languages, HYPOTHESIS, SEQUENTIAL analysis
- Abstract
In this article, we propose a computer-oriented method for constructing optimal group sequential hypothesis tests with variable group sizes. In particular, for independent and identically distributed observations, we obtain the form of optimal group sequential tests, which turn out to be a particular case of sequentially planned probability ratio tests (SPPRTs, see Schmitz 1993). Formulas are given for computing the numerical characteristics of general SPPRTs, such as error probabilities, average sampling cost, etc. A numerical method for designing the optimal tests and evaluating the performance characteristics is proposed, and computer algorithms for its implementation are developed. For the particular case of sampling from a Bernoulli population, the proposed method is implemented in the R programming language, with the code available in a public GitHub repository. The proposed method is compared numerically with other known sampling plans. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Testing Coefficient Randomness in Multivariate Random Coefficient Autoregressive Models Based on Locally Most Powerful Test.
- Author
-
Bi, Li, Wang, Deqi, Cheng, Libo, and Qi, Dequan
- Subjects
-
AUTOREGRESSIVE models, RANDOM variables, TIME series analysis, NULL hypothesis, TIME management
- Abstract
The multivariate random coefficient autoregression (RCAR) process is widely used in time series modeling applications. Random autoregressive coefficients are usually assumed to be independent and identically distributed sequences of random variables. This paper investigates the issue of coefficient constancy testing in a class of static multivariate first-order random coefficient autoregressive models. We construct a new test statistic based on the locally most powerful (LMP)-type test and derive its limiting distribution under the null hypothesis. The simulation compares the empirical sizes and powers of the LMP test and the empirical likelihood (EL) test, demonstrating that the LMP test outperforms the EL test in accuracy by 10.2%, 10.1%, and 30.9% under conditions of normal, Beta-distributed, and contaminated errors, respectively. We provide two sets of real data to illustrate the practical effectiveness of the LMP test. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. A Hypothesis Test for the Long-Term Calibration in Rating Systems with Overlapping Time Windows.
- Author
-
Kurth, Patrick, Nendel, Max, and Streicher, Jan
- Subjects
CREDIT risk, DEFAULT (Finance), STATISTICAL correlation, PROBLEM solving, FINANCIAL institutions
- Abstract
We present a statistical test for the long-term calibration in rating systems that can deal with overlapping time windows as required by the guidelines of the European Banking Authority (EBA), which apply to major financial institutions in the European System. In accordance with regulation, rating systems are to be calibrated and validated with respect to the long-run default rate. The consideration of one-year default rates on a quarterly basis leads to correlation effects which drastically influence the variance of the long-run default rate. In a first step, we show that the long-run default rate is approximately normally distributed. We then perform a detailed analysis of the correlation effects caused by the overlapping time windows and solve the problem of an unknown distribution of default probabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A classical hypothesis test for assessing the homogeneity of disease transmission in stochastic epidemic models.
- Author
-
Aristotelous, Georgios, Kypraios, Theodore, and O'Neill, Philip D.
- Subjects
-
STOCHASTIC models, INFECTIOUS disease transmission, CENTRAL limit theorem, DISEASE vectors, HOMOGENEITY, SOCIAL groups, STOCHASTIC processes
- Abstract
This paper addresses the problem of assessing the homogeneity of the disease transmission process in stochastic epidemic models in populations that are partitioned into social groups. We develop a classical hypothesis test for completed epidemics which assesses whether or not there is significant within-group transmission during an outbreak. The test is based on time-ordered group labels of individuals. The null hypothesis is that of homogeneity of disease transmission among individuals, a hypothesis under which the discrete random vector of group labels has a known sampling distribution that is independent of any model parameters. The test exhibits excellent performance when applied to various scenarios of simulated data and is also illustrated using two real-life epidemic data sets. We develop some asymptotic theory including a central limit theorem. The test is practically very appealing, being computationally cheap and straightforward to implement, as well as being applicable to a wide range of real-life outbreak settings and to related problems in other fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Statistical Inference for Aggregation of Malmquist Productivity Indices.
- Author
-
Pham, Manh, Simar, Léopold, and Zelenyuk, Valentin
- Subjects
INFERENTIAL statistics, CENTRAL limit theorem, ECONOMETRICS
- Abstract
A Comprehensive Set of Asymptotic Properties for a Meaningful Aggregation of Malmquist Indices. The Malmquist productivity index (MPI) has become one of the most widely used tools for analyzing dynamic performance of decision-making units. Whereas accounting for economic weights of individual units in aggregations of indices is emphasized in the literature, statistical theory for constructing confidence intervals and performing hypothesis tests based on weighted aggregation of the MPI is still unavailable. In "Statistical Inference for Aggregation of Malmquist Productivity Indices," Pham, Simar, and Zelenyuk use a novel approach (based on the uniform delta method) to develop new asymptotic theory (including new central limit theorems) for aggregate MPIs as the basis for statistical inference and testing. They also verify the finite-sample performance of their approach via extensive Monte Carlo experiments and provide an illustration using real-world data.
The Malmquist productivity index (MPI) has gained popularity among studies on the dynamic change of productivity of decision-making units (DMUs). In practice, this index is frequently reported at aggregate levels (e.g., public and private firms) in the form of simple, equally weighted arithmetic or geometric means of individual MPIs. A number of studies emphasize that it is necessary to account for the relative importance of individual DMUs in the aggregations of indices in general and of the MPI in particular. Whereas more suitable aggregations of MPIs have been introduced in the literature, their statistical properties have not yet been revealed, preventing applied researchers from making essential statistical inferences, such as confidence intervals and hypothesis testing. In this paper, we fill this gap by developing a full asymptotic theory for an appealing aggregation of MPIs. On the basis of this, meaningful statistical inferences are proposed, their finite-sample performances are verified via extensive Monte Carlo experiments, and the importance of the proposed theoretical developments is illustrated with an empirical application to real data. Funding: M. Pham acknowledges support from an Australian Government Research Training Program Scholarship. V. Zelenyuk acknowledges financial support from the University of Queensland and the Australian Research Council [Grant FT170100401]. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2022.2424. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Testing for conditional independence of survival time from covariate.
- Author
-
Kwak, Minjung
- Abstract
This study examined the test of independence of survival time from a covariate in a more general setting using empirical process techniques. Previous research has been extended in several ways: (1) allowing incompleteness of observation owing to censoring; (2) allowing time-dependent covariates; (3) allowing non-uniform covariates; (4) proving the validity of the weighted bootstrap to implement the proposed testing procedure. Certain classes of test statistics that are functionals of a natural empirical process were studied, and the limiting distribution of these statistics was then derived using the functional delta method. The limiting distributions included some linear functionals of zero-mean tight Brownian bridges under the null hypothesis, and the tests were consistent against general alternatives. Tests implemented using the weighted bootstrap were shown to be valid. The proposals are illustrated via simulation studies and an application to acute leukemia data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Corrections to the two-sided probability and hypothesis test statistics on binomial distributions.
- Author
-
WANG Jian-Kang
- Abstract
Binomial distributions widely exist in nature and human society, and are classified as discrete by probability theory. In theoretical studies in mathematical statistics, a random variable of binomial distribution B(n, p) is equivalent to the sum of n independent and identical variables of Bernoulli distribution B(1, p). Estimation and testing of parameter p of binomial distribution B(n, p) is therefore equivalent to those of Bernoulli distribution B(1, p). Three corrections were made in this article, relevant to the calculation of the two-tailed probability and the construction of hypothesis test statistics. (1) Assume p_k (k = 0, 1, ···, n) is the probability list of binomial distribution B(n, p), and the probabilities in ascending order are given by p_(k). The two-tailed exact probability is equal to ∑_{i=0}^{k} p_(i), given the value of the observed k. (2) When testing the difference of parameter p of B(n, p) against a given value p_0, the test statistic was corrected by ..., which asymptotically approaches the normal distribution N(p − p_0, 1) under the condition of large samples. (3) When testing the difference between two parameters of binomial distributions B(n_1, p_1) and B(n_2, p_2), the test statistic was corrected by ..., which asymptotically approaches the normal distribution N(p_1 − p_2, 1) under the condition of large samples. With these corrections, the two-tailed probability has an exact value, avoiding the embarrassing situation of a probability exceeding one. Under either the null or alternative hypothesis, the asymptotic normal distributions always have variance one, and are therefore more suitable for studying the statistical power in testing the alternative hypothesis. Exact tests on binomial distributions under the condition of small samples are also introduced, together with a comparison between exact and approximate tests. The probability theory underlying the corrections is provided. A comparison is made between tests on the parameter of a Bernoulli distribution and the mean of a normal distribution. The general rule in determining the small probability and large sample is presented as well. By doing so, the author wishes to provide the readers with a perspective picture of hypothesis testing and statistical inference, constituting the core content of modern statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
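Point (1) of the abstract above, the two-tailed exact probability obtained by summing the ascending-ordered outcome probabilities up to the observed k, can be sketched as follows (function names ours; an equivalent formulation sums all outcomes no more probable than the observed one):

```python
from math import comb

def binom_pmf(i, n, p):
    """Probability of i successes under B(n, p)."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

def two_tailed_exact(k, n, p, tol=1e-12):
    """Two-tailed exact probability: sum every outcome whose probability
    does not exceed that of the observed k (equivalently, sum the
    ascending-ordered p_(i) up to the rank of k). Never exceeds one."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= pk + tol)

print(two_tailed_exact(2, 10, 0.5))  # -> 0.109375 (= 112/1024)
```

Because each outcome's probability is counted at most once, the result is capped at one by construction, which is the "probability exceeding one" issue the abstract's correction avoids.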
25. Hypothesis testing for points of impact in functional linear regression.
- Author
-
Shirvani, Alireza, Khademnoe, Omid, and Hosseini-Nasab, Mohammad
- Subjects
ASYMPTOTIC distribution, STATISTICAL hypothesis testing, ATTENTION testing, REGRESSION analysis
- Abstract
Recently, there has been increased interest in issues related to functional linear regression models with points of impact. While the estimation of model parameters with a scalar response has been considered in past studies, no attention has been paid to hypothesis testing for these impact points. To test this hypothesis, one needs to determine the asymptotic distribution of the estimators of the impact-point coefficients. In recent literature, the asymptotic distribution has been pointed out in a special case, but the proof has not been provided. Taking into account the necessary conditions, this study establishes the asymptotic distribution of the estimators of impact-point coefficients in a general setting. It also offers a method to test the significance of these impact points. To validate the proposed test's asymptotic properties, a simulation study is conducted to assess its performance under various parameter settings. Furthermore, the study analyzes Iranian weather data collected from January 1st to 31st, 2023. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. On the validity of the bootstrap hypothesis testing in functional linear regression.
- Author
-
Khademnoe, Omid and Hosseini-Nasab, S. Mohammad E.
- Subjects
ASYMPTOTIC distribution ,CENTRAL limit theorem ,DEVELOPMENT banks ,REGRESSION analysis - Abstract
We consider a functional linear regression model with functional predictor and scalar response. For this model, a procedure to test the slope function based on projecting the slope function onto an arbitrary L 2 basis has been introduced in the literature. We propose its bootstrap counterpart for testing the slope function, and obtain the asymptotic null distributions of the test statistics and the asymptotic powers of the tests. Finally, we conduct a simulation study to evaluate the accuracy of the two test procedures. As a practical illustration, we use the Export Development Bank of Iran dataset, and test the nullity of the slope function of a model predicting total annual noncurrent balance of facilities based on current balance of facilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Distributions and Their Approximations for p-Values
- Author
-
Zhou, Ningkun, Blaha, Ondrej, Zelterman, Daniel, and Chen, Ding-Geng, editor
- Published
- 2024
- Full Text
- View/download PDF
28. Statistical Hypothesis Testing and Modelling of Peoples’ Power: A Causal Study of the #BlackLivesMatter Movement via Hawkes Processes on Social and Mass Media
- Author
-
Lindström, Alfred, Lindgren, Simon, Sainudiin, Raazesh, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Gusikhin, Oleg, editor, Hammoudi, Slimane, editor, and Cuzzocrea, Alfredo, editor
- Published
- 2024
- Full Text
- View/download PDF
29. Testing Relationships Between Categorical Variables
- Author
-
Sanders, Ashley R. and Sanders, Ashley R.
- Published
- 2024
- Full Text
- View/download PDF
30. An Accurate and Robust Effectiveness Evaluation Method for an Unmanned Swarm in Performing a Specific Task
- Author
-
Wu, Wenliang, Tuo, Mingfu, Zhao, Yue, Wu, Jinqiao, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Qu, Yi, editor, Gu, Mancang, editor, Niu, Yifeng, editor, and Fu, Wenxing, editor
- Published
- 2024
- Full Text
- View/download PDF
31. Reconstruction of Bremsstrahlung γ-rays spectrum in heavy ion reactions with Richardson-Lucy algorithm
- Author
-
Junhuai Xu, Yuhao Qin, Zhi Qin, Dawei Si, Boyuan Zhang, Yijie Wang, Qinglin Niu, Chang Xu, and Zhigang Xiao
- Subjects
High momentum tail ,Bremsstrahlung γ-ray ,Richardson-Lucy algorithm ,Hypothesis test ,Physics ,QC1-999 - Abstract
The high momentum tail (HMT) in the momentum distribution of nucleons above the Fermi surface has been regarded as evidence of short-range correlations (SRCs) in atomic nuclei. It has recently been shown that np Bremsstrahlung radiation in heavy ion reactions can be used to extract HMT information. The Richardson-Lucy (RL) algorithm is introduced to reconstruct the original Bremsstrahlung γ-ray energy spectrum from experimental measurements. By solving the inverse problem of the detector response to the γ-rays, the original energy spectrum of the Bremsstrahlung γ in 25 MeV/u 86Kr + 124Sn has been reconstructed and compared to isospin- and momentum-dependent Boltzmann-Uehling-Uhlenbeck (IBUU) simulations. Analysis based on a hypothesis test suggests the existence of the HMT of nucleons in nuclei, in accordance with previous conclusions. With its effectiveness demonstrated, it is feasible to apply the RL algorithm in future experiments measuring Bremsstrahlung γ-rays in heavy ion reactions.
- Published
- 2024
- Full Text
- View/download PDF
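The Richardson-Lucy unfolding step described above can be sketched generically as follows. This is a hypothetical toy implementation of the RL iteration for a discretized, column-stochastic detector response, not the authors' analysis code.

```python
def richardson_lucy(observed, response, n_iter=200):
    """Richardson-Lucy iteration: recover the original spectrum s from a
    measured spectrum m = R s, where response[i][j] is the probability
    that a photon in true-energy bin j is detected in measured bin i
    (columns assumed to sum to one)."""
    n = len(observed)
    est = [sum(observed) / n] * n  # flat initial guess
    for _ in range(n_iter):
        # fold the current estimate through the detector response
        folded = [sum(response[i][j] * est[j] for j in range(n))
                  for i in range(n)]
        ratio = [observed[i] / folded[i] if folded[i] > 0 else 0.0
                 for i in range(n)]
        # multiplicative update weighted by the response columns
        est = [est[j] * sum(response[i][j] * ratio[i] for i in range(n))
               for j in range(n)]
    return est
```

With a column-stochastic response the iteration preserves total counts, and for a noiseless, invertible toy response it converges to the true spectrum.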
32. Hypothesis testing for performance evaluation of probabilistic seasonal rainfall forecasts
- Author
-
Ke-Sheng Cheng, Gwo‑Hsing Yu, Yuan-Li Tai, Kuo-Chan Huang, Sheng‑Fu Tsai, Dong‑Hong Wu, Yun-Ching Lin, Ching-Teng Lee, and Tzu-Ting Lo
- Subjects
Probabilistic forecast ,Seasonal rainfall ,Performance evaluation ,Hypothesis test ,Science ,Geology ,QE1-996.5 - Abstract
A hypothesis testing approach, based on the probability integral transformation theorem and the Kolmogorov–Smirnov one-sample test, for performance evaluation of probabilistic seasonal rainfall forecasts is proposed in this study. By considering the probability distribution of monthly rainfalls, the approach transforms the tercile forecast probabilities into a forecast distribution and tests whether the observed data truly come from the forecast distribution. The proposed approach provides not only a quantitative measure for performance evaluation but also a cumulative probability plot for insightful interpretation of forecast characteristics such as overconfident, underconfident, mean-overestimated, and mean-underestimated. The approach has been applied to the performance evaluation of probabilistic seasonal rainfall forecasts in northern Taiwan, and it was found that forecast performance is season-dependent. Probabilistic seasonal rainfall forecasts for the Meiyu season are likely to be overconfident and mean-underestimated, while forecasts for the winter-to-spring season are overconfident. Relatively good forecast performance is observed for the summer season.
- Published
- 2024
- Full Text
- View/download PDF
33. Hypothesis testing for performance evaluation of probabilistic seasonal rainfall forecasts.
- Author
-
Cheng, Ke-Sheng, Yu, Gwo‑Hsing, Tai, Yuan-Li, Huang, Kuo-Chan, Tsai, Sheng‑Fu, Wu, Dong‑Hong, Lin, Yun-Ching, Lee, Ching-Teng, and Lo, Tzu-Ting
- Subjects
DISTRIBUTION (Probability theory) ,SEASONS ,SUMMER ,FORECASTING ,RAINFALL ,HYPOTHESIS - Abstract
A hypothesis testing approach, based on the probability integral transformation theorem and the Kolmogorov–Smirnov one-sample test, for performance evaluation of probabilistic seasonal rainfall forecasts is proposed in this study. By considering the probability distribution of monthly rainfalls, the approach transforms the tercile forecast probabilities into a forecast distribution and tests whether the observed data truly come from the forecast distribution. The proposed approach provides not only a quantitative measure for performance evaluation but also a cumulative probability plot for insightful interpretation of forecast characteristics such as overconfident, underconfident, mean-overestimated, and mean-underestimated. The approach has been applied to the performance evaluation of probabilistic seasonal rainfall forecasts in northern Taiwan, and it was found that forecast performance is season-dependent. Probabilistic seasonal rainfall forecasts for the Meiyu season are likely to be overconfident and mean-underestimated, while forecasts for the winter-to-spring season are overconfident. Relatively good forecast performance is observed for the summer season. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
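The recipe in the abstract — probability-integral-transform the observations through the forecast distribution, then apply a one-sample Kolmogorov–Smirnov test against Uniform(0, 1) — can be sketched as follows. This illustration uses a made-up forecast CDF on [0, 1] rather than tercile rainfall forecasts.

```python
import math
import random

def pit_ks_statistic(observations, forecast_cdf):
    """Probability integral transform of the observations through the
    forecast CDF, then the one-sample Kolmogorov-Smirnov statistic
    against Uniform(0, 1): under a well-calibrated forecast the
    transformed values are uniform."""
    u = sorted(forecast_cdf(x) for x in observations)
    n = len(u)
    return max(max((i + 1) / n - u[i], u[i] - i / n) for i in range(n))

# Illustration: data uniform on [0, 1]; compare a correct forecast CDF
# with a misspecified one.
random.seed(0)
sample = [random.random() for _ in range(200)]
d_good = pit_ks_statistic(sample, lambda x: x)      # calibrated forecast
d_bad = pit_ks_statistic(sample, lambda x: x * x)   # misspecified forecast
crit_5pct = 1.36 / math.sqrt(len(sample))           # approx. 5% KS cutoff
```

Here d_bad far exceeds the approximate 5% critical value while d_good is typically well below it, mirroring how the test flags overconfident or mean-biased forecasts.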
34. Statistical tests suites analysis methods. Cryptographic recommendations.
- Author
-
Almaraz Luengo, Elena
- Abstract
In many applications, it is necessary to work with long sequences of random numbers, or at least numbers that behave as such (pseudo-random numbers). For this purpose, it is essential to verify the goodness of the sequences under study, e.g., whether they meet the properties of randomness, uniformity, and, in cryptographic applications, unpredictability. To verify these properties, hypothesis tests are used, usually grouped into sets of tests known as batteries or suites. The design of these suites is a task of vital importance, and some rules must be followed. On the one hand, the coverage of a suite must be broad; it must check the properties of the sequences from different points of view. On the other hand, a suite with a very large number of tests is expensive in terms of execution time and computational performance, yet this consideration is ignored in most of the test suites in use. There are approximately 50 randomness tests in the literature, and each test suite collects many of them without further analysis of the suite's construction. It is important to analyze the possible relationships between the tests that constitute a suite in order to eliminate, if necessary, redundant tests that would slow down the suite. This paper reviews the methods that have been used in the literature to analyze statistical test suites and establishes recommendations for their use in cryptography. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
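As a concrete instance of the hypothesis tests such suites bundle, here is a minimal sketch of the NIST SP 800-22 frequency (monobit) test, one of the simplest of the roughly 50 randomness tests in the literature; it is not taken from any particular suite's code.

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: under the randomness
    hypothesis the +/-1 sum of the bits, scaled by sqrt(n), is
    approximately N(0, 1); returns the two-sided p-value."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))
```

A suite combines many such tests; the redundancy analysis the paper reviews asks which of them measure overlapping properties of the sequence.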
35. Additive autoregressive models for matrix valued time series.
- Author
-
Zhang, Hong‐Fan
- Subjects
- *
TIME series analysis , *LEAST squares , *ASYMPTOTIC distribution , *AUTOREGRESSION (Statistics) , *MATRIX effect , *STATIONARY processes , *AUTOREGRESSIVE models - Abstract
In this article, we develop additive autoregressive models (Add‐ARM) for the time series data with matrix valued predictors. The proposed models assume separable row, column and lag effects of the matrix variables, attaining stronger interpretability when compared with existing bilinear matrix autoregressive models. We utilize the Gershgorin's circle theorem to impose some certain conditions on the parameter matrices, which make the underlying process strictly stationary. We also introduce the alternating least squares estimation method to solve the involved equality constrained optimization problems. Asymptotic distributions of the parameter estimators are derived. In addition, we employ hypothesis tests to run diagnostics on the parameter matrices. The performance of the proposed models and methods is further demonstrated through simulations and real data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Küçük Örneklem Büyüklüğüne Sahip 2x2 Çapraz Tablolar İçin Ki-Kare Yöntemlerinin Karşılaştırılması: Bir Simülasyon Çalışması.
- Author
-
DOĞAN, İsmet and DOĞAN, Nurhan
- Subjects
- *
CONTINGENCY tables , *SAMPLE size (Statistics) , *DEGREES of freedom , *TEST methods , *HYPOTHESIS - Abstract
Objective: The aim of this study is to introduce and compare the classical and continuity-corrected chi-square tests used in 2x2 contingency tables. Material and Methods: Chi-square tests with 1 (one) degree of freedom were considered, because these tests are seriously affected by the discontinuity of the data. Using the Python random library, data were generated for 4 different sample sizes in the range 10 ≤ n ≤ 25: in deriving each data set, first the cell (a, b, c, or d) was selected, and then the value to be assigned to that cell was determined. The study used 246 different data sets for n=10, 756 for n=15, 958 for n=20, and 963 for n=25. The methods were compared using both the rejection percentage of each method for the hypothetical H0 at different sample sizes and significance levels, and the rejection/rejection, rejection/acceptance, acceptance/rejection, and acceptance/acceptance rates of the hypothetical H0 between pairs of methods. Results: The results do not single out one of the considered methods as the best among all those proposed; different methods stand out at different sample sizes and significance levels, which means a study's result cannot be interpreted unambiguously. All methods were affected by the sample size and significance level: as the sample size increases and the significance level moves from 0.01 to 0.10, the rejection rates of the H0 hypothesis also increase. Conclusion: It was concluded that the chi-square test is suitable for large samples, and that if at least one of the expected values is less than 5, neither the classical nor the continuity-corrected chi-square method should be used. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
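The two families of tests being compared — the classical Pearson chi-square and the continuity-corrected (Yates) version for a 2x2 table, both on 1 degree of freedom — can be sketched as follows. This is an illustrative sketch, not the study's simulation code.

```python
import math

def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    optionally with Yates' continuity correction."""
    n = a + b + c + d
    num = abs(a * d - b * c)
    if yates:
        num = max(0.0, num - n / 2)  # continuity correction
    return n * num * num / ((a + b) * (c + d) * (a + c) * (b + d))

def chi2_sf_1df(x):
    """Upper-tail probability of chi-square with 1 df:
    P(X > x) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2))
```

The correction always shrinks the statistic, so the corrected test is more conservative — which is exactly why the two variants can disagree at small n.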
37. An Indoor Localization Algorithm of UWB and INS Fusion based on Hypothesis Testing.
- Author
-
Long Cheng, Yuanyuan Shi, Chen Cui, and Yuqing Zhou
- Subjects
INERTIAL navigation systems ,WIRELESS sensor networks ,INFORMATION technology ,KALMAN filtering ,POSITION sensors - Abstract
With the rapid development of information technology, demand for precise indoor positioning is increasing. Wireless sensor networks, the most commonly used indoor positioning sensors, play a vital part in precise indoor positioning. However, obstacles and other uncontrollable factors degrade indoor localization accuracy. Ultra-wide band (UWB) can achieve high-precision, centimeter-level positioning. The inertial navigation system (INS), a fully self-contained guidance system, also has high positioning accuracy. Combining UWB and INS can both reduce the impact of non-line-of-sight (NLOS) conditions on localization and mitigate the accumulated error of the inertial navigation system. In this paper, a fused UWB and INS positioning method is presented. The UWB data are first clustered using Fuzzy C-means (FCM), and a Z hypothesis test is proposed to determine whether a link containing a beacon node includes an NLOS distance. If it does, the beacon node is removed; otherwise it is used to localize the mobile node via least-squares localization. When fewer than three beacon nodes remain, a robust extended Kalman filter with M-estimation is used to localize the mobile node. The UWB estimate is merged with the INS data using an extended Kalman filter to obtain the final location estimate. Simulation and experimental results indicate that the proposed method achieves superior localization precision compared with current algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Stability Analysis of GNSS Stations Affected by Samos Earthquake.
- Author
-
Özarpacı, Seda
- Subjects
GLOBAL Positioning System ,EARTHQUAKES - Abstract
An earthquake cycle can cause meters of displacement on the surface and at Global Navigation Satellite System (GNSS) stations. This study focuses on the identification of GNSS stations that have significant displacement because of a Mw 7.0 earthquake near Samos Island on 30 October 2020. The S-transformation method is used to examine 3D, 2D and 1D coordinate systems along with threshold and statistical test approaches. The highest coseismic offset among the 21 GNSS stations is displayed by SAMO, and CESM, MNTS, IZMI and IKAR also experience significant displacement. Significantly displaced stations are successfully identified in both 3D and 2D analyses. In the up component, SAMO is the only unstable station. The coordinate S-transformation method can be used in detecting unstable points in a GNSS network and provide valuable information about the effects of an earthquake on GNSS stations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. MoST: model specification test by variable selection stability.
- Author
-
Hu, Xiaonan
- Abstract
One of the major challenges in empirical studies is to construct an appropriate statistical model that accounts for the uncertainty of statistical methods and model specifications. An improperly specified model may pose risks for the interpretation of conclusions and subsequent decision-making. In this paper, we introduce a Model Specification Test (MoST) inspired by the concepts of model confidence bounds and variable selection deviation. To obtain the p-value of the proposed test, we develop an efficient single-layer bootstrap procedure. Our method can be readily applied to existing variable selection strategies without additional assumptions. Extensive numerical experiments demonstrate the feasibility and interpretability of our approach in various scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Statistical Depth in Spatial Point Process.
- Author
-
Zhou, Xinyu and Wu, Wei
- Subjects
- *
POINT processes , *MEASURING instruments - Abstract
Statistical depth is widely used as a powerful tool to measure the center-outward rank of multivariate and functional data. Recent studies have introduced the notion of depth to the temporal point process, which exhibits randomness in the cardinality as well as the distribution of the observed events. The proposed methods can effectively capture the rank of a point process in a given time interval, where a critical step is to measure the rank using inter-arrival events. In this paper, we propose to extend the depth concept to the multivariate spatial point process. In this case, the process is observed at multi-dimensional locations and there are no conventional inter-arrival events as in the temporal process. We adopt the newly developed depth in metric spaces by defining two different metrics, namely the penalized metric and the smoothing metric, to fully explore depth in the spatial point process. The mathematical properties and the large sample theory, as well as depth-based hypothesis testing, are thoroughly discussed. We then use several simulations to illustrate the effectiveness of the proposed depth method. Finally, we apply the new method to a real-world dataset and obtain desirable ranking performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Two New Tests for the Relationships of Stochastic Dominance.
- Author
-
Zhuang, Weiwei, He, Haowei, and Qiu, Guoxin
- Subjects
- *
STOCHASTIC dominance , *MONTE Carlo method - Abstract
To test the relationships of stochastic dominance, this paper first proposes two new consistent K-S-type statistics based on a quantile rule. Then, we consider their asymptotic properties. We introduce the bootstrap method to calculate the p-values of our proposed tests and use the Monte Carlo method to compare the power of our proposed test with the DD test and the BD test. The simulations showed that our test performed better than these two tests. Finally, we applied our proposed tests to compare the visibility of four provinces in China and compared the results with the DD test and BD test. The empirical results showed that the conclusions of our proposed method are more consistent with the dominance relationships of the four provinces. For a given significance level, the results of our proposed test demonstrate that the visibility of Jiangxi province is the best. Next is the visibility of Hubei province, which outperforms that of Anhui province. The visibility of Hunan province is the poorest. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
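To make the K-S-type idea concrete, here is a minimal one-sided statistic for first-order stochastic dominance built on plain empirical CDFs. This is only an illustration of the general construction; the paper's statistics use a quantile rule and bootstrap p-values, which are not reproduced here.

```python
def ecdf(sample, t):
    """Empirical CDF of the sample evaluated at t."""
    return sum(1 for v in sample if v <= t) / len(sample)

def sd1_ks_statistic(xs, ys):
    """One-sided K-S-type statistic for 'X first-order stochastically
    dominates Y': under dominance F_X(t) <= F_Y(t) for all t, so
    sup_t (F_X(t) - F_Y(t)) should be near zero."""
    grid = sorted(set(xs) | set(ys))
    return max(ecdf(xs, t) - ecdf(ys, t) for t in grid)
```

For instance, a sample shifted upward by a constant dominates the original sample, so the statistic is zero in one direction and strictly positive in the other.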
42. Hypothesis Test to Compare Two Paired Binomial Proportions: Assessment of 24 Methods.
- Author
-
Roldán-Nofuentes, José Antonio, Sheth, Tulsi Sagar, and Vera-Vera, José Fernando
- Subjects
- *
MONTE Carlo method , *BINOMIAL distribution , *LIKELIHOOD ratio tests , *CORONARY disease , *HEART disease diagnosis , *SAMPLE size (Statistics) - Abstract
The comparison of two paired binomial proportions is a topic of interest in statistics, with important applications in medicine. There are different methods in the statistical literature to solve this problem, and the McNemar test is the best known of all of them. The problem has been solved from a conditioned perspective, only considering the discordant pairs, and from an unconditioned perspective, considering all of the observed values. This manuscript reviews the existing methods to solve the hypothesis test of equality for the two paired proportions and proposes new methods. Monte Carlo simulation methods were carried out to study the asymptotic behaviour of the methods studied, giving some general rules of application depending on the sample size. In general terms, the Wald test, the likelihood-ratio test, and two tests based on association measures in 2 × 2 tables can always be applied, whatever the sample size is, and if the sample size is large, then the McNemar test without a continuity correction and the modified Wald test can also be applied. The results have been applied to a real example on the diagnosis of coronary heart disease. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
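McNemar's test, the best known of the methods assessed, conditions on the discordant pair counts b and c only. Below is a sketch of the conditional statistic with and without the continuity correction, using illustrative counts rather than data from the paper.

```python
import math

def mcnemar(b, c, corrected=True):
    """McNemar test from the discordant counts b and c of a paired 2x2
    table; returns (statistic, p-value) referred to chi-square, 1 df."""
    diff = abs(b - c)
    if corrected:
        diff = max(0.0, diff - 1)  # continuity correction
    stat = diff * diff / (b + c)
    return stat, math.erfc(math.sqrt(stat / 2))  # 1-df chi-square tail
```

As the abstract's rules of application suggest, the uncorrected version is appropriate only for large samples; the correction lowers the statistic and raises the p-value.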
43. Transparência pública de governos locais: uma análise baseada na Escala Brasil Transparente.
- Author
-
Rodrigues de Siqueira, Wender, de Souza Bermejo, Paulo Henrique, and Almeida da Silva, Luiz
- Subjects
- *
REGRESSION analysis , *GOVERNMENT information , *ACCESS to information , *LOCAL government , *SECONDARY analysis , *PUBLIC records - Abstract
To analyze whether local governments are more inclined to respond to requests for information when they have already regulated the law on access to information (LAI), a quantitative study was carried out, with hypothesis testing and regression analysis, on secondary data from the Brazil Transparent Scale, supported by SPSS software. The results show that LAI regulation affects the passive transparency of Brazilian municipalities, influences the degree of compliance with requests for public records, and affects the deadline for complying with government information requests. The research helps reduce the lack of knowledge about passive transparency, particularly in relation to LAI regulation in local governments, and presents empirical evidence of a relationship between LAI regulation and the outcomes of information requests. These results can support and encourage practices that guarantee the right of access to information in the public sector. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. A Selective Review on Information Criteria in Multiple Change Point Detection.
- Author
-
Gao, Zhanzhongyu, Xiao, Xun, Fang, Yi-Ping, Rao, Jing, and Mo, Huadong
- Subjects
- *
CHANGE-point problems , *WIND turbines , *STATISTICS , *AKAIKE information criterion - Abstract
Change points indicate significant shifts in the statistical properties of data streams at some time points. Detecting change points efficiently and effectively is essential for understanding the underlying data-generating mechanism in modern data streams with versatile parameter-varying patterns. However, locating multiple change points in noisy data is a highly challenging problem. Although the Bayesian information criterion has been proven to be an effective way of selecting multiple change points in an asymptotic sense, its finite sample performance can be deficient. In this article, we review a list of information criterion-based methods for multiple change point detection, including the Akaike information criterion, the Bayesian information criterion, minimum description length, and their variants, with an emphasis on practical applications. Simulation studies are conducted to investigate the actual performance of different information criteria in detecting multiple change points with possible model mis-specification, for practitioners. A case study on the SCADA signals of wind turbines demonstrates the actual change point detection power of different information criteria. Finally, some key challenges in the development and application of multiple change point detection are presented for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
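To make the information-criterion idea concrete in the simplest setting — one possible change in mean, with a Gaussian-style BIC charging log(n) per extra mean parameter — here is a toy sketch. It is an illustration of the principle, not one of the reviewed algorithms.

```python
import math

def bic_single_changepoint(x):
    """Locate a single change in mean by BIC: compare the no-change
    model with every split, charging each extra mean parameter a
    log(n) penalty; returns the split index, or None if no change."""
    n = len(x)
    def rss(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    # BIC = n * log(RSS / n) + k * log(n), k = number of mean parameters
    best_bic = n * math.log(max(rss(x), 1e-12) / n) + math.log(n)
    best_t = None
    for t in range(2, n - 1):
        r = max(rss(x[:t]) + rss(x[t:]), 1e-12)
        bic = n * math.log(r / n) + 2 * math.log(n)
        if bic < best_bic:
            best_bic, best_t = bic, t
    return best_t
```

Replacing log(n) by 2 gives the AIC variant; the review's point is precisely that such penalty choices drive finite-sample behaviour.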
45. A hypothesis test for the domain of attraction of a random variable.
- Author
-
Olivero, Héctor and Talay, Denis
- Subjects
- *
BROWNIAN bridges (Mathematics) , *DISTRIBUTION (Probability theory) , *RANDOM variables , *STOCHASTIC processes , *STOCHASTIC systems , *MARTINGALES (Mathematics) - Abstract
In this work, we address the problem of detecting whether the sampled probability distribution of a random variable V has an infinite first moment. This issue is notably important when the sample results from complex numerical simulation methods; such a situation occurs, for example, when one simulates stochastic particle systems with complex and singular McKean–Vlasov interaction kernels. As stated, the detection problem is ill-posed. We thus propose and analyze an asymptotic hypothesis test for independent copies of a given random variable which is supposed to belong to an unknown domain of attraction of a stable law. The null hypothesis H0 is: 'V is in the domain of attraction of the Normal law', and the alternative hypothesis is H1: 'V is in the domain of attraction of a stable law with index smaller than 2'. Our key observation is that V cannot have a finite second moment when H0 is rejected (and therefore H1 is accepted). Surprisingly, we find it useful to derive our test from the statistics of random processes. More precisely, our hypothesis test is based on a statistic inspired by methodologies for determining whether a semimartingale has jumps from the observation of a single path at discrete times. We justify our test by proving asymptotic properties of discrete-time functionals of Brownian bridges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. A coincidence detection perspective for the maximum mean discrepancy.
- Author
-
Montalvão, Jugurta, Duarte, Dami, and Boccato, Levy
- Subjects
- *
COINCIDENCE circuits , *NULL hypothesis , *COINCIDENCE , *INTUITION , *DETECTORS - Abstract
An alternative perspective is proposed for the Maximum Mean Discrepancy (MMD), in which coincidence detectors replace Gaussian kernels. It is conjectured that coincidence-based statistics may be a relevant factor behind MMD, for it may explain why MMD works even for small high-dimensional sets of observations. It is further shown how this proposed perspective can be used to simplify interpretations in MMD-based tests, including a straightforward computation of thresholds for hypothesis tests, which is done through the Grassberger–Procaccia method, originally proposed for intrinsic dimensionality estimation. Experimental results corroborate the conjecture that an MMD based on coincidence detection would perform almost equivalently to the MMD based on (frequently used) Gaussian kernels, with advantages in terms of interpretability and computational load. • Gaussian kernels in MMD can be replaced with coincidence detectors. • The use of coincidence detectors simplifies MMD intuition. • Coincidence based MMD allows problem-tuned coincidence and threshold definitions. • The "birthday problem" effect has a relevant role in how MMD works. • Grassberger–Procaccia analysis can be used to set null hypothesis thresholds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
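For comparison with the coincidence-detector variant, the standard unbiased Gaussian-kernel MMD² estimator can be sketched in one dimension as follows; this is the generic textbook form, not the paper's implementation.

```python
import math
import random

def mmd2_unbiased(xs, ys, sigma=1.0):
    """Unbiased estimator of squared MMD between two 1-D samples
    under a Gaussian kernel with bandwidth sigma."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * sigma ** 2))
    m, n = len(xs), len(ys)
    kxx = sum(k(xs[i], xs[j]) for i in range(m) for j in range(m) if i != j)
    kyy = sum(k(ys[i], ys[j]) for i in range(n) for j in range(n) if i != j)
    kxy = sum(k(x, y) for x in xs for y in ys)
    return kxx / (m * (m - 1)) + kyy / (n * (n - 1)) - 2.0 * kxy / (m * n)

# Illustration: two samples from the same law versus a shifted one.
random.seed(42)
same_a = [random.gauss(0, 1) for _ in range(60)]
same_b = [random.gauss(0, 1) for _ in range(60)]
shifted = [random.gauss(3, 1) for _ in range(60)]
mmd_same = mmd2_unbiased(same_a, same_b)   # near zero under H0
mmd_diff = mmd2_unbiased(same_a, shifted)  # clearly positive
```

The Gaussian kernel k(a, b) rewards near-coincidences of sample points, which is the intuition the coincidence-detection perspective makes explicit.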
47. A new bivariate autoregressive model driven by logistic regression.
- Author
-
Wang, Zheqi, Wang, Dehui, and Cheng, Jianhua
- Subjects
- *
AUTOREGRESSIVE models , *LOGISTIC regression analysis , *STATISTICAL models , *CHRONIC myeloid leukemia , *TIME series analysis , *BIVARIATE analysis , *AUTOREGRESSION (Statistics) - Abstract
In this paper, we propose a new bivariate random coefficient autoregressive (BOD-RCAR(1)) process driven by both explanatory variables and past observations. Firstly, some statistical properties of this model are derived. Secondly, three methods are used for estimating the unknown parameters: conditional least squares (CLS), conditional maximum likelihood (CML) and maximum empirical likelihood (MEL). The asymptotic properties of the estimators are given. Besides, two kinds of test based on empirical likelihood (EL) are established. A simulation experiment is presented to demonstrate the performance of the proposed method. Finally, an application to a real data example is investigated to assess the performance of the model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Statistical Rules in Scientific Reports (The basics).
- Author
-
Pakgohar, Alireza and Mehrannia, Hossein
- Subjects
- *
TECHNICAL reports , *DESCRIPTIVE statistics , *TECHNICAL writing , *STATISTICS - Abstract
Objective: Scientific papers usually contain information and data that we call statistics. We expect statistics to provide a suitable description of the data by summarizing them. Scientific journals have specific frameworks for this work so that writers and readers can understand statistical concepts through a common terminology. In this paper, we guide the reader in writing a scientific report without confusion or clutter, and we offer notes on reporting descriptive statistics, constructing tables properly, and drawing visual graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Test of dominance relations based on kernel smoothing method.
- Author
-
Weiwei Zhuang, Suming Yao, and Guoxin Qiu
- Subjects
- *
SOCIAL dominance , *EMPIRICAL research , *KERNEL functions - Abstract
New test statistics for weak Lorenz dominance and weak generalised Lorenz dominance are proposed, and some asymptotic properties of the test statistics are obtained. The simulation results show that our test statistics improve test power compared with non-smoothed empirical methods. Finally, we apply our inference framework to a real example. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. An Accurate and Robust Comparison Method of the Intelligence for Two Unmanned Swarms Based on the Improved CRITIC and Hypothesis Test
- Author
-
Wu, Wenliang, Wang, Chenyi, Tuo, Mingfu, Zhou, Xingshe, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Fu, Wenxing, editor, Gu, Mancang, editor, and Niu, Yifeng, editor
- Published
- 2023
- Full Text
- View/download PDF