77 results for "Statistical hypothesis testing"
Search Results
2. Prediction of changes in seafloor depths based on time series of bathymetry observations: Dutch North Sea case
- Abstract
Guaranteeing safety of navigation within the Netherlands Continental Shelf (NCS), while efficiently using its ocean mapping resources, is a key task of the Netherlands Hydrographic Service (NLHS) and Rijkswaterstaat (RWS). Resurvey frequencies depend on seafloor dynamics, and the aim of this research is to model the seafloor dynamics in order to predict changes in seafloor depth that would require resurveying. Characterisation of the seafloor dynamics is based on available time series of bathymetry data obtained from the acoustic remote sensing methods of both single-beam echosounding (SBES) and multibeam echosounding (MBES). These time series are used to define a library of mathematical models describing the seafloor dynamics in relation to spatial and temporal changes in depth. An adaptive, functional model selection procedure is developed using a nodal analysis (0D) approach, based on statistical hypothesis testing with a combination of the Overall Model Test (OMT) statistic and the Generalised Likelihood Ratio Test (GLRT). This approach ensures that each model has an equal chance of being selected when more than one hypothesis is plausible for areas that exhibit varying seafloor dynamics, allowing a more flexible and rigorous decision on the choice of the nominal model assumption. The addition of piecewise linear models to the library offers another characterisation of the trends in the nodal time series. This has led to an optimised model selection procedure and parameterisation of each nodal time series, which is used for the spatial and temporal predictions of the changes in the depths and associated uncertainties. The model selection results show that the models can detect the changes in the seafloor depths with spatial consistency and similarity, particularly in the shoaling areas where tidal sandwaves are present.
The predicted changes in depths and uncertainties are translated into a probability risk-alert map by evaluating the probabilities of an indicator variable.
- Published
- 2021
- Full Text
- View/download PDF
3. Effect of plyometric training on the explosive strength of pubescent girls practising volleyball
- Published
- 2021
5. Survival of “La Liga” players
- Abstract
This work presents a survival analysis of the football players who took part in “La Liga”, the Spanish first division, from the start of the 2009/2010 season. Here, survival does not refer to life expectancy but to the time players remain in the competition, so the failure event is leaving the competition. The study first examines how each predictor variable affects players' survival time, and then uses the Cox model to find which variables best predict how long players last before leaving the competition.
- Published
- 2020
6. Application of a model-free ANN approach for SHM of the Old Lidingö Bridge
- Abstract
This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained and their ability to predict structural behaviour is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure and thus support decision making regarding its maintenance.
- Published
- 2019
7. A large sample test for the length of memory of stationary symmetric stable random fields via nonsingular Z^d-actions
- Abstract
Based on the ratio of two block maxima, we propose a large sample test for the length of memory of a stationary symmetric α-stable discrete parameter random field. We show that the power function converges to 1 as the sample size increases to ∞ under various classes of alternatives having longer memory in the sense of Samorodnitsky (2004). Ergodic theory of nonsingular Z^d-actions plays a very important role in the design and analysis of our large sample test.
- Published
- 2018
- Full Text
- View/download PDF
8. Condition monitoring of wind turbine structures through univariate and multivariate hypothesis testing
- Abstract
This chapter presents a fault detection method through uni- and multivariate hypothesis testing for wind turbine (WT) faults. A data-driven approach is used based on supervisory control and data acquisition (SCADA) data. First, using a healthy WT data set, a model is constructed through multiway principal component analysis (MPCA). Afterward, given a WT to be diagnosed, its data are projected into the MPCA model space. Since the turbulent wind is a random process, the dynamic response of the WT can be considered a stochastic process, and thus the acquired SCADA measurements are treated as a random process. The objective is to determine whether the distribution of the multivariate random samples obtained from the WT to be diagnosed (healthy or not) is related to the distribution of the baseline. To this end, a test for the equality of population means is performed in both the univariate and the multivariate cases. Ultimately, the test results establish whether the WT is healthy or faulty. The performance of the proposed method is validated using an advanced benchmark comprising a 5-MW WT subject to various actuator and sensor faults of different types.
- Published
- 2018
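The equality-of-population-means check at the heart of this approach can be sketched, for the univariate case, with a Welch two-sample t statistic. The numbers below are invented toy values standing in for one SCADA channel, not the chapter's benchmark data:

```python
import math
import statistics as st

def welch_t(x, y):
    """Welch's t statistic for H0: equal population means (unequal variances)."""
    nx, ny = len(x), len(y)
    vx, vy = st.variance(x), st.variance(y)
    se = math.sqrt(vx / nx + vy / ny)
    return (st.mean(x) - st.mean(y)) / se

# Toy "healthy baseline" vs "turbine to diagnose" samples of one channel.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7]
suspect = [10.9, 11.2, 10.8, 11.0, 11.1, 10.7, 11.3, 10.9]

t = welch_t(suspect, baseline)
# A large |t| (well beyond ~2 for these sample sizes) suggests the means
# differ, i.e. the turbine's behaviour has drifted from the healthy baseline.
print(round(t, 2))
```

In the multivariate case the same idea is applied to several channels at once with a Hotelling-type statistic, as the chapter describes.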
9. Tests for the parallelism and flatness hypotheses of multi-group profile analysis for high-dimensional elliptical populations
- Abstract
Journal of Multivariate Analysis, 2017, vol. 162, p. 82-92
- Published
- 2017
10. On the application of discrete-time Volterra series for the damage detection problem in initially nonlinear systems
- Author
-
Shiki, Sidney B
- Abstract
Nonlinearities in the dynamical behavior of mechanical systems can degrade the performance of damage detection features based on a linearity assumption. In this article, a discrete Volterra model is used to monitor the prediction error of a reference model representing the healthy structure. This kind of model can separate the linear and nonlinear components of the response of a system. This property of the model is used to compare the consequences of assuming a nonlinear model during the nonlinear regime of a magneto-elastic system. Hypothesis tests are then employed to detect variations in the statistical properties of the damage features. After these analyses, conclusions are made about the application of Volterra series in damage detection.
- Published
- 2017
11. Does RAIM with correct exclusion produce unbiased positions?
- Abstract
As the navigation solution of exclusion-based RAIM follows from a combination of least-squares estimation and a statistically based exclusion-process, the computation of the integrity of the navigation solution has to take the propagated uncertainty of the combined estimation-testing procedure into account. In this contribution, we analyse, theoretically as well as empirically, the effect that this combination has on the first statistical moment, i.e., the mean, of the computed navigation solution. It will be shown, although statistical testing is intended to remove biases from the data, that biases will always remain under the alternative hypothesis, even when the correct alternative hypothesis is properly identified. The a posteriori exclusion of a biased satellite range from the position solution will therefore never remove the bias in the position solution completely.
- Published
- 2017
- Full Text
- View/download PDF
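The abstract's central claim, that a bias survives even a correct exclusion, can be illustrated with a toy scalar analogue of the estimate-test-exclude loop. All parameters here are hypothetical; this is not the paper's GNSS model:

```python
import random
import statistics as st

random.seed(7)
TRUE_VALUE, SIGMA, BIAS, N, TRIALS = 0.0, 1.0, 4.0, 10, 20000
THRESHOLD = 3.0  # flag an observation whose standardised residual exceeds 3

kept_estimates = []
for _ in range(TRIALS):
    obs = [TRUE_VALUE + random.gauss(0, SIGMA) for _ in range(N)]
    obs[0] += BIAS  # observation 0 plays the role of the biased satellite range
    mean_all = st.mean(obs)
    residuals = [o - mean_all for o in obs]
    worst = max(range(N), key=lambda i: abs(residuals[i]))
    # residual of one observation about the mean has std sigma*sqrt((N-1)/N)
    if worst == 0 and abs(residuals[0]) > THRESHOLD * SIGMA * ((N - 1) / N) ** 0.5:
        # correct exclusion: drop the biased observation and re-estimate
        kept_estimates.append(st.mean(obs[1:]))

conditional_mean = st.mean(kept_estimates)
print(len(kept_estimates), round(conditional_mean, 3))
```

Unconditionally, the post-exclusion mean is unbiased; the small nonzero conditional mean appears because the exclusion event itself depends on the remaining observations, which is exactly the estimation-testing interaction the paper analyses.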
12. Scientific Theory and Practice: Six Introductory Texts
- Abstract
This is a textbook in the philosophy of science.
- Published
- 2017
15. Wind turbine fault detection through principal component analysis and statistical hypothesis testing
- Abstract
This work addresses the problem of online fault detection of an advanced wind turbine benchmark under actuator (pitch and torque) and sensor (pitch angle measurement) faults of different types. The fault detection scheme starts by computing the baseline principal component analysis (PCA) model from the healthy wind turbine. Subsequently, when the structure is inspected or supervised, new measurements are obtained and projected into the baseline PCA model. When both sets of data are compared, statistical hypothesis testing is used to decide whether or not the wind turbine presents some fault. The effectiveness of the proposed fault-detection scheme is illustrated by numerical simulations on a well-known large wind turbine in the presence of wind turbulence and realistic fault scenarios.
- Published
- 2016
17. Wind turbine fault detection through principal component analysis and statistical hypothesis testing
- Abstract
This paper addresses the problem of online fault detection of an advanced wind turbine benchmark under actuator (pitch and torque) and sensor (pitch angle measurement) faults of different types: fixed value, gain factor, offset and changed dynamics. The fault detection scheme starts by computing the baseline principal component analysis (PCA) model from the healthy or undamaged wind turbine. Subsequently, when the structure is inspected or supervised, new measurements are obtained and projected into the baseline PCA model. When the two sets of data (the baseline and the data from the current wind turbine) are compared, statistical hypothesis testing is used to decide whether the wind turbine presents some damage, fault or misbehavior. The effectiveness of the proposed fault-detection scheme is illustrated by numerical simulations on a well-known large offshore wind turbine in the presence of wind turbulence and realistic fault scenarios. The obtained results demonstrate that the proposed strategy provides early fault identification, thereby giving operators sufficient time to make more informed decisions regarding the maintenance of their machines.
- Published
- 2016
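A minimal sketch of this kind of PCA-plus-test scheme, assuming synthetic three-channel data rather than the benchmark's measurements (the latent-factor setup is invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, shift=0.0):
    """Three correlated 'sensor' channels driven by one latent factor."""
    factor = rng.normal(size=n)
    noise = 0.1 * rng.normal(size=(n, 3))
    return factor[:, None] * np.ones(3) / np.sqrt(3) + noise + shift

healthy = simulate(500)            # baseline data set
faulty = simulate(200, shift=0.5)  # offset fault on every channel

# Baseline PCA model: mean-centre, eigendecomposition of the covariance.
mu = healthy.mean(axis=0)
cov = np.cov(healthy - mu, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, -1]               # dominant principal direction

# Project both data sets into the baseline model and test score means.
s_healthy = (healthy - mu) @ pc1
s_faulty = (faulty - mu) @ pc1
se = np.sqrt(s_healthy.var(ddof=1) / len(s_healthy)
             + s_faulty.var(ddof=1) / len(s_faulty))
t = (s_faulty.mean() - s_healthy.mean()) / se
print(abs(t) > 1.96)  # reject equality of means -> flag a fault
```

The sign of an eigenvector is arbitrary, so the decision uses |t|; the benchmark papers apply the same reject/accept logic to their simulated turbine data.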
18. The Central Limit Theorem in Statistics
- Abstract
Central limit theorems are among the most important results of modern probability theory. They are indispensable for basic methods in applied mathematics, and in particular for mathematical statistics. In a first step, the fundamental theoretical concepts, namely the theorems of Moivre-Laplace, Feller-Lévy and Lindeberg-Feller, are comprehensively presented and proven. Furthermore, there is a brief discussion of the rate of convergence, especially for the central limit theorem of Feller-Lévy. In the second part, possible areas of application of the central limit theorems in inferential statistics are pointed out and illustrated with practical examples, focusing on statistical interval estimation and statistical hypothesis testing for the expected value. This work is intended as a first overview of, and introduction to, the subject for students of mathematics and statistics with basic knowledge of measure theory.
- Published
- 2015
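The CLT-based hypothesis test for the expected value that the second part of the thesis discusses can be sketched as a two-sided z test with known standard deviation (toy data, invented for illustration):

```python
import math

def z_test_mean(data, mu0, sigma):
    """Two-sided z test for H0: mean == mu0, with known sigma (CLT-based)."""
    n = len(data)
    z = (sum(data) / n - mu0) / (sigma / math.sqrt(n))
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

sample = [2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.4, 2.0]
z, p = z_test_mean(sample, mu0=2.0, sigma=0.2)
print(round(z, 2), round(p, 3))  # modest z, p > 0.05: do not reject H0
```

The CLT justifies treating the standardised sample mean as approximately standard normal even when the individual observations are not normal, which is what licenses the `erfc`-based p-value.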
19. 'Choose your tool wisely: U charts are more informative than run charts with or without tests of significance. A comment on Unbeck et al. (2014), Unbeck et al. (2013) and Kottner (2014)': Authors' response (Unbeck and colleagues)
- Published
- 2015
- Full Text
- View/download PDF
20. Benford's Law and Text Attribution
- Abstract
The distribution of the first significant digit in the numerals of connected texts is studied. Benford's law is found to hold approximately for them. Deviations from Benford's law are statistically stable authorial peculiarities that, under certain conditions, make it possible to distinguish parts of a text written by different authors.
- Published
- 2015
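A minimal sketch of the kind of first-digit comparison described here, using powers of 2 as stand-in data (the paper itself analyses numerals in natural-language texts):

```python
import math
from collections import Counter

# Benford's expected first-digit proportions: P(d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# First significant digits of 2^1 .. 2^200, a classic Benford-like sequence.
digits = [int(str(2 ** k)[0]) for k in range(1, 201)]
counts = Counter(digits)

# Pearson chi-square statistic of the observed counts against Benford.
n = len(digits)
chi2 = sum((counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in benford.items())
print(counts.most_common(1)[0][0], round(chi2, 2))
```

A small chi-square value indicates agreement with Benford's law; the paper's attribution idea is that the residual deviations, not the law itself, carry the authorial signal.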
23. Personality and perceptions of situations from the Thematic Apperception Test: quantifying alpha and beta press
- Abstract
Summary: Theoretical models posit that the perception of situations consists of two components: an objective component attributable to the situation being perceived and a subjective component attributable to the person doing the perceiving (Murray, 1938; Rauthmann, 2012; Sherman, Nave & Funder, 2013; Wagerman & Funder, 2009). In this study, participants (N = 186) viewed three pictures from the Thematic Apperception Test (TAT; Murray, 1938) and rated the situations contained therein using a new measure of situations, the Riverside Situational Q-Sort (RSQ; Wagerman & Funder, 2009). The RSQ was used to calculate the overall agreement among ratings of situations and to examine the objective and subjective properties of the pictures. The results support a two-component theory of situation perception. Both the objective situation and the person perceiving that situation contributed to overall perception. Further, distinctive perceptions of situations were consistent across pictures and were associated with the Big Five personality traits in a theoretically meaningful manner. For instance, individuals high in Openness indicated that these pictures contained comparatively more humor (r = .26) and intellectual stimuli (r = .20), and raised more moral or ethical issues (r = .19), than individuals low on this trait. Includes bibliography. Thesis (M.A.), Florida Atlantic University, 2013.
- Published
- 2013
24. Simulation study for change-point of AR-GARCH models and rank tests based trading strategies
- Abstract
Estimating the location of the change-point in statistical models is very important. This thesis first gives simulation studies of the performance of the estimated coefficients and the change-point in the AR-GARCH model. As an application, structural-change AR-GARCH models are used to analyze the Hang Seng Index. The thesis also proposes a new indicator, called the Moving Signed-Rank Statistic, to detect buy and sell signals in price series. We use the bootstrap approach to estimate the quantiles of our test statistic and then use them as trading signals. The performance of the proposed trading strategies is evaluated.
- Published
- 2012
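The Moving Signed-Rank Statistic is the thesis's own proposal; a hypothetical minimal version, built from the classical Wilcoxon signed-rank statistic over a sliding window of price changes, might look like this (toy prices; the thesis estimates the signal cut-offs by bootstrap):

```python
def signed_rank(diffs):
    """Wilcoxon-style signed-rank statistic W+ for a window of differences.

    Assumes no zero differences and no ties in absolute value.
    """
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    return sum(rank + 1 for rank, i in enumerate(ranked) if diffs[i] > 0)

prices = [100, 101, 99, 102, 98, 103, 109, 102, 110, 119]
window = 5
changes = [b - a for a, b in zip(prices, prices[1:])]

# Moving statistic over overlapping windows of recent price changes;
# unusually high/low values would be read as buy/sell signals, with the
# thresholds taken from bootstrap quantiles in the thesis.
moving = [signed_rank(changes[i:i + window]) for i in range(len(changes) - window + 1)]
print(moving)
```

For a window of length 5 the statistic ranges from 0 (all changes negative) to 15 (all positive), so extreme values flag strongly one-sided recent movement.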
26. Bayesian inference for the lognormal distribution
- Author
-
Harvey, Justin
- Abstract
This thesis is concerned with objective Bayesian analysis (primarily estimation, hypothesis testing and confidence statements) of data that are lognormally distributed. The lognormal distribution is currently used extensively to describe the distribution of positive random variables that are right-skewed, especially data pertaining to occupational health and other biological data. In Chapter 1 we begin with inference on the products of means and medians as discussed in Menzefricke (1991). Exposure risk modeling is a particular application of this setting. Exact posterior moments are derived and compared to Monte Carlo simulation techniques. Chapters 2 to 4 are concerned with inference on the mean of the lognormal distribution in various settings. Other authors, namely Zou, Taleban and Huo (2009), have proposed procedures involving the so-called "method of variance estimates recovery" (MOVER), while an alternative approach based on simulation is the so-called generalized confidence interval, discussed by Krishnamoorthy and Mathew (2003). In this thesis we compare the coverage of the MOVER-based confidence interval estimates and the generalized confidence interval procedure to that of credibility intervals obtained using Bayesian methodology with a variety of prior distributions, to assess the appropriateness of each. An extensive simulation study is conducted to evaluate the coverage accuracy and interval width of the proposed methods. For the Bayesian approach both the equal-tailed and highest posterior density (HPD) credibility intervals are presented. Various prior distributions (the independence Jeffreys prior; the Jeffreys-rule prior, namely the square root of the determinant of the Fisher information matrix; reference priors; and probability-matching priors) are evaluated and compared to determine which give the best coverage with the most efficient interval width. The simulation studies show that the constructed Bayesian con
- Published
- 2012
28. Multimodal evidence
- Author
-
Stegenga, Jacob
- Abstract
We often have a variety of evidence available for a given hypothesis. For example, the efficacy of pharmaceuticals is studied with diverse experiments on animals, humans, and cells. I call evidence like this multimodal; a "mode" is a particular way of learning about the world: a technique, apparatus, or experiment. Philosophers have appealed to multimodal evidence to make robustness claims, to advance various forms of scientific realism, and to resist skeptical worries. The depth of such arguments, though, has advanced little since Whewell. What are the conditions under which such arguments are compelling? I raise methodological and epistemological arguments, and use examples from biology and medicine, to identify demanding constraints for successful appeals to multimodal evidence.
- Published
- 2011
29. Novel pseudo random bit generator for improved security. (c2008)
- Abstract
Due to the emergence of new sensitive communication applications like eCommerce and online banking, secure communication, electronic identification and authentication are becoming a must. Cryptography is the technique used to secure communication through encryption of data messages. Pseudo Random Bit Generators (PRBG) are used by different cryptographic techniques as a tool to encrypt messages. While the well known Linear Feedback Shift Register (LFSR) PRBG offers good statistical properties, it offers poor security, since knowledge of 2n output bits makes the whole generated sequence predictable. This work proposes a new digital PRBG based on AD (Analog to Digital) conversion of a composed sinusoidal signal. The aim behind the new proposal is to improve the inviolability and the security offered by the classical LFSR. The autocorrelation properties of the proposed generator are studied and compared to the properties of the LFSR. Moreover, the effects of six different variables, namely the bandwidth of the analog signal, the sampling frequency, the period of the analog signal, the coding, the quantizing intervals and the number of bits per sample, are studied. The proposed PRBG is tested using the well-known "Five Basic Tests" set of statistical tests.
- Published
- 2011
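For context on the classical generator this proposal aims to improve: a Fibonacci LFSR whose taps correspond to a primitive polynomial cycles through every nonzero state, giving the maximal period 2^n - 1. A minimal 4-bit toy (not the paper's generator):

```python
def lfsr_period(taps, seed):
    """Length of the state cycle of a Fibonacci LFSR with the given tap positions."""
    n = max(taps)
    state = seed
    period = 0
    while True:
        # feedback bit = XOR of the tapped stages (1-indexed from the LSB)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << n) - 1)
        period += 1
        if state == seed:
            return period

# Taps (4, 3) correspond to a primitive degree-4 polynomial over GF(2),
# so the 4-bit register visits all 2^4 - 1 nonzero states before repeating.
print(lfsr_period((4, 3), seed=0b0001))  # → 15
```

The linearity of this recurrence is exactly what makes the sequence recoverable from 2n output bits, which is the weakness the proposed analog-conversion PRBG targets.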
31. Hypothesis testing versus confidence intervals (Prueba de hipótesis frente a intervalos de confianza)
- Abstract
The p-values, generally 0.05 or 0.01, used in statistical hypothesis tests to distinguish significant from non-significant results, are of little informative and practical value when the biomedical researcher or epidemiologist is interested in the magnitude of a study result. This communication shows the advantage of intervals by comparing the hypothesis test with confidence-interval estimation for inferring the difference between two sample proportions.
- Published
- 2010
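The contrast the abstract draws can be made concrete with a small sketch (not the article's own code; the sample counts are invented): the pooled z-test answers only "significant or not", while the Wald interval for p1 − p2 reports the magnitude of the difference.

```python
from math import sqrt, erf

def two_prop_inference(x1, n1, x2, n2, z=1.96):
    """Two-sided z-test p-value and 95% Wald CI for p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # pooled standard error, used for the test under H0: p1 = p2
    p = (x1 + x2) / (n1 + n2)
    se0 = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    zstat = diff / se0
    pval = 2 * (1 - 0.5 * (1 + erf(abs(zstat) / sqrt(2))))
    # unpooled standard error, used for the confidence interval
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return pval, (diff - z * se, diff + z * se)

pval, (lo, hi) = two_prop_inference(45, 100, 30, 100)
assert pval < 0.05          # the test only says "significant"
assert lo < 0.15 < hi       # the interval also conveys the size of the effect
```

Here both tools agree that the proportions differ, but only the interval (roughly 0.02 to 0.28 for these counts) communicates how large the difference plausibly is.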
32. Essays on testing conditional independence
- Author
-
Huang, Meng
- Abstract
Conditional independence is of interest for testing unconfoundedness assumptions in causal inference, for selecting among semiparametric models, and for testing Granger noncausality. This dissertation proposes flexible tests for conditional independence, which are simple to implement yet powerful in the sense that they are consistent and achieve root-n local power. In the literature, there are many tests available for the case in which the variables are categorical, but there are only a few nonparametric tests for the continuous case. On the other hand, in economics applications, it is common to condition on continuous variables. Chapter 1 provides a nonparametric test for continuous variables. The test statistic is a Wald-type test based on an estimator of the topological "distance" between the restricted and unrestricted probability measures corresponding to conditional independence or its absence. The distance is evaluated using a family of Generically Comprehensively Revealing (GCR) functions indexed by a nuisance parameter vector. Although the test in Chapter 1 is easy to calculate and has a tractable limiting null distribution, its consistency relies on the randomization of the choice of the nuisance parameters. In Chapter 2, I obtain a Bierens-type Integrated Conditional Moment test by integrating out the nuisance parameters. The test still achieves root-n local power, and its consistency no longer relies on the randomization. Its limiting null distribution is a functional of a mean-zero Gaussian process. I simulate the critical values by a conditional simulation approach. As an example of application, I test the key assumption of unconfoundedness in the context of estimating the returns to schooling. In applied microeconomics, many variables are categorical or binary. For example, in the returns-to-schooling example, the conditioning variables usually include a number of discrete variables such as sex, race, union or industry. However, in previous c
- Published
- 2009
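As a hedged illustration of the general idea only (not the dissertation's GCR-based Wald statistic), a simple permutation test of X ⊥ Y | Z for a discrete conditioning variable permutes Y within each level of Z, so that any dependence explained by Z alone is preserved under the null:

```python
import random

def perm_ci_test(x, y, z, n_perm=200, seed=0):
    """Permutation test of X independent of Y given discrete Z.

    Permutes y within each level of z and compares the sum of absolute
    within-stratum covariances against its permutation distribution.
    """
    rng = random.Random(seed)
    strata = {}
    for i, zi in enumerate(z):
        strata.setdefault(zi, []).append(i)

    def stat(yv):
        s = 0.0
        for idx in strata.values():
            mx = sum(x[i] for i in idx) / len(idx)
            my = sum(yv[i] for i in idx) / len(idx)
            s += abs(sum((x[i] - mx) * (yv[i] - my) for i in idx))
        return s

    t0 = stat(y)
    count = 0
    for _ in range(n_perm):
        yp = list(y)
        for idx in strata.values():      # shuffle y only within each stratum
            vals = [yp[i] for i in idx]
            rng.shuffle(vals)
            for i, v in zip(idx, vals):
                yp[i] = v
        if stat(yp) >= t0:
            count += 1
    return (count + 1) / (n_perm + 1)    # permutation p-value

# y depends on x within every stratum of z, so the test should reject
z = [i // 10 for i in range(40)]
p = perm_ci_test(list(range(40)), [i + 0.5 for i in range(40)], z)
assert p < 0.05
```

This toy statistic only detects linear within-stratum dependence; the dissertation's tests are consistent against general alternatives, which is precisely what the GCR function family buys.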
34. The role of short-term memory capacity and task experience for overconfidence in judgment under uncertainty
- Abstract
Research with general knowledge items demonstrates extreme overconfidence when people estimate confidence intervals for unknown quantities, but close to zero overconfidence when the same intervals are assessed by probability judgment. In 3 experiments, the authors investigated if the overconfidence specific to confidence intervals derives from limited task experience or from short-term memory limitations. As predicted by the naïve sampling model (P. Juslin, A. Winman, & P. Hansson, 2007), overconfidence with probability judgment is rapidly reduced by additional task experience, whereas overconfidence with intuitive confidence intervals is minimally affected even by extensive task experience. In contrast to the minor bias with probability judgment, the extreme overconfidence bias with intuitive confidence intervals is correlated with short-term memory capacity. The proposed interpretation is that increased task experience is not sufficient to cure the overconfidence with confidence intervals because it stems from short-term memory limitations.
- Published
- 2008
- Full Text
- View/download PDF
36. Sample size determination for kernel regression estimation using sequential fixed-width confidence bands
- Abstract
We consider a random design model based on independent and identically distributed pairs of observations (Xi, Yi), where the regression function m(x) is given by m(x) = E(Yi|Xi = x) with one independent variable. In a nonparametric setting the aim is to produce a reasonable approximation to the unknown function m(x) when we have no precise information about the form of the true density f(x) of X. We describe an estimation procedure for the nonparametric regression model at a given point by an appropriately constructed fixed-width (2d) confidence interval with confidence coefficient at least 1 − α, where d (> 0) and α ∈ (0, 1) are two preassigned values. Fixed-width confidence intervals are developed using both Nadaraya-Watson and local linear kernel estimators of nonparametric regression with data-driven bandwidths. The sample size was optimized using the purely sequential and two-stage sequential procedures together with asymptotic properties of the Nadaraya-Watson and local linear estimators. A large-scale simulation study was performed to compare their coverage accuracy. The numerical results indicate that the confidence bands based on the local linear estimator perform better than those constructed using the Nadaraya-Watson estimator. However, both estimators are shown to have asymptotically correct coverage properties.
- Published
- 2008
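The Nadaraya-Watson estimator at the heart of the procedure is just a kernel-weighted average of the responses near the evaluation point. The sketch below uses a Gaussian kernel, a fixed bandwidth, and noiseless toy data; the paper's data-driven bandwidths and sequential stopping rules are omitted.

```python
from math import exp

def nadaraya_watson(xs, ys, x0, h):
    """Nadaraya-Watson estimate of m(x0) with a Gaussian kernel and
    bandwidth h: a weighted average of the y's, weighting by distance
    of each x from x0."""
    w = [exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

xs = [i / 10 for i in range(21)]       # design points on [0, 2]
ys = [x * x for x in xs]               # noiseless m(x) = x^2 for illustration
est = nadaraya_watson(xs, ys, 1.0, h=0.1)
assert abs(est - 1.0) < 0.05           # close to m(1) = 1
```

The sequential procedures in the paper then keep increasing the sample size until the estimated width of the confidence interval at x0 falls below the preassigned 2d.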
37. Small sample properties of transmission disequilibrium test and related tests.
- Abstract
Cheung, Ka Wai Ricker., Thesis (M.Phil.)--Chinese University of Hong Kong, 2007., Includes bibliographical references (leaves 68-69)., Abstracts in English and Chinese., Chapter 1 --- Introduction --- p.1, Chapter 1.1 --- Basic Concepts --- p.1, Chapter 1.2 --- Linkage Disequilibrium --- p.5, Chapter 1.3 --- Transmission Disequilibrium Test --- p.7, Chapter 1.4 --- Scope of Thesis --- p.8, Chapter 2 --- Transmission Disequilibrium Test --- p.9, Chapter 2.1 --- The Model --- p.9, Chapter 2.2 --- The Data Structure and The Statistic --- p.12, Chapter 3 --- Small Sample Properties of Transmission Disequilibrium Test --- p.16, Chapter 3.1 --- Exact Distribution of TDT Statistic --- p.16, Chapter 3.2 --- Power under Alternative Hypothesis --- p.20, Chapter 3.3 --- P-Value --- p.29, Chapter 4 --- Exact P-Value and Power --- p.35, Chapter 5 --- Haplotype Relative Risk --- p.61, Chapter 6 --- Conclusion --- p.66, References --- p.68, http://library.cuhk.edu.hk/record=b5893384, Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License (http://creativecommons.org/licenses/by-nc-nd/4.0/)
- Published
- 2007
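The TDT statistic this thesis studies is a McNemar-type comparison of transmission counts. Below is a sketch of the standard asymptotic chi-square version together with the exact binomial two-sided p-value, which is what a small-sample analysis replaces the asymptotic approximation with; the counts used are invented.

```python
from math import comb, erf, sqrt

def tdt(b, c):
    """Transmission disequilibrium test. b = transmissions of the tested
    allele from heterozygous parents, c = non-transmissions. Under the
    null, b ~ Binomial(b+c, 1/2), and (b-c)^2/(b+c) is asymptotically
    chi-square with 1 df."""
    stat = (b - c) ** 2 / (b + c)
    z = sqrt(stat)
    pval = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # chi2(1) tail via normal
    return stat, pval

def tdt_exact_p(b, c):
    """Exact two-sided p-value: total Binomial(n, 1/2) probability of
    outcomes at least as imbalanced as the observed (b, c)."""
    n = b + c
    return sum(comb(n, k) for k in range(n + 1)
               if abs(2 * k - n) >= abs(b - c)) / 2 ** n

stat, p = tdt(45, 25)
assert p < 0.05                 # asymptotic approximation, large sample
assert tdt_exact_p(9, 1) < 0.05 # exact version remains valid for tiny samples
```

With only 10 informative transmissions the chi-square approximation is dubious, which is exactly the regime where the exact distribution matters.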
39. Essays on hypothesis testing in the presence of nearly integrated variables
- Author
-
Miyanishi, Masako
- Abstract
Many economic variables are observed to be highly persistent (Nelson and Plosser (1982)). In hypothesis testing, a nearly integrated regressor may cause a large size distortion. The first chapter examines this problem in the predictive regression setting. We show how spurious rejections of the hypothesis test arise in the presence of a highly persistent regressor, using four typical empirical settings, including predicting stock returns. The rejections may not indicate predictability or rejection of the model. The second chapter considers cointegration when the regressor has a near but not exact unit root. Elliott (1998) pointed out that when the data is not exactly integrated, the test statistics will be severely biased. We examine the relevance of this explanation in four empirical settings and show that rejection of the test in the presence of a near unit root variable is a common problem. We also consider alternative procedures. The third chapter examines alternative testing procedures in the predictive regression setting. One is a variable-addition method by Toda and Yamamoto (1995); others are the Supbound t-test, the Bonferroni t-test, and the Bonferroni Q-test. In terms of power, the Bonferroni Q-test outperforms the others.
- Published
- 2006
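The size distortion described can be reproduced in a toy Monte Carlo. This is a generic Stambaugh-type setup, not the chapter's exact designs: when the regressor has a near-unit root and its innovations are correlated with the regression error, the nominal 5% t-test rejects a true null far too often.

```python
import random
from math import sqrt

def rejection_rate(rho, corr, T=100, reps=500, seed=1):
    """Monte Carlo size of the t-test for b = 0 in y_t = a + b*x_{t-1} + e_t,
    where x is AR(1) with root rho and e_t has correlation `corr` with x's
    innovation. Nominal size is 5% (|t| > 1.96)."""
    rng = random.Random(seed)
    rej = 0
    for _ in range(reps):
        x, e = [0.0], []
        for _ in range(T):
            u = rng.gauss(0, 1)                       # innovation to x
            e.append(corr * u + sqrt(1 - corr ** 2) * rng.gauss(0, 1))
            x.append(rho * x[-1] + u)
        xs, ys = x[:-1], e                            # true b = 0
        mx, my = sum(xs) / T, sum(ys) / T
        sxx = sum((xi - mx) ** 2 for xi in xs)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / sxx
        resid = [yi - my - b * (xi - mx) for xi, yi in zip(xs, ys)]
        s2 = sum(r * r for r in resid) / (T - 2)
        if abs(b / sqrt(s2 / sxx)) > 1.96:
            rej += 1
    return rej / reps

high = rejection_rate(0.99, -0.9)     # near-unit root, correlated errors
nominal = rejection_rate(0.0, 0.0)    # well-behaved benchmark, near 5%
assert high > nominal                 # the distortion the chapter documents
```

The well-behaved benchmark rejects at roughly the nominal rate, while the persistent-regressor case can reject several times as often, even though b = 0 holds in both.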
41. Exact conditional tests under inverse sampling.
- Abstract
Chan For Yee., Thesis (M.Phil.)--Chinese University of Hong Kong, 2005., Includes bibliographical references (leaves 88-90)., Abstracts in English and Chinese., p.i, Acknowledgement --- p.iv, Chapter 1 --- Introduction --- p.1, Chapter 2 --- Basic Concepts --- p.6, Chapter 2.1 --- Binomial vs Inverse Sampling --- p.6, Chapter 2.2 --- Equivalence / Non-inferiority Test --- p.7, Chapter 3 --- Testing Procedures --- p.9, Chapter 3.1 --- The Model --- p.9, Chapter 3.2 --- Asymptotic Behaviors of the Estimators --- p.10, Chapter 3.2.1 --- Asymptotic Test Statistic based on Unconditional Maximum Likelihood Estimate --- p.12, Chapter 3.2.2 --- Asymptotic Test Statistic based on Restricted Maximum Likelihood Estimate --- p.13, Chapter 3.3 --- Conditional Exact Procedures --- p.16, Chapter 3.3.1 --- Non-test-statistic-based procedure --- p.17, Chapter 3.3.2 --- Test-statistic-based procedure --- p.17, Chapter 4 --- Simulation Study --- p.19, Chapter 4.1 --- Simulation Results - Type I error rate --- p.21, Chapter 4.1.1 --- Asymptotic Test Statistic based on Unconditional MLE --- p.21, Chapter 4.1.2 --- Asymptotic Test Statistic based on Restricted MLE --- p.22, Chapter 4.1.3 --- Non-test-statistic-based Conditional Exact Test --- p.23, Chapter 4.1.4 --- Test-statistic-based Conditional Exact Test --- p.24, Chapter 4.2 --- Simulation Results - Power --- p.25, Chapter 4.2.1 --- Asymptotic Tests - Similarity and Difference between using Unconditional and Restricted MLE --- p.25, Chapter 4.2.2 --- Conditional Exact Tests - Similarity and Difference between using Non-test-statistic-based and Test-statistic-based Procedures --- p.30, Chapter 4.2.3 --- Test-statistic-based Conditional Exact Tests - Similarity and Difference between using Unconditional and Restricted MLE --- p.31, Chapter 5 --- Conclusion --- p.32, Appendices --- p.36, Chapter A. --- Simulation Result - Type I error rate --- p.36, Chapter B. --- Simulation Result - Power value --- p.42, Bibliography --- p.88, http://library.cuhk.edu.hk/record=b5892696, Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License (http://creativecommons.org/licenses/by-nc-nd/4.0/)
- Published
- 2005
43. Detection theory and psychophysics
- Abstract
"October 30, 1956." "This report is based on a thesis submitted to the Department of Economics and Social Science, M.I.T., in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Psychology, June 1956.", Bibliography: p. 73., Army Signal Corps Contract DA36-039-sc-64637 Dept. of the Army Task No. 3-99-06-108 Project No. 3-99-00-100, Thomas Marill.
- Published
- 2004
44. 'Black-Box' Probabilistic Verification
- Abstract
The authors explore the concept of a "black-box" stochastic system, and propose an algorithm for verifying probabilistic properties of such systems based on very weak assumptions regarding system dynamics. The properties are expressed using a variation of PCTL, the Probabilistic Computation Tree Logic. They present a general model of stochastic discrete event systems that encompasses both discrete-time and continuous-time processes, and also provide a semantics for PCTL interpreted over this model. Their presentation is both a generalization of, and an improvement over, some recent work by Sen et al. on probabilistic verification of "black-box" systems., Sponsored in part by the Defense Advanced Research Projects Agency (DARPA).
- Published
- 2004
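One standard way to verify a probabilistic property of a black-box system, sketched here generically rather than as the authors' PCTL algorithm, is to draw enough independent sample runs that Hoeffding's inequality bounds the error of the empirical satisfaction frequency.

```python
import random
from math import ceil, log

def verify_probability(sample_run, theta, delta, eps, seed=0):
    """'Black-box' check of P(run satisfies property) >= theta.

    Draws n samples so that, by Hoeffding's inequality, the empirical
    frequency is within eps of the true probability with probability
    at least 1 - delta; then compares the frequency against theta.
    `sample_run(rng)` must return True/False for one observed run.
    """
    rng = random.Random(seed)
    n = ceil(log(2 / delta) / (2 * eps ** 2))   # Hoeffding sample size
    hits = sum(sample_run(rng) for _ in range(n))
    return hits / n >= theta

# toy 'system': a run satisfies the property with probability 0.7
ok = verify_probability(lambda rng: rng.random() < 0.7,
                        theta=0.5, delta=0.01, eps=0.05)
assert ok
```

Only the ability to sample runs is assumed, which is the "very weak assumptions regarding system dynamics" the abstract emphasises; sequential schemes such as the SPRT can reach a verdict with fewer samples.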
45. Multiple test procedures for testing of unity odds ratios in multi-centre studies.
- Abstract
Lee Ka-ming., Thesis (M.Phil.)--Chinese University of Hong Kong, 2001., Includes bibliographical references (leaves 68-70)., Abstracts in English and Chinese., Chapter 1 --- Introduction --- p.1, Chapter 2 --- Multiple Test Procedure --- p.7, Chapter 2.1 --- Hypothesis Test for Individual Centre --- p.8, Chapter 2.2 --- Multiple Hypothesis Test for Multi-Centre --- p.11, Chapter 2.2.1 --- Single-step Multiple Test Procedure --- p.11, Chapter 2.2.2 --- Sequentially Rejective Multiple Test Procedure --- p.12, Chapter 2.2.3 --- Multiple Test Procedure for Discrete Distribution --- p.14, Chapter 2.2.4 --- Summary of various Multiple Test Procedures --- p.17, Chapter 3 --- Simulation Study --- p.19, Chapter 3.1 --- Comparisons of Sizes --- p.19, Chapter 3.1.1 --- Based on Asymptotic Approach --- p.22, Chapter 3.1.2 --- Based on Exact Approach --- p.23, Chapter 3.1.3 --- Based on Mid-P Approach --- p.25, Chapter 3.1.4 --- "Comparisons between Asymptotic, Exact and Mid-P Approaches" --- p.26, Chapter 3.2 --- Comparisons of Power --- p.29, Chapter 3.2.1 --- Based on Asymptotic Approach --- p.32, Chapter 3.2.2 --- Based on Exact Approach --- p.33, Chapter 3.2.3 --- Based on Mid-P Approach --- p.33, Chapter 3.2.4 --- Asymptotic vs. Exact Approaches --- p.34, Chapter 3.2.5 --- Exact vs. Mid-P Approaches --- p.34, Chapter 3.2.6 --- Asymptotic vs. Mid-P Approaches --- p.34, Chapter 4 --- Illustrative Examples --- p.36, Chapter 5 --- Conclusions and Discussions --- p.43, Figures --- p.45, References --- p.68, http://library.cuhk.edu.hk/record=b5890804, Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License (http://creativecommons.org/licenses/by-nc-nd/4.0/)
- Published
- 2001
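The sequentially rejective procedure listed in the contents is typically Holm's method, which steps through the sorted per-centre p-values with progressively less stringent thresholds. A minimal sketch (the p-values below are invented):

```python
def holm(pvals, alpha=0.05):
    """Holm's sequentially rejective procedure: compare the k-th smallest
    p-value (k = 0, 1, ...) with alpha/(m - k) and stop at the first
    failure. Controls the family-wise error rate at alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break                      # all remaining hypotheses retained
    return reject

# m = 4 centres: 0.008 <= 0.05/4 and 0.013 <= 0.05/3 pass; 0.040 > 0.05/2 stops
assert holm([0.013, 0.200, 0.008, 0.040]) == [True, False, True, False]
```

The single-step (Bonferroni) procedure would compare every p-value with alpha/m, so it is never more powerful than the sequentially rejective version.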
46. A comparison of tests of heterogeneity in meta-analysis.
- Abstract
Lee Shun-yi., Thesis (M.Phil.)--Chinese University of Hong Kong, 2001., Includes bibliographical references (leaves 57-61)., Abstracts in English and Chinese., Chapter 1 --- Introduction --- p.1, Chapter 1.1 --- Introduction --- p.1, Chapter 1.2 --- Tests of Hypotheses --- p.4, Chapter 1.2.1 --- Likelihood Ratio Statistic --- p.4, Chapter 1.2.2 --- The Rao's Score Statistic --- p.5, Chapter 1.2.3 --- Wald's Statistic --- p.6, Chapter 1.3 --- Notation --- p.6, Chapter 2 --- Fixed Effects Model --- p.8, Chapter 2.1 --- Introduction --- p.8, Chapter 2.2 --- Pearson Chi-square Statistic --- p.9, Chapter 2.3 --- Logistic Regression Model --- p.11, Chapter 2.3.1 --- Testing Linear Hypotheses about the Regression Coefficients --- p.12, Chapter 2.4 --- Combining Proportions --- p.16, Chapter 2.4.1 --- Classical Estimators --- p.17, Chapter 2.4.2 --- Jackknife Estimator --- p.18, Chapter 2.4.3 --- Cross-validatory estimators --- p.19, Chapter 3 --- Random Effects Model --- p.21, Chapter 3.1 --- Introduction --- p.21, Chapter 3.2 --- DerSimonian and Laird Method --- p.22, Chapter 3.3 --- Generalized linear model with random effect --- p.24, Chapter 3.3.1 --- Quasi-Likelihood --- p.25, Chapter 3.3.2 --- Testing Linear Hypotheses about the Regression Coefficients --- p.26, Chapter 3.3.3 --- MINQUE --- p.27, Chapter 3.3.4 --- Score Test --- p.31, Chapter 4 --- Overdispersion and Intraclass Correlation --- p.36, Chapter 4.1 --- Introduction --- p.36, Chapter 4.2 --- C(α) Test --- p.39, Chapter 4.2.1 --- Correlated Binomial model and Beta-Binomial model --- p.40, Chapter 4.2.2 --- C(α) Statistic Based On Quasi-Likelihood --- p.46, Chapter 4.3 --- Donner Statistic --- p.48, Chapter 4.4 --- Rao and Scott Statistic --- p.51, Chapter 5 --- Example and Discussion --- p.53, Bibliography --- p.57, http://library.cuhk.edu.hk/record=b5895892, Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License 
(http://creativecommons.org/licenses/by-nc-nd/4.0/)
- Published
- 2001
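The benchmark heterogeneity statistic in this family (used, for example, alongside the DerSimonian and Laird method listed in the contents) is Cochran's Q, a weighted sum of squared deviations from the fixed-effect pooled estimate. A minimal sketch with toy numbers, not the thesis's examples:

```python
def cochran_q(effects, variances):
    """Cochran's Q for heterogeneity in meta-analysis.

    Each study contributes its squared deviation from the inverse-variance
    pooled estimate, weighted by 1/variance. Under homogeneity, Q is
    approximately chi-square with k-1 degrees of freedom."""
    w = [1 / v for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    return sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))

q = cochran_q([0.1, 0.3, 0.2], [0.01, 0.01, 0.01])
# pooled estimate = 0.2, so Q = 100*(0.01 + 0.01 + 0) = 2.0
assert abs(q - 2.0) < 1e-6
```

A Q well above the chi-square(k−1) critical value signals heterogeneity and motivates a random-effects model; the score and C(α) statistics the thesis compares target the same null.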
48. A new method of testing hypotheses in linear models.
- Abstract
by Tsz-Kit Keung., Thesis (M.Phil.)--Chinese University of Hong Kong, 1996., Includes bibliographical references (leaf 81)., Chapter 1 --- Introduction --- p.1, Chapter 2 --- Testing Testable Hypotheses in Linear Models --- p.8, Chapter 2.1 --- A General Theory --- p.9, Chapter 2.2 --- The Method of Peixoto --- p.17, Chapter 2.3 --- The Method of Chan and Li --- p.23, Chapter 3 --- A New Method of Obtaining Equivalent Hypotheses --- p.32, Chapter 4 --- Constrained Linear Models --- p.44, Chapter 4.1 --- Hypothesis Testing in Constrained Linear Models --- p.44, Chapter 4.2 --- Linear Models with Missing Observations --- p.50, Chapter 5 --- Conclusions --- p.71, Appendix --- p.74, References --- p.81, http://library.cuhk.edu.hk/record=b5888985, Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License (http://creativecommons.org/licenses/by-nc-nd/4.0/)
- Published
- 1996
49. Hypothesis testing procedures for non-nested regression models
- Abstract
Theory often indicates that a given response variable should be a function of certain explanatory variables yet fails to provide meaningful information as to the specific form of this function. To test the validity of a given functional form with sensitivity toward the feasible alternatives, a procedure is needed for comparing non-nested families of hypotheses. Two hypothesized models are said to be non-nested when one model is neither a restricted case nor a limiting approximation of the other. These non-nested hypotheses cannot be tested using conventional likelihood ratio procedures. In recent years, however, several new approaches have been developed for testing non-nested regression models. A comprehensive review of the procedures for the case of two linear regression models was presented. Comparisons between these procedures were made on the basis of asymptotic distributional properties, simulated finite sample performance and computational ease. A modification to the Fisher and McAleer JA-test was proposed and its properties investigated. As a compromise between the JA-test and the Orthodox F-test, it was shown to have an exact non-null distribution. Its properties, both analytically and empirically derived, exhibited the practical worth of such an adjustment. A Monte Carlo study of the testing procedures involving non-nested linear regression models in small sample situations (n ≤ 40) provided information necessary for the formulation of practical guidelines. It was evident that the modified Cox procedure, N̄ , was most powerful for providing correct inferences. In addition, there was strong evidence to support the use of the adjusted J-test (AJ) (Davidson and MacKinnon's test with small-sample modifications due to Godfrey and Pesaran), the modified JA-test (NJ) and the Orthodox F-test for supplemental information. Under non normal disturbances, similar results were yielded. An empirical study of spending patterns for household food consumption provided a pr
- Published
- 1987
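The Davidson-MacKinnon J-test that the review builds on can be sketched directly: fit the rival model, then test whether its fitted values add explanatory power when appended to the null model's regression. The data below are simulated and the plain OLS helper is a generic implementation, not the article's code.

```python
import random
from math import sqrt

def ols(X, y):
    """OLS via normal equations with a tiny Gaussian elimination;
    returns coefficients and conventional standard errors."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
         for a in range(k)]
    bvec = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    # augment with the identity to recover (X'X)^{-1} for standard errors
    M = [A[a][:] + [1.0 if a == b else 0.0 for b in range(k)] + [bvec[a]]
         for a in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(k):
            if r != c:
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    beta = [M[a][-1] for a in range(k)]
    resid = [y[i] - sum(X[i][a] * beta[a] for a in range(k)) for i in range(n)]
    s2 = sum(r * r for r in resid) / (n - k)
    se = [sqrt(s2 * M[a][k + a]) for a in range(k)]
    return beta, se

def j_test(y, x, z):
    """Davidson-MacKinnon J-test of H0: y = a + b*x against the non-nested
    H1: y = c + d*z. Fit H1, append its fitted values to H0's regression,
    and return the t-statistic on them; a significant t rejects H0."""
    g, _ = ols([[1.0, zi] for zi in z], y)
    fitted = [g[0] + g[1] * zi for zi in z]
    beta, se = ols([[1.0, xi, fi] for xi, fi in zip(x, fitted)], y)
    return beta[2] / se[2]

rng = random.Random(3)
z = [rng.gauss(0, 1) for _ in range(200)]
x = [rng.gauss(0, 1) for _ in range(200)]
y = [1 + 2 * zi + rng.gauss(0, 0.5) for zi in z]   # the z model is true
t = j_test(y, x, z)
assert abs(t) > 1.96      # H0 (the x model) is rejected in favour of z
```

The JA-test variant modified in the article replaces the fitted values with an instrumented version so that the statistic has an exact null distribution in finite samples, which is the small-sample concern the Monte Carlo study addresses.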
50. Distribution-free tests of subhypotheses
- Published
- 1984