Search Results
2,206 results for "significance level"
2. SampSizeCal: The platform-independent computational tool for sample sizes in the paradigm of new statistics
- Author
-
WenJun Zhang
- Subjects
sampling, sample size, computational tool, significance level, statistical power, new statistics, Biology (General), QH301-705.5 - Abstract
Sample size estimation based on the statistical significance p-value and statistical power is widely used across the experimental sciences. Nevertheless, the p-value-based paradigm has been widely criticized in recent years for serious problems: it has produced numerous false conclusions, partly originating from insufficient sample sizes. I therefore developed a platform-independent computational tool, SampSizeCal, for sample sizes in the paradigm of new statistics. In this tool, both the default and maximum p-values are made substantially more stringent, which leads to a reasonable increase in sample sizes. The tool offers more than 120 sample size methods for experimental designs. SampSizeCal includes both online and offline versions and can be used on various computing devices (PCs, iPads, smartphones, etc.), operating systems (Windows, Mac, Android, Harmony, etc.) and web browsers (Chrome, Firefox, Sougo, 360, etc.). It is currently the most comprehensive platform-independent computational tool for sample sizes and can be used in experimental sciences such as medicine (clinical medicine, experimental zoology, public health, pharmacy, etc.), biology, ecology, agronomy, psychology and engineering technology.
- Published
- 2024
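The p-value/power arithmetic behind tools like the one in this entry can be sketched in a few lines. The normal-approximation formula below for a two-sample comparison of means is a standard textbook expression, not SampSizeCal's own method; it also shows why tightening the default significance level, as the "new statistics" paradigm advocates, inflates the required sample size.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sample z-test of means,
    given a standardized effect size (Cohen's d): n = 2((z_a + z_b)/d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5, alpha=0.05))   # 63 per group
print(n_per_group(0.5, alpha=0.005))  # 107 per group: stricter alpha, larger n
```

The same structure underlies most closed-form sample-size methods: pick the critical values, invert the power equation for n, round up.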
3. Prediction of reservoir pressure and study of its behavior in the development of oil fields based on the construction of multilevel multidimensional probabilistic-statistical models
- Author
-
V. I. Galkin, I. N. Ponomareva, and D. A. Martyushev
- Subjects
statistical analysis, well testing, significance level, well operation, formation permeability, current reservoir pressure, Geology, QE1-996.5 - Abstract
Determining the current reservoir pressure in the withdrawal zones of oil production wells is an urgent task of field development monitoring. The main method for its determination is hydrodynamic well testing under unsteady-state conditions. However, restoring bottomhole pressure to the reservoir pressure value often takes a significant amount of time, which leads to long downtime of the well stock and significant shortfalls in oil production. In addition, comparing reservoir pressures across wells is difficult because of the different timing of the tests, since it is impossible to shut in the entire well stock simultaneously to measure reservoir pressure across the field. The article proposes a new method for determining the current reservoir pressure in the withdrawal zones, based on the construction of multidimensional mathematical models using geological and technological development indicators. The initial data comprised reservoir pressure values determined from processing hydrodynamic well test materials, together with a set of geological and technological indicators likely to affect them: the initial reservoir pressure for each well, the duration of its operation at the time of testing, liquid rate, bottomhole pressure, the initial and current reservoir permeability in the drainage area, accumulated GOR, oil, liquid and water production, and skin factor. In the course of the research, several variants of statistical modeling were used, establishing regularities of reservoir pressure behavior during the development of reserves that are individual for each development object. The obtained models are characterized by a high degree of reliability and make it possible to determine the desired value with an error of no more than 1.0 MPa.
- Published
- 2024
- Full Text
- View/download PDF
4. Data transformation under normality and heteroscedasticity: A simulation study.
- Author
-
Cavalcante de Oliveira, Carlos Augusto
- Subjects
ANALYSIS of variance, HETEROSCEDASTICITY, STATISTICAL hypothesis testing, AGRICULTURE - Abstract
Copyright of Sigmae is the property of Universidade Federal de Alfenas (UNIFAL-MG) and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
5. Basic principles of determining sample size in veterinary research.
- Author
-
Vlahek, I., Sušić, V., Piplica, A., Kabalin, A. Ekert, Menčik, S., and Maljković, M. Maurić
- Subjects
VETERINARY medicine, SAMPLE size (Statistics), DATA analysis, PROFESSIONAL ethics, VARIABILITY (Psychometrics) - Abstract
Copyright of Veterinarska Stanica is the property of Croatian Veterinary Institute and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
6. A Bayesian robustness measure in significance tests for equivalence tests.
- Author
-
Tatiane da Silva, Josimara and de Castro, Mário
- Abstract
In this article, the local sensitivity of nonlinear prior quantities in Bayesian significance tests with respect to the choice of a prior distribution is considered. We propose sensitivity indices using the Gâteaux derivative to evaluate the rate of change of statistical functionals defined over the space of prior probability measures. These sensitivity indices are easy to interpret and calculate. We apply the proposed methodology to equivalence tests for two independent binomial proportions to quantify the local sensitivity of quantities in significance tests, such as the adaptive significance level and power, with respect to the choice of the prior distribution. We present an application in sensory analysis and consumer research to illustrate the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Number of Replications during Monitoring of the Soil Organic Carbon Content in Forest.
- Author
-
Samsonova, V. P., Meshalkina, J. L., Kondrashkina, M. I., and Dyadkina, S. E.
- Abstract
The estimation of the required number of soil samples to assess the content of soil organic carbon (SOC) in a forest biogeocenosis during monitoring studies is considered using the example of data given in an article by E.A. Dmitriev et al. Primary data on the SOC content were obtained in a spruce forest at 166 sites in layers of 0-10, 10-20, and 20-30 cm after removal of the litter. The sampling was performed at points of a regular grid formed by equilateral triangles with 1-m sides within a regular hexagon with a side of 7 m. The SOC content was determined by Tyurin's method. The original article presents statistics for three zones: near-trunk, under-crown, and inter-crown. The spatial variation of carbon content in all zones and at all depths is high, with coefficients of variation of about 50%. It is shown that the number of replications required to estimate the mean SOC content at a 95% confidence level is hundreds of samples in the 0- to 10-cm layer and decreases to tens of samples in the 20- to 30-cm layer. Since the number of replications for testing hypotheses about the equality of means depends not only on the confidence level but also on the power of the criterion used, the required number of replications increases several times. Testing samples taken from the entire 0- to 30-cm layer, with the formation of composite samples, reduces the number of required replications. However, careful attention to sample preparation, including primary mixing of the samples, is required. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
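The replication counts discussed in this entry follow from the usual relative-error formula n ≈ (z · CV / ε)². A generic sketch: the 50% coefficient of variation is taken from the abstract, while the 10% and 25% target errors below are assumed for illustration.

```python
import math
from statistics import NormalDist

def replications_needed(cv: float, rel_error: float, confidence: float = 0.95) -> int:
    """Samples needed so the mean is estimated to within rel_error
    (relative) at the given confidence, for coefficient of variation cv."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv / rel_error) ** 2)

print(replications_needed(0.5, 0.10))  # 97 samples at CV = 50%, 10% error
print(replications_needed(0.5, 0.25))  # 16 samples at a looser error target
```

The quadratic dependence on CV/ε is what turns the abstract's ~50% variation into "hundreds of samples" at tight error targets.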
8. Evaluation of the generalized linear model for germination percentage data and comparison with the square root transformation method.
- Author
-
فرشید قادری فر, مجید عظیم محسنی, and سید حمیدرضا باقر
- Subjects
MONTE Carlo method, SQUARE root, WHEAT, RESEARCH personnel, GERMINATION, LAVENDERS - Abstract
Introduction: In seed research, germination percentage data result from counting and follow a binomial distribution. Therefore, seed researchers use data transformation, especially the square root transformation, to stabilize the variance and normalize the data before performing analysis of variance and comparison of treatments. Despite its widespread use, this transformation has fundamental structural issues that can mislead test results. It is therefore important to introduce an alternative method that preserves the research assumptions and provides acceptable results without data transformation. The generalized linear model is such an alternative for analyzing germination data with a binomial distribution. In this research, the generalized linear model is first introduced; its efficiency is then illustrated using simulated and actual germination data. Materials and Methods: First, simulated data were generated by the Monte Carlo method. Based on the simulated data, the significance level and the power of the generalized linear model test were computed. Then actual data from three experiments (the effect of acidity on germination of wheat varieties, the effect of water stress and salinity on germination of yellow sweet clover seeds, and the effect of alternating temperatures on germination of three lavender populations) were used, and the results of the generalized linear model were compared with the square root transformation method. Results: The simulation results showed that the generalized linear model is highly effective at preserving the predetermined significance level and has high power in detecting significant differences in germination among treatments.
Moreover, the comparison of the generalized linear model with the square root transformation method showed that the generalized linear model had a higher capability to detect significant differences between treatments, especially those with unequal numbers of seeds per Petri dish; in treatments where the square root transformation method found no significant difference, the generalized linear method did. Conclusions: Overall, the results demonstrate that the generalized linear model can be used as an alternative to the square root transformation in studies of seed germination percentage with a binomial distribution, without the problems of the square root transformation method, and it outperforms the square root transformation in detecting significant differences in germination between treatments with equal and unequal numbers of seeds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
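The variance problem that motivates both square-root-type transforms and the binomial GLM in this entry can be checked by simulation. This stdlib-only sketch (illustrative, not the paper's code) shows that the raw proportion's variance depends strongly on p, while the variance of the arcsine-square-root transform stays near the constant 1/(4n):

```python
import math
import random

random.seed(42)

def mc_variances(p: float, n: int, reps: int = 10000):
    """Monte Carlo variance of the raw germination proportion and of its
    arcsine-square-root transform, for binomial(n, p) counts."""
    raw, trans = [], []
    for _ in range(reps):
        k = sum(random.random() < p for _ in range(n))  # binomial draw
        phat = k / n
        raw.append(phat)
        trans.append(math.asin(math.sqrt(phat)))
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return var(raw), var(trans)

v_raw_mid, v_tr_mid = mc_variances(0.5, 50)  # raw variance ~ 0.25/50
v_raw_hi, v_tr_hi = mc_variances(0.9, 50)    # raw variance ~ 0.09/50
# Raw variances differ strongly with p; transformed ones stay near 1/(4*50).
```

Heterogeneous variances across treatments are exactly what invalidates plain ANOVA on raw percentages; the GLM instead models the binomial variance directly.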
9. Statistics
- Author
-
Maurits, Natasha and Ćurčić-Blake, Branislava
- Published
- 2023
- Full Text
- View/download PDF
10. Multiple Testing Corrections
- Author
-
Emmert-Streib, Frank, Moutari, Salissou, and Dehmer, Matthias
- Published
- 2023
- Full Text
- View/download PDF
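As an illustration of this chapter's topic (a generic sketch, not drawn from the book), the Holm step-down procedure controls the family-wise error rate by testing the smallest p-value at α/m, the next at α/(m−1), and so on, stopping at the first failure:

```python
def holm(pvals, alpha=0.05):
    """Holm step-down multiple-testing correction.
    Returns a reject (True) / accept (False) decision per hypothesis,
    in the original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):  # threshold a/m, a/(m-1), ...
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

print(holm([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```

Note that 0.04 and 0.03 would both pass an uncorrected 0.05 threshold; the step-down schedule is what removes them.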
11. Hypothesis Testing
- Author
-
Emmert-Streib, Frank, Moutari, Salissou, and Dehmer, Matthias
- Published
- 2023
- Full Text
- View/download PDF
12. A systematic study on the correlation between governance structure and strategic performance of Chinese listed companies in the context of deep learning
- Author
-
Peng Lei, Qu Liang, and Xu Yuanjie
- Subjects
correlation analysis, regression analysis, significance level, governance structure, strategic performance, 01a25, Mathematics, QA1-939 - Abstract
First, correlation analysis is used to examine the relationships and uncertainties between the independent and dependent variables. Then, the factor analysis method is applied to establish a comprehensive performance evaluation system that takes into account four dimensions: corporate profitability, solvency, development ability, and operation ability. Finally, using a regression analysis model together with correlation analysis of the performance factors, the correlation and significance level between corporate governance structure and enterprise performance were explored. The regression coefficient of social performance on board size is -0.034, which negatively affects economic performance. Corporate compensation incentives and equity incentives are significant with respect to economic performance at the 5% and 1% levels, respectively, with a positive effect. This study is important for improving corporate performance and optimizing corporate governance structure.
- Published
- 2024
- Full Text
- View/download PDF
13. On sensitivity of exponentiality tests to data rounding: a Monte Carlo simulation study.
- Author
-
Ushakov, N. G. and Ushakov, V. G.
- Subjects
FALSE positive error, DISTRIBUTION (Probability theory) - Abstract
Different statistical procedures are differently sensitive to data rounding. It turns out that tests for exponentiality are more sensitive to data rounding than many classical parametric tests or than nonparametric tests for normality. In this work we determine which exponentiality tests are more robust and which are less robust to rounding. The main tool is Monte Carlo simulation. We estimate and compare the probability of a Type I error for nineteen exponentiality tests at different rounding levels and different sample sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
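The Monte Carlo procedure described here (simulate under H0, round the data, count rejections) can be sketched with a toy test. The mean-based test below is an assumed stand-in, not one of the paper's nineteen exponentiality tests; it only illustrates the estimation recipe.

```python
import math
import random

random.seed(7)

def type_i_error(n=100, reps=2000, h=0.0):
    """Empirical Type I error of a crude mean-based test for Exp(1) data
    (reject when |mean - 1| > 1.96/sqrt(n), since sd of Exp(1) is 1),
    optionally rounding each observation to the nearest multiple of h."""
    crit = 1.96 / math.sqrt(n)
    rejections = 0
    for _ in range(reps):
        xs = [random.expovariate(1.0) for _ in range(n)]
        if h > 0:
            xs = [round(x / h) * h for x in xs]  # rounding distorts the mean
        if abs(sum(xs) / n - 1.0) > crit:
            rejections += 1
    return rejections / reps

print(type_i_error(h=0.0))  # near the nominal 0.05
print(type_i_error(h=1.0))  # inflated level under coarse rounding
```

Comparing the empirical level against the nominal α across rounding steps h is exactly the robustness comparison the paper reports for its battery of tests.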
14. Verifying Hypotheses of Drug Bioequivalence.
- Author
-
Dranitsyna, M. A., Zakharova, T. V., and Panov, P. V.
- Abstract
The problem of testing pharmaceuticals for bioequivalence is considered. Bioequivalence investigations are the basis for reproducing drugs and confirming their efficacy and safety. The main way of verifying drug bioequivalence is to conduct Schuirmann's two one-sided tests. Two one-sided tests have been used for many years and have proven their validity for demonstrating equivalent bioavailability, but in some situations (missing data, the need to consider not only aggregate metrics but also the shape of the concentration-time curve) there is a need to establish the differences between concentration-time curves more accurately. The authors present a new criterion that is more sensitive to differences between the characteristics that affect drug bioavailability and reduces the risk for patients. It should be noted that the new criterion generalizes the Schuirmann criterion and preserves its useful properties. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
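Schuirmann's two one-sided tests can be sketched as follows. This large-sample version uses a normal approximation in place of the t distribution (an assumed simplification), and the ±0.2 equivalence bounds are illustrative, not the regulatory log-scale limits:

```python
import math
from statistics import NormalDist, mean, stdev

def tost(diffs, low, high, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of a mean difference to
    the interval (low, high), using a large-sample z approximation."""
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)
    m = mean(diffs)
    p_low = 1 - NormalDist().cdf((m - low) / se)    # H0: true diff <= low
    p_high = 1 - NormalDist().cdf((high - m) / se)  # H0: true diff >= high
    return max(p_low, p_high) < alpha               # both must reject

centered = [0.01 * i for i in range(-10, 11)]       # mean 0, inside bounds
shifted = [0.5 + 0.01 * i for i in range(-10, 11)]  # mean 0.5, outside bounds
print(tost(centered, -0.2, 0.2))  # True  -> equivalent
print(tost(shifted, -0.2, 0.2))   # False -> not equivalent
```

The "both one-sided tests must reject" structure is what the abstract's generalized criterion preserves while adding sensitivity to curve shape.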
15. Multivariate Regression Modeling for Coastal Urban Air Quality Estimates.
- Author
-
Choi, Soo-Min, Choi, Hyo, and Paik, Woojin
- Subjects
REGRESSION analysis, AIR quality, SCATTER diagrams, STATISTICAL correlation, PARTICULATE matter - Abstract
Multivariate regression models for real-time coastal air quality forecasting were developed for 18 to 27 March 2015, using a total of 15 kinds of hourly input data (three-hours-earlier PM and gas data with meteorological parameters from Kangnung (Korea), combined with two-days-earlier PM and gas data from Beijing (China)). Multiple correlation coefficients between the predicted and measured PM10, PM2.5, NO2, SO2, CO and O3 concentrations were 0.957, 0.906, 0.886, 0.795, 0.864 and 0.932 before the yellow sand event at Kangnung; 0.936, 0.982, 0.866, 0.917, 0.887 and 0.916 during the event; and 0.919, 0.945, 0.902, 0.857, 0.887 and 0.892 after the event. As the significance levels (p) from the multi-regression analyses were less than 0.001, all correlation coefficients were highly significant. Partial correlation coefficients presenting the contribution of the 15 input variables to the 6 output variables are given in detail for the three periods. Scatter plots and their hourly distributions between predicted and measured values showed the good accuracy of the modeling performance for current-time forecasting of the six output values and their high applicability. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Sample size for the evaluation of selection traits of nuts in hazelnut varieties
- Author
-
S. G. Biganova, Yu. I. Sukhorukih, E. K. Pchikhachev, and N. A. Trusheva
- Subjects
hazelnuts, variety, selection, nuts, sample size, significance level, measurement error, quantitative indicators, qualitative indicators, physical quantities, scoring, Technology - Abstract
Planning studies of the cobnut (hazelnut) requires establishing the sample size, taking into account the magnitude of the error and the significance of the results. Such studies have not been carried out in full for plant varieties under modern requirements. The purpose of the research is to determine the sample size for calculating the value of selection traits of nuts in hazelnut varieties at different error rates and significance levels. To this end, 12 main economically valuable breeding traits of nuts were studied in 8 varieties: taste, kernel weight, indestructibility, kernel yield, the presence of husk on the kernel, shell strength and color, nut weight, one-dimensionality of fruits in size and shape, and overall score. A well-known method was used to calculate the sample size for evaluating each indicator with different relative and absolute errors at different significance levels. The largest contributions to the overall maximum score of 59 points are made by the kernel indicators: taste, 25.42%; weight, 22.29%; indestructibility, 16.9%; yield, 11.31%. The share of the other traits is estimated at 2.25-6.78%. For the 5% relative error conventionally accepted in biology, the estimated sample sizes at significance levels α = 0.05 / α = 0.1 were: nut weight, 30/21; kernel weight, 43/30 nuts; for scored indicators: shell color, 18/13; shell strength, 22/16; presence of husk on the kernel, 137/95; indestructibility of the kernel, 27/19; taste of the kernel, 25/17; total score of the selection category, 11/7 nuts. For indicators expressed as percentages, at α = 0.05: kernel yield with an error of 1%, 98; damage by diseases and pests with an error of 10%, 62; one-dimensionality in size with an error of 10%, 57; one-dimensionality in shape with an error of 10%, 54 hazelnuts.
- Published
- 2023
- Full Text
- View/download PDF
17. Multiversal Methods in Observational Studies: The Case of COVID-19
- Author
-
Tomaselli, Venera, Cantone, Giulio Giacomo, Miracula, Vincenzo, Salvati, Nicola, editor, Perna, Cira, editor, Marchetti, Stefano, editor, and Chambers, Raymond, editor
- Published
- 2022
- Full Text
- View/download PDF
18. Single Valued Neutrosophic Kruskal-Wallis and Mann Whitney Tests
- Author
-
Mahmoud Miari, Mohamad Taher Anan, and Mohamed Bisher Zeina
- Subjects
kruskal-wallis, test statistic, chi square distribution, hypothesis testing, significance level, single valued neutrosophic number, Mathematics, QA1-939, Electronic computers. Computer science, QA75.5-76.95 - Abstract
In this paper, the Kruskal-Wallis test is extended to deal with neutrosophic data in single valued form, using score, accuracy and certainty functions to calculate the ranks of SVNNs. The Mann-Whitney test is also extended to the same data type, which makes it possible to run a post-hoc test after rejecting the null hypothesis of the neutrosophic Kruskal-Wallis test. Numerical examples were successfully solved, showing the power of this new idea to handle SVNNs and make statistical decisions on them.
- Published
- 2022
- Full Text
- View/download PDF
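For reference, the classical (crisp) Kruskal-Wallis H statistic that this paper extends is computed from midranks of the pooled sample. A minimal sketch (tie correction factor omitted):

```python
def kruskal_wallis_h(groups):
    """Classical Kruskal-Wallis H statistic, using midranks for ties:
    H = 12/(N(N+1)) * sum(R_j^2 / n_j) - 3(N+1)."""
    pooled = sorted(x for g in groups for x in g)
    # midrank of each distinct value: average of the 1-based positions it occupies
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n = len(pooled)
    s = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * s - 3 * (n + 1)

print(kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # ~7.2
```

Under the null, H is compared against a chi-square distribution with (number of groups − 1) degrees of freedom, which is where the chi-square subject term above comes in.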
19. Applying generalized funnel plots to help design statistical analyses.
- Author
-
Aisbett, Janet, Drinkwater, Eric J., Quarrie, Kenneth L., and Woodcock, Stephen
- Subjects
EXPERIMENTAL design, SAMPLE size (Statistics), NULL hypothesis, CONFORMANCE testing, DESIGN thinking - Abstract
Researchers across many fields routinely analyze trial data using Null Hypothesis Significance Tests with zero null and p < 0.05. To promote thoughtful statistical testing, we propose a visualization tool that highlights practically meaningful effects when calculating sample sizes. The tool re-purposes and adapts funnel plots, originally developed for meta-analyses, after generalizing them to cater for meaningful effects. As with traditional sample size calculators, researchers must nominate anticipated effect sizes and variability alongside the desired power. The advantage of our tool is that it simultaneously presents sample sizes needed to adequately power tests for equivalence, for non-inferiority and for superiority, each considered at up to three alpha levels and in positive and negative directions. The tool thus encourages researchers at the design stage to think about the type and level of test in terms of their research goals, costs of errors, meaningful effect sizes and feasible sample sizes. An R-implementation of the tool is available on-line. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. Skill versus Luck: An Analysis of Excess Returns on the Swedish Stock Market Using Fama-French Three-Factor Model and T-tests
- Author
-
Berg Wadsten, Olivia and Boman, Agnes
- Abstract
This thesis explores whether excess returns of funds in the Swedish stock market can be attributed to skill or luck by employing the Fama-French Three-Factor Model and t-tests. The study analyzes data from a selection of mutual Swedish funds during the time period 2004 to 2019 to determine if differences in returns result from managerial skill or random variations. Findings suggest that it is rarely the skill of fund managers that leads to positive excess returns. Instead, market factors and luck appear to be decisive. By offering a deeper understanding of the dynamics between skill and luck, this thesis contributes to the ongoing debate on active management in financial markets.
- Published
- 2024
21. Application of Queuing Theory to a Toll Plaza-A Case Study
- Author
-
Malipatil, Naveen, Avati, Soumya Iswar, Vinay, Hosahally Nanjegowda, Sunil, S., di Prisco, Marco, Series Editor, Chen, Sheng-Hong, Series Editor, Vayas, Ioannis, Series Editor, Kumar Shukla, Sanjay, Series Editor, Sharma, Anuj, Series Editor, Kumar, Nagesh, Series Editor, Wang, Chien Ming, Series Editor, Mathew, Tom V., editor, Joshi, Gaurang J., editor, Velaga, Nagendra R., editor, and Arkatkar, Shriniwas, editor
- Published
- 2020
- Full Text
- View/download PDF
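The queueing analysis behind a toll-plaza case study like this one typically starts from the steady-state M/M/1 formulas. A generic sketch; the arrival and service rates below are made-up numbers, not taken from the study:

```python
def mm1_metrics(lam: float, mu: float):
    """Steady-state M/M/1 measures: utilization, mean number in system,
    mean time in system, mean queue length, mean wait in queue."""
    if lam >= mu:
        raise ValueError("queue is unstable: need lambda < mu")
    rho = lam / mu             # server utilization
    L = rho / (1 - rho)        # mean number in system
    W = 1 / (mu - lam)         # mean time in system (Little's law: L = lam * W)
    Lq = rho ** 2 / (1 - rho)  # mean queue length
    Wq = rho / (mu - lam)      # mean waiting time in queue
    return rho, L, W, Lq, Wq

# e.g. 2 vehicles/min arriving, one booth serving 3 vehicles/min:
print(mm1_metrics(2.0, 3.0))  # rho=2/3, L=2, W=1, Lq=4/3, Wq=2/3 (up to rounding)
```

Comparing measured waiting times against these predictions (e.g. via a goodness-of-fit test at a chosen significance level) is the usual validation step in such case studies.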
22. Novel Statistical Techniques for Conducting Accelerated Life Test to Demonstrate Product Reliability.
- Author
-
Basha, Mustaq and Ware, Nilesh R.
- Subjects
ACCELERATED life testing, CONDUCT of life, COMMERCIAL product testing, RELIABILITY in engineering, SAMPLE size (Statistics) - Abstract
In Reliability Demonstration Testing (RDT), finding the right sample size is very important, since prototypes are expensive and difficult to make. If the sample size for the RDT is too small, the amount of information obtained from the test will be insufficient and the conclusion will be meaningless; on the contrary, if the sample size is too large, the amount of information obtained will exceed what is required, resulting in unnecessary costs. Most of the time, the required sample size and test time are decided based on the RDT test design. The resources required for RDT, in terms of batch size and long testing time, are often not practically feasible due to the limitations of the project schedule and budget. Reliability engineers must have sound knowledge of the type of risk that can be accepted when conducting RDT. This research paper, with a case study, provides the required information about modern techniques for reducing the sample size and testing time with the help of accelerated test models such as Arrhenius, Eyring, etc., for conducting accelerated life tests to demonstrate product reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Research on an improved Huber M robust processing method for tapered roller bearing vibration data.
- Author
-
党凤魁, 徐永智, 丁慧玲, and 刘 建
- Subjects
ROLLER bearings, DATA distribution, MATHEMATICAL models, SOIL vibration - Abstract
Copyright of Journal of Henan University of Science & Technology, Natural Science is the property of Editorial Office of Journal of Henan University of Science & Technology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
24. Single Valued Neutrosophic Kruskal-Wallis and Mann Whitney Tests.
- Author
-
Miari, Mahmoud, Anan, Mohamad Taher, and Zeina, Mohamed Bisher
- Subjects
KRUSKAL-Wallis Test, NULL hypothesis, NEUTROSOPHIC logic, STATISTICAL hypothesis testing, DECISION making, STATISTICS, CERTAINTY - Abstract
In this paper, the Kruskal-Wallis test is extended to deal with neutrosophic data in single valued form, using score, accuracy and certainty functions to calculate the ranks of SVNNs. The Mann-Whitney test is also extended to the same data type, which makes it possible to run a post-hoc test after rejecting the null hypothesis of the neutrosophic Kruskal-Wallis test. Numerical examples were successfully solved, showing the power of this new idea to handle SVNNs and make statistical decisions on them. [ABSTRACT FROM AUTHOR]
- Published
- 2022
25. Robustness of Normality Criteria with Respect to Rounding Observations.
- Author
-
Ushakov, V. G. and Ushakov, N. G.
- Abstract
In some areas of statistical analysis (e.g., biology, medicine), data are often available in roughly rounded form and in relatively large samples. Statistical procedures vary in their sensitivity to data rounding. We examine how different normality criteria control the probability of Type I errors under rounded data. It is found that criteria based on sample moments are robust to rounding, while criteria based on order statistics are unstable. This is the opposite of robustness to outliers (spikes), where sample moments are more sensitive and less robust than order statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Testing hypotheses for multivariate normal distribution with fuzzy random variables.
- Author
-
Hesamian, Gholamreza and Ghasem Akbari, Mohamad
- Subjects
DISTRIBUTION (Probability theory), GAUSSIAN distribution, HYPOTHESIS, RANDOM variables, FUZZY sets - Abstract
There are several studies on fuzzy univariate hypothesis tests corresponding to a normal distribution. This study proposes a fuzzy statistical test for the mean and variance-covariance matrix of a multivariate normal distribution with fuzzy random variables. For this purpose, a notion of fuzzy multivariate normal random variable with fuzzy mean and non-fuzzy variance-covariance matrix was first developed. Then, the concepts of fuzzy type-I error, fuzzy type-II error, fuzzy power, non-fuzzy significance level and fuzzy p-value were extended. A degree-based criterion was also suggested to compare fuzzy p-values against a specific significance level in order to decide whether to accept or reject the underlying hypotheses. The effectiveness of the proposed fuzzy hypothesis test is examined through some numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Using statistical tests to compare the coefficient of performance of air source heat pump water heaters.
- Author
-
Tangwe, S. and Kusakana, K.
- Subjects
WATER heaters, HEAT pumps, WATER pumps, KRUSKAL-Wallis Test, FLOW meters - Abstract
The study compared the coefficient of performance (COP) of two residential types of air source heat pump (ASHP) water heater using statistical tests. The COPs were determined from controlled volumes of hot water (150, 50 and 100 L) drawn off from each tank at different time-of-use periods (morning, afternoon and evening) during summer and winter. Power meters, flow meters and temperature sensors were installed on both types of ASHP water heater to measure the data needed to determine the COPs. The results showed that the mean COPs of the split and integrated type ASHP water heaters were 2.965 and 2.652 for summer and 2.657 and 2.202 for winter. In addition, the p-values for the group COPs of the split and integrated type ASHP water heaters during winter and summer were 7.09 x 10^-24 and 1.01 x 10^-11, based on the one-way ANOVA and Kruskal-Wallis tests. It can be concluded that, despite the year-round performance of both the split and integrated type ASHP water heaters, there is a significant difference in COP at the 1% significance level among the four groups. Furthermore, both statistical tests confirmed these outcomes in comparisons of the mean COPs among the four groups based on the multiple comparison algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
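The one-way ANOVA F statistic used alongside the Kruskal-Wallis test in this comparison reduces to a few lines. A generic sketch with toy numbers, not the study's COP data:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: deviations from each group's own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(one_way_anova_f([[1, 2, 3], [2, 3, 4]]))    # 1.5
print(one_way_anova_f([[1, 2, 3], [11, 12, 13]]))  # 150.0: well-separated groups
```

The F value is then compared with the F(k−1, n−k) distribution at the chosen significance level; Kruskal-Wallis serves as the rank-based cross-check when normality is in doubt.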
28. How to calculate statistical power for vegetation research
- Author
-
Mohammad Mousaei Sanjerehei
- Subjects
effect size, statistical power, sample size, significance level, vegetation, Technology (General), T1-995, Science - Abstract
Calculation of statistical power is important for proper interpretation of research results. Statistical power depends on the selected significance level, the sample size and the effect size. Selection of an appropriate formula for calculating the power of a test depends on the study design, the type and statistical distribution of the data, and the statistical test. In this paper, several formulas are presented, with examples, for calculating power for a one-sample mean test and a one-sample proportion test, comparisons between two independent groups and two paired groups, correlation analysis, simple and multiple linear regression, simple and multiple logistic regression, contingency tables and analysis of variance (ANOVA).
- Published
- 2020
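The simplest case in the list above, power for a one-sample mean test, can be sketched with the normal approximation power ≈ Φ(|δ|√n/σ − z₁₋α/₂); the δ = 0.5, n = 32 values below are illustrative, not from the paper:

```python
import math
from statistics import NormalDist

def power_one_sample_z(delta: float, sigma: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided one-sample z-test to detect a
    true mean shift of delta (the tiny opposite-tail term is ignored)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(delta) / sigma * math.sqrt(n) - z_crit)

print(power_one_sample_z(0.5, 1.0, 32))  # ~0.81
```

The three inputs the abstract names (significance level, sample size, effect size) appear explicitly in the formula, which is why power cannot be read off from any one of them alone.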
29. Common inaccuracies and errors in the application of statistical methods in soil science
- Author
-
V. P. Samsonova and J. L. Meshalkina
- Subjects
statistics designations ,pseudoreplication ,significance level ,confidence interval ,hypothesis testing ,criterion power ,correlation coefficient ,Agriculture (General) ,S1-972 - Abstract
The most common inaccuracies and errors in the application of statistical methods found in Russian publications on soil science are considered. When designating random variables and distribution parameters, Greek letters should be reserved for quantities that refer to general populations and Latin letters for sample quantities. A detailed description of the experiment, and of what the replications refer to, allows correct conclusions to be drawn from the study. Pseudoreplication must be avoided, as when results at closely located sampling points are treated as characteristics of soil variability over large distances. Expanding the list of descriptive statistics allows a specific study to be used in meta-analysis. Calculating the confidence interval for the mean using Student's test at different significance levels widens the range of possible values of the mean, but this approach is justified only if the indicator does not depart too much from the normal distribution. When testing statistical hypotheses, attention must be paid not only to the significance level but also to the power of the criterion. The hypothesis of a normal distribution can be tested using various criteria. The success of applying a criterion depends not only on the validity of the null hypothesis (a truly normal distribution) but also on other factors: the sample size and the alternatives against which the criterion tests the hypothesis. Any statement about the type of relationship between features based on the correlation coefficient (Pearson or Spearman) is meaningless without specifying the number of replicates, since it is the number of replicates that determines the significance of the difference between the correlation coefficient and zero. It is proposed that authors and reviewers pay closer attention to such errors.
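The point about correlation coefficients can be made concrete with a short sketch: the same Pearson r is or is not significantly different from zero depending only on the number of replicates. The standard t-approximation for testing rho = 0 is assumed.

```python
from scipy import stats

def corr_pvalue(r, n):
    """Two-sided p-value for H0: rho = 0, given a Pearson r and sample size n,
    via the t-statistic t = r * sqrt(n - 2) / sqrt(1 - r^2) with n - 2 df."""
    t = r * ((n - 2) ** 0.5) / (1 - r * r) ** 0.5
    return 2 * stats.t.sf(abs(t), df=n - 2)

# The same r = 0.5 is not significant with n = 10 but highly significant with n = 100.
p_small = corr_pvalue(0.5, 10)    # roughly 0.14
p_large = corr_pvalue(0.5, 100)   # far below 0.001
```

This is exactly why a reported correlation without its number of replicates carries no evidential weight.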
- Published
- 2020
- Full Text
- View/download PDF
30. Investigating Temporal and Spatial Precipitation Patterns in the Southern Mid-Atlantic United States
- Author
-
Ishrat Jahan Dollan, Viviana Maggioni, and Jeremy Johnston
- Subjects
extreme precipitation ,trends ,non-parametric ,significance level ,seasonal ,Environmental sciences ,GE1-350 - Abstract
The investigation of regional vulnerability to extreme hydroclimatic events (e.g., floods and hurricanes) is quite challenging due to its dependence on reliable precipitation estimates. Better understanding of past precipitation trends is crucial to examine changing precipitation extremes, optimize future water demands, stormwater infrastructure, extreme event measures, irrigation management, etc., especially if combined with future climate and population projections. The objective of the study is to investigate the spatial-temporal variability of average and extreme precipitation at a sub-regional scale, specifically in the Southern Mid-Atlantic United States, a region of diverse topography that is among the fastest-growing areas in North America. Particularly, this work investigates past precipitation trends and patterns using the North American Land Data Assimilation System, Version 2 (NLDAS-2, 12 km/1 h resolution) reanalysis dataset during 1980–2018. Both parametric (linear regression) and non-parametric (e.g., Theil-Sen) robust statistical tools are employed in the study to analyze trend magnitudes, which are tested for statistical significance using the Mann-Kendall test. Standard precipitation indices from ETCCDI are also used to characterize trends in the relative contribution of extreme events to precipitation in the area. In the region, an increasing trend (4.3 mm/year) is identified in annual average precipitation with ~34% of the domain showing a significant increase (at the 0.1 significance level) of +3 to +5 mm/year. Seasonal and sub-regional trends are also investigated, with the most pronounced increasing trends identified during summers along the Virginia and Maryland border. The study also finds a statistically significant positive trend (at a 0.05 significance level) in the annual maximum precipitation.
Furthermore, the number of daily extremes (daily total precipitation higher than the 95th and 99th percentiles) also depicts statistically significant increases, indicating the increased frequency of extreme precipitation events. Investigations into the proportion of annual precipitation occurring on wet days and extremely wet days (95th and 99th percentile) also indicate a significant increase in their relative contribution. The findings of this study have the potential to improve local-scale decision-making in terms of river basin management, flood control, irrigation scheme scheduling, and stormwater infrastructure planning to address urban resilience to hydrometeorological hazards.
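A hedged sketch of the trend analysis described above: a Theil-Sen slope for the trend magnitude and a Kendall-tau-based test in the spirit of Mann-Kendall. The precipitation series is synthetic, with an assumed trend of 4.3 mm/year matching the reported figure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1980, 2019)
# Hypothetical annual precipitation: 4.3 mm/year upward trend plus Gaussian noise
precip = 1000 + 4.3 * (years - 1980) + rng.normal(0, 20, years.size)

# Robust non-parametric trend magnitude (with a confidence band on the slope)
slope, intercept, lo, hi = stats.theilslopes(precip, years)
# Mann-Kendall-style monotonic trend test via Kendall's tau against time
tau, p_mk = stats.kendalltau(years, precip)

significant = p_mk < 0.1   # the abstract's 0.1 significance level
```

The Theil-Sen slope is insensitive to outlier years, which is why it is preferred over ordinary least squares for precipitation records.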
- Published
- 2022
- Full Text
- View/download PDF
31. Acoustic emission feature extraction of liquid film seals based on SVD-AVMD.
- Author
-
孙鑫晖, 刘怀顺, 王明洋, 李勇凡, 王增丽, 郝木明, 力 宁, and 任宝杰
- Subjects
SINGULAR value decomposition ,LIQUID films ,ACOUSTIC emission ,LIQUID surfaces ,SIGNAL processing ,RANDOM noise theory - Abstract
Copyright of Journal of China University of Petroleum is the property of China University of Petroleum and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2021
- Full Text
- View/download PDF
32. PIDT: A Novel Decision Tree Algorithm Based on Parameterised Impurities and Statistical Pruning Approaches
- Author
-
Stamate, Daniel, Alghamdi, Wajdi, Stahl, Daniel, Logofatu, Doina, Zamyatin, Alexander, Rannenberg, Kai, Editor-in-Chief, Sakarovitch, Jacques, Series Editor, Goedicke, Michael, Series Editor, Tatnall, Arthur, Series Editor, Neuhold, Erich J., Series Editor, Pras, Aiko, Series Editor, Tröltzsch, Fredi, Series Editor, Pries-Heje, Jan, Series Editor, Whitehouse, Diane, Series Editor, Reis, Ricardo, Series Editor, Furnell, Steven, Series Editor, Furbach, Ulrich, Series Editor, Winckler, Marco, Series Editor, Rauterberg, Matthias, Series Editor, Iliadis, Lazaros, editor, Maglogiannis, Ilias, editor, and Plagianakos, Vassilis, editor
- Published
- 2018
- Full Text
- View/download PDF
33. Hypothesis Testing & ANOVA
- Author
-
Mooi, Erik, Sarstedt, Marko, Mooi-Reci, Irma, Mooi, Erik, Sarstedt, Marko, and Mooi-Reci, Irma
- Published
- 2018
- Full Text
- View/download PDF
34. Testing Hypotheses about Covariance Functions of Cylindrical and Circular Images.
- Author
-
Krasheninnikov, V. R., Kuvaiskova, Yu. E., Malenova, O. E., and Subbotin, A. Yu.
- Abstract
Imaging problems are becoming increasingly important due to the development of systems for aerospace monitoring of the Earth, radio and sonar location, medical devices for early diagnosis of diseases, etc. However, most of the work on image processing is related to images defined on rectangular two-dimensional grids or grids of higher dimensions. In some practical situations, images are defined on a cylinder (for example, images of pipelines, blood vessels, rotation details) or on a circle (for example, images of a facies (thin film) of dried biological fluid, an eye, a cut of a tree trunk). The specifics of the field of assignment of such images must be taken into account in their models and processing algorithms. In this paper, autoregressive models of cylindrical and circular images are considered and expressions of the correlation function are given depending on the autoregressive parameters. Spiral scanning of a cylindrical image can be viewed as a quasi-periodic process due to the correlation of image lines. To represent heterogeneous images with random heterogeneity, "double stochastic" models are used, in which one or several control images set the parameters of the resulting image. The available image can be used to estimate the parameters of the model of its control images. However, this is not sufficient to fully identify the hidden control images. It is also necessary to evaluate their covariance functions and find out whether they correspond to the hypothetical ones. The paper proposes a test for testing the hypotheses about the covariance functions of cylindrical and circular images with a study of its power relative to the parameters of the image model. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. Sample Size Calculation
- Author
-
Stock, Eileen M., Biswas, Kousick, Itani, Kamal M.F., editor, and Reda, Domenic J., editor
- Published
- 2017
- Full Text
- View/download PDF
36. The correlation between workers' working pressure and physical and mental health analyzed by the job demand-resource stress model.
- Author
-
Lu, Jingfu, Yu, Yanliang, Zhao, Yang, and Jenkin, Michelle
- Subjects
JOB stress ,RESEARCH ,RESEARCH evaluation ,MENTAL health ,PSYCHOLOGY ,REGRESSION analysis ,PSYCHOLOGY of teachers ,PSYCHOMETRICS ,COMPARATIVE studies ,CRONBACH'S alpha ,EMPLOYMENT ,EMPLOYEES' workload ,CONCEPTUAL models ,JOB satisfaction ,QUALITY of life ,QUESTIONNAIRES ,STATISTICAL correlation ,INDUSTRIAL hygiene ,PSYCHOLOGICAL stress ,MENTAL illness ,HEALTH promotion ,MENTAL health services - Abstract
BACKGROUND: Against the background of the information society, the pressure teachers face from work and life is increasing, and this working pressure has an inevitable connection with teachers' physical and mental health. OBJECTIVE: To analyze the correlation between workers' working pressure and mental health status, expand the application of the job demand-resource stress (JD-RS) model in the adjustment of working characteristic pressure, and achieve coordinated development between working pressure and mental health. METHODS: The teaching profession is taken as the research object. First, the pressure source questionnaire and the Symptom Check List 90 (SCL-90) are chosen to measure working pressure and mental health, and the reliability and validity of the pressure source questionnaire are tested. Second, teachers' gender, duty, teaching age, and workload are chosen as the basis for comparing and analyzing the impact of various dimensions and project factors on teachers' working pressure and mental health. Finally, using univariate linear regression analysis, the correlation between teachers' working pressure and mental health is analyzed and characterized. RESULTS: The measurement tool based on the pressure source questionnaire shows good reliability and validity: Cronbach's coefficients for all five dimensions are greater than 0.8, and all fit indicators meet psychometric requirements. Significance analysis shows that gender, duty, teaching age, and workload have different levels of significant influence on teachers' working pressure and mental health. Linear regression analysis shows that teachers' working pressure has a significant, predictive impact on their physical and mental health; teachers who bear high-intensity pressure have psychological problems.
CONCLUSIONS: The research based on the JD-RS model has a positive role in promoting the balanced and coordinated development of working pressure and the physical and mental health of employed workers. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. STATISTICAL SIGNIFICANCE – PANACEA OR STUMBLING BLOCK?
- Author
-
Ламбова, Маргарита
- Subjects
- *
STATISTICAL hypothesis testing , *STATISTICAL significance , *GOAL (Psychology) , *LOGIC , *HYPOTHESIS , *PSEUDOSCIENCE - Abstract
Thoughts and considerations are presented about the term “statistical significance” – a common tool for “proving” that theoretical models designed on the basis of assumptions are compatible with practice. In accordance with the set goal, and based on the theoretical features and logic of the concept, problematic issues and possibilities for abuse when verifying statistical hypotheses are revealed. The suggested argumentation enables one to claim that the “prescriptive” verification of statistical hypotheses creates conditions for developing a pseudoscience supported by incorrect conclusions drawn from “statistically significant” results. [ABSTRACT FROM AUTHOR]
- Published
- 2021
38. USE OF NI LABVIEW AND DAQ SOLUTION FOR CONTROLLING THE VACUUM LEVEL IN A MECHANICAL MILKING MACHINE.
- Author
-
ROŞCA, Radu, CÂRLESCU, Petru, and ŢENU, Ioan
- Subjects
- *
MILKING machines , *VACUUM pumps , *FREQUENCY tuning , *ELECTRIC drives , *PID controllers , *ELECTRIC motors - Abstract
The VFD technology is able to adjust the rate of air removal from the milking system by changing the speed of the vacuum pump motor. Based on the NI LabView 7.1 software and the USB 6009 DAQ board a PID controller was developed in order to control the electric motor driving the vacuum pump. The PID controller was tuned using the Ziegler-Nichols tuning rules for the frequency response method, based on preliminary tests performed over the milking system. Another series of comparative tests aimed to evaluate the operating parameters of the milking system (pulsation rate and ratio, duration of the pulsation phases) and vacuum stability, for different vacuum levels. The tests showed that vacuum regulation by the means of the PID controller did not adversely affect the working parameters of the system, while achieving better results regarding the stability of the permanent vacuum. [ABSTRACT FROM AUTHOR]
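The Ziegler-Nichols frequency-response tuning mentioned above can be sketched as follows. The ultimate gain and ultimate period values are hypothetical, not taken from the paper; the gain formulas are the classic Ziegler-Nichols PID rules.

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols PID gains from the ultimate gain Ku and the
    ultimate oscillation period Tu (frequency-response method)."""
    Kp = 0.6 * Ku
    Ti = Tu / 2.0   # integral time
    Td = Tu / 8.0   # derivative time
    # Parallel-form gains for u = Kp*e + Ki*integral(e) + Kd*de/dt
    return Kp, Kp / Ti, Kp * Td

# Hypothetical values for a vacuum-pump speed loop (illustration only)
Kp, Ki, Kd = ziegler_nichols_pid(Ku=4.0, Tu=2.0)
```

In practice Ku and Tu are found by raising the proportional gain until the vacuum level oscillates steadily, which matches the preliminary tests described in the abstract.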
- Published
- 2021
39. Significance Level
- Author
-
Kipfer, Barbara Ann
- Published
- 2021
- Full Text
- View/download PDF
40. Exploring the Direct & Indirect Costs of Accident: An Empirical Analysis for KVMRT Projects in Malaysia
- Author
-
Mohd Kamar, Izatul Farrita, Che Ahmad, Asmalia, Kasiron, Mohd Yusof, Mohd Kamar, Izatul Farrita, Che Ahmad, Asmalia, and Kasiron, Mohd Yusof
- Abstract
The Klang Valley Mass Rapid Transit (KVMRT) System is set to be one of the most important and largest transport infrastructure projects in Malaysia. Since its inception in 2011, the rapid development of the KVMRT System has contributed to a substantial amount of costs related to safety and health issues. Fatalities, serious injuries, and damage to property have occurred every year due to the rapid construction of this project. Work injuries create significant economic and humanitarian consequences for society, especially in a project involving billions of Malaysian Ringgit (RM). Awareness of accident costs, especially payment costs, is absent because contractors, clients and consultants leave these matters to the insurance company, and they routinely ignore the indirect costs of accidents without realising the true losses involved. Therefore, this paper studies the types of indirect accident costs incurred during the construction of KVMRT Projects. Sixty (60) reported accident cases were examined to measure the level of significance of those cost items based on the experiences of safety personnel. The study found that the most significant contributors to indirect costs are the management cost component and the accident report cost, covering the investigation process through to the preparation of the report. The findings of the study may assist stakeholders in estimating accident costs during construction projects and, hence, enable them to plan their investments in safety measures in a more insightful manner.
- Published
- 2023
41. Variability of Sugars Concentrations in Infant Follow-on Formulas with Higher Consumption in Peru: A Preliminary Study
- Author
-
Munives-Marcos, Angélica K., Arauzo-Sinchez, Carlos J., Cupé-Araujo, Ana C., Ladera-Castañeda, Marysela I., Cervantes-Ganoza, Luis A., Cayo-Rojas, César F., Munives-Marcos, Angélica K., Arauzo-Sinchez, Carlos J., Cupé-Araujo, Ana C., Ladera-Castañeda, Marysela I., Cervantes-Ganoza, Luis A., and Cayo-Rojas, César F.
- Abstract
Aim: The aim of the present preliminary study was to determine the sugar concentration in the infant follow-on formulas most widely consumed in Peru. Materials and methods: In this descriptive and observational study, the sample was represented by five brands of infant follow-on formulas most consumed in Peru (A, Similac 2; B, Enfamil 2®; C, NAN 2®; D, Baby Lac Pro 2®; and E, Lacti Kids Premium 2®), with two samples of each, collected at two different locations in the Peruvian capital. Subsequently, the concentration of total and individual sugars (lactose, sucrose, glucose, fructose, and maltose) was determined using the high-performance liquid chromatography (HPLC) method in a specialized laboratory. For the comparison of means, Welch's robust analysis of variance (ANOVA) test for equality of means and Tukey's post hoc test were used. The significance level was p < 0.05. Results: The total sugars concentration per 100 gm of the five infant follow-on formulas showed a mean of 38.9 ± 11.03 gm, with Similac 2 showing the highest concentration (50.33 ± 0.11 gm) and Enfamil 2 the lowest (22.75 ± 0.06 gm). The average sugars recorded in the laboratory were compared with those on the product label for Similac 2 (50.3 and 53.1 gm), NAN 2 (46.5 and 51.5 gm), Baby Lac Pro 2 (41.5 and 57.0 gm), Lacti Kids Premium 2 (33.3 and 57.0 gm) and Enfamil 2 (22.8 and 56.0 gm). Furthermore, when comparing the infant follow-on formulas, significant differences were observed among all sugar concentrations (p < 0.001), with Similac 2 having the significantly highest sugar concentration (p < 0.001) and Enfamil 2 the significantly lowest (p < 0.001). Regarding individual sugars per 100 gm analyzed, fructose and maltose registered values
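Since SciPy does not ship Welch's heteroscedasticity-robust ANOVA directly, here is a minimal hand-rolled sketch of the test named above, using the textbook Welch formulas. The sample data are hypothetical, loosely based on the reported brand means.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's one-way ANOVA, robust to unequal variances.
    Returns (F, df1, df2, p)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    mw = np.sum(w * m) / np.sum(w)              # weighted grand mean
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    h = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    f = a / (1 + 2 * (k - 2) * h / (k ** 2 - 1))
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * h)
    return f, df1, df2, stats.f.sf(f, df1, df2)

rng = np.random.default_rng(2)
# Hypothetical sugar measurements (gm per 100 gm) for the five brands
samples = [rng.normal(mu, 0.1, 6) for mu in (50.3, 22.8, 46.5, 41.5, 33.3)]
f, df1, df2, p = welch_anova(*samples)
```

With brand means this far apart relative to the measurement spread, the test rejects equality of means decisively, matching the p < 0.001 reported.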
- Published
- 2023
42. Why and When Statistics is Required, and How to Simplify Choosing Appropriate Statistical Techniques During Ph.D. Program in India?
- Author
-
H. R. Ganesha and P. S. Aithal
- Subjects
Coursework ,History ,Polymers and Plastics ,Null Hypothesis ,Normal Distribution ,Research Methodology ,Median ,Non-parametric test ,Skewness ,Industrial and Manufacturing Engineering ,Measures of Dispersion ,Ph.D ,Type 1 Error ,FOS: Mathematics ,Bell Curve ,Mean ,Postmodernism ,Inferential Statistics ,Business and International Management ,Alpha ,Kurtosis ,Descriptive Statistics ,Significance Testing ,Statistics ,PhD ,Parametric Test ,Beta ,JASP ,Hypothesis Testing ,General Medicine ,Range ,Statistical Techniques ,Type 2 Error ,Coefficient of Variation ,Alternate Hypothesis ,Research Design ,Significance Level ,Standard Deviation ,Statistical Significance ,Mode ,Doctoral Research ,Research Hypothesis ,Measures of Central Tendency - Abstract
Purpose: The purpose of this article is to explain the key reasons for the existence of statistics in doctoral-level research; why and when statistical techniques are to be used; how to statistically describe the units of analysis/samples and the data collected from them; how to statistically discover the relationship between the variables of the research question; a step-by-step process for statistical significance/hypothesis testing; tricks for selecting an appropriate statistical significance test; and, most importantly, the most user-friendly free software for carrying out statistical analyses. In turn, it guides Ph.D. scholars in choosing appropriate statistical techniques across the various stages of the doctoral-level research process to ensure a high-quality research output. Design/Methodology/Approach: Postmodernism philosophical paradigm; inductive research approach; observation data collection method; longitudinal data collection time frame; qualitative data analysis. Findings/Result: Ph.D. scholars need only understand i) that they need not be experts in mathematics/statistics and that statistics is easy to learn during a Ph.D.; ii) the difference between measures of central tendency and dispersion; iii) the difference between association, correlation, and causation; iv) the difference between null and research/alternate hypotheses; v) the difference between Type I and Type II errors; vi) the key drivers for choosing a statistical significance test; and vii) the best software for carrying out statistical analyses. Scholars will then be able to choose appropriate statistical techniques on their own across the various steps of the doctoral-level research process and comfortably defend their research findings. Originality/Value: There is a vast literature about statistics, probability theory, measures of central tendency and dispersion, formulas for finding the relationship between variables, and statistical significance tests. However, only a few works have explained them together comprehensively in a way conceivable to Ph.D. scholars. In this article, we attempt to explain the reasons for the existence, objectives, purposes, and essence of ‘Statistics’ briefly and comprehensively, with simple examples and tricks that should eradicate the fear of ‘Statistics’ among Ph.D. scholars. Paper Type: Conceptual.
- Published
- 2022
- Full Text
- View/download PDF
43. Analyze Phase
- Author
-
Muralidharan, K. and Muralidharan, K.
- Published
- 2015
- Full Text
- View/download PDF
44. An alternative approach to testing displacements in a geodetic network
- Author
-
Simona Savšek
- Subjects
statistical hypothesis testing ,significance level ,simulated distribution function ,critical value ,actual risk ,point displacement ,Geodesy ,QB275-343 - Abstract
In geodesy, statistical testing aids in determining the extent to which the criteria and requirements of the measurement and calculation procedures have been fulfilled. A “rule of thumb” method that compares test statistics to the constants Tcrit = 3 or Tcrit = 5 has become established. The test statistic T is the ratio between the displacement and its precision. Since it does not follow any of the known distribution functions, statistical simulations are used to assess its empirical distribution in 2D and 3D geodetic networks. The proposed alternative procedure leads to a more precise detection of significant displacements at a given test significance level α. Regardless of the network's dimensionality, Tcrit obtains the value of 3 at a risk level below 1%. When 5% is considered an acceptable risk level, the critical value can be lower than 3 or 5. Thus, significant displacements should be assessed with regard to the acceptable risk level and not according to the usual “rule of thumb”.
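The simulation idea can be sketched directly for a simplified 2D case: under H0 (no displacement) the displacement estimate is pure Gaussian noise, and the empirical quantiles of T give the critical values. Unit, uncorrelated precision is assumed here, which is a simplification of the real network case.

```python
import numpy as np

rng = np.random.default_rng(4)
# Under H0 a 2D displacement estimate is zero-mean Gaussian noise;
# T is the displacement norm over its (here unit) precision.
sims = 200_000
T = np.linalg.norm(rng.normal(size=(sims, 2)), axis=1)

t_crit_5 = np.quantile(T, 0.95)   # ~2.45: below the rule-of-thumb value of 3
t_crit_1 = np.quantile(T, 0.99)   # ~3.03: the rule-of-thumb 3 appears at ~1% risk
```

This reproduces the abstract's observation: Tcrit = 3 corresponds to a risk near 1%, while at a 5% acceptable risk the critical value is noticeably lower than 3.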
- Published
- 2017
- Full Text
- View/download PDF
45. Selecting a Model for Forecasting
- Author
-
Jennifer L. Castle, Jurgen A. Doornik, and David F. Hendry
- Subjects
model selection ,forecasting ,location shifts ,significance level ,Autometrics ,Economics as a science ,HB71-74 - Abstract
We investigate forecasting in models that condition on variables for which future values are unknown. We consider the role of the significance level because it guides the binary decisions whether to include or exclude variables. The analysis is extended by allowing for a structural break, either in the first forecast period or just before. Theoretical results are derived for a three-variable static model, but generalized to include dynamics and many more variables in the simulation experiment. The results show that the trade-off for selecting variables in forecasting models in a stationary world, namely that variables should be retained if their noncentralities exceed unity, still applies in settings with structural breaks. This provides support for model selection at looser than conventional settings, albeit with many additional features explaining the forecast performance, and with the caveat that retaining irrelevant variables that are subject to location shifts can worsen forecast performance.
- Published
- 2021
- Full Text
- View/download PDF
46. Testing statistical hypotheses for intuitionistic fuzzy data.
- Author
-
Akbari, Mohammad Ghasem and Hesamian, Gholamreza
- Subjects
- *
STATISTICAL hypothesis testing , *NULL hypothesis , *P-value (Statistics) , *DECISION making , *PARAMETERS (Statistics) - Abstract
The present work aims to extend the classical statistical methods based on intuitionistic fuzzy data to hypothesis tests about an exact parameter of the underlying population. In this approach, the concepts of the intuitionistic fuzzy type-I error, intuitionistic fuzzy type-II error, intuitionistic fuzzy power and intuitionistic fuzzy p-value are extended at a given significance level. A degree-based criterion is also suggested to compare the intuitionistic fuzzy p-value with a specific significance level, to decide whether to accept or reject the null hypothesis. An applied example is examined based on the proposed method in both parametric and nonparametric cases. The results indicate that the proposed method can be successfully applied to all parametric/nonparametric statistical hypothesis tests based on intuitionistic fuzzy continuous data. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
47. Using a Shiny app to teach the concept of power.
- Author
-
Arnholt, Alan T.
- Subjects
- *
CONCEPT learning , *FALSE positive error - Abstract
Summary: Understanding and computing power and the relationship between sample size and power are facilitated via a Shiny app. The Shiny app allows students to solve various scenarios by entering different parameters or moving sliders. [ABSTRACT FROM AUTHOR]
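The core computation behind such an app can be sketched without Shiny: the smallest sample size reaching a target power in a two-sided one-sample z-test, i.e. the sample-size/power relationship the sliders explore. The standard normal-approximation formula is assumed.

```python
from math import ceil
from scipy.stats import norm

def n_for_power(delta, sigma, power=0.8, alpha=0.05):
    """Smallest n giving the target power in a two-sided one-sample z-test,
    using n = ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(((z_a + z_b) * sigma / delta) ** 2)

n = n_for_power(delta=0.5, sigma=1.0)   # 32 for a half-SD effect at 80% power
```

Moving the "effect size" slider down, or the "power" slider up, makes the required n grow quadratically, which is the relationship the app visualizes.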
- Published
- 2019
- Full Text
- View/download PDF
48. Testing for monotonic trend in time series based on resampling methods.
- Author
-
Zhu, Xiaojie, Ng, Hon Keung Tony, and Woodward, Wayne A.
- Subjects
- *
STATISTICAL bootstrapping , *TIME series analysis , *MONTE Carlo method , *ASYMPTOTIC distribution , *GAUSSIAN distribution - Abstract
In this paper, we propose several tests for monotonic trend based on Brillinger's test statistic (1989, Biometrika, 76, 23–30). When there are highly correlated residuals or short record lengths, Brillinger's test procedure tends to have a significance level much higher than the nominal level. It is found that this may be related to the discrepancy between the empirical distribution of the test statistic and the asymptotic normal distribution. Hence, in this paper, we propose three bootstrap-based procedures based on Brillinger's test statistic to test for monotonic trend. The performance of the proposed test procedures is evaluated through an extensive Monte Carlo simulation study and compared to other trend test procedures in the literature. It is shown that the proposed bootstrap-based Brillinger test procedures can control the significance levels well and provide satisfactory power in testing for monotonic trend under different scenarios. [ABSTRACT FROM AUTHOR]
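A resampling-based trend test in the same spirit can be sketched with a permutation analogue: the observed slope is compared against slopes of time-shuffled copies of the series. This illustrates the resampling idea only; it is not Brillinger's statistic or the authors' bootstrap, and it assumes exchangeable (uncorrelated) errors.

```python
import numpy as np

rng = np.random.default_rng(5)

def perm_trend_pvalue(y, n_perm=2000, rng=rng):
    """Permutation p-value for a monotonic trend, using the OLS slope of y on
    time as the test statistic (valid for exchangeable errors)."""
    t = np.arange(len(y))
    obs = np.polyfit(t, y, 1)[0]
    null = np.array([np.polyfit(t, rng.permutation(y), 1)[0]
                     for _ in range(n_perm)])
    return np.mean(np.abs(null) >= abs(obs))

flat = rng.normal(size=60)                              # no trend
rising = 0.05 * np.arange(60) + rng.normal(0, 0.5, 60)  # clear upward trend
p_flat, p_rise = perm_trend_pvalue(flat), perm_trend_pvalue(rising)
```

Under serial correlation a plain permutation breaks down, which is exactly why the paper resorts to bootstrap schemes that preserve the dependence structure.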
- Published
- 2019
- Full Text
- View/download PDF
49. Evidence From Marginally Significant t Statistics.
- Author
-
Johnson, Valen E.
- Subjects
EVIDENCE ,STATISTICAL significance ,BAYESIAN analysis ,ODDS ratio ,LIKELIHOOD ratio tests - Abstract
This article examines the evidence contained in t statistics that are marginally significant in 5% tests. The bases for evaluating evidence are likelihood ratios and integrated likelihood ratios, computed under a variety of assumptions regarding the alternative hypotheses in null hypothesis significance tests. Likelihood ratios and integrated likelihood ratios provide a useful measure of the evidence in favor of competing hypotheses because they can be interpreted as representing the ratio of the probabilities that each hypothesis assigns to observed data. When they are either very large or very small, they suggest that one hypothesis is much better than the other in predicting observed data. If they are close to 1.0, then both hypotheses provide approximately equally valid explanations for observed data. I find that p-values that are close to 0.05 (i.e., that are "marginally significant") correspond to integrated likelihood ratios that are bounded by approximately 7 in two-sided tests, and by approximately 4 in one-sided tests. The modest magnitude of integrated likelihood ratios corresponding to p-values close to 0.05 clearly suggests that higher standards of evidence are needed to support claims of novel discoveries and new effects. [ABSTRACT FROM AUTHOR]
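The magnitudes reported above can be reproduced with the classical bound exp(z²/2) on the likelihood ratio against a point null for a just-significant z (or large-df t) statistic. This is a simpler bound than the paper's integrated likelihood ratios, used here only as a hedged illustration of why the figures land near 7 and 4.

```python
from math import exp
from scipy.stats import norm

def max_likelihood_ratio(p, two_sided=True):
    """Upper bound on the likelihood ratio favoring H1 for a z statistic that is
    just significant at level p, taking the best-supported simple alternative:
    LR <= exp(z^2 / 2)."""
    z = norm.ppf(1 - p / 2) if two_sided else norm.ppf(1 - p)
    return exp(z * z / 2)

lr_two = max_likelihood_ratio(0.05, two_sided=True)    # ~6.8: the "about 7" bound
lr_one = max_likelihood_ratio(0.05, two_sided=False)   # ~3.9: the "about 4" bound
```

Evidence of at most 7-to-1 is modest, which is the article's case for demanding stricter thresholds for claims of new discoveries.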
- Published
- 2019
- Full Text
- View/download PDF
50. PATTERN FORMATION OF BUSINESS CONDITIONS IN DOMESTIC MARKET OF CROP PRODUCTION
- Author
-
Svitlana Strapchuk
- Subjects
cluster analysis ,agricultural enterprises ,crop products ,price ,the price of clusters ,t-criterion ,magnitude chart ,standard errors ,significance level ,Economic growth, development, planning ,HD72-88 - Abstract
The purpose of the article is to identify price-based clusters of crop-producing agricultural enterprises by determining the amplitude of their price fluctuations. Methodology. The study is based on grouping statistical data from agricultural enterprises using cluster analysis, followed by reliability evaluation of the pre-selected clusters by t-test and charting the range of the selected index. Cluster analysis of agricultural enterprises in Ukraine was conducted using the STATISTICA program. The distance between clusters was calculated as the Euclidean distance. The object of the study was data on prices for agricultural enterprises by regions of Ukraine in 2013. As a result, an appropriate number of groups has been determined for each produce type across the regions of Ukraine in the plane of market prices. The process of consistently combining objects into clusters is shown in the graphs as agglomerative clustering dendrograms of the regions of Ukraine for such products as wheat, grain corn and sunflower seeds. In total, 7311 enterprises growing wheat, 5034 growing corn and 6124 growing sunflower were examined. Results. A year-long study of price fluctuations in agricultural enterprises within the regions of Ukraine established similarities in the nature of absolute and relative changes within the formed clusters. Four clusters were allocated for wheat, five for corn and three for sunflower seeds. The study of the selected groups confirms significant differences between them and allows enterprises in clusters with high price variability to build their marketing strategy on expectations, searching for sale options at the most favorable price. Practical value. The established differences among the selected clusters make it possible to forecast the price situation in various regions of Ukraine in terms of its deviation from the cluster average for each product. Accordingly, this will enable specific producers to define a marketing pricing strategy in the region for each product. Value/originality. The data on groups of growing crops permit the selection of forecast-based marketing strategies or rapid sale according to prevailing prices.
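The clustering pipeline described above (agglomerative clustering with Euclidean distances, cut into a chosen number of price clusters, visualizable as a dendrogram) can be sketched as follows. The price data are hypothetical, not the paper's enterprise data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
# Hypothetical regional wheat prices grouped around four price levels
prices = np.concatenate(
    [rng.normal(mu, 5, 6) for mu in (100, 140, 180, 240)]
).reshape(-1, 1)

# Agglomerative clustering; the linkage matrix Z encodes the dendrogram
Z = linkage(prices, method="ward", metric="euclidean")
labels = fcluster(Z, t=4, criterion="maxclust")   # cut into four clusters

n_clusters = len(set(labels))
```

Ward linkage with Euclidean distance merges the closest price groups step by step, mirroring the agglomerative dendrograms the study presents for wheat, corn and sunflower.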
- Published
- 2016