3,295 results for "Contingency tables"
Search Results
2. Is Chinese Dyslexia Similar Across Chinese Societies? Evidence from Hong Kong, Beijing, and Taipei.
- Author
- Cheah, Zebedee Rui En, McBride, Catherine, Meng, Xiangzhi, Lee, Jun Ren, and Huo, Shuting
- Subjects
- CHILDREN with dyslexia, DYSLEXIA, CHINESE language, CONTINGENCY tables, BAYESIAN analysis, PHONOLOGICAL awareness
- Abstract
While previous research has documented the unique aspects of Chinese dyslexia as compared to dyslexia in alphabetic scripts, it remains unclear whether differences in Chinese literacy experiences influence the manifestation of Chinese dyslexia. The present article first reviews the characteristics of Chinese languages and scripts, including important cognitive-linguistic correlates (rapid automatized naming, phonological, orthographic, and morphological awareness) of Chinese reading development and impairment. The diversity of Chinese literacy experiences across scripts, languages, and instructional practices, and consequently their impact on Chinese literacy acquisition in different Chinese societies, is also reviewed. Using an equivalent Chinese assessment battery administered to 91 children with dyslexia from Hong Kong, Beijing, and Taipei, we examined the subtypes of Chinese dyslexia across these three societies concurrently. With the four cognitive-linguistic skills included as the clustering variables, the hierarchical cluster analysis revealed four cognitive subtypes of dyslexia: 38% mild orthographic deficit subtype (OD), 33% phonological deficit subtype (PD), 18% morphological deficit subtype (MD), and 11% global deficit subtype (GD), each with its own cognitive-linguistic deficit profile. Interestingly, all four subtypes of dyslexia manifested poorer orthographic skills as compared to the control group. A Bayesian analysis of contingency tables further showed that the distribution of dyslexia subtypes remains similar across the three Chinese societies, suggesting invariance of the Chinese dyslexia construct. Findings highlight the importance of assessing orthographic processing, rapid automatized naming, phonological awareness, and morphological awareness in order to understand Chinese dyslexia, from both within-cultural and cross-cultural Chinese perspectives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Histopathology based study of Nile tilapia fish (Oreochromis niloticus) as a biomarker for water pollution evaluation in the southern gulf of Lake Tana, Ethiopia.
- Author
- Getnet, Mengesha Ayehu, Mekonnen, Muluken Yayeh, Yimam, Hailu Mazengia, Berihun, Asnakew Mulaw, and Malede, Birhan Anagaw
- Subjects
- NILE tilapia, WATER pollution, SUSTAINABLE fisheries, AGE differences, CONTINGENCY tables, GONADS
- Abstract
In the past decade, an increasing distribution of pollutants in the aquatic environment has been observed, causing integrative effects on fish. Likewise, due to anthropogenic activities, the southern gulf of Lake Tana is an impacted region, and the production of Nile tilapia is reduced. For this reason, the aim of this study was to conduct a histopathology-based assessment of the health status of 48 Nile tilapia from the southern gulf of Lake Tana and from aquaculture, using a cross-sectional study from February 2023 to May 2023. The study evaluated the histopathology of the gill, liver, gonad, and spleen organs using descriptive statistics accompanied by a 2 × 2 contingency table and t-test analysis. Different histological alterations were detected, and the number of fish affected by a specific alteration was expressed as percentage prevalence; of the total fish examined, hyperplasia (54.15%), followed by pigment deposits (52%), hemorrhage (50%), and immune cell infiltration (50%), were the most frequently detected alterations. Nile tilapia from the southern gulf of Lake Tana were 1.4 times (odds ratio) more likely to show histopathological alterations than those from aquaculture, although this difference was not statistically significant (p > 0.05). In addition, the study found a mean fish index value of 95.3 and regressive indices of the gill (13.6), liver (14.8), and gonad (12.3); moreover, the inflammatory index of the spleen (11.3) and the mean severity grades of the gill (2.35) and gonad (1.7) were obtained from the southern gulf of Lake Tana, and all of these values were significantly higher (p < 0.05) at this site than in aquaculture. In general, it has been found that tilapia from the southern gulf of Lake Tana showed higher pathological severity compared with aquaculture.
Among the four target organs evaluated, liver organs were observed to be the most damaged, while gonads were the least impacted organs. Therefore, it has been concluded that tilapia fish are living in abnormal conditions, so to ensure a sustainable fishery, water pollutant sources from Bahirdar city must receive proper attention, and future studies should consider age differences, seasonal variation, and the detection of specific pollutants. [ABSTRACT FROM AUTHOR]
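The study's 2 × 2 contingency-table comparison reduces to an odds ratio, as reported above. The arithmetic can be sketched as follows; the counts here are hypothetical, not the study's data:

```python
def odds_ratio(table):
    """Odds ratio of a 2x2 contingency table [[a, b], [c, d]]: (a*d) / (b*c)."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

# Hypothetical counts: rows = lake vs. aquaculture, columns = altered vs. not altered
print(odds_ratio([[18, 6], [16, 8]]))  # (18*8) / (6*16) = 1.5
```

An odds ratio above 1 (here 1.5 with the made-up counts, 1.4 in the study) indicates that alterations are more likely in the first row's group.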
- Published
- 2024
- Full Text
- View/download PDF
4. High-Dimensional Categorical Process Monitoring: A Data Mining Approach.
- Author
- Wang, Kai and Song, Zhenli
- Subjects
- EVIDENCE gaps, COLLECTIVE behavior, REAL numbers, CONTINGENCY tables, BIG data, FALSE discovery rate
- Abstract
The advent of industrial big data has provided an unprecedented opportunity to achieve a data-driven monitoring of large-scale complex processes. When a process involves massive categorical variables each evaluated by attribute levels rather than real numbers, which is common in modern manufacturing and service applications, the existing process monitoring methods would typically fail due to the curse of high dimensionality in modeling the joint distribution of these categorical variables. To fill this research gap, we propose a novel data mining–based framework—a nonparametric method—for High-Dimensional (HD) categorical process monitoring. Specifically, a series of multiscale frequent patterns are particularly defined and quickly extracted to characterize both the significant individual behaviors and the major collective behaviors of HD categorical variables. Then all these discovered multiscale patterns, serving as informative surrogates of the original HD categorical data, are monitored sequentially from low scale to high scale via a principled and powerful multiple hypotheses testing procedure embedded with an alpha spending function and a false discovery rate approach. The superiority of our proposed method is validated extensively by numerical simulations and real case studies. It is capable of maintaining a desired false alarm rate when the process is normal and becoming very sensitive to many different kinds of process shifts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Random Transpositions on Contingency Tables.
- Author
- Simper, Mackenzie
- Abstract
The space of contingency tables with n total entries and fixed row and column sums is in bijection with parabolic double cosets of S_n. Via this correspondence, the uniform distribution on S_n induces the Fisher–Yates distribution on contingency tables, which is classical for its use in the chi-squared test for independence. This paper studies the Markov chain on contingency tables induced by the random transpositions chain on S_n. We show that the eigenfunctions are polynomials, which gives new insight into the orthogonal polynomials of the Fisher–Yates distribution. The eigenfunctions are used for upper bounds on the mixing time in special cases, as well as a general lower bound. The Markov chain on contingency tables is a novel example for the theory of double coset Markov chains. [ABSTRACT FROM AUTHOR]
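The correspondence with permutations also gives a direct way to sample from the Fisher–Yates distribution: shuffle one margin's labels uniformly at random and cross-tabulate against the other. A minimal sketch under that interpretation (the function name is ours, not the paper's):

```python
import random

def sample_fisher_yates(row_sums, col_sums, rng=random):
    """Sample a contingency table with the given margins from the
    Fisher-Yates distribution: expand each margin into a list of labels,
    apply a uniform random permutation to one, and cross-tabulate."""
    rows = [i for i, r in enumerate(row_sums) for _ in range(r)]
    cols = [j for j, c in enumerate(col_sums) for _ in range(c)]
    rng.shuffle(cols)
    table = [[0] * len(col_sums) for _ in range(len(row_sums))]
    for i, j in zip(rows, cols):
        table[i][j] += 1
    return table

t = sample_fisher_yates([3, 2], [2, 3])
print([sum(row) for row in t], [sum(col) for col in zip(*t)])  # margins preserved: [3, 2] [2, 3]
```

Every table produced this way has the prescribed row and column sums; the randomness lies only in how the cell counts are distributed.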
- Published
- 2024
- Full Text
- View/download PDF
6. Decomposition of measure from symmetry for analyzing collapsed ordinal square contingency tables.
- Author
- Shinoda, Satoru, Yamamoto, Kouji, and Tomizawa, Sadao
- Subjects
- CONTINGENCY tables, SYMMETRY, ARITHMETIC mean
- Abstract
In some situations, square contingency tables with ordered categories are analyzed by considering collapsed tables where adjacent categories are combined. This study proposes measures to represent the degree of departure from symmetry using collapsed tables. The proposed measures are defined as the arithmetic mean of submeasures of each collapsed 3 × 3 table. Additionally, a theorem affirms that the value of the measure for symmetry is equal to the sum of the values of the proposed measures. Finally, examples are given. [ABSTRACT FROM AUTHOR]
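Each collapsed 3 × 3 table merges the categories below, between, and above two cut points on both axes. The collapsing step can be sketched as follows (the helper name is ours):

```python
def collapse_3x3(table, c1, c2):
    """Collapse a square R x R table into 3 x 3 by merging row and column
    categories into the groups [0, c1), [c1, c2), [c2, R) (0-indexed cuts)."""
    bounds = [(0, c1), (c1, c2), (c2, len(table))]
    return [[sum(table[i][j] for i in range(r0, r1) for j in range(s0, s1))
             for (s0, s1) in bounds]
            for (r0, r1) in bounds]

# a 4 x 4 table of ones collapsed at cut points 1 and 2
print(collapse_3x3([[1] * 4 for _ in range(4)], 1, 2))
# [[1, 1, 2], [1, 1, 2], [2, 2, 4]]
```

Varying the two cut points over all valid positions yields the family of collapsed 3 × 3 tables whose submeasures are averaged.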
- Published
- 2024
- Full Text
- View/download PDF
7. Categorical linkage‐data analysis.
- Author
- Zhang, Li‐Chun and Tuoto, Tiziana
- Subjects
- DATA structures, CONTINGENCY tables, DATA analysis, PROBABILITY theory, SECONDARY analysis
- Abstract
Analysis of integrated data often requires record linkage in order to join together the data residing in separate sources. When linkage errors cannot be avoided, due to the lack of a unique identity key that can be used to link the records unequivocally, standard statistical techniques may produce misleading inference if the linked data are treated as if they were true observations. In this paper, we propose methods for categorical data analysis based on linked data that are not prepared by the analyst, such that neither the match-key variables nor the unlinked records are available. The adjustment is based on the proportion of false links in the linked file, and our approach allows the probabilities of correct linkage to vary across the records without requiring that one be able to estimate this probability for each individual record. It also accommodates the general situation where unmatched records that cannot possibly be correctly linked exist in all the sources. The proposed methods are studied by simulation and applied to real data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. RankCompV3: a differential expression analysis algorithm based on relative expression orderings and applications in single-cell RNA transcriptomics.
- Author
- Yan, Jing, Zeng, Qiuhong, and Wang, Xianlong
- Subjects
- GENE expression, TRANSCRIPTOMES, RNA sequencing, CONTINGENCY tables, SOURCE code
- Abstract
Background: Effective identification of differentially expressed genes (DEGs) has been challenging for single-cell RNA sequencing (scRNA-seq) profiles. Many existing algorithms have high false positive rates (FPRs) and often fail to identify weak biological signals. Results: We present a novel method for identifying DEGs in scRNA-seq data called RankCompV3. It is based on the comparison of relative expression orderings (REOs) of gene pairs which are determined by comparing the expression levels of a pair of genes in a set of single-cell profiles. The numbers of genes with consistently higher or lower expression levels than the gene of interest are counted in two groups in comparison, respectively, and the result is tabulated in a 3 × 3 contingency table which is tested by McCullagh's method to determine if the gene is dysregulated. In both simulated and real scRNA-seq data, RankCompV3 tightly controlled the FPR and demonstrated high accuracy, outperforming 11 other common single-cell DEG detection algorithms. Analysis with either regular single-cell or synthetic pseudo-bulk profiles produced highly concordant DEGs with the ground-truth. In addition, RankCompV3 demonstrates higher sensitivity to weak biological signals than other methods. The algorithm was implemented using Julia and can be called in R. The source code is available at https://github.com/pathint/RankCompV3.jl. Conclusions: The REOs-based algorithm is a valuable tool for analyzing single-cell RNA profiles and identifying DEGs with high accuracy and sensitivity. Key points: RankCompV3 is a method for identifying differentially expressed genes (DEGs) in either bulk or single-cell RNA transcriptomics. It is based on the counts of relative expression orderings (REOs) of gene pairs in the two groups. The contingency tables are tested using McCullagh's method. RankCompV3 has comparable or better performance than that of other conventional methods. 
It has been shown to be effective in identifying DEGs in both single-cell and pseudo-bulk profiles. A pseudo-bulk method is implemented in RankCompV3, which allows the method to achieve higher computational efficiency and improves concordance with the bulk ground truth. RankCompV3 is effective in identifying functionally relevant DEGs in weak-signal datasets. The method is not biased towards highly expressed genes. [ABSTRACT FROM AUTHOR]
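The 3 × 3 contingency table described above cross-tabulates, for a gene of interest, whether each other gene is expressed higher, similarly, or lower in the two groups. A simplified sketch of that tabulation, assuming a single averaged expression profile per group (which glosses over how RankCompV3 aggregates orderings across individual cells):

```python
def reo_table(profile1, profile2, gene, eps=0.0):
    """3x3 relative-expression-ordering table for one gene: classify each
    other gene as higher (0) / similar (1) / lower (2) than `gene` within
    each group's profile, then cross-tabulate the two classifications."""
    def cls(profile, j):
        d = profile[j] - profile[gene]
        return 0 if d > eps else (1 if d >= -eps else 2)

    table = [[0] * 3 for _ in range(3)]
    for j in range(len(profile1)):
        if j != gene:
            table[cls(profile1, j)][cls(profile2, j)] += 1
    return table

# Toy profiles for 4 genes in two groups; gene 1 is the gene of interest
print(reo_table([1, 2, 3, 4], [4, 3, 2, 1], gene=1))
# [[0, 0, 2], [0, 0, 0], [1, 0, 0]]
```

A table concentrated off the diagonal, as here, signals that the gene's relative ordering against other genes has flipped between the groups; RankCompV3 tests such tables with McCullagh's method rather than simple inspection.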
- Published
- 2024
- Full Text
- View/download PDF
9. Generating contingency tables with fixed marginal probabilities and dependence structures described by loglinear models.
- Author
- Hammond, Ceejay, van der Heijden, Peter G. M., and Smith, Paul A.
- Abstract
We present a method to generate contingency tables that follow loglinear models with prescribed marginal probabilities and dependence structures. We make use of (loglinear) Poisson regression, where the dependence structures, described using odds ratios, are implemented using an offset term. Other statistical models related to loglinear models that fall within the scope of this paper, such as the logistic regression model, the latent class model and the extended Rasch model, are discussed as well. We apply this methodology to carry out simulation studies in the context of population size estimation using dual-system and triple-system estimators, popular in official statistics. These estimators use contingency tables that summarize the counts of elements enumerated or captured within lists that are linked. The simulation is used to investigate these estimators both when the model assumptions are fulfilled and when they are violated. [ABSTRACT FROM AUTHOR]
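For a single 2 × 2 table, prescribing marginal probabilities together with an odds ratio pins down the joint distribution. The paper implements this through loglinear Poisson regression with an offset term; as an illustrative alternative, the target cell probabilities can be computed in closed form via Plackett's formula (a sketch, not the authors' implementation):

```python
import math

def cell_probs_2x2(p_row, p_col, theta):
    """Joint probabilities of a 2x2 table with marginal probabilities
    p_row, p_col and odds ratio theta (Plackett's closed-form solution)."""
    if theta == 1.0:
        p11 = p_row * p_col  # independence
    else:
        s = 1.0 + (p_row + p_col) * (theta - 1.0)
        p11 = (s - math.sqrt(s * s - 4.0 * theta * (theta - 1.0) * p_row * p_col)) / (2.0 * (theta - 1.0))
    return [[p11, p_row - p11], [p_col - p11, 1.0 - p_row - p_col + p11]]

probs = cell_probs_2x2(0.5, 0.5, 9.0)
print(probs)  # [[0.375, 0.125], [0.125, 0.375]] -> odds ratio 9 recovered
```

Multiplying the resulting probabilities by a total count gives expected cell frequencies from which tables can be simulated, e.g. for the dual-system estimation studies described above.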
- Published
- 2024
- Full Text
- View/download PDF
10. Improving Estimates Accuracy of Voter Transitions. Two New Algorithms for Ecological Inference Based on Linear Programming.
- Author
- Pavía, Jose M. and Romero, Rafael
- Subjects
- MATHEMATICAL programming, CONTINGENCY tables, TRANSFER matrix, ALGORITHMS, PROBABILITY theory
- Abstract
The estimation of RxC ecological inference contingency tables from aggregate data is one of the most salient and challenging problems in the field of quantitative social sciences, with major solutions proposed from both the ecological regression and the mathematical programming frameworks. In recent decades, there has been a drive to find solutions stemming from the former, with the latter being less active. From the mathematical programming framework, this paper suggests a new direction for tackling this problem. For the first time in the literature, a procedure based on linear programming is proposed to attain estimates of local contingency tables. Based on this and the homogeneity hypothesis, we suggest two new ecological inference algorithms. These two new algorithms represent an important step forward in the ecological inference mathematical programming literature. In addition to generating estimates for local ecological inference contingency tables and amending the tendency to produce extreme transfer probability estimates previously observed in other mathematical programming procedures, these two new algorithms prove to be quite competitive and more accurate than the current linear programming baseline algorithm. Their accuracy is assessed using a unique dataset with almost 500 elections, where the real transfer matrices are known, and their sensitivity to assumptions and limitations is gauged through an extensive simulation study. The new algorithms place the linear programming approach once again in a prominent position in the ecological inference toolkit. Interested readers can use these new algorithms easily with the aid of the R package lphom. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Reader bias in breast cancer screening related to cancer prevalence and artificial intelligence decision support—a reader study.
- Author
- Al-Bazzaz, Hanen, Janicijevic, Marina, and Strand, Fredrik
- Subjects
- ARTIFICIAL intelligence, EARLY detection of cancer, BREAST cancer, MEDICAL screening, CONTINGENCY tables
- Abstract
Objectives: The aim of our study was to examine how breast radiologists would be affected by high cancer prevalence and the use of artificial intelligence (AI) for decision support. Materials and methods: This reader study was based on a selection of screening mammograms, including the original radiologist assessments, acquired from 2010 to 2013 at the Karolinska University Hospital, with a 1:1 ratio of cancer versus healthy cases based on a 2-year follow-up. A commercial AI system generated an exam-level positive or negative read, along with image markers. Double reading and consensus discussions were first performed without AI and later with AI, with a 6-week wash-out period in between. The chi-squared test was used to test for differences in contingency tables. Results: Mammograms of 758 women were included, half with cancer and half healthy; 52% were 40–55 years and 48% were 56–75 years. In the original non-enriched screening setting, the sensitivity was 61% (232/379) at a specificity of 98% (323/379). In the reader study, the sensitivity without and with AI was 81% (307/379) and 75% (284/379), respectively (p < 0.001). The specificity without and with AI was 67% (255/379) and 86% (326/379), respectively (p < 0.001). The tendency to change an assessment from positive to negative based on erroneous AI information differed between readers and was affected by the type and number of image signs of malignancy. Conclusion: Breast radiologists reading a list with high cancer prevalence performed at considerably higher sensitivity and lower specificity than the original screen-readers. Adding AI information, calibrated to a screening setting, decreased sensitivity and increased specificity. Clinical relevance statement: Radiologists' screening mammography assessments will be biased towards higher sensitivity and lower specificity by high-risk triaging and nudged towards the sensitivity and specificity setting of AI reads.
After AI implementation in clinical practice, there is reason to carefully follow screening metrics to ensure the impact is desired. Key Points: • Breast radiologists' sensitivity and specificity will be affected by changes brought by artificial intelligence. • Reading in a high cancer prevalence setting markedly increased sensitivity and decreased specificity. • Reviewing the binary reads by AI, negative or positive, biased screening radiologists towards the sensitivity and specificity of the AI system. [ABSTRACT FROM AUTHOR]
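The chi-squared comparisons above operate on 2 × 2 contingency tables of detected versus missed cancers. A minimal sketch of the Pearson statistic on the reported sensitivity counts; note that the study compares paired reads of the same exams, so the unpaired statistic below only illustrates the mechanics, not the reported p-values:

```python
def chi2_2x2(table):
    """Pearson chi-squared statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Detected vs. missed cancers: without AI (307 of 379) and with AI (284 of 379)
stat = chi2_2x2([[307, 72], [284, 95]])
print(round(stat, 2))  # compare against 3.84, the 5% critical value for 1 df
```

A statistic above the critical value rejects equality of the two detection rates at the 5% level; a paired test such as McNemar's would be the appropriate choice for the study's actual design.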
- Published
- 2024
- Full Text
- View/download PDF
12. Marginal log‐linear parameters and their collapsibility for categorical data.
- Author
- Ghosh, Sayan and Vellaisamy, P.
- Subjects
- LOG-linear models, CONTINGENCY tables, CELL physiology, CATEGORIES (Mathematics), PROBABILITY theory
- Abstract
Collapsibility is a practical and useful technique for dimension reduction in multidimensional contingency tables. In this paper, we consider marginal log‐linear models for studying collapsibility and related aspects in such tables. These models generalize ordinary log‐linear and multivariate logistic models, besides several others. First, we obtain some characteristic properties of marginal log‐linear parameters. Then we define collapsibility and strict collapsibility of these parameters in a general sense. Several necessary and sufficient conditions for collapsibility and strict collapsibility are derived based on simple functions of only the cell probabilities, which are easily verifiable. These include results for an arbitrary set of marginal log‐linear parameters having some common effects. The connections of strict collapsibility to various forms of independence of the variables are explored. We analyze some real‐life datasets to illustrate the above results on collapsibility and strict collapsibility. Finally, we obtain a result relating parameters with the same effect, but different margins for an arbitrary table, and demonstrate smoothness of marginal log‐linear models under collapsibility conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Measurement of the Intergenerational Educational Mobility of Urban Residents in China and an Analysis of Its Influencing Factors.
- Author
- 方超 and 叶林祥
- Subjects
- CITY dwellers, EDUCATIONAL mobility, URBAN education, CONTINGENCY tables, PUBLIC education, INTERGENERATIONAL mobility
- Abstract
Copyright of Journal of Educational Studies (1673-1298) is the property of Journal of Educational Studies Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
14. A generalisation of the aggregate association index (AAI): incorporating a linear transformation of the cells of a 2 × 2 table.
- Author
- Beh, Eric J., Tran, Duy, and Hudson, Irene L.
- Subjects
- CELL transformation, GENERALIZATION, CONTINGENCY tables, DATA protection, ACQUISITION of data, PEARSON correlation (Statistics)
- Abstract
The analysis of aggregate, or marginal, data for contingency tables is an increasingly important area of statistics, applied sciences and the social sciences. This is largely due to confidentiality issues arising from the imposition of government and corporate protection and data collection methods. The availability of only aggregate data makes it difficult to draw conclusions about the association between categorical variables at the individual level. For data analysts, this issue is of growing concern, especially for those dealing with the aggregate analysis of a single 2 × 2 table or stratified 2 × 2 tables, and lies in the field of ecological inference. As an alternative to ecological inference techniques, one may consider the aggregate association index (AAI) to obtain valuable information about the magnitude and direction of the association between two categorical variables of a single 2 × 2 table or stratified 2 × 2 tables given only the marginal totals. Conventionally, the AAI has been examined by considering p_11, the proportion of the sample that lies in the (1, 1)th cell of a given 2 × 2 table. However, the AAI can be expanded for other association indices. Therefore, a new generalisation of the original AAI is given here by reformulating and expanding the index so that it incorporates any linear transformation of p_11. This study shall consider the consistency of the AAI under the transformation by examining four classic association indices, namely the independence ratio, Pearson's ratio, standardised residual and adjusted standardised residual, although others may be incorporated into this general framework. We will show how these indices can be utilised to examine the strength and direction of association given only the marginal totals. Therefore, this work enhances our understanding of the AAI and establishes its links with common association indices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Comparing experiences and perceptions of primary health care among LGBT and non-LGBT people: Key findings from Catalonia.
- Author
- Subirana-Malaret, Montse, Freude, Leon, and Gahagan, Jacqueline
- Subjects
- LGBTQ+ people, PRIMARY health care, CONTINGENCY tables, TRANSGENDER people, INTERNET surveys
- Abstract
This study compares the experiences and perceptions of primary health care of respondents who self-identify as Lesbian, Gay, Bisexual, and Transgender (LGBT) with those of respondents who do not. Data were collected through a closed-ended, anonymous online survey in 2018. 468 respondents completed the survey, which included sociodemographic questions, perceptions of respondents' health status, and their primary health-care experiences. Both LGBT and non-LGBT groups were analyzed, comparing differences and similarities using univariate analysis and contingency tables. Our results indicate that the primary health-care needs of LGBT people in Catalonia include specific requirements that are not currently being addressed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Ivermectin resistance in Rhipicephalus microplus (Acari: Ixodidae) in northeastern Mexico and associated risk factors.
- Author
- Abigail Moreno-Linares, Samantha, García-Ponce, Romario, Jaime Hernández-Escareño, Jesús, Giselle Rodríguez-Ramírez, Heidi, and Pablo Villarreal-Villarreal, José
- Subjects
- CONTINGENCY tables, VETERINARY epidemiology, MULTIVARIATE analysis, RHIPICEPHALUS, CATTLE tick, PROBIT analysis
- Abstract
Rhipicephalus microplus is the parasitic species that causes the most damage to Mexican and global livestock due to direct and indirect losses, such as the increase in multidrug resistance and cross-resistance. Currently, there are few studies on resistance to macrocyclic lactones in Mexico, most of them in the south. This study aimed to evaluate the status of ivermectin resistance in R. microplus in northeastern Mexico and its associated risk factors. A total of 20 populations of Rhipicephalus microplus were collected in the states of Veracruz, Nuevo León, Tamaulipas, and San Luis Potosí, and they were analyzed with the larval immersion test. Mortality data were subjected to a Probit analysis, estimating the 50% and 99% lethal concentrations (LC50 and LC99) and their respective 95% confidence intervals (95% CI). To determine possible risk factors, a multivariate analysis and 2 × 2 contingency tables were computed for the exposure variables, with a 95% confidence interval, and a binomial logistic regression model was fitted for those variables with P = 0.05. Eighty percent of the analyzed populations showed resistance, with ranges of RR50 = 2.07–11.14 and RR99 = 3.03–47.93 (P = 0.05), and through the binomial logistic regression it was observed that the frequency-of-treatments variable obtained P = 0.0134, a significant result. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Correspondence Analysis for Assessing Departures from Perfect Symmetry Using the Cressie–Read Family of Divergence Statistics.
- Author
- Beh, Eric J. and Lombardo, Rosaria
- Subjects
- CORRESPONDENCE analysis (Statistics), SINGULAR value decomposition, CONTINGENCY tables, NUMERICAL analysis, SYMMETRY
- Abstract
Recently, Beh and Lombardo (2022, Symmetry, 14, 1103) showed how to perform a correspondence analysis on a two-way contingency table where Bowker's statistic lies at the numerical heart of this analysis. Thus, we showed how this statistic could be used to visually identify departures from perfect symmetry. Interestingly, Bowker's statistic is a special case of the symmetry version of the Cressie–Read family of divergence statistics. Therefore, this paper presents a new framework for visually assessing departures from perfect symmetry using a second-order Taylor series approximation of the Cressie–Read family of divergence statistics. [ABSTRACT FROM AUTHOR]
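Bowker's statistic, the numerical heart of the analysis above, compares each pair of off-diagonal cells of a square table. A minimal sketch of its computation:

```python
def bowker_statistic(table):
    """Bowker's test statistic for symmetry of a square contingency table:
    sum over i < j of (n_ij - n_ji)^2 / (n_ij + n_ji)."""
    k = len(table)
    stat = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            nij, nji = table[i][j], table[j][i]
            if nij + nji > 0:
                stat += (nij - nji) ** 2 / (nij + nji)
    return stat

print(bowker_statistic([[5, 2, 1], [2, 7, 3], [1, 3, 9]]))  # symmetric table -> 0.0
print(bowker_statistic([[0, 4], [1, 0]]))                   # (4-1)^2 / (4+1) = 1.8
```

Under perfect symmetry the statistic is 0; larger values indicate departures, which the correspondence analysis framework above decomposes and visualizes.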
- Published
- 2024
- Full Text
- View/download PDF
18. On testing the equality between interquartile ranges.
- Author
- Greco, Luca, Luta, George, and Wilcox, Rand
- Subjects
- STATISTICAL sampling, QUANTILE regression, QUANTILES, CONTINGENCY tables
- Abstract
The interquartile range is a statistical measure well suited to describe the variability of the data at hand, both at the population level and for sample data. The interquartile range is particularly useful when the distribution of the data is asymmetric or irregularly shaped. Here, the use of the interquartile range is investigated when the main aim is to compare the variability of two distributions using two independent random samples, without the need to make any distributional assumptions. Several techniques are compared through numerical studies and real data examples, with particular attention given to the use of sample quantiles based on the Harrell-Davis estimator or quantile regression. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Distance Covariance, Independence, and Pairwise Differences.
- Author
- Raymaekers, Jakob and Rousseeuw, Peter J.
- Subjects
- COMMON misconceptions, MATHEMATICAL statistics, CONTINGENCY tables, RANDOM variables, INDEPENDENT variables
- Abstract
Distance covariance (Székely, Rizzo, and Bakirov) is a fascinating recent notion, which is popular as a test for dependence of any type between random variables X and Y. This approach deserves to be touched upon in modern courses on mathematical statistics. It makes use of distances of the type |X−X′| and |Y−Y′|, where (X′, Y′) is an independent copy of (X, Y). This raises natural questions about independence of variables like X−X′ and Y−Y′, about the connection between cov(|X−X′|, |Y−Y′|) and the covariance between doubly centered distances, and about necessary and sufficient conditions for independence. We show some basic results and present a new and nontechnical counterexample to a common fallacy, which provides more insight. We also show some motivating examples involving bivariate distributions and contingency tables, which can be used as didactic material for introducing distance correlation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Why Do Companies Cook the Books? Empirical Study of the Motives of Creative Accounting of Slovak Companies.
- Author
- Michulek, Jakub, Gajanova, Lubica, Krizanova, Anna, and Blazek, Roman
- Subjects
- EARNINGS management, FINANCIAL statements, CORPORATE culture, ACCOUNTING, CONTINGENCY tables
- Abstract
Studies on creative accounting date back to the latter part of the 20th century. Creative accounting is still a big challenge in financial accounting. The problem of financial statement manipulation might be investigated, for instance, from an accounting, legal, ethical, or psychological perspective. This research aims to identify the main motives for the use of creative accounting and to find out whether corporate culture has an impact on the motives leading to the use of creative accounting. Data collection took place from 18 November 2023 to 18 December 2022. In the research, we used Pearson's χ² test to determine the dependence of the studied variables in contingency tables. Subsequently, correspondence analysis was used. The type of corporate culture does not have an impact on the motives that lead to creative accounting. It was proven that the type of corporate culture has an impact on the performance of creative accounting actions based on the request of a senior employee. The uniqueness of the research lies in the investigation of creative accounting from a psychological and managerial point of view in the territory of the Slovak Republic. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Addressing Incomplete Data in Two-Way Contingency Tables with Three-Level Variables
- Author
- Makapawee, Pitchayanin, Kamkongkaew, Kanyawee, and Kooakachai, Monchai
- Editors
- Kacprzyk, Janusz (Series Editor); Pal, Nikhil R., Bello Perez, Rafael, Corchado, Emilio S., Hagras, Hani, Kóczy, László T., Kreinovich, Vladik, Lin, Chin-Teng, Lu, Jie, Melin, Patricia, Nedjah, Nadia, Nguyen, Ngoc Thanh, and Wang, Jun (Advisory Editors); Ansari, Jonathan, Fuchs, Sebastian, Trutschnig, Wolfgang, Lubiano, María Asunción, Gil, María Ángeles, Grzegorzewski, Przemyslaw, and Hryniewicz, Olgierd (Editors)
- Published
- 2024
- Full Text
- View/download PDF
22. Highly private large‐sample tests for contingency tables.
- Author
-
Jung, Sungkyu and Woo Kwak, Seung
- Subjects
- *
CONTINGENCY tables , *GOODNESS-of-fit tests , *ERROR rates , *FALSE positive error , *ASYMPTOTIC distribution , *STATISTICS - Abstract
Differential privacy is a foundational concept for safeguarding sensitive individual information when releasing data or statistical analysis results. In this study, we concentrate on the protection of privacy in the context of goodness‐of‐fit (GOF) and independence tests, utilizing perturbed contingency tables that adhere to Gaussian differential privacy within the high‐privacy regime, where the degree of privacy protection increases as the sample size increases. We introduce private test procedures for GOF, independence of two variables and the equality of proportions in paired samples, similar to McNemar's test. For each of these hypothesis testing situations, we propose private test statistics based on the χ² statistics and establish their asymptotic null distributions. We numerically confirm that the Type I error rates of the proposed private test procedures are well controlled and that the tests have adequate power for larger sample sizes and effect sizes. The proposal is demonstrated in private inferences based on the American Time Use Survey data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
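One way to picture the setting described above is to perturb each cell count with Gaussian noise before computing a goodness-of-fit statistic. This is only an illustrative sketch: the paper derives corrected test statistics and their asymptotic null distributions, which this naive plug-in version does not reproduce.

```python
import random

def private_gof_stat(counts, probs, sigma, seed=0):
    """Chi-squared goodness-of-fit statistic computed on Gaussian-perturbed counts.

    counts: observed cell counts; probs: hypothesized cell probabilities;
    sigma: noise scale (larger sigma corresponds to stronger privacy protection).
    """
    rng = random.Random(seed)
    noisy = [c + rng.gauss(0.0, sigma) for c in counts]
    n = sum(noisy)
    return sum((o - n * p) ** 2 / (n * p) for o, p in zip(noisy, probs))
```

With `sigma = 0` this reduces to the ordinary GOF statistic; as `sigma` grows, the statistic's null distribution shifts, which is why privatized tests need their own critical values.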
23. Diagonals–Parameter Symmetry Model and Its Property for Square Contingency Tables with Ordinal Categories.
- Author
-
Tahata, Kouji and Matsuda, Kohei
- Subjects
- *
CONTINGENCY tables , *SYMMETRY , *SQUARE - Abstract
The diagonals–parameter symmetry (DPS) model is a proposed method for analyzing square contingency tables with ordinal categories. Previously, it was stated that the generalized DPS (DPS[f]) model was equivalent to the DPS model for any function f, but the proof was not provided. This paper presents the derivation of the DPS[f] model and the proof of the relationship between the two models. The findings offer various interpretations of the DPS model. Additionally, a new model is considered, and it is shown that the proposed model and the DPS[f] model are separable. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
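For reference, the classical DPS model for an R × R table with cell probabilities π<sub>ij</sub> is usually written (following Goodman) as:

```latex
\frac{\pi_{ij}}{\pi_{ji}} = \delta_{j-i}, \qquad 1 \le i < j \le R,
```

so that the odds of a cell falling above rather than below the main diagonal depend only on its distance j − i from the diagonal; the DPS[f] generalization studied in the paper replaces this ratio structure with one defined through a function f.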
24. On the uniqueness of correspondence analysis solutions.
- Author
-
Willemsen, Rick S.H., van de Velden, Michel, and van den Heuvel, Wilco
- Subjects
- *
CONTINGENCY tables - Abstract
In correspondence analysis (CA), the rows and columns of a contingency table are optimally represented in a k -dimensional approximation, where it is common to set k = 3 (which includes a so-called trivial dimension). Since CA is a dimension reduction technique, we might expect that the k -dimensional approximation is not unique, i.e. there exist several contingency tables with the same k -dimensional approximation. Interestingly, Van de Velden et al. [17] find in their computational experiments that 3-dimensional CA solutions are unique up to rotation, which leads to the question whether this is always the case. We show that k -dimensional CA solutions are not necessarily unique. That is, two distinct contingency tables may have the same k -dimensional approximation. We present necessary and sufficient conditions for the non-uniqueness of CA solutions, which hold for any value of k. Based on our sufficient conditions, we present a procedure to generate contingency tables with the same k -dimensional solution. Finally, we note that it is difficult to satisfy the necessary conditions, which suggests that CA solutions are most likely unique in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
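As background, the k-dimensional CA approximation discussed above is obtained from a singular value decomposition. Writing P = N/n for the correspondence matrix, r and c for its row and column mass vectors, and D_r, D_c for the diagonal matrices of those masses, CA decomposes the standardized residual matrix:

```latex
S \;=\; D_r^{-1/2}\,(P - r c^{\top})\,D_c^{-1/2} \;=\; U \Sigma V^{\top},
```

and the rank-k approximation keeps the k largest singular values; the uniqueness question above asks when two distinct tables P can share the same truncated U, Σ, V.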
25. Prognostic factors to identify resolution of small bowel obstruction without need for operative management: systematic review.
- Author
-
Eze, Vivienne N., Parry, Tom, Boone, Darren, Mallett, Sue, and Halligan, Steve
- Subjects
- *
BOWEL obstructions , *SMALL intestine , *PROGNOSIS , *ASCITIC fluids , *CONTINGENCY tables - Abstract
Objectives: To identify imaging, clinical, and laboratory variables potentially prognostic for surgical management of small bowel obstruction. Methods: Two researchers systematically reviewed the indexed literature 2001–2021 inclusive for imaging, clinical, and laboratory variables potentially predictive of surgical management of small bowel obstruction and/or ischaemia at surgery, where performed. Risk of bias was assessed. Contingency tables for variables reported in at least 5 studies were extracted and meta-analysed to identify strong evidence of association with clinical outcomes across studies. Results: Thirty-one studies were ultimately included, reporting 4638 patients (44 to 313 per study). 11 (35%) studies raised no risk of bias concerns. CT was the modality reported most (29 studies, 94%). Meta-analysis of 21 predictors identified 5 strongly associated with surgical intervention, 3 derived from CT (peritoneal free fluid, odds ratio [OR] 3.24, 95%CI 2.45 to 4.29; high grade obstruction, OR 3.58, 95%CI 2.46 to 5.20; mesenteric inflammation, OR 2.61, 95%CI 1.94 to 3.50; abdominal distension, OR 2.43, 95%CI 1.34 to 4.42; peritonism, OR 3.97, 95%CI 2.67 to 5.90) and one with conservative management (previous abdominopelvic surgery, OR 0.58, 95%CI 0.40 to 0.85). Meta-analysis of 10 predictors identified 3 strongly associated with ischaemia at surgery, 2 derived from CT (peritoneal free fluid, OR 3.49, 95%CI 2.28 to 5.35; bowel thickening, OR 3.26, 95%CI 1.91 to 5.55; white cell count, OR 4.76, 95%CI 2.71 to 8.36). Conclusions: Systematic review of patients with small bowel obstruction identified four imaging, three clinical, and one laboratory predictor associated strongly with surgical intervention and/or ischaemia at surgery. Clinical relevance statement: Via systematic review and meta-analysis, we identified imaging, clinical, and laboratory predictors strongly associated with surgical management of small bowel obstruction and/or ischaemia. 
Multivariable model development to guide management should incorporate these since they display strong evidence of potential utility. Key Points: • While multivariable models incorporating clinical, laboratory, and imaging factors could predict surgical management of small bowel obstruction, none are used widely. • Via systematic review and meta-analysis we identified imaging, clinical, and laboratory variables strongly associated with surgical management and/or ischaemia at surgery. • Development of multivariable models to guide management should incorporate these predictors, notably CT scanning, since they display strong evidence of potential utility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
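The per-predictor odds ratios pooled above are each derived from a 2 × 2 contingency table. A minimal sketch with Woolf's log-based confidence interval, using made-up counts rather than the study's data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf's log method) from a 2x2 table:

                surgery   no surgery
    sign+          a          b
    sign-          c          d
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

or_, lo, hi = odds_ratio_ci(40, 20, 25, 50)  # hypothetical counts
```

If the interval excludes 1, the predictor is associated with the outcome; meta-analysis then pools such log odds ratios across studies.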
26. Constructing and sampling partite, 3-uniform hypergraphs with given degree sequence.
- Author
-
Hubai, András, Mezei, Tamás Róbert, Béres, Ferenc, Benczúr, András, and Miklós, István
- Subjects
- *
HYPERGRAPHS , *POLYNOMIAL time algorithms , *NP-complete problems , *STATISTICAL decision making , *CONTINGENCY tables - Abstract
Partite, 3-uniform hypergraphs are 3-uniform hypergraphs in which each hyperedge contains exactly one point from each of the 3 disjoint vertex classes. We consider the degree sequence problem of partite, 3-uniform hypergraphs, that is, to decide if such a hypergraph with prescribed degree sequences exists. We prove that this decision problem is NP-complete in general, and give a polynomial running time algorithm for third almost-regular degree sequences, that is, when each degree in one of the vertex classes is k or k − 1 for some fixed k, and there is no restriction for the other two vertex classes. We also consider the sampling problem, that is, to uniformly sample partite, 3-uniform hypergraphs with prescribed degree sequences. We propose a Parallel Tempering method, where the hypothetical energy of the hypergraphs measures the deviation from the prescribed degree sequence. The method has been implemented and tested on synthetic and real data. It can also be applied for χ² testing of contingency tables. We have shown that this hypergraph-based χ² test is more sensitive than the standard χ² test. The extra sensitivity is especially advantageous on small data sets, where the proposed Parallel Tempering method shows promising performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Küçük Örneklem Büyüklüğüne Sahip 2x2 Çapraz Tablolar İçin Ki-Kare Yöntemlerinin Karşılaştırılması: Bir Simülasyon Çalışması.
- Author
-
DOĞAN, İsmet and DOĞAN, Nurhan
- Subjects
- *
CONTINGENCY tables , *SAMPLE size (Statistics) , *DEGREES of freedom , *TEST methods , *HYPOTHESIS - Abstract
Objective: The aim of this study is to introduce and compare the classical and continuity-corrected chi-square tests used in 2x2 contingency tables. Material and Methods: Chi-square tests with 1 (one) degree of freedom were considered, because these tests are seriously affected by the discontinuity of the data. Using the Python random library, data were generated for 4 different values in the range 10 ≤ n ≤ 25. In generating the data, first the cell indicated by a, b, c, or d was selected, and then the value to be assigned to that cell was determined. 246 different data sets for n=10, 756 for n=15, 958 for n=20, and 963 for n=25 were used in the study. The methods were compared using both the rejection percentages of each method for the hypothetical H0 hypothesis at different sample sizes and significance levels, and the rejection/rejection, rejection/acceptance, acceptance/rejection and acceptance/acceptance rates of the hypothetical H0 hypothesis of the methods relative to each other. Results: The results do not allow any one of the methods to be chosen as the best among all those considered. Different methods stand out at different sample sizes and significance levels, which means that the result of a study cannot be interpreted unambiguously. All methods are affected by the sample size and significance level: as the sample size increases and the significance level changes from 0.01 to 0.10, the rejection rates of the H0 hypothesis also increase. Conclusion: The chi-square test is suitable for large samples; if at least one of the expected values is less than 5, neither the classical nor the continuity-corrected chi-square method should be used. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
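The two tests compared above can be written compactly for a 2 × 2 table with cells a, b, c, d. An illustrative sketch (not the authors' simulation code):

```python
def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-squared statistic (df = 1) for a 2x2 table,
    optionally with Yates' continuity correction."""
    n = a + b + c + d
    num = abs(a * d - b * c)
    if yates:
        num = max(num - n / 2, 0)  # continuity correction
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * num ** 2 / denom

classical = chi2_2x2(3, 7, 8, 2)
corrected = chi2_2x2(3, 7, 8, 2, yates=True)
```

The corrected statistic is never larger than the classical one, so Yates' correction makes the test more conservative, which is exactly the small-n behaviour the simulation study above probes.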
28. Likelihood Ratio Test and the Evidential Approach for 2 × 2 Tables.
- Author
-
Cahusac, Peter M. B.
- Subjects
- *
LIKELIHOOD ratio tests , *LITERATURE reviews , *CONTINGENCY tables , *DATA integrity , *ODDS ratio , *MEDICAL statistics , *NULL hypothesis - Abstract
Categorical data analysis of 2 × 2 contingency tables is extremely common, not least because they provide risk difference, risk ratio, odds ratio, and log odds statistics in medical research. A χ² test analysis is most often used, although some researchers use likelihood ratio test (LRT) analysis. Does it matter which test is used? A review of the literature, examination of the theoretical foundations, and analyses of simulations and empirical data are used by this paper to argue that only the LRT should be used when we are interested in testing whether the binomial proportions are equal. This so-called test of independence is by far the most popular, meaning the χ² test is widely misused. By contrast, the χ² test should be reserved for cases where the data appear to match a particular hypothesis (e.g., the null hypothesis) too closely, where the variance is of interest and is less than expected. Low variance can be of interest in various scenarios, particularly in investigations of data integrity. Finally, it is argued that the evidential approach provides a consistent and coherent method that avoids the difficulties posed by significance testing. The approach facilitates the calculation of appropriate log likelihood ratios to suit our research aims, whether this is to test the proportions or to test the variance. The conclusions from this paper apply to larger contingency tables, including multi-way tables. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
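The LRT for a contingency table is the G-test, G = 2 Σ O ln(O/E), referred to the same χ² distribution as Pearson's statistic. A minimal sketch for the 2 × 2 case:

```python
from math import log

def g_statistic_2x2(a, b, c, d):
    """Likelihood-ratio (G) statistic for a 2x2 table: G = 2 * sum O * ln(O/E)."""
    table = [[a, b], [c, d]]
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    n = a + b + c + d
    g = 0.0
    for i in range(2):
        for j in range(2):
            o = table[i][j]
            if o > 0:  # 0 * ln(0) is taken as 0
                g += o * log(o * n / (rows[i] * cols[j]))
    return 2 * g
```

`scipy.stats.chi2_contingency(table, lambda_="log-likelihood")` computes the same quantity; G and Pearson's χ² agree asymptotically but can diverge in exactly the small-sample and low-variance situations the paper discusses.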
29. INFLUENCIA DE LAS ESTRATEGIAS DE PROMOCIÓN EN LA CAPTACIÓN Y ADHERENCIA AL REMO FEDERADO.
- Author
-
García-González, Iván, Iglesias-Pérez, María del Carmen, and Vicente-Vila, Pedro
- Subjects
SECONDARY school students ,SCHOOL sports ,BIVARIATE analysis ,CONTINGENCY tables ,GOVERNMENT ownership - Abstract
Copyright of Journal of Sport & Health Research is the property of Journal of Sport & Health Research and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
30. Classical tests, linear models and their extensions for the analysis of 2 × 2 contingency tables.
- Author
-
Nagel, Rebecca, Ruxton, Graeme D., and Morrissey, Michael B.
- Subjects
CHI-squared test ,CONTINGENCY tables ,REGRESSION analysis ,NULL hypothesis ,STATISTICAL hypothesis testing ,STATISTICAL significance - Abstract
Ecologists and evolutionary biologists are regularly tasked with the comparison of binary data across groups. There is, however, some discussion in the biostatistics literature about the best methodology for the analysis of data comprising binary explanatory and response variables forming a 2 × 2 contingency table. We assess several methodologies for the analysis of 2 × 2 contingency tables using a simulation scheme of different sample sizes with outcomes evenly or unevenly distributed between groups. Specifically, we assess the commonly recommended logistic (generalised linear model [GLM]) regression analysis, the classical Pearson chi‐squared test and four conventional alternatives (Yates' correction, Fisher's exact, exact unconditional and mid‐p), as well as the widely discouraged linear model (LM) regression. We found that both LM and GLM analyses provided unbiased estimates of the difference in proportions between groups. LM and GLM analyses also provided accurate standard errors and confidence intervals when the experimental design was balanced. When the experimental design was unbalanced, sample size was small, and one of the two groups had a probability close to 1 or 0, LM analysis could substantially over‐ or under‐represent statistical uncertainty. For null hypothesis significance testing, the performance of the chi‐squared test and LM analysis were almost identical. Across all scenarios, both had high power to detect non‐null effects and reject false positives. By contrast, the GLM analysis was underpowered when using z‐based p‐values, in particular when one of the two groups had a probability near 1 or 0. The GLM using the LRT had better power to detect non‐null results. Our simulation results suggest that, wherever a chi‐squared test would be recommended, a linear regression is a suitable alternative for the analysis of 2 × 2 contingency table data. 
When researchers opt for more sophisticated procedures, we provide R functions to calculate the standard error of a difference between two probabilities from a Bernoulli GLM output using the delta method. We also explore approaches to complement GLM analysis of 2 × 2 contingency tables with credible intervals on the probability scale. These additional operations should support researchers to make valid assessments of both statistical and practical significance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
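The quantity estimated by both the LM and GLM analyses above is the difference in proportions between two groups. A sketch of the Wald estimate and interval (the paper's own delta-method R functions are more general), with hypothetical group counts:

```python
from math import sqrt

def risk_difference(x1, n1, x2, n2, z=1.96):
    """Difference in two binomial proportions with Wald SE and approximate 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = risk_difference(30, 100, 20, 100)  # hypothetical counts
```

As the abstract notes, Wald-type intervals misbehave when one group's proportion is near 0 or 1 and the design is unbalanced, which is where the compared methods diverge.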
31. Farmers league: squad structure and resource dependency.
- Author
-
Feuillet, Antoine, Terrettaz, Loris, and Terrien, Mickaël
- Subjects
SOCCER teams ,CONTINGENCY tables ,CAPACITY building ,ARCHETYPES - Abstract
Purpose: This research aimed to measure the influence of resource dependency (trading and/or shareholder dependencies) on squad age structure by building archetypes to identify dominant strategic schemes. Design/methodology/approach: Based on data from Ligue 1 football clubs from the 2009/2010 to the 2018/2019 season, the authors use k-means classification to build archetypes of resource dependency and squad structure variables. The influence of resource dependency on squad structure is then analysed through a contingency table. Findings: Firstly, the authors identify archetypes of resource dependency, with some clubs dependent on the transfer market and others that do not count on sales to balance their accounts. Secondly, they provide different archetypes of squad structure choices. The contingency between those archetypes allows three main strategic schemes to be identified (avoidance, shaping and adaptation). Originality/value: The research tests an original relationship between clubs' resource dependency and the human resource strategy they use to respond to it. This paper can help provide detailed profiles for big clubs looking for affiliate clubs, by showing which clubs have efficient academies or player development capacities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Rib and sternum fracture risks for restrained occupants in frontal car crashes.
- Author
-
Ranmal, Aarti, Shaikh, Junaid, and Lubbe, Nils
- Subjects
RIB fractures ,RIB cage ,TRAFFIC accidents ,CONTINGENCY tables ,STERNUM - Abstract
Most car occupant fatalities occur in frontal crashes, and the thorax is the most frequently injured body region. The objectives of the study were, firstly, to quantify the relation between risk factors (such as speed and occupant age) and rib and sternum fracture injury probability in frontal car crashes, and, secondly, to evaluate whether rib fracture occurrence can predict sternum fractures. Weighted German data from 1999-2021 were used to create injury risk curves predicting both at least moderate and at least serious rib and sternum fracture risks. A contingency table for rib and sternum fractures allowed the calculation of sensitivity, specificity, and precision, as well as testing for association. Elderly occupants (≥65 years old) had increased rib and sternum fracture risk compared to mid-aged occupants (18-64 years old). Besides occupant age, delta-V was always, and sex sometimes, a significant predictor of skeletal thoracic injury. Sternum fractures were more common than rib fractures and more likely to occur at any given delta-V. Sternum fractures often occurred in isolation. Female occupants were at higher risk than males of sustaining at least moderate rib and sternum fractures together and sternum fractures in isolation. Rib and sternum fractures were statistically associated, but low sensitivity and precision show that rib fractures do not predict sternum fractures well. Elderly and female occupants were at the highest risk and should be targeted by thoracic injury criteria and thresholds for frontal crash occupant protection. Because rib and sternum fractures were not strongly associated, sternum fractures need to be predicted and evaluated separately from rib fractures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
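Sensitivity, specificity and precision, as used above to ask whether rib fractures predict sternum fractures, come straight from the 2 × 2 predictor/outcome table. A sketch with hypothetical counts:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and precision from a 2x2 predictor/outcome table."""
    return {
        "sensitivity": tp / (tp + fn),  # P(predictor+ | outcome+)
        "specificity": tn / (tn + fp),  # P(predictor- | outcome-)
        "precision":   tp / (tp + fp),  # P(outcome+ | predictor+)
    }

m = screening_metrics(tp=20, fp=30, fn=40, tn=60)  # hypothetical counts
```

Low sensitivity and precision, as reported in the study, mean many sternum fractures occur without rib fractures even when the two are statistically associated overall.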
33. Improving the power of hypothesis tests in sparse contingency tables.
- Author
-
Nicolussi, Federica, Cazzaro, Manuela, and Rudas, Tamás
- Subjects
CONTINGENCY tables ,STATISTICAL power analysis ,HYPOTHESIS ,SAMPLE size (Statistics) - Abstract
When analyzing data in contingency tables it is frequent to deal with sparse data, particularly when the sample size is small relative to the number of cells. Most analyses of this kind are interpreted in an exploratory manner and even if tests are performed, little attention is paid to statistical power. This paper proposes a method we call redundant procedure, which is based on the union–intersection principle and increases test power by focusing on specific components of the hypothesis. This method is particularly helpful when the hypothesis to be tested can be expressed as the intersections of simpler models, such that at least some of them pertain to smaller table marginals. This situation leads to working on tables that are naturally denser. One advantage of this method is its direct application to (chain) graphical models. We illustrate the proposal through simulations and suggest strategies to increase the power of tests in sparse tables. Finally, we demonstrate an application to the EU-SILC dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. OASIS: An interpretable, finite-sample valid alternative to Pearson's X² for scientific discovery.
- Author
-
Baharav, Tavor Z., Tse, David, and Salzman, Julia
- Subjects
- *
SCIENTIFIC discoveries , *CONTINGENCY tables , *FALSE discovery rate , *ASYMPTOTIC distribution , *MYCOBACTERIUM tuberculosis - Abstract
Contingency tables, data represented as counts matrices, are ubiquitous across quantitative research and data-science applications. Existing statistical tests are insufficient, however, as none are simultaneously computationally efficient and statistically valid for a finite number of observations. In this work, motivated by a recent application in reference-free genomic inference [K. Chaung et al., Cell 186, 5440-5456 (2023)], we develop Optimized Adaptive Statistic for Inferring Structure (OASIS), a family of statistical tests for contingency tables. OASIS constructs a test statistic which is linear in the normalized data matrix, providing closed-form P-value bounds through classical concentration inequalities. In the process, OASIS provides a decomposition of the table, lending interpretability to its rejection of the null. We derive the asymptotic distribution of the OASIS test statistic, showing that these finite-sample bounds correctly characterize the test statistic's P-value up to a variance term. Experiments on genomic sequencing data highlight the power and interpretability of OASIS. Using OASIS, we develop a method that can detect SARS-CoV-2 and Mycobacterium tuberculosis strains de novo, which existing approaches cannot achieve. We demonstrate in simulations that OASIS is robust to overdispersion, a common feature in genomic data like single-cell RNA sequencing, where under accepted noise models OASIS provides good control of the false discovery rate, while Pearson's X² consistently rejects the null. Additionally, we show in simulations that OASIS is more powerful than Pearson's X² in certain regimes, including for some important two-group alternatives, which we corroborate with approximate power calculations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Self‐truncated sampling produces more moderate covariation judgment and related decision than descriptive frequency information: The role of regressive frequency estimation.
- Author
-
Zhang, Xuhui and Dai, Junyi
- Subjects
- *
LEGAL judgments , *CONTINGENCY tables , *SAMPLING errors , *SAMPLE size (Statistics) - Abstract
Covariation judgment underlies a diversity of psychological theories and influences various everyday decisions. Information about covariation can be learned from either a summary description of frequencies (i.e., contingency tables) or trial‐by‐trial experience (i.e., sampling individual instances). Two studies were conducted to investigate the impact of information learning mode (i.e., description vs. self‐truncated sampling) on covariation judgment and related decision. In each trial under the description condition, participants were presented with a contingency table with explicit cell frequencies, whereas in each trial under the self‐truncated sampling condition, participants were allowed to determine when to stop sampling instances and thus the actual sample size. To eliminate sampling error, an other‐yoked (i.e., between‐subject) design was used in this research so that cell frequencies shown in a trial under the description condition were matched with those experienced in a trial under the self‐truncated sampling condition. Experiment 1 showed that the self‐truncated sampling condition led to more moderate covariation judgments than the description condition (i.e., a description–experience gap). Experiment 2 demonstrated further that the same gap extended to covariation‐related decisions in terms of relative contingent preference (RCP). Regressive frequency estimation under self‐truncated sampling appeared to underlie the consistent gaps found in the two studies, whereas the impact of regressive diagnosticity (i.e., the same sample of instances was viewed as less diagnostic under description than under self‐truncated sampling) or simultaneous overestimation and underweighting of rare instances under experience was not supported by the observed data. Future research might examine alternative accounts of the observed gaps, such as the impacts of belief updating and stopping rule under self‐truncated sampling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
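Covariation judgments of the kind studied above are often scored against the ΔP index computed from the 2 × 2 contingency table (cells a–d); whether this particular index was the paper's normative benchmark is an assumption here.

```python
def delta_p(a, b, c, d):
    """Delta-P covariation index: P(outcome | cue present) - P(outcome | cue absent).

    a: cue+/outcome+, b: cue+/outcome-, c: cue-/outcome+, d: cue-/outcome-.
    """
    return a / (a + b) - c / (c + d)
```

ΔP ranges from -1 to 1; "more moderate" judgments under self-truncated sampling would correspond to estimates pulled toward 0 relative to the table's actual ΔP.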
36. Cross-cutting exposure to the Spanish public broadcasting system: influence of ideology, partisanship, and interest in politics on RTVE’s consumption.
- Author
-
Doménech-Beltrán, Jaume
- Subjects
- *
IDEOLOGY , *POLITICAL parties , *PUBLIC broadcasting , *CONTINGENCY tables , *PARTISANSHIP , *MEDIA consumption , *PUBLIC interest , *SELECTIVE exposure , *PUBLIC radio - Abstract
According to the selective exposure hypothesis, media consumption is determined by the ideological predispositions of individuals, who aim to confirm their opinions through media content. This research explores the role of the public state-owned radio and television corporation, Radio Televisión Española (RTVE), as a facilitator of cross-cutting exposure, that is, consumption that is not aligned with individuals' prior convictions. Through a quantitative methodology based on contingency tables and the calculation of adjusted standardized residuals, and drawing on the CIS post-electoral studies, we analyse the relevance of ideology, partisan identification and interest in public issues as predictors of public television and radio consumption over a period of 11 years (2008-2019) in Spain. The results indicate that being a voter of the party in government and sharing its ideology are related to a higher likelihood of consuming RTVE television channels. In addition, the results show a higher cross-cutting exposure of public radio, compared to television, which is more strongly influenced by the ideology and partisanship of the audiences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
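The adjusted standardized residuals used above flag which cells of a contingency table deviate from independence. Haberman's version, in a pure-Python sketch:

```python
from math import sqrt

def adjusted_residuals(table):
    """Haberman's adjusted standardized residuals for an r x c contingency table.

    Values beyond roughly +/-1.96 mark cells with a significant excess or deficit.
    """
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    res = []
    for i, row in enumerate(table):
        res.append([])
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n
            denom = sqrt(exp * (1 - rows[i] / n) * (1 - cols[j] / n))
            res[-1].append((obs - exp) / denom)
    return res

r = adjusted_residuals([[30, 10], [10, 30]])  # hypothetical voter-by-channel counts
```

Each residual is approximately standard normal under independence, so a cell like `r[0][0]` above 1.96 marks an over-represented combination (e.g., government-party voters watching RTVE).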
37. A randomized controlled clinical trial on press and block lithium disilicate partial crowns: A 4-year recall.
- Author
-
FERRARI-CAGIDIACO, EDOARDO, VERNIANI, GIULIA, KEELING, ANDREW, ZARONE, FERDINANDO, SORRENTINO, ROBERTO, MANFREDINI, DANIELE, and FERRARI, MARCO
- Subjects
CRIME & the press ,CLINICAL trials ,RANDOMIZED controlled trials ,DENTAL abutments ,CONTINGENCY tables - Abstract
Purpose: To evaluate the clinical performance of two lithium disilicate systems (Initial LiSi Press vs Initial LiSi Block, GC Co.) using modified United States Public Health Service (USPHS) evaluation criteria and survival rates after 4 years of clinical service. Methods: Partial adhesive crowns on natural abutment posterior teeth were made for 60 subjects who were randomly divided into two groups: Group 1, Initial LiSi Press; Group 2, Initial LiSi Block. Fabrication of the partial crowns followed a fully analog procedure in Group 1 and a fully digital procedure in Group 2. The restorations were followed up at 1 and 4 years, and the modified USPHS evaluation was performed at baseline and at each recall together with a periodontal evaluation. Contingency tables were used to assess significant differences in success over time within each group, time-dependent Cox regression was used to test for differences between the two groups, and the level of significance was set at P < 0.05. Results: Regarding modified USPHS scores, all evaluated parameters showed Alpha or Bravo and no Charlie was recorded. No statistically significant difference emerged between the two groups in any of the assessed variables (P > 0.05), nor between scores recorded at baseline and at each recall. All modified USPHS scores were compatible with the outcome of clinical success; no restoration was replaced or repaired, and the survival rate was 100% after 4 years of clinical service. No difference was found between the traditional and digital procedures used to fabricate the crowns. The two lithium disilicate materials showed similar results after 4 years of clinical service. [ABSTRACT FROM AUTHOR]
- Published
- 2024
38. An intelligent model for early prognosis of heart illness.
- Author
-
Jaffrin, Lijetha C. and Visumathi, J.
- Subjects
- *
HEART diseases , *CONTINGENCY tables , *MACHINE learning , *HEART , *CARDIOVASCULAR diseases , *PROGNOSIS , *LOGISTIC regression analysis - Abstract
Cardiovascular diseases (CVDs) have emerged as deadly diseases and have been among the biggest causes of death worldwide over the last few decades. More accuracy and precision are needed for the prediction of heart disease at an early stage, and a predictive system is needed to address this issue. In the medical arena, machine learning can be used as a tool for the analysis, discovery, and prediction of numerous syndromes, helping to make choices and forecasts effectively from the vast number of records provided by the healthcare field. The intent of this paper is to pinpoint the risks of heart disease and to predict heart disease using the logistic regression algorithm. The results give the probability of occurrence of heart disease as a percentage. The metrics used to evaluate the model on the considered dataset, computed from a contingency table, are accuracy, precision, recall and F1 measure. Heart illness prediction using logistic regression provided better accuracy than other techniques. The datasets used are categorized based on the parameters related to heart disease, and the system assesses those parameters by means of machine learning techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
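The contingency-table (confusion-matrix) metrics named above can be computed directly. A sketch with hypothetical counts, not the paper's results:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

m = classification_metrics(tp=40, fp=10, fn=10, tn=40)
```

For a screening application like early heart-disease prognosis, recall (sensitivity) usually matters most, since a false negative is a missed patient.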
39. Association study of various species of epiphyte fern with host tree in Srengseng city forest.
- Author
-
Azrai, Eka Putri, Miarsyah, Mieke, Widarsih, Kharisma, Gusverizon, Afrilisa Nur Rosifa, Muslimin, Sayyid Izzuddin, Syahrani, Yodellia, and Rojak
- Subjects
- *
FERNS , *URBAN trees , *SPECIES , *LIGHT intensity , *CONTINGENCY tables , *ATMOSPHERIC temperature - Abstract
Vegetation is formed by the presence and interaction of several species of plants, one form of which is association. This study aims to analyze the types and levels of association between five epiphytic fern species and their hosts in Srengseng City Forest, Jakarta. The method used was a survey with a purposive sampling technique. The data collected were the number of individuals of the 5 epiphytic fern species and the number and species of host trees. Measurements of light intensity, air temperature, and humidity were also carried out as supporting data. The association type was identified using 2 x 2 contingency tables by comparing the observed value (a) with the expected value E(a). The association levels were identified using the Jaccard index and the Dice index. The results showed that the associations between epiphytic ferns and host trees were 71.79% positive and 28.21% negative. The association levels of the epiphytic ferns with their hosts varied from low to high; the strongest associations occurred between Drynaria quercifolia and Zapoteca tetragona and between Pyrrosia piloselloides and Zapoteca tetragona. Factors affecting the occurrence of associations are environmental conditions such as light intensity, air temperature, and humidity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Radiation therapy for retroperitoneal sarcoma: practice patterns in North America.
- Author
-
Ruff, Samantha M., Heh, Victor, Konieczkowski, David J., Onuma, Amblessed, Dunlop, Hayley M., Kim, Alex C., Grignol, Valerie P., Contreras, Carlo M., Pawlik, Timothy M., Pollock, Raphael, and Beane, Joal D.
- Subjects
- *
HEALTH facilities , *SARCOMA , *RADIOTHERAPY , *CONTINGENCY tables , *OVERALL survival , *DATABASES , *LIPOSARCOMA - Abstract
Background: The addition of radiation therapy (RT) to surgery in retroperitoneal sarcoma (RPS) remains controversial. We examined practice patterns in the use of RT for patients with RPS over time in a large, national cohort. Methods: Patients in the National Cancer Database (2004–2017) who underwent resection of RPS were included. Trends over time in proportions were calculated using contingency tables with the Cochran-Armitage trend test. Results: Of 7,485 patients who underwent resection, 1,821 (24.3%) received RT (adjuvant: 59.9%, neoadjuvant: 40.1%). The use of RT decreased annually by < 1% (p = 0.0178). There was an average annual increase in neoadjuvant RT of 13%, compared with an average annual decrease in adjuvant RT of 6% (p < 0.0001). Treatment at high-volume centers (OR 14.795, p < 0.0001) and tumors > 10 cm (OR 2.009, p = 0.001) were associated with neoadjuvant RT. In contrast, liposarcomas (OR 0.574, p = 0.001) were associated with adjuvant RT. There was no statistically significant difference in overall survival between patients treated with surgery alone versus surgery and RT (p = 0.07). Conclusion: In the United States, the use of RT for RPS has decreased over time, with a shift towards neoadjuvant RT. However, a large percentage of patients still receive adjuvant RT, mostly at low-volume hospitals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
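The Cochran-Armitage trend test used above for trends in proportions over time can be sketched as follows; the yearly counts are invented for illustration, not NCDB data:

```python
import math

# Cochran-Armitage test for a linear trend in proportions across ordered
# groups (e.g., the yearly share of patients receiving neoadjuvant RT).
# Counts below are hypothetical.

def cochran_armitage(successes, totals, scores=None):
    scores = scores or list(range(len(successes)))
    n = sum(totals)
    p_bar = sum(successes) / n
    t = sum(s * (x - m * p_bar) for s, x, m in zip(scores, successes, totals))
    var = p_bar * (1 - p_bar) * (
        sum(m * s * s for s, m in zip(scores, totals))
        - sum(m * s for s, m in zip(scores, totals)) ** 2 / n)
    z = t / math.sqrt(var)
    p = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# Rising proportions 10%, 18%, 26%, 35% over four ordered "years":
z, p = cochran_armitage([10, 18, 26, 35], [100, 100, 100, 100])
```

A monotone increase like this yields a large positive z and a small p; flat proportions yield z near 0 and p near 1.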
41. Orthopantomography Detection of Atheroma Plaques and Its Relationship with Periodontal Disease and Missing Teeth.
- Author
-
Quevedo García, Rodrigo, Arnaiz Díez, Sara, Pérez Pevida, Esteban, and Del Río Solá, María Lourdes
- Subjects
- *
DENTAL pathology , *PERIODONTAL disease , *PANORAMIC radiography , *ATHEROSCLEROTIC plaque , *CONTINGENCY tables - Abstract
Background. The aim of this study is to determine the prevalence of atheromatous plaques on orthopantomography and their relationship with periodontal disease and missing teeth. Material and Methods. Orthopantomographs of 1,254 patients over 18 years of age from Clínica Arlanza in Lerma, Burgos, were examined between 2017 and 2021. A Planmeca ProOne® orthopantomograph (68 kV, 7 mA, 10 s) was used. Statistical analysis was carried out using SPSS Statistics® version 25. The results of the categorical variables were described as frequencies (%). Contingency tables were built from the qualitative variables, and the chi-square test was applied to study the relationships among them. The measure of association used was the relative risk (RR), reported with its respective 95% confidence interval (CI). Student's t-test was applied to study the relationship between the qualitative variable "presence or absence of atheroma plaque" and the quantitative variable "number of teeth." Results. A 6.2% prevalence of atheroma plaques was found in the 1,079 selected X-rays. The risk in patients with periodontal disease increased as the periodontal disease worsened: healthy patients vs. periodontal patients with less than 30% bone loss on radiography, RR 0.434, 95% CI 0.181–1.041, p = 0.053; healthy patients vs. patients with 30%–60% bone loss, RR 0.177, 95% CI 0.075–0.418, p < 0.05; healthy patients vs. patients with more than 60% bone loss, RR 0.121, 95% CI 0.041–0.355, p < 0.05. Patients with calcifications on their orthopantomograms had a lower mean number of teeth (20.9) than patients without calcifications (24), a statistically significant difference, t(1077) = −3.125, p < 0.05. Conclusions. Orthopantomography can be considered a screening method to detect patients at increased cardiovascular risk, who can then be referred for individualized study.
It is important to continue this research to establish the real significance of these findings. Dentists should be aware of the importance of their work for patients' systemic health. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
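The relative risk with its 95% confidence interval, as reported above, can be sketched from a 2x2 table as follows; the counts are hypothetical, not the study's data:

```python
import math

# Relative risk (RR) with a 95% CI from a 2x2 table:
# a/b = exposed with/without the outcome, c/d = unexposed with/without.
# Counts below are invented for illustration.

def relative_risk(a, b, c, d):
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)  # lower 95% bound
    hi = math.exp(math.log(rr) + 1.96 * se_log)  # upper 95% bound
    return rr, lo, hi

rr, lo, hi = relative_risk(a=30, b=170, c=15, d=285)
```

With these counts the risk is 15% in the exposed group versus 5% in the unexposed, so RR = 3.0, with a CI that excludes 1.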
42. STATISTICAL VALIDATION OF THE SUCCESSFUL USE OF THE ADAPTED KTK INSTRUMENT TO MEASURE MOTOR COORDINATION DISORDERS IN THE ELDERLY.
- Author
-
Robalino Morales, Gabriela Estefanía, Reales Chacón, Lisbeth Josefina, Cárdenas Medina, Jorge Humberto, Villarroel Quispe, Andrea Elizabeth, and Cantuña Vallejo, Paul Fernando
- Subjects
- *
MOTOR ability , *MOVEMENT disorders , *OLDER people , *PHYSICS instruments , *MODEL validation , *CONTINGENCY tables , *MEDICAL research , *MOTOR learning - Abstract
The KTK test adapted to older adults aims to detect motor coordination disorders in people at this stage of life; the original test was designed to be applied to children between 5 and 14 years old. The methodological process consisted of a systematic adaptation of the instrument to the physical condition and motor coordination of the elderly, based on a review of the scientific literature. The adapted test was applied to 169 participants, who were in turn evaluated independently by 5 experts, each assessing them directly according to their degree of expertise. The results of the adapted test and the experts' assessments were placed in a contingency table and compared using Pearson's contingency coefficient. It is concluded that the adapted KTK test for detecting motor coordination disorders in older adults agrees with the results obtained by the experts, and therefore the adapted KTK test can be used to measure motor coordination in older adults. [ABSTRACT FROM AUTHOR]
- Published
- 2024
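Pearson's contingency coefficient used above to compare test and expert classifications can be sketched as follows; the 2x2 agreement table is illustrative only, not the study's data:

```python
import math

# Pearson's contingency coefficient C = sqrt(chi2 / (chi2 + n)) for an
# r x c table, e.g. adapted-test classification (rows) vs. expert
# classification (columns). The table below is hypothetical.

def contingency_coefficient(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n       # expected under independence
            chi2 += (obs - exp) ** 2 / exp
    return math.sqrt(chi2 / (chi2 + n))

c = contingency_coefficient([[40, 10], [12, 38]])
```

C lies between 0 (no association) and an upper bound below 1; larger values indicate stronger agreement between the two classifications.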
43. SEARCH FOR STRATEGIES TO COPE WITH MENTAL HEALTH DETERIORATION IN PATIENTS WITH DIABETIC PERIPHERAL NEUROPATHY, BASED ON THE ANALYTIC HIERARCHY PROCESS.
- Author
-
Reales Chacón, Lisbeth, Eugenio Zumbana, Lizbeth Carolina, Flores-Hernández, Verónica, Campos Moposita, Angela, and Espín Pastor, Victoria
- Subjects
- *
DIABETIC neuropathies , *MENTAL health , *CONTINGENCY tables , *PEOPLE with diabetes , *PATIENTS' attitudes , *PERIPHERAL neuropathy , *QUALITY of life , *TREATMENT of peripheral neuropathy , *MENTAL depression , *PEOPLE with mental illness , *EMOTIONS , *PAIN perception - Abstract
Diabetic peripheral neuropathy (DPN), as a chronic, irreversible complication whose cardinal symptom is pain, contributes to changes and alterations in the emotional sphere and therefore in the mental health of the patient. Findings on the perception of patients suffering from DPN report that pain relief is associated with good levels of quality of life, which reduces stress, depression, insomnia, and other factors that can threaten mental health. This paper has two fundamental purposes. The first is to determine the correlation between the prevalence of DPN and patients presenting psychological disorders; this problem is treated with the help of contingency tables and processed statistically with Pearson's chi-square statistic. Secondly, the Analytic Hierarchy Process (AHP) technique is applied to evaluate quantitatively, through the criteria of 3 experts, the possible strategies to be followed by patients. [ABSTRACT FROM AUTHOR]
- Published
- 2024
44. Towards a better understanding of knee angular deformities: discrepancies between clinical examination and 2D/3D assessments.
- Author
-
Ghanem, Diane, Ghoul, Ali, Assi, Ayman, and Ghanem, Ismat
- Subjects
- *
KNEE , *HUMAN abnormalities , *THREE-dimensional imaging , *CONTINGENCY tables , *INTRAMEDULLARY fracture fixation , *JUDGMENT (Psychology) - Abstract
Introduction: Discrepancy between the clinical examination and the 2D/3D radiographs is a common concern in patients with angular or rotational deformities of the lower limbs, as it may alter clinical judgment and subsequent treatment. The aim was to identify such discrepancies and assess determinants that may contribute to their existence. Materials and methods: A retrospective chart review was conducted on 329 consecutive patients (658 lower limbs) who underwent physical examination and long-leg biplanar radiographs in our institution between 2013 and 2018 for limb length discrepancy or angular deformity of the knees (varus/valgus). Eleven parameters were measured on 2D and 3D images. 3D measurements were based on standing biplanar X-rays and their 3D reconstructions and were considered the gold standard. Contingency tables and multiple linear regression were used to assess discrepancies between the three modalities and their determinants, respectively. Results: Significant mismatches were found between physical examination and 2D images (1% in varus and 1% in valgus), between physical examination and 3D assessment (1% in varus and 4.6% in valgus), as well as between 2D and 3D assessments (1.9% in varus and 7.6% in valgus). The significant determinants of the mismatch between 2D and 3D modalities were frontal pelvic obliquity, neck shaft angle, knee flexion, femoral torsion, and tibial mechanical angle. Conclusion: In the presence of positional and/or morphological deformities, physical examination and 2D assessment of knee alignment could be biased due to axes projection errors. A better understanding of the 3D alignment of the knee as part of the entire lower limb, from pelvis to toes, may lead to a better diagnosis and subsequently a better treatment of knee angular deformities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. CONTEXT-TREE APPROACH FOR MONITORING THE MULTI-WAY CONTINGENCY TABLES-BASED PROCESSES WITH DEPENDENCE BETWEEN NEIGHBORHOOD CELLS.
- Author
-
Kamranrad, Reza, Golshan, Azadeh, and Bagheri, Farnoosh
- Subjects
- *
CONTINGENCY tables , *NEIGHBORHOODS , *NUMERICAL analysis , *SENSITIVITY analysis - Abstract
In recent years, several statistical process monitoring (SPM) approaches have been used to control contingency-table-based processes. The common assumption in this line of research is that the neighborhood cells of the contingency table are temporally independent. This paper develops a new approach based on the context-tree method and the Kullback-Leibler (KL) statistic to monitor multi-way contingency tables in Phase II while accounting for dependence between neighborhood cells. The proposed approach is evaluated through simulation studies. In addition, its efficiency is confirmed by further sensitivity analyses on numerical examples involving contingency tables with more rows and columns and contingency tables with more categorical variables. Results show that the proposed statistics perform well in detecting out-of-control conditions under different shifts in a multi-way contingency table. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
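As a minimal sketch of the KL monitoring idea named above — flagging a shift when observed cell proportions diverge from an in-control baseline — the following toy example may help; the cell counts and baseline are illustrative, and the article's context-tree dependence modeling is not reproduced:

```python
import math

# Kullback-Leibler statistic comparing observed cell proportions of a
# contingency table with an in-control baseline distribution. A large
# value signals an out-of-control shift. Data below are hypothetical.

def kl_statistic(observed_counts, baseline_probs):
    n = sum(observed_counts)
    return sum((o / n) * math.log((o / n) / p)
               for o, p in zip(observed_counts, baseline_probs) if o > 0)

in_control = [0.25, 0.25, 0.25, 0.25]          # baseline cell probabilities
kl_stable = kl_statistic([25, 25, 25, 25], in_control)   # no shift: KL = 0
kl_shifted = kl_statistic([40, 20, 20, 20], in_control)  # shifted process
```

In Phase II monitoring, this statistic would be plotted against a control limit obtained from the in-control distribution.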
46. Adjusted Residuals for Evaluating Conditional Independence in IRT Models for Multistage Adaptive Testing.
- Author
-
van Rijn, Peter W., Ali, Usama S., Shin, Hyo Jeong, and Joo, Sean-Hwane
- Subjects
ADAPTIVE testing ,ITEM response theory ,FALSE positive error ,INFERENTIAL statistics ,CONTINGENCY tables - Abstract
The key assumption of conditional independence of item responses given latent ability in item response theory (IRT) models is addressed for multistage adaptive testing (MST) designs. Routing decisions in MST designs can cause patterns in the data that are not accounted for by the IRT model. This phenomenon relates to quasi-independence in log-linear models for incomplete contingency tables and impacts certain types of statistical inference based on assumptions on observed and missing data. We demonstrate that generalized residuals for item pair frequencies under IRT models as discussed by Haberman and Sinharay (J Am Stat Assoc 108:1435–1444, 2013. https://doi.org/10.1080/01621459.2013.835660) are inappropriate for MST data without adjustments. The adjustments are dependent on the MST design, and can quickly become nontrivial as the complexity of the routing increases. However, the adjusted residuals are found to have satisfactory Type I errors in a simulation and illustrated by an application to real MST data from the Programme for International Student Assessment (PISA). Implications and suggestions for statistical inference with MST designs are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
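For orientation, the classical (unadjusted) standardized residual for contingency-table cells can be sketched as below; the MST-specific adjustments derived in the article are not reproduced here, and the table is toy data:

```python
import math

# Classical adjusted (standardized) residuals for an r x c contingency
# table: r_ij = (O_ij - E_ij) / sqrt(E_ij * (1 - row_i/n) * (1 - col_j/n)).
# This is the complete-data baseline only; MST routing adjustments from
# the article are NOT included. Counts below are hypothetical.

def adjusted_residuals(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    res = []
    for i, row in enumerate(table):
        res.append([])
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n
            se = math.sqrt(exp * (1 - rows[i] / n) * (1 - cols[j] / n))
            res[i].append((obs - exp) / se)
    return res

res = adjusted_residuals([[30, 20], [20, 30]])
```

Under independence these residuals are approximately standard normal, which is what makes unadjusted use of them misleading when routing decisions induce structural missingness.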
47. Community detection in interval-weighted networks.
- Author
-
Alves, Hélder, Brito, Paula, and Campos, Pedro
- Subjects
SOCIAL network analysis ,CONTINGENCY tables - Abstract
In this paper we introduce and develop the concept of interval-weighted networks (IWN), a novel approach in Social Network Analysis in which edge weights are represented by closed intervals, capturing intrinsic variability rather than a single precise value. We extend Newman's modularity, the modularity gain, and the Louvain algorithm to IWN, using a tabular representation of networks as contingency tables. We apply our methodology to two real-world IWN. The first is a commuter network in mainland Portugal between the twenty-three NUTS 3 regions (IWCN). The second covers annual merchandise trade between 28 European countries from 2003 to 2015 (IWTN). The optimal partition of geographic locations (regions or countries) is developed and compared using two new approaches, designated "Classic Louvain" and "Hybrid Louvain", which take into account the variability observed in the original network, thereby minimizing the loss of information present in the raw data. Our findings suggest a division of the twenty-three Portuguese regions into three main communities for the IWCN and between two and three country communities for the IWTN. However, we find different geographical partitions depending on the community detection methodology used. This analysis can be useful in many real-world applications, since it takes into account that the weights may vary within the ranges, rather than being constant. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
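The modularity quantity that the Louvain algorithm optimizes can be sketched for an ordinary weighted network as follows; interval weights are collapsed to midpoints here as a simplification of the IWN machinery described in the article, and the graph is a toy example:

```python
# Newman's modularity Q for a weighted undirected network under a given
# partition. Interval edge weights are represented by their midpoints,
# a simplification of the article's IWN approach. Toy graph below.

def modularity(adj, community):
    two_m = sum(sum(row) for row in adj)        # total weight, counted twice
    degree = [sum(row) for row in adj]
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if community[i] == community[j]:
                q += adj[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# Two weighted triangles joined by a light bridge edge (weight 0.1).
adj = [[0, 1, 1, 0.1, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [0.1, 0, 0, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
q_split = modularity(adj, [0, 0, 0, 1, 1, 1])   # triangles as communities
q_merged = modularity(adj, [0, 0, 0, 0, 0, 0])  # everything in one community
```

Splitting the two triangles scores much higher than merging them, which is exactly the signal Louvain-type algorithms climb.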
48. Access to University Studies: A New Form of Discrimination for Low-Functioning People.
- Author
-
Rodríguez-Rodríguez, Tamara, Álvarez-Martínez-Iglesias, José-María, Molina-Saorín, Jesús, and Marín-Marín, José-Antonio
- Subjects
DISCRIMINATION (Sociology) ,HIGHER education ,EMPLOYMENT of people with disabilities ,CONTINGENCY tables ,EDUCATIONAL equalization - Abstract
Although it would seem that we are currently in a more inclusive society, the reality is quite different, since discriminatory models continue to be perpetuated based on each person's level of functional performance. The purpose of this study is to determine the degree of discrimination that people with low functional performance experience relative to the rest of the population, on the basis of sex and level of studies. To this end, through a thorough investigation based on the scientific method and articulated via statistical analysis (the modelling of categorical data), this study reveals the situations of inequality to which people with low functional performance are subjected in higher education. The study used the survey on Employment of People with Disabilities (EPD), carried out by the National Statistics Institute (INE) and conducted annually with a sample size of 60,000 households, equivalent to some 200,000 people. The statistical analysis was carried out using R software, and the main techniques used were contingency table modelling, log-linear models, and logistic models. Finally, some recommendations are offered to contribute to social awareness. The role of teachers is crucial for educational equity, and their training is of vital importance, since teachers are a key element in adapting contents to different abilities, especially for people with lower functional performance; their success will depend on the quality of the initial training they receive. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Factors affecting 30-day mortality in poor-grade aneurysmal subarachnoid hemorrhage: a 10-year single-center experience.
- Author
-
Scibilia, Antonino, Rustici, Arianna, Linari, Marta, Zenesini, Corrado, Belotti, Laura Maria Beatrice, Dall'Olio, Massimo, Princiotta, Ciro, Cuoci, Andrea, Aspide, Raffaele, Migliorino, Ernesto, Moneti, Manuel, Sturiale, Carmelo, Castioni, Carlo Alberto, Conti, Alfredo, Bortolotti, Carlo, and Cirillo, Luigi
- Subjects
SUBARACHNOID hemorrhage ,DISTRIBUTION (Probability theory) ,FAMILY communication ,CEREBRAL vasospasm ,CONTINGENCY tables ,CHI-squared test ,INTRAVENTRICULAR hemorrhage - Abstract
Background: The management of patients with poor-grade aneurysmal subarachnoid hemorrhage (aSAH) is burdened by an unfavorable prognosis even with aggressive treatment. The aim of the present study is to investigate the risk factors affecting 30-day mortality in poor-grade aSAH patients. Methods: We performed a retrospective analysis of a prospectively collected database of poor-grade aSAH patients (World Federation of Neurosurgical Societies, WFNS, grades IV and V) treated at our institution from December 2010 to December 2020. For all variables, percentages of frequency distributions were analyzed. Contingency tables (Chi-squared test) were used to assess the association between categorical variables and outcomes in the univariable analysis. Multivariable analysis was performed by using the multiple logistic regression method to estimate the odds ratio (OR) for 30-day mortality. Results: A total of 149 patients were included of which 32% had WFNS grade 4 and 68% had WFNS grade 5. The overall 1-month mortality rate was 21%. On univariable analysis, five variables were found to be associated with the likelihood of death, including intraventricular hemorrhage (IVH ≥ 50 mL, p = 0.005), the total amount of intraventricular and intraparenchymal hemorrhage (IVH + ICH ≥ 90 mL, p = 0.019), the IVH Ratio (IVH Ratio ≥ 40%, p = 0.003), posterior circulation aneurysms (p = 0.019), presence of spot sign on initial CT scan angiography (p = 0.015). Nonetheless, when the multivariable analysis was performed, only IVH Ratio (p = 0.005; OR 3.97), posterior circulation aneurysms (p = 0.008; OR 4.05) and spot sign (p = 0.022; OR 6.87) turned out to be independent predictors of 30-day mortality. Conclusion: The risk of mortality in poor-grade aSAH remains considerable despite maximal treatment. 
Notwithstanding the limitations of a retrospective study, our report highlights some neuroradiological features that, in the emergency setting, combined with leading clinical and anamnestic parameters, may support the multidisciplinary team in the difficult decision-making process and in communication with family members from the earliest stages of poor-grade aSAH. Further prospective studies are warranted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Nonparametric estimation of the random effects distribution for the risk or rate ratio in rare events meta‐analysis with the arm‐based and contrast‐based approaches.
- Author
-
Sangnawakij, Patarawan, Böhning, Dankmar, Holling, Heinz, and Jansen, Katrin
- Subjects
- *
NONPARAMETRIC estimation , *MAXIMUM likelihood statistics , *POISSON regression , *CONTINGENCY tables , *EXPECTATION-maximization algorithms - Abstract
Rare events are events which occur with low frequency. They often arise in clinical trials or cohort studies where the data are arranged in binary contingency tables. In this article, we investigate the estimation of effect heterogeneity for the risk-ratio parameter in meta-analysis of rare-events studies through two likelihood-based nonparametric mixture approaches: an arm-based and a contrast-based model. Maximum likelihood estimation is achieved using the EM algorithm. Special attention is given to the choice of initial values: inspired by the classification likelihood, a strategy is implemented which repeatedly uses random allocation of the studies to the mixture components as the choice of initial values. The likelihoods under the contrast-based and arm-based approaches are compared and differences are highlighted. We use simulations to assess the performance of these two methods. Under the design of sampling studies with nested treatment groups, the results show that the nonparametric mixture model based on the contrast-based approach is more appropriate in terms of model selection criteria such as AIC and BIC. Under the arm-based design, the arm-based model performs well, although in some cases it is outperformed by the contrast-based model. Comparisons of the estimators are provided in terms of bias and mean squared error; also included in the comparison are the mixed Poisson regression model and the classical DerSimonian-Laird model (using the Mantel-Haenszel estimator for the common effect). In simulation, the contrast-based method appears to estimate effect heterogeneity better than the compared methods, although differences become negligible for large within-study sample sizes. We illustrate the methodologies using several meta-analytic data sets in medicine. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
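The Mantel-Haenszel risk-ratio estimator, mentioned above as the common-effect component of the DerSimonian-Laird comparator, can be sketched as follows; the study counts are invented to mimic rare-events data:

```python
# Mantel-Haenszel pooled risk ratio across 2x2 tables (one per study),
# a common-effect estimator suited to sparse data since it needs no
# per-study continuity correction. Each tuple: (events_trt, n_trt,
# events_ctl, n_ctl). Counts below are hypothetical.

def mantel_haenszel_rr(studies):
    num = den = 0.0
    for a, n_trt, c, n_ctl in studies:
        n_total = n_trt + n_ctl
        num += a * n_ctl / n_total
        den += c * n_trt / n_total
    return num / den

# A zero-event treatment arm (third study) is handled without correction.
rr = mantel_haenszel_rr([(2, 100, 4, 100), (1, 50, 3, 60), (0, 80, 2, 80)])
```

Pooling numerators and denominators across studies, rather than pooling per-study ratios, is what keeps the estimator stable when individual studies have zero events.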