20 results for "Fulvia Mecatti"
Search Results
2. TB Hackathon: Development and Comparison of Five Models to Predict Subnational Tuberculosis Prevalence in Pakistan
- Author
-
Sandra Alba, Ente Rood, Fulvia Mecatti, Jennifer M. Ross, Peter J. Dodd, Stewart Chang, Matthys Potgieter, Gaia Bertarelli, Nathaniel J. Henry, Kate E. LeGrand, William Trouleau, Debebe Shaweno, Peter MacPherson, Zhi Zhen Qin, Christina Mergenthaler, Federica Giardina, Ellen-Wien Augustijn, Aurangzaib Quadir Baloch, and Abdullah Latif
- Subjects
small area estimation ,tuberculosis burden ,predictive modelling ,subnational prevalence ,spatial epidemiology ,forecasting ,Medicine - Abstract
Pakistan’s national tuberculosis control programme (NTP) is among the many programmes worldwide that recognise the importance of subnational tuberculosis (TB) burden estimates for supporting disease control efforts but do not have reliable estimates. A hackathon was therefore organised to solicit the development and comparison of several models for small area estimation of TB. The TB hackathon was launched in April 2019. Participating teams were asked to produce district-level estimates of bacteriologically positive TB prevalence among adults (over 15 years of age) for 2018. The NTP provided case-based data from its 2010–2011 TB prevalence survey, along with data on TB screening, testing and treatment for the period between 2010–2011 and 2018. Five teams submitted district-level TB prevalence estimates, methodological details and programming code. Although the geographical distribution of TB prevalence varied considerably across models, we identified several districts with consistently low notification-to-prevalence ratios. The hackathon highlighted the challenges of generating granular spatiotemporal TB prevalence forecasts from cross-sectional prevalence survey data and other data sources. Nevertheless, it provided a range of approaches to subnational disease modelling. The NTP's use of, and plans for, these outputs show that, limitations notwithstanding, they can be valuable for programme planning.
- Published
- 2022
- Full Text
- View/download PDF
3. A Narrower Perspective? From a Global to a Developed-Countries Gender Gap Index: A Gender Statistics Exercise
- Author
-
Silvia Caligaris, Fulvia Mecatti, and Franca Crippa
- Subjects
Statistics ,HA1-4737 - Abstract
In this paper we focus on a particular composite index of gender equality, the Global Gender Gap Index (GGGI), highlighting its problems and weaknesses and proposing a different approach structured in four steps. The starting point of our analysis is to narrow the research to a small group of OECD countries: in this way the gender analysis is placed within a homogeneous socio-cultural framework and a fifth dimension, related to time use, can be introduced. Next, to explore which variables have the greatest impact on the persistence of the gender gap among these countries, we propose a different weighting method based on structural equation modelling (SEM). Using official data, the effects of these steps on the final ranking of countries are then analysed, allowing reflections from both a methodological and a socio-cultural point of view.
- Published
- 2014
- Full Text
- View/download PDF
4. Confronti fra stimatori per la media nel campionamento per centri
- Author
-
Fulvia Mecatti and Sonia Migliorati
- Subjects
Statistics ,HA1-4737 - Abstract
The center sampling technique is well suited when a population is naturally gathered in overlapping groups of units for which the units cannot be labelled and neither the group sizes nor the population size are known. An unbiased estimator Ym for the mean of a quantitative characteristic of interest has been proposed under the simple hypothesis that the relative weight of each center is known. A second estimator Yc can be deduced from a previous proposal under the same hypothesis. In the present paper the exact variance of Yc, together with an estimator of it, is given. The two estimators are based on different ways of summarizing the data and coincide only in the case of proportional allocation of the overall sample size. A comparison between the two estimators is carried out from both the inferential and the practical point of view. A simulation study shows that neither estimator is uniformly more efficient than the other in the general case of L ≥ 2 centers. Moreover, Yc turns out to be more efficient when the variability of the center sampling fractions is small, while Ym becomes more efficient as this variability increases. Simulation also shows that the proposed variance estimators are asymptotically unbiased, consistent and c-consistent.
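As an informal illustration of the kind of Monte Carlo comparison described above, the sketch below simulates a population spread over overlapping centers and checks a multiplicity-corrected weighted mean estimator; the estimator, the center weights and all numbers are illustrative assumptions, not the paper's Ym or Yc.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: N units, each attending 1..L of L overlapping centers
N, L = 3000, 4
y = rng.normal(50, 10, N)                                   # study variable
membership = [rng.choice(L, size=rng.integers(1, L + 1), replace=False) for _ in range(N)]
m = np.array([len(c) for c in membership])                  # multiplicity of each unit
centers = [np.flatnonzero([l in c for c in membership]) for l in range(L)]
w = np.array([len(c) for c in centers]) / m.sum()           # known relative center weights

def multiplicity_estimate(n_per_center=60):
    """Multiplicity-corrected ratio estimator of the mean (illustrative, not Ym/Yc)."""
    num = den = 0.0
    for l in range(L):
        s = rng.choice(centers[l], size=n_per_center, replace=False)
        num += w[l] * np.mean(y[s] / m[s])
        den += w[l] * np.mean(1.0 / m[s])
    return num / den

est = np.array([multiplicity_estimate() for _ in range(2000)])
print(f"true mean {y.mean():.2f}  MC mean {est.mean():.2f}  MC sd {est.std():.3f}")
```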
- Published
- 2007
- Full Text
- View/download PDF
5. La stima della media nel campionamento per centri
- Author
-
Fulvia Mecatti
- Subjects
Statistics ,HA1-4737 - Abstract
The "aggregation points" sampling design applies, for instance, in survey of irregular immigrants i.e. of populations composed by a finite but unknown number of units which do not consent labelling and that can be reached only through a set of known but overlapping frames called "aggregation points". Dealing with the "aggregation points" sampling design, the problem of estimating the mean of a quantitative character is concerned; an estimate of the estimator's variance is also proposed. Some results from a simulation study are presented. Simulations indicate that estimators proposed perform better in case of not too large number of aggregation points but extensively overlapping.
- Published
- 2007
- Full Text
- View/download PDF
6. Center sampling: a strategy for elusive population surveys
- Author
-
Fulvia Mecatti and Sonia Migliorati
- Subjects
Statistics ,HA1-4737 - Abstract
Center sampling is useful in finite population surveys when exhaustive lists of all units are not available and the target population is naturally clustered into a number of overlapping sites spread over an area of interest, such as, for instance, the immigrant population illegally resident in a country. Center sampling has been successfully employed in official European surveys; nevertheless, few systematic theoretical results have yet been provided to support the empirical findings. In this paper a general theory for center sampling is formalized and an unbiased estimator for the mean of a quantitative or dichotomous characteristic is proposed, together with its exact variance. A suitable variance estimator, unbiased under simple random sampling, is also derived, and the optimum allocation of the sample size among centers subject to linear cost constraints is discussed. Other sampling designs, useful from an operational standpoint, are also considered.
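For orientation only: in classical stratified sampling, the optimum allocation under a linear cost constraint C = c_0 + Σ c_l n_l takes the Neyman-type form below. The paper derives the analogous result for centers, so the exact center-sampling expression should be taken from the paper rather than from this standard benchmark.

```latex
n_l \;\propto\; \frac{w_l S_l}{\sqrt{c_l}},
\qquad
n_l \;=\; (C - c_0)\,\frac{w_l S_l/\sqrt{c_l}}{\sum_{k} w_k S_k \sqrt{c_k}},
```

where $w_l$ is the weight of center $l$, $S_l$ the within-center standard deviation and $c_l$ the per-unit survey cost at center $l$.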
- Published
- 2007
- Full Text
- View/download PDF
7. Improving the causal treatment effect estimation with propensity scores by the bootstrap
- Author
-
Maeregu W. Arisido, Fulvia Mecatti, Paola Rebora, Arisido, M, Mecatti, F, and Rebora, P
- Subjects
Statistics and Probability ,Economics and Econometrics ,Propensity score ,Applied Mathematics ,Time-to-event endpoint ,Modeling and Simulation ,Bootstrap bias ,Observational study ,SECS-S/01 - STATISTICA ,Social Sciences (miscellaneous) ,Analysis ,Simulation ,Average treatment effect ,Causal inference - Abstract
When observational studies are used to establish the causal effects of treatments, the estimated effect is affected by treatment selection bias. The inverse propensity score weight (IPSW) is often used to deal with such bias. However, IPSW relies on strong assumptions, and strategies to correct for their misspecification have rarely been studied. We present a bootstrap bias correction of IPSW (BC-IPSW) to improve the performance of the propensity score in dealing with treatment selection bias when the ignorability and overlap assumptions fail. The approach was motivated by a real observational study exploring the potential of anticoagulant treatment for reducing mortality in patients with end-stage renal disease. The benefit of the treatment in enhancing survival was demonstrated; the suggested BC-IPSW method indicated a statistically significant reduction in mortality for patients receiving the treatment. Using extensive simulations, we show that BC-IPSW substantially reduces the bias due to misspecification of the ignorability and overlap assumptions. Further, we show that IPSW is still useful for accounting for the lack of treatment randomization, but its advantages are strictly tied to the satisfaction of ignorability, indicating that relevant though unmeasured or unused covariates can worsen the selection bias.
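A minimal sketch of the bootstrap bias-correction idea, assuming a continuous outcome and a logistic propensity model; the authors' BC-IPSW targets a time-to-event endpoint and may differ in detail, so the function names and simulated data below are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def ipsw_ate(X, treat, y):
    """Plain IPSW estimate of the average treatment effect."""
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    return (np.average(y[treat == 1], weights=w[treat == 1])
            - np.average(y[treat == 0], weights=w[treat == 0]))

def bc_ipsw_ate(X, treat, y, B=500):
    """Bootstrap bias correction: subtract the estimated bootstrap bias from IPSW."""
    theta = ipsw_ate(X, treat, y)
    n = len(y)
    boot = []
    for _ in range(B):
        idx = rng.integers(0, n, n)                 # resample units with replacement
        boot.append(ipsw_ate(X[idx], treat[idx], y[idx]))
    bias = np.mean(boot) - theta
    return theta - bias

# toy observational data with confounding (true treatment effect = 1)
n = 1000
X = rng.normal(size=(n, 2))
treat = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))).astype(int)
y = 1.0 * treat + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
print(f"IPSW {ipsw_ate(X, treat, y):.3f}  BC-IPSW {bc_ipsw_ate(X, treat, y):.3f}")
```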
- Published
- 2022
8. Efficient unequal probability resampling from finite populations
- Author
-
Fulvia Mecatti, Pier Luigi Conti, Federica Nicolussi, Conti, P, Mecatti, F, and Nicolussi, F
- Subjects
Statistics and Probability ,Applied Mathematics ,Computational Mathematics ,Computational Theory and Mathematics ,Resampling ,Sampling designs ,Finite populations ,Pseudo-population ,Sampling (statistics) ,Sample size determination ,Confidence interval ,Quantile - Abstract
A resampling technique for probability-proportional-to-size sampling designs is proposed. It is essentially based on a special form of variable-probability, without-replacement sampling applied directly to the sample data, in line with the pseudo-population approach. From a theoretical point of view, it is asymptotically correct: as both the sample size and the population size increase, under mild regularity conditions the proposed resampling design tends to coincide with the original sampling design under which the sample data were collected. From a computational point of view, the proposed methodology is easy to implement and efficient, because it requires neither the actual construction of the pseudo-population nor any form of randomization to ensure integer weights and sizes. Empirical evidence based on a simulation study indicates that the proposed resampling technique outperforms its two main competitors for confidence interval construction for various population parameters, including quantiles.
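For context, the baseline that the proposal improves on is the explicit pseudo-population bootstrap sketched below (here with Poisson sampling proportional to a size measure, a simplifying assumption); the paper's method resamples directly from the sample and avoids both the pseudo-population construction and the integer-rounding step.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_pps(x, n_expected):
    """Poisson sampling with inclusion probabilities proportional to the size measure x."""
    pi = np.minimum(1.0, n_expected * x / x.sum())
    keep = rng.random(len(x)) < pi
    return np.flatnonzero(keep), pi[keep]

# hypothetical population: size measure x roughly proportional to the study variable y
N = 2000
x = rng.gamma(2.0, 2.0, N)
y = 3.0 * x + rng.normal(0.0, 2.0, N)

s, pi_s = poisson_pps(x, n_expected=200)
ht_total = np.sum(y[s] / pi_s)                     # Horvitz-Thompson estimate of the y-total

# naive pseudo-population bootstrap: replicate unit i round(1/pi_i) times, then redraw
reps = np.maximum(1, np.rint(1.0 / pi_s).astype(int))
pseudo_x, pseudo_y = np.repeat(x[s], reps), np.repeat(y[s], reps)

boot = []
for _ in range(1000):
    bs, bpi = poisson_pps(pseudo_x, n_expected=200)
    boot.append(np.sum(pseudo_y[bs] / bpi))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"HT total {ht_total:.0f}  95% bootstrap CI ({lo:.0f}, {hi:.0f})  true {y.sum():.0f}")
```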
- Published
- 2022
9. FENStatS COVID-19 Working Group: Goals, Initiatives and Perspectives
- Author
-
Fulvia Mecatti, Biganzoli, E, Manzi, G, Micheletti, A, Nicolussi, F, Salini, S, and Mecatti, F
- Subjects
SECS-S/01 - STATISTICA ,Infodemic, Communication, Data Literacy, Statistical Divide - Abstract
The FENStatS Covid-19 WG is a free, spontaneous association of statistical experts from 14 European countries, united by concerns about the current pan-infodemic and the statistical challenges revealed by the Covid-19 havoc. This short paper tracks when, how and why the WG was formed, illustrates our mission, aims and scope, describes the steps taken so far and outlines perspectives for future work.
- Published
- 2022
10. Number of samples for accurate visual estimation of mean herbage mass in Campos grasslands
- Author
-
Fulvia Mecatti, Masahiko Hirata, Gerónimo Cardozo, P. Soca, Martin Do Carmo, Do Carmo, M, Cardozo, G, Mecatti, F, Soca, P, and Hirata, M
- Subjects
Agronomy ,Agronomy and Crop Science ,Statistics ,Visual estimation ,Sampling error ,Bootstrap ,Sample size computation ,Bias ,Simulations - Abstract
The number of samples is a major issue when estimating the mean herbage mass of grazed paddocks. The aim of this study was to assess the number of samples required for accurate visual estimation of mean herbage mass in relation to herbage mass heterogeneity and paddock size. Data were collected across scales of space and time (273 sampling events) from paddocks on Campos grasslands in Uruguay, using the visual estimation technique. The mean herbage mass of the paddocks ranged from 270 to 6350 kg of dry matter (DM) per hectare, with coefficients of variation (CV) of 0.13 to 1.26. Twenty-four events representing four levels of herbage mass heterogeneity (CV = 0.3, 0.5, 0.7 and 1.0) × three levels of paddock size (small, 5–13 ha; medium, 41–67 ha; large, 100–140 ha) were chosen (two replicates per group) and analysed for the probability that the estimation error exceeded 10% of the mean (10% error probability) using the bootstrap technique. The number of samples required to keep the 10% error probability below 0.1 increased gradually from 50 to 150 per paddock as the CV increased from 0.3 to 0.7, and then sharply to 350 as the CV increased to 1.0, with no effect of paddock size. Taking into account the distribution of CV (< 0.7 in nearly 80% of the events), we propose a general recommendation to take a minimum of 150 samples per paddock for accurate estimation of mean herbage mass in Campos grasslands, irrespective of paddock size.
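The "10% error probability" calculation can be mimicked with a simple bootstrap as sketched below; the paddock readings are simulated (lognormal, CV ≈ 0.7) rather than taken from the study, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

def error_prob(values, n, B=2000, rel_err=0.10):
    """Bootstrap probability that the mean of n resampled values misses the sample mean by >10%."""
    mu = values.mean()
    means = rng.choice(values, size=(B, n), replace=True).mean(axis=1)
    return float(np.mean(np.abs(means - mu) > rel_err * mu))

# simulated visual herbage-mass readings for one paddock (kg DM/ha), CV about 0.7
cv = 0.7
sigma = np.sqrt(np.log(1 + cv**2))
paddock = rng.lognormal(np.log(1500) - sigma**2 / 2, sigma, size=400)

for n in (50, 100, 150, 250, 350):
    print(n, round(error_prob(paddock, n), 3))
```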
- Published
- 2020
11. On the role of weights rounding in applications of resampling based on pseudopopulations
- Author
-
Pier Luigi Conti, Federico Andreis, Fulvia Mecatti, Andreis, F, Conti, P, and Mecatti, F
- Subjects
Statistics and Probability ,Resampling ,Rounding ,Estimator ,Sampling (statistics) ,Variance estimation ,Point estimation ,Nearest integer function ,Bootstrap ,Finite populations ,Probability proportional to size ,Complex designs ,π‐ps complex designs ,SECS-S/01 - STATISTICA ,Statistics, Probability and Uncertainty - Abstract
Resampling methods are widely studied and increasingly employed in applied research and practice. When dealing with complex sampling designs, common resampling techniques require adjusting noninteger sampling weights in order to construct the so-called "pseudo-population" on which the actual resampling is performed. The practice of rounding, however, has been empirically shown to be harmful under general designs. In this paper we present asymptotic results concerning, in particular, the practice of rounding resampling weights to the nearest integer, an approach commonly adopted by virtue of its reduced computational burden as opposed to randomization-based alternatives. We prove that such an approach leads to nonconsistent estimation of the distribution function of the survey variable, and we provide empirical evidence of the practical consequences of the nonconsistency when point estimation of the variance of complex estimators is of interest.
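A toy contrast between the two rounding strategies discussed above (the weight distribution and all numbers are assumptions): nearest-integer rounding is deterministic but can shift the total size of the implied pseudo-population, whereas a randomized rounding reproduces each weight, and hence the total, in expectation.

```python
import numpy as np

rng = np.random.default_rng(7)

def nearest_integer(w):
    """Deterministic rounding to the nearest integer (the computationally cheap option)."""
    return np.rint(w).astype(int)

def randomized(w):
    """Stochastic rounding: floor(w) plus a Bernoulli(frac(w)) unit, unbiased for w."""
    base = np.floor(w).astype(int)
    return base + (rng.random(len(w)) < (w - base)).astype(int)

w = 1.0 / rng.uniform(0.02, 0.9, size=300)        # design weights 1/pi from a pps-like design
print("sum of weights        ", round(w.sum(), 1))
print("nearest-integer total ", nearest_integer(w).sum())
print("randomized total (mean of 1000 draws)",
      round(np.mean([randomized(w).sum() for _ in range(1000)]), 1))
```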
- Published
- 2019
12. Methodological perspectives for surveying rare and clustered populations: towards a sequentially adaptive approach
- Author
-
Federico Andreis, Emanuela Furfaro, Fulvia Mecatti, Perna, C, Pratesi, M, Ruiz-Gazen, A, Andreis, F, Furfaro, E, and Mecatti, F
- Subjects
Clustered populations ,Spatial patterns ,Prevalence surveys ,Logistic constraints ,Sampling strategies ,Survey sampling ,Sampling (statistics) ,Poisson sampling ,Horvitz-Thompson estimation ,WHO guidelines ,SECS-S/01 - STATISTICA ,Settore SECS-S/05 - STATISTICA SOCIALE - Abstract
Sampling a rare and clustered trait in a finite population is challenging: traditional sampling designs usually require a large sample size in order to obtain reasonably accurate estimates, resulting in a considerable investment of resources to detect a small number of cases. A notable example is the case of WHO's tuberculosis (TB) prevalence surveys, which are crucial for countries bearing a high TB burden even though the prevalence of cases is still below 1%. In the latest WHO guidelines, spatial patterns are not explicitly accounted for, with the risk of missing a large number of cases; moreover, cost and logistic constraints can pose further problems. After reviewing the methodology in use by WHO, adaptive and sequential approaches are discussed as natural alternatives to overcome the limitations of current practice. A small simulation study is presented to highlight possible advantages and limitations of these alternatives, and an integrated approach, combining adaptive and sequential features in a single sampling strategy, is discussed as a promising methodological perspective.
- Published
- 2018
13. Measuring Latent Variables in Space and/or Time: A Gender Statistics exercise
- Author
-
F Crippa, Fulvia Mecatti, Gaia Bertarelli, Skiadas C., Skiadas C., Bertarelli, G, Crippa, F, and Mecatti, F
- Subjects
Multivariate statistics ,Longitudinal data ,Latent variable ,Markov model ,Structural equation modeling ,Latent clustering ,Spatial ordering ,Gender gap ,SECS-S/01 - STATISTICA ,Settore SECS-S/05 - Statistica Sociale - Abstract
This paper concerns a Multivariate Latent Markov Model recently introduced in the literature for estimating latent traits in the social sciences. Based on its ability to deal simultaneously with longitudinal and spatial data, the model is proposed for situations where the latent response variable is expected to have a time and space dynamic of its own, as an innovative alternative to popular methodologies such as the construction of composite indicators and structural equation modeling. The potential of the proposed model, and its added value with respect to the traditional weighted-composition methodology, are illustrated via an empirical Gender Statistics exercise, focused on the gender gap as the latent status to be measured and based on supranational official statistics for 30 European countries in the period 2010–2015.
- Published
- 2018
14. Measuring Latent Variables in Space and/or Time: A Gender Statistics exercise
- Author
-
Bertarelli, Gaia, Franca, Crippa, and Fulvia, Mecatti
- Subjects
Latent clustering ,Spatial ordering ,Gender Gap ,Longitudinal data ,Settore SECS-S/05 - Statistica Sociale - Published
- 2017
15. A smooth subclass of graphical models for chain graph: towards measuring gender gaps
- Author
-
Fulvia Mecatti, Federica Nicolussi, Nicolussi, F, and Mecatti, F
- Subjects
Statistics and Probability ,Marginal models ,Marginal log-linear models ,Logistic regression ,Markov properties ,Conditional independence ,Graphical models ,Chain graph ,Categorical variables ,Contingency table ,Gender statistics ,Settore SECS-S/01 - STATISTICA - Abstract
Recent gender literature shows a growing demand for sound statistical methods for analysing gender gaps, for capturing their complexity and for exploring the pattern of relationships among a collection of observable variables selected in order to disentangle the latent trait of gender equity. In this paper we consider parametric Hierarchical Marginal Models, which apply to binary and categorical data, as a promising statistical tool for gender studies. We explore the potential of Chain Graphical Models, focusing on a special smooth subclass known as Graphical Models of type II as recently developed (Nicolussi in Marginal parameterizations for conditional independence models and graphical models for categorical data, 2013), i.e. an advanced methodology for untangling and highlighting any dependence/independence pattern between gender and a set of covariates of mixed nature, whether categorical, ordinal or quantitative. With respect to traditional methodologies for treating categorical variables, such as logistic regression and the chi-squared test for contingency tables, the proposed model leads to a full multivariate analysis, allowing the effect of each dependent variable to be isolated from that of all other response variables. At the same time, the resulting graph offers an immediate visual idea of the association pattern in the entire set of study variables. The empirical performance of the method is tested using data from a recent survey on sexual harassment issues within universities, granted by the Equal Opportunities Committee of the University of Milano-Bicocca (Italy).
- Published
- 2016
16. Contributions to Sampling Statistics
- Author
-
Fulvia Mecatti, Pier Luigi Conti, and Maria Giovanna Ranalli
- Subjects
- Sampling (Statistics)--Congresses
- Abstract
This book contains a selection of the papers presented at the ITACOSM 2013 Conference, held in Milan in June 2013, which was intended as an international forum for scientific discussion of developments in the theory and application of survey sampling methodologies in the human and natural sciences. The book gathers research papers carefully selected from both invited and contributed sessions of the conference. As a whole, it is a relevant contribution to various key aspects of sampling methodology and techniques; it deals with hot topics in sampling theory, such as calibration, quantile regression and multiple-frame surveys, and with innovative methodologies in important areas of both sampling theory and applications. Contributions cut across current sampling methodologies such as interval estimation for complex samples, randomized response, the bootstrap, weighting, modelling, imputation, small area estimation and the effective use of auxiliary information; applications cover a wide and growing range of subjects in official household surveys, Bayesian networks, auditing, business and economic surveys, geostatistics and agricultural statistics. The book is an up-to-date, high-level reference addressed to researchers, professionals and practitioners in many fields.
- Published
- 2014
17. Resampling from finite populations: An empirical process approach
- Author
-
Conti, Pier Luigi, Marella, Daniela, and Fulvia, Mecatti
- Subjects
sampling ,bootstrap ,asymptotics - Published
- 2015
18. Bootstrap algorithms for risk models with auxiliary variable and complex samples
- Author
-
Giancarlo Manzi, Fulvia Mecatti, Manzi, G, and Mecatti, F
- Subjects
Statistics and Probability ,General Mathematics ,Probability-proportional-to-size sampling ,Ratio model ,Ratio estimator ,Regression estimator ,Resampling methods ,Bootstrap ,Jackknife resampling ,Variance estimation ,Sampling design ,Operational risk management ,SECS-S/01 - STATISTICA - Abstract
Resampling methods are often invoked in risk modelling when the stability of estimators of model parameters has to be assessed. The accuracy of variance estimates is crucial, since operational risk management affects strategies, decisions and policies. However, auxiliary variables and the complexity of the sampling design are seldom taken into proper account in variance estimation. In this paper, bootstrap algorithms for finite population sampling are proposed in the presence of an auxiliary variable and of complex samples. Results from a simulation study exploring the empirical performance of some bootstrap algorithms are presented.
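A stripped-down example of the setting described above, assuming simple random sampling and the classical ratio estimator with a known auxiliary total; the paper's algorithms address complex samples and probability-proportional-to-size designs, so this is only a baseline sketch with invented data.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical finite population with auxiliary variable x known for every unit
N = 5000
x = rng.gamma(3.0, 10.0, N)
y = 1.8 * x + rng.normal(0.0, 8.0, N)
X_total = x.sum()

s = rng.choice(N, size=200, replace=False)         # simple random sample without replacement

def ratio_total(ys, xs):
    """Classical ratio estimator of the y-total using the known auxiliary total."""
    return X_total * ys.mean() / xs.mean()

est = ratio_total(y[s], x[s])
boot = [ratio_total(y[s][idx], x[s][idx])
        for idx in (rng.integers(0, len(s), len(s)) for _ in range(1000))]
print(f"ratio estimate {est:.0f}  bootstrap SE {np.std(boot):.0f}  true total {y.sum():.0f}")
```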
- Published
- 2009
19. Analysis of tuberculosis prevalence surveys: new guidance on best-practice methods
- Author
-
Rhian Daniel, Sian Floyd, Fulvia Mecatti, Ikushi Onozaki, Charalambos Sismanidis, Philippe Glaziou, Edine W. Tiemersma, Katherine Floyd, Norio Yamada, Emily Bloss, Rosalind Vianzon, Jaime Y Lagahid, Floyd, S, Sismanidis, C, Yamada, N, Daniel, R, Lagahid, J, Mecatti, F, Vianzon, R, Bloss, E, Tiemersma, E, Onozaki, I, Glaziou, P, and Floyd, K
- Subjects
Tuberculosis ,Epidemiology ,Public health ,Best practice ,Prevalence survey ,Inverse probability weighting ,Missing data ,Cluster sampling ,Spatially clustered sampling ,Imputation ,Nationwide TB burden estimation ,Simulation ,Demography ,SECS-S/01 - STATISTICA ,MED/01 - STATISTICA MEDICA - Abstract
Background: An unprecedented number of nationwide tuberculosis (TB) prevalence surveys will be implemented between 2010 and 2015, to better estimate the burden of disease caused by TB and to assess whether the global targets for TB control set for 2015 are achieved. It is crucial that results are analysed using best-practice methods. Objective: To provide new theoretical and practical guidance on best-practice methods for the analysis of TB prevalence surveys, including analyses at the individual as well as the cluster level and correction for biases arising from missing data. Analytic methods: TB prevalence surveys have a cluster sample survey design; typically 50–100 clusters are selected, with 400–1000 eligible individuals in each cluster. The strategy recommended by the World Health Organization (WHO) for diagnosing pulmonary TB in a nationwide survey is symptom and chest X-ray screening, followed by smear microscopy and culture examinations for those with an abnormal X-ray and/or TB symptoms. Three possible methods of analysis are described and explained. Method 1 is restricted to participants, and individuals with missing data on smear and/or culture results are excluded. Method 2 includes all eligible individuals irrespective of participation, through multiple imputation of missing values. Method 3 is restricted to participants, with multiple imputation of missing values for individuals with missing smear and/or culture results, and inverse probability weighting to represent all eligible individuals. The results of each method are compared and illustrated using data from the 2007 national TB prevalence survey in the Philippines. Simulation studies are used to investigate the performance of each method. Key findings: A cluster-level analysis and Methods 1 and 2 gave similar prevalence estimates (660 per 100,000 aged ≥ 10 years), with a higher estimate using Method 3 (680 per 100,000). Simulation studies for each of four plausible scenarios show that Method 3 performs best, with Method 1 systematically underestimating TB prevalence by around 10%. Conclusion: Both cluster-level and individual-level analyses should be conducted, and individual-level analyses should be conducted both with and without multiple imputation of missing values. Method 3 is the safest approach for correcting the bias introduced by missing data and provides the single best estimate of TB prevalence at the population level.
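To make the weighting step of Method 3 concrete, the sketch below estimates participation probabilities within cluster-by-sex strata and weights participants accordingly; the multiple imputation of missing smear/culture results among participants is omitted, and the data frame, strata and prevalence level are illustrative assumptions rather than survey data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# hypothetical eligible-individual file: 20 clusters x 500 eligible individuals
n = 10_000
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(20), 500),
    "male": np.tile([0, 1], n // 2),
    "participated": rng.random(n) < 0.85,
})
# bacteriologically positive TB observed only among participants (NaN otherwise)
df["tb"] = np.where(df["participated"], (rng.random(n) < 0.007).astype(float), np.nan)

# Method 3 flavour: inverse probability weighting so that participants represent
# all eligible individuals, with participation probability estimated within
# cluster x sex strata (a simplification of the model used in practice)
df["p_part"] = df.groupby(["cluster", "male"])["participated"].transform("mean")
part = df[df["participated"]]
prev = np.average(part["tb"], weights=1.0 / part["p_part"]) * 1e5
print(f"IPW-adjusted prevalence: {prev:.0f} per 100,000")
```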
- Full Text
- View/download PDF
20. Modelling Measurement Errors by Object-Oriented Bayesian Networks
- Author
-
Daniela Marella, Paola Vicard, Federica Nicolussi, and Fulvia Mecatti
- Subjects
Bayesian Network ,Measurement Error - Published
- 2013