27,918 results
Search Results
2. Utilization of Online Past Examination Papers and Academic Performance of Information Technology Students at Jomo Kenyatta University- Eldoret Campus
- Author
-
Grace Irura, Chelulei Kennedy Kipkosgei, and Paul Maku Gichohi
- Subjects
Estimation ,Research design ,Medical education ,Government ,010504 meteorology & atmospheric sciences ,Descriptive statistics ,business.industry ,Validity ,Information technology ,010501 environmental sciences ,01 natural sciences ,Test (assessment) ,Stratified sampling ,business ,Psychology ,0105 earth and related environmental sciences - Abstract
Purpose: The aim of this study was to determine the influence of the utilization of online past examination papers on the academic performance of IT students at Jomo Kenyatta University. Methodology: A descriptive survey research design was used. Respondents were the 105 undergraduate students in the Information Technology Department and 2 librarians at Jomo Kenyatta University of Agriculture and Technology-Eldoret Campus. Students were sampled using stratified sampling, while the head of library and the library staff member in charge of e-resources at JKUAT Eldoret Campus were purposively included in the study. Primary data were collected from students using closed-ended questionnaires, while an interview guide was used with the librarians. To ensure validity and reliability, the research instruments were pre-tested on 20 undergraduate students in the Department of Information Technology at Mount Kenya University-Eldoret Campus. Descriptive statistics such as means, percentages, frequencies and standard deviations were computed, and SPSS (version 22) was used to analyse the data. Univariate and multiple regression analyses were used to test the hypotheses and the overall model, respectively. The results were presented using tables and explanations. Results: There is a positive and significant relationship between online past examination papers and the academic performance of IT undergraduate students at Jomo Kenyatta University of Agriculture and Technology - Eldoret Campus. The study had a mean of 4.7 and a standard deviation of 0.58. Online past examination papers had an R estimate of 0.715. The p-value of the constant was significant (.000); consequently, the R-square value was used. The R-square estimate of 0.711 implied that online past examination papers predicted 71.1% of the variance in academic performance. It had a beta of 0.504 at p
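As a toy illustration of the univariate regression analysis this abstract summarizes (a sketch only; all data below are invented and will not reproduce the paper's R, R-square, or beta values):

```python
import numpy as np

# Toy sketch of the univariate regression analysis summarized above: regress
# an academic-performance score on a past-paper-utilization score and report
# R^2. All data are invented; they will not reproduce the paper's estimates.
rng = np.random.default_rng(0)
utilization = rng.uniform(1.0, 5.0, size=105)   # 105 students, Likert-like scores
performance = 0.5 * utilization + rng.normal(0.0, 0.3, size=105)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones_like(utilization), utilization])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)

# R^2: share of variance in performance explained by utilization
fitted = X @ beta
ss_res = np.sum((performance - fitted) ** 2)
ss_tot = np.sum((performance - performance.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"slope = {beta[1]:.3f}, R^2 = {r_squared:.3f}")
```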
- Published
- 2020
3. [Papers] Interpretable Convolutional Neural Network Including Attribute Estimation for Image Classification
- Author
-
Kazaha Horii, Takahiro Ogawa, Miki Haseyama, and Keisuke Maeda
- Subjects
attribute estimation ,Estimation ,Contextual image classification ,business.industry ,Computer science ,Interpretable convolutional neural network ,Pattern recognition ,Computer Graphics and Computer-Aided Design ,Convolutional neural network ,Signal Processing ,Media Technology ,Artificial intelligence ,business ,image classification - Abstract
An interpretable convolutional neural network (CNN) including attribute estimation for image classification is presented in this paper. Although CNNs perform highly accurate image classification, the reason for the classification results obtained by the neural networks is not clear. In order to provide interpretation of CNNs, the proposed method estimates attributes, which explain elements of objects, in an intermediate layer of the network. This enables improvement of the interpretability of CNNs, and it is the main contribution of this paper. Furthermore, the proposed method uses the estimated attributes for image classification in order to enhance its accuracy. Consequently, the proposed method not only provides interpretation of CNNs but also realizes improvement in the performance of image classification.
- Published
- 2020
4. Contributions of the UK biobank high impact papers in the era of precision medicine
- Author
-
Peter Glynn and Philip Greenland
- Subjects
Publishing ,Estimation ,Gerontology ,medicine.medical_specialty ,Epidemiology ,business.industry ,030204 cardiovascular system & hematology ,Precision medicine ,Biobank ,United Kingdom ,03 medical and health sciences ,0302 clinical medicine ,medicine ,Humans ,030212 general & internal medicine ,Metric (unit) ,Personalized medicine ,Journal Impact Factor ,Periodicals as Topic ,Precision Medicine ,business ,Risk assessment ,Biological Specimen Banks ,Cohort study - Abstract
To review the highest impact studies published from the UK Biobank and assess their contributions to "precision medicine." We reviewed 140 of 689 studies published between 2008 and May 2019 from the UK Biobank deemed to be high impact by citations, alternative metric data, or publication in a high impact journal. We classified studies according to whether they (1) were largely methods papers, (2) largely replicated prior findings or associations, (3) generated novel findings or associations, (4) developed risk prediction models that did not yield clinically significant improvements in risk estimation over prior models or (5) developed models that produced significant improvements in individualized risk assessment, targeted screening, or targeted treatment. This final category represents "precision medicine." We classified 15 articles as category 1, 33 as category 2, 85 as category 3, six as category 4, and one as category 5. In this assessment of the first 7 years of the UK Biobank and first 4 years of genetic data availability, the majority of high impact UK Biobank studies either replicated known associations or generated novel associations without clinically relevant improvements in risk prediction, screening, or treatment. This information may be useful for designers of other cohort studies in terms of input to design and follow-up to facilitate precision medicine research.
- Published
- 2020
5. Cross-Validation, Risk Estimation, and Model Selection: Comment on a Paper by Rosset and Tibshirani
- Author
-
Stefan Wager
- Subjects
Statistics and Probability ,Estimation ,Computer science ,business.industry ,Model selection ,05 social sciences ,Machine learning ,computer.software_genre ,01 natural sciences ,Cross-validation ,Task (project management) ,010104 statistics & probability ,0502 economics and business ,Range (statistics) ,Artificial intelligence ,0101 mathematics ,Statistics, Probability and Uncertainty ,business ,computer ,050205 econometrics - Abstract
How best to estimate the accuracy of a predictive rule has been a longstanding question in statistics. Approaches to this task range from simple methods like Mallows' Cp to algorithmic techniques l...
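The cross-validation approach to risk estimation discussed here can be sketched as follows (synthetic data; a minimal illustration, not the paper's analysis):

```python
import numpy as np

# Minimal sketch of K-fold cross-validation for estimating the prediction
# error (risk) of a linear rule; the data are synthetic, for illustration only.
rng = np.random.default_rng(1)
n, k = 120, 5
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # true noise variance is 1

indices = rng.permutation(n)
folds = np.array_split(indices, k)

fold_errors = []
for test_idx in folds:
    train_idx = np.setdiff1d(indices, test_idx)
    # Fit slope and intercept on the training folds only
    coeffs = np.polyfit(x[train_idx], y[train_idx], deg=1)
    preds = np.polyval(coeffs, x[test_idx])
    fold_errors.append(np.mean((y[test_idx] - preds) ** 2))

cv_risk = float(np.mean(fold_errors))
print(f"5-fold CV estimate of mean squared prediction error: {cv_risk:.3f}")
```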
- Published
- 2020
6. Short Paper: Evaluation of Location Estimation Method That Focuses on Geographical Proximity of Friends
- Author
-
Masahiro Tani, Keisuke Ikeda, and Kojima Kazufumi
- Subjects
DBSCAN ,Estimation ,Information retrieval ,Computer science ,media_common.quotation_subject ,Short paper ,02 engineering and technology ,Friendship ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Social media ,Noise (video) ,Function (engineering) ,media_common - Abstract
Our proposed method focuses on offline friends who are acquainted with a target user in the real world. We use Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to distinguish between online friends of a target user who are also offline friends and online friends who are acquainted with the target user only in cyberspace. By utilizing the probability of friendship as a function of the geographical distance between users’ home locations, our proposed method achieves an estimation accuracy 1.7 percentage points better than that of a conventional method. We have also been able to verify the effectiveness of an optimization method that narrows down candidate areas for a target user’s home location.
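The DBSCAN step described above can be sketched with a minimal implementation (the coordinates, eps, and min_pts below are invented for illustration; the paper's actual features and parameters are not given here):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point (-1 = noise, 0..k = cluster)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue  # skip visited points and non-core points
        visited[i] = True
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:  # expand the cluster outward from core point i
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border or core point joins the cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    queue.extend(neighbors[j])  # j is also core: keep growing
        cluster += 1
    return labels

# Invented example: 8 offline friends live near the target user's home,
# while 4 online-only friends are scattered; DBSCAN flags the latter as noise.
rng = np.random.default_rng(2)
home = np.array([35.0, 139.0])                       # hypothetical lat/lon
offline = home + rng.normal(0.0, 0.05, size=(8, 2))
online_only = rng.uniform(0.0, 90.0, size=(4, 2))
labels = dbscan(np.vstack([offline, online_only]), eps=0.5, min_pts=4)
print(labels)
```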
- Published
- 2019
7. Randomization, blinding, data handling and sample size estimation in papers published in Veterinary Anaesthesia and Analgesia in 2009 and 2019
- Author
-
Brittany A. Munro, Paige Bergen, and Daniel Sj Pang
- Subjects
Estimation ,education.field_of_study ,Veterinary medicine ,Randomization ,Blinding ,General Veterinary ,business.industry ,Population ,Confidence interval ,Checklist ,Random Allocation ,Sample size determination ,Anesthesia ,Sample Size ,Medicine ,Animals ,Pain Management ,Analgesia ,education ,Literature survey ,business - Abstract
OBJECTIVE To evaluate reporting of items indicative of bias and weak study design. STUDY DESIGN Literature survey. POPULATION Papers published in Veterinary Anaesthesia and Analgesia. METHODS Reporting of randomization, blinding, sample size estimation and data exclusion was compared for papers published 10 years apart. A reporting rate of more than 95% was considered ideal. The availability of data supporting results in a publicly accessible repository was also assessed. Selected papers were randomized and identifiers removed for review, with data from 59 (57 in 2009, two in 2008) and 56 (52 in 2019, four in 2018) papers analyzed. Items were categorized for completeness of reporting using a previously published operationalized checklist. Two reviewers reviewed all papers independently. RESULTS Full reporting of randomization increased over time from 13.6% to 85.7% [95% confidence interval (CI), 57.8-86.6%; p < 0.0001], as did sample size estimation (from 0% to 20%; 95% CI, 7.6-32.4%; p = 0.002). Reporting of blinding (49.2% and 50.0%; 95% CI, -18.3% to 20.0%; p = 1.0) and exclusions of samples/animals (39.0% and 50.0%; 95% CI, -8.8% to 30.8%; p = 0.3) did not change significantly. Data availability was low (2008/2009, zero papers; 2018/2019, two papers). None of the items studied exceeded the predetermined ideal reporting rate. CONCLUSIONS AND CLINICAL RELEVANCE These results indicate that reporting quality remains low, with a risk of bias.
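A comparison of proportions like the randomization result above can be recomputed as a sketch. The counts (8/59 and 48/56) are inferred from the abstract's percentages and sample sizes, and a simple Wald interval is used, so the figures will not exactly match the CI quoted in the abstract, which was likely computed with a different interval method:

```python
import math

# Illustrative Wald 95% CI for the change in full reporting of randomization.
# Counts are inferred from the abstract (13.6% of 59 papers ~ 8; 85.7% of 56
# papers ~ 48); the journal's CI method likely differed, so values won't match.
p1, n1 = 8 / 59, 59      # earlier sample (2008/2009)
p2, n2 = 48 / 56, 56     # later sample (2018/2019)

diff = p2 - p1
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.1%}, Wald 95% CI = [{lo:.1%}, {hi:.1%}]")
```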
- Published
- 2021
8. Discussion of the Paper 'Prediction, Estimation, and Attribution' by B. Efron
- Author
-
Emmanuel J. Candès and Chiara Sabatti
- Subjects
Statistics and Probability ,Estimation ,media_common.quotation_subject ,05 social sciences ,01 natural sciences ,010104 statistics & probability ,Core (game theory) ,Reading (process) ,0502 economics and business ,Mathematics education ,Active listening ,0101 mathematics ,Statistics, Probability and Uncertainty ,Attribution ,Psychology ,050205 econometrics ,media_common - Abstract
We enjoyed reading Professor Efron’s (Brad) paper just as much as we enjoyed listening to his June 2019 lecture in Leiden. One of the core values underlying statistical research is in how it enable...
- Published
- 2020
9. Measuring Abundance: Methods for the Estimation of Population Size and Species Richness. Data in the Wild Series. By Graham J. G. Upton. Exeter (United Kingdom): Pelagic Publishing. $97.13 (hardcover); $45.33 (paper). x + 226 p.; ill.; index of examples and general index. ISBN: 978-1-78427-232-6 (hc); 978-1-78427-231-9 (pb); 978-1-78427-233-3 (eb). 2020
- Author
-
Jo A. Werba
- Subjects
Estimation ,Series (stratigraphy) ,Index (economics) ,Geography ,Abundance (ecology) ,Ecology ,Population size ,Pelagic zone ,Species richness ,General Agricultural and Biological Sciences - Published
- 2021
10. Three papers in regional dynamics and panel econometrics
- Author
-
Kevin Davey Duncan
- Subjects
Estimation ,Spillover effect ,Enterprise value ,Econometrics ,Economics ,Endogeneity ,Wald test ,Capital Purchase Program ,Statistical hypothesis testing ,Panel data - Abstract
This dissertation includes three chapters that cover broad topics in economics. The first chapter explores how the US Government's Capital Purchase Program, a large capital injection into local and regional banks through a stock purchase agreement, impacted local establishment dynamics such as entry, exit, employment expansion, and employment contraction following the 2008 Financial Crisis. The Capital Purchase Program disbursed over $200 billion to banks in the hope of preventing failures and easing tightened lending conditions. I estimate the direct effects of a county having a bank receive Capital Purchase Program funds on local business dynamics in the seven years following treatment, as well as spillover effects as entrepreneurs and businesses in neighboring regions travel to gain access to credit. Estimates show the CPP had no effect on establishment entry and exit, nor on employment expansion and contraction. This paper establishes that the business-lending aims of the CPP were not realized in the communities and regions that received funds, and casts further doubt on meaningful pass-through of CPP funds to desirable local economic activity.

The second chapter develops a joint-hypothesis-centered Wald test over fixed effects in large-N, small-T panel data models with symmetric serial correlation within cross-sectional observations. This enables joint hypothesis tests over inconsistently estimated fixed effects, such as the traditional varying-intercept model as well as models with individual-specific slope coefficients. I establish two different sets of assumptions under which feasible tests exist. The first assumption requires that individual errors follow a stationary AR(p) process. Under this assumption, all second and fourth cross-product moments can be consistently estimated while allowing for individual-specific hypotheses and covariates that vary across individuals and time with individual-specific slopes.
The second feasible test requires individuals to have coefficient slopes that are shared among all individuals in a known grouping structure under the null. This set of assumptions enables estimation of a completely unconstrained variance-covariance matrix and higher cross-product moments for individuals. Examples of these tests arise in establishing latent panel structure (such as unobserved groupings of individuals), comparing different models of teacher or firm value added against each other, and testing whether or not fixed effects can be approximated by Mundlak-Chamberlain devices.

Finally, the third chapter estimates how messages displayed on Dynamic Message Boards, large signs either adjacent to or displayed above roads, impact near-to-sign accidents. In this research, I examine the effect of traffic-related messages displayed on major highways, such as "drive sober," "x deaths on roads this year," and "click it or ticket," on reported near-to-sign traffic accidents. This provides estimates of the impact of different types of nudges on road-safety behavior. To estimate the causal effect of these nudges, we build a new high-frequency panel data set using information on the time and location of messages, crashes, overall traffic levels, and weather conditions from the state of Vermont over a three-year period. I estimate models that control for endogeneity of displayed messages, or allow for spillover effects from neighboring messages.
- Published
- 2020
11. An Estimation of the Evolution of Waste Generated by Direct and Indirect Suppliers of the Spanish Paper Industry
- Author
-
José A. Camacho-Ballesta, Soraya María Ruíz-Peñalver, and Mercedes Rodríguez
- Subjects
Estimation ,Engineering ,Environmental Engineering ,Renewable Energy, Sustainability and the Environment ,business.industry ,05 social sciences ,Waste paper ,010501 environmental sciences ,Raw material ,Pulp and paper industry ,01 natural sciences ,0502 economics and business ,Cleaner production ,050207 economics ,business ,Waste Management and Disposal ,Life-cycle assessment ,0105 earth and related environmental sciences ,Production system - Abstract
The generation of waste by the paper industry has attracted great attention over recent decades, among other reasons because the demand for recycled waste paper has considerably increased. As the paper industry is closely intertwined with the rest of the industries in the production system, its activity exerts both a direct and an indirect influence on the volume of waste generated by its supplier industries. The purpose of this study is to shed some light on the evolution of the volume of waste generated by the suppliers of the Spanish paper industry over the period 2005–2010 using an Economic Input–Output Life Cycle Assessment model. In particular, we focus on the evolution of the volume of waste generated by firms within the paper industry itself. We employ data from different waste surveys conducted by the Spanish National Statistics Institute and input–output tables extracted from the World Input–Output Database. The results obtained show that the waste generated by suppliers amounted to 1250 thousand tonnes in 2010, an important volume if we take into account that the waste generated by the paper industry itself in 2010 amounted to 1739 thousand tonnes. The analysis of the evolution of the waste generated by suppliers reveals a high degree of concentration, both in terms of industries and in terms of waste categories. In addition, the decrease in the volume of waste generated by supplier firms within the paper industry itself reflects not only the growing importance of recycled paper as a raw material for paper-making but also the substantial investments in technology made by this industry in Spain.
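The core of the Economic Input–Output LCA logic described above is the Leontief inverse, which propagates a sector's final demand through all supplier industries. A sketch with an invented 3-sector economy (the matrix, demand, and waste coefficients are illustrative only, not the Spanish data):

```python
import numpy as np

# Sketch of the EIO-LCA logic: total (direct + indirect) sector output needed
# to meet final demand f is x = (I - A)^(-1) f, and waste follows from
# per-unit-output waste coefficients w. A, f, and w are invented.
A = np.array([[0.10, 0.20, 0.05],   # rows/cols: e.g. forestry, paper, chemicals
              [0.30, 0.15, 0.25],
              [0.05, 0.10, 0.05]])
f = np.array([0.0, 100.0, 0.0])     # final demand for the paper sector only
w = np.array([0.40, 0.20, 0.10])    # tonnes of waste per unit of sector output

# The Leontief inverse propagates demand through all supplier industries
x = np.linalg.solve(np.eye(3) - A, f)

direct_waste = w[1] * f[1]          # waste if only the paper sector produced
total_waste = float(w @ x)          # also counts waste induced at suppliers
print(f"direct: {direct_waste:.1f} t, direct + indirect: {total_waste:.1f} t")
```

The gap between the two figures is exactly the "indirect" supplier waste the study measures.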
- Published
- 2016
12. Temporal Summarization of Scholarly Paper Collections by Semantic Change Estimation: Case Study of CORD-19 Dataset
- Author
-
Adam Jatowt, Muhammad Syafiq Mohd Pozi, and Yukiko Kawai
- Subjects
Estimation ,2019-20 coronavirus outbreak ,Coronavirus disease 2019 (COVID-19) ,Computer science ,Event (computing) ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,05 social sciences ,010501 environmental sciences ,01 natural sciences ,Data science ,Automatic summarization ,Semantic change ,0502 economics and business ,050207 economics ,0105 earth and related environmental sciences - Abstract
The COVID-19 pandemic was the crucial world event at the beginning of 2020. Studies on coronaviruses have, however, been carried out for several decades, with recent research papers published on a weekly basis. We demonstrate a simple approach to exploring the CORD-19 dataset that provides a high-level overview of important semantic changes that occurred over time. Our method aims to support better understanding of large domain-specific collections of scholarly publications that span long time periods and can be regarded as complementary to frequency-based analysis.
- Published
- 2020
13. Conference paper
- Author
-
Xiaohang Fang, Riyaz Ismail, Martin H. Davy, and Nikola Sekularac
- Subjects
Estimation ,Control theory ,Computer science ,Phenomenon ,Term (time) - Abstract
In this study, the role of turbulence-chemistry interaction in diesel spray auto-ignition, flame stabilization and end-of-injection phenomena is investigated under engine-relevant “Spray A” conditions. A recently developed diesel spray combustion modeling approach, Conditional Source-term Estimation (CSE-FGM), is coupled with a Reynolds-averaged Navier-Stokes (RANS) simulation framework to study the details of spray combustion. The detailed chemistry mechanism is included through the Flamelet Generated Manifold (FGM) method. Both unsteady and steady flamelet solutions are included in the manifold to account for the auto-ignition process and the subsequent flame propagation in a diesel spray. Conditionally averaged chemical source terms are closed by the conditional scalars obtained in the CSE routine. Both non-reacting and reacting spray jets are computed over a wide range of Engine Combustion Network (ECN) diesel “Spray A” conditions. The reacting spray results are compared with simulations using a homogeneous reactor combustion model and a flamelet combustion model with the same chemical mechanism. The present study represents the first application of CSE to a diesel spray. The non-reacting liquid/vapour penetration, the mean and RMS mixture fraction, the reactive region, the flame lift-off and the ignition delay show good agreement with literature data from an optically accessible combustion vessel over a wide range of tested conditions. The CSE-FGM model also shows a better capability in predicting the end-of-injection events in diesel spray combustion. Overall, the CSE-FGM model is shown to capture the experimental trends well, both quantitatively and qualitatively.
- Published
- 2020
14. Position Paper: Proposing the Transit Heat Island Concept for Dynamic Spatio-Temporal Estimation of Urban Heat Islands in Urban Areas
- Author
-
Mahmoud Al-Ayyoub and Wiesam Essa
- Subjects
Estimation ,Geography ,business.industry ,Smart city ,Environmental resource management ,Distribution (economics) ,Temperature difference ,Urban heat island ,business ,Transit (satellite) - Abstract
In this article, we discuss how we can advance our knowledge and estimation of the distribution of temperature across urban regions by going beyond the commonly known urban heat islands (UHI) phenomena, which identify the temperature difference between a city and its rural surrounding. Transit heat island (THI) focuses more on the transit routes, which are among the hottest intra-urban land covers. We also discuss new efficient and effective means of collecting data with enough spatio-temporal resolution to model and analyze THI with various smart city applications in mind.
- Published
- 2020
15. Review Paper on Soil Loss Estimation Using RUSLE
- Author
-
Amare Dg
- Subjects
Estimation ,Soil loss ,Environmental science ,Soil science - Published
- 2020
16. The dynamic general nesting spatial econometric model for spatial panels with common factors
- Author
-
Elhorst, J. Paul and Research programme EEF
- Subjects
Spatial spillovers ,Original Paper ,Economics and Econometrics ,Bar (music) ,Geography, Planning and Development ,Nesting (process) ,Raising (metalworking) ,Dynamic effects ,Spatial panels ,Econometric model ,C51 ,Econometrics ,Economics ,Common factors ,C21 ,Estimation ,Social Sciences (miscellaneous) ,C23 - Abstract
The dynamic general nesting spatial econometric model for spatial panels with common factors is the most advanced model currently available. It accounts for local spatial dependence by means of an endogenous spatial lag, exogenous spatial lags, and a spatial lag in the error term. It accounts for dynamic effects by means of the dependent variable lagged in time, and the dependent variable lagged in both space and time. Finally, it accounts for global cross-sectional dependence by means of cross-sectional averages or principal components with heterogeneous coefficients, which generalizes the traditional controls for time-invariant and spatial-invariant variables by unit-specific and time-specific effects. This paper provides an overview of the main arguments in favor of each of these model components, as well as some potential pitfalls.
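The components listed above can be collected in one equation. This is a sketch in common spatial-econometrics notation rather than the paper's own symbols:

```latex
% Dynamic general nesting spatial model with common factors (sketch).
% W = (w_{ij}) is the spatial weights matrix; tau, delta, eta are the time,
% space, and space-time autoregressive parameters; gamma_i' f_t are
% heterogeneous loadings on common factors; lambda is the spatial error
% parameter.
\begin{aligned}
y_{it} &= \tau\, y_{i,t-1}
        + \delta \sum_{j} w_{ij}\, y_{jt}
        + \eta \sum_{j} w_{ij}\, y_{j,t-1}
        + x_{it}'\beta
        + \sum_{j} w_{ij}\, x_{jt}'\theta
        + \gamma_i' f_t + u_{it}, \\
u_{it} &= \lambda \sum_{j} w_{ij}\, u_{jt} + \varepsilon_{it}.
\end{aligned}
```

Setting individual parameters to zero recovers the familiar special cases (e.g. λ = 0 gives the dynamic spatial Durbin model with common factors).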
- Published
- 2022
17. Meta-analysis data for 104 Energy-Economy Nexus papers
- Author
-
Agáta Kociánová, Vladimír Hajko, and Martina Buličková
- Subjects
Estimation ,Energy ,Multidisciplinary ,Computer science ,020209 energy ,Scopus ,02 engineering and technology ,lcsh:Computer applications to medicine. Medical informatics ,Data science ,GDP ,Granger causality ,Energy-Economy Nexus ,Meta-analysis ,0202 electrical engineering, electronic engineering, information engineering ,Econometrics ,lcsh:R858-859.7 ,lcsh:Science (General) ,Nexus (standard) ,Energy economics ,Data Article ,lcsh:Q1-390 ,Multinomial logistic regression - Abstract
The data presented here are manually encoded characteristics of research papers in the area of the Energy-Economy Nexus (the empirical investigation of Granger causality between energy consumption and economic growth), describing the methods, samples, and other details related to the individual estimations in the examined empirical papers. The data cover papers indexed by Scopus, published in economics journals, and written in English after the year 2000. In addition, papers were manually filtered to those that deal with Energy-Economy Nexus investigation and had at least 10 citations at the time of query (November 2015). These data are intended for meta-analysis; the associated dataset was used in Hajko [1]. An early version of the dataset was used for multinomial logit estimation in the Master's thesis by Kocianova [2].
- Published
- 2017
18. Desert island papers-A life in variance parameter and quantitative genetic parameter estimation reviewed using 16 papers
- Author
-
Robin Thompson
- Subjects
0301 basic medicine ,Mixed model ,Restricted maximum likelihood ,Scientific career ,03 medical and health sciences ,Food Animals ,Statistics ,Computer software ,Animals ,Humans ,Inbreeding ,Mathematics ,Estimation ,Likelihood Functions ,Sheep ,Models, Genetic ,Estimation theory ,0402 animal and dairy science ,04 agricultural and veterinary sciences ,General Medicine ,Variance (accounting) ,History, 20th Century ,040201 dairy & animal science ,030104 developmental biology ,Genetics, Population ,Linear Models ,Animal Science and Zoology ,Periodicals as Topic ,Algorithms ,Software - Abstract
I review my scientific career in terms of eight areas and 16 papers. The first two areas are associated with childhood. The other six are associated with residual maximum likelihood (REML), canonical transformation, inbreeding in selected populations, average information residual maximum likelihood (AIREML), the computer program ASReml and sampling-based estimation.
- Published
- 2018
19. Estimating psychological networks and their accuracy: A tutorial paper
- Author
-
Sacha Epskamp, Eiko I. Fried, Denny Borsboom, and Psychologische Methodenleer (Psychologie, FMG)
- Subjects
FOS: Computer and information sciences ,Psychological networks ,050103 clinical psychology ,Psychometrics ,Network psychometrics ,Computer science ,Stability (learning theory) ,Experimental and Cognitive Psychology ,Variation (game tree) ,Machine learning ,computer.software_genre ,Statistics - Applications ,Article ,Field (computer science) ,Stress Disorders, Post-Traumatic ,Methodology (stat.ME) ,03 medical and health sciences ,0302 clinical medicine ,Arts and Humanities (miscellaneous) ,Replicability ,Tutorial ,Developmental and Educational Psychology ,Humans ,Applications (stat.AP) ,0501 psychology and cognitive sciences ,Statistics - Methodology ,General Psychology ,Estimation ,Syntax (programming languages) ,business.industry ,05 social sciences ,Sampling (statistics) ,Bootstrap ,Dimensional Measurement Accuracy ,Female ,Neural Networks, Computer ,Psychology (miscellaneous) ,Artificial intelligence ,business ,Centrality ,computer ,030217 neurology & neurosurgery ,Psychophysiology - Abstract
The usage of psychological networks that conceptualize psychological behavior as a complex interplay of psychological and other components has gained increasing popularity in various fields of psychology. While prior publications have tackled the topics of estimating and interpreting such networks, little work has been conducted to check how accurately networks are estimated (i.e., how prone they are to sampling variation) and how stable inferences from the network structure (such as centrality indices) are (i.e., whether interpretation remains similar with fewer observations). In this tutorial paper, we aim to introduce the reader to this field and tackle the problem of accuracy under sampling variation. We first introduce the current state of the art of network estimation. Second, we provide a rationale for why researchers should investigate the accuracy of psychological networks. Third, we describe how bootstrap routines can be used to (A) assess the accuracy of estimated network connections, (B) investigate the stability of centrality indices, and (C) test whether network connections and centrality estimates for different variables differ from each other. We introduce two novel statistical methods: for (B) the correlation stability coefficient, and for (C) the bootstrapped difference test for edge-weights and centrality indices. We conducted and present simulation studies to assess the performance of both methods. Finally, we developed the free R package bootnet, which allows for estimating psychological networks in a generalized framework in addition to the proposed bootstrap methods. We showcase bootnet in a tutorial, accompanied by R syntax, in which we analyze a dataset of 359 women with posttraumatic stress disorder available online. (Comment: Accepted for publication in Behavior Research Methods.)
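Step (A), the case-resampling bootstrap for an edge weight, can be sketched as follows. This toy uses a plain correlation between two simulated symptom scores as the "edge"; bootnet itself is an R package and real network edges are typically regularized partial correlations:

```python
import numpy as np

# Toy sketch of the nonparametric (case-resampling) bootstrap for assessing
# the accuracy of an estimated network connection. Data are simulated; the
# "edge" is a plain correlation, unlike bootnet's regularized estimators.
rng = np.random.default_rng(3)
n = 359                                   # matches the example dataset's size
latent = rng.normal(size=n)
symptom_a = latent + rng.normal(0.0, 1.0, size=n)
symptom_b = latent + rng.normal(0.0, 1.0, size=n)

def edge_weight(a, b):
    return np.corrcoef(a, b)[0, 1]

observed = edge_weight(symptom_a, symptom_b)

# Resample cases with replacement, re-estimating the edge each time
boot = [edge_weight(symptom_a[idx], symptom_b[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(1000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"edge = {observed:.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```

A wide interval would signal that the edge is estimated with little accuracy, which is exactly the diagnostic the tutorial advocates.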
- Published
- 2017
20. Validation of non-participation bias methodology based on record-linked Finnish register-based health survey data : a protocol paper
- Author
-
Emma Gorman, Megan A. McMinn, Hanna Tolonen, Linsay Gray, Alastair H Leyland, Harri Rissanen, Pekka Martikainen, Tommi Härkänen, Helsinki Inequality Initiative (INEQ), Demography, Population Research Unit (PRU), Center for Population, Health and Society, Sociology, and University of Helsinki
- Subjects
Male ,ALCOHOL-CONSUMPTION ,0302 clinical medicine ,Research Methods ,Protocol ,Medicine ,030212 general & internal medicine ,Registries ,Finland ,education.field_of_study ,RESPONDENTS ,public health ,General Medicine ,Middle Aged ,3. Good health ,5141 Sociology ,Female ,Record linkage ,Adult ,medicine.medical_specialty ,Adolescent ,Medical Records Systems, Computerized ,Population ,PARTICIPATION ,Sample (statistics) ,Synthetic data ,03 medical and health sciences ,Young Adult ,Humans ,COHORT ,RATES ,education ,Aged ,Retrospective Studies ,Protocol (science) ,Estimation ,OLDER ,Actuarial science ,Models, Statistical ,business.industry ,Public health ,CAUSE-SPECIFIC MORTALITY ,Health Surveys ,Survey data collection ,Patient Participation ,business ,FOLLOW-UP ,030217 neurology & neurosurgery - Abstract
Introduction: Decreasing participation levels in health surveys pose a threat to the validity of estimates intended to be representative of their target population. If participants and non-participants differ systematically, the results may be biased. The application of traditional non-response adjustment methods, such as weighting, can fail to correct for such biases, as estimates are typically based only on the sociodemographic information available. A dedicated methodology for making inferences about non-participants therefore offers an advance: it employs survey data linked to administrative health records, with reference to data on the general population. We aim to validate this methodology in a register-based setting, where individual-level data on participants and non-participants are available, taking alcohol consumption estimation as the exemplar focus.

Methods and analysis: We made use of the selected sample of the Health 2000 survey conducted in Finland and a separate register-based sample of the contemporaneous population, with follow-up until 2012. Finland has nationally representative administrative and health registers available for individual-level record linkage to the Health 2000 survey participants and invited non-participants, and to the population sample. By comparing the population sample and the participants, synthetic observations representing the non-participants may be generated, as per the developed methodology. We can then compare the distribution of the synthetic non-participants with the true distribution from the register data. Multiple imputation was then used to estimate alcohol consumption based on both the actual and synthetic data for non-participants, and the estimates can be compared to evaluate the methodology's performance.

Ethics and dissemination: Ethical approval and access to the Health 2000 survey data and to data from administrative and health registers have been given by the Health 2000 Scientific Advisory Board, Statistics Finland and the National Institute for Health and Welfare. The outputs will include two publications in public health and statistical methodology journals, as well as conference presentations.
- Published
- 2019
21. Reply to Comment on 'Automatic estimation of aquifer parameters using long-term water supply pumping and injection records': paper published in Hydrogeology Journal (2016) 24: 1443–1461, by Ning Luo and Walter A. Illman
- Author
-
Ning Luo and Walter A. Illman
- Subjects
Estimation ,Hydrology ,geography ,Hydrogeology ,geography.geographical_feature_category ,business.industry ,0208 environmental biotechnology ,Water supply ,Aquifer ,02 engineering and technology ,020801 environmental engineering ,Term (time) ,Earth and Planetary Sciences (miscellaneous) ,business ,Geology ,Water Science and Technology - Published
- 2017
22. Studies on the differentiation of thermal papers and estimation of the printing age of thermal paper documents
- Author
-
Biao Li
- Subjects
Estimation ,Identification (information) ,business.industry ,Computer science ,Thermal ,Facsimile ,Pattern recognition ,Artificial intelligence ,Thermal paper ,business ,Pathology and Forensic Medicine - Abstract
Thermo-sensitive printers and thermo-sensitive facsimile machines are used extensively throughout China. Document examiners are often requested to confirm the authenticity of thermal paper documents. Identifying the type of thermal paper and estimating the age of thermal paper documents can be effective methods for confirming the authenticity of documents. In this paper, Fourier transform infrared (FTIR) spectrophotometry was successfully employed to differentiate between 30 types of thermal paper. The relative age of thermal paper documents was estimated by measuring changes in the gray value of printed strokes. The method yielded curves that indicate the relationship between the gray value of the strokes and the age of the thermal paper documents, and that are applicable for estimating the relative age of thermal paper documents in some cases. It was determined that the brand of thermal paper and the thickness of strokes on the paper affected the accuracy of estimation.
- Published
- 2013
23. Dual Control and Information Gain in Controlling Uncertain Processes [Title footnote: This paper was not presented at any IFAC meeting. Corresponding author H. C. La. This work was supported by the German Research Foundation (DFG) within the Heidelberg Graduate School for Mathematical and Computational Methods for the Sciences. Support by the EU through S. Engell’s and H.G. Bock’s ERC Advanced Investigator Grant MOBOCON (291 458) is gratefully acknowledged.]
- Author
-
Hans Georg Bock, Andreas Potschka, Johannes P. Schlöder, and Huu Chuong La
- Subjects
Estimation ,0209 industrial biotechnology ,Engineering ,business.industry ,media_common.quotation_subject ,Control (management) ,Control engineering ,02 engineering and technology ,Task (project management) ,Dual (category theory) ,Model predictive control ,Noise ,020901 industrial engineering & automation ,020401 chemical engineering ,Control and Systems Engineering ,Control theory ,Order (exchange) ,Quality (business) ,0204 chemical engineering ,business ,media_common - Abstract
In controlling uncertain processes, it is decisive to utilize information provided by measurements in order to estimate parameters and states. Nonlinear Model Predictive Control (NMPC) is a popular method to implement feedback control and deal with uncertainties. Conventional NMPC or nominal control, however, sometimes does not provide enough information for system estimation, leading to unsatisfactory performance. Dual control attempts to strike a balance between the two goals of enhancing system estimation and optimizing the nominal objective function. In this paper, we analyze the performance of these strategies through the interplay between the performance control task and the information gain task in connection with Optimal Experimental Design. Examples illustrate the conflict and agreement between the two tasks and explain why in some cases nominal control performs well. It is also observed that measurement noise provides excitation helping to improve the quality of estimates.
- Published
- 2016
24. A Study on Estimation of High Impact Papers based on Cited Structure in Body Text
- Author
-
Noriko Kando, Tetsuji Satoh, and Yoshito Kamisawa
- Subjects
Structure (mathematical logic) ,Estimation ,Document Structure Description ,Computer science ,05 social sciences ,Scientific literature ,050905 science studies ,Data science ,Field (computer science) ,Body text ,Very large database ,0509 other social sciences ,050904 information & library sciences ,Citation - Abstract
Academic papers cite other literature for various purposes, such as sharing research motivations or comparing with previous methods and achieved accuracies. At some annual conferences, high-impact papers are selected and honored about 10 years after publication, to highlight both influential research fields and major contributions. This paper proposes a method for estimating high-impact papers using the characteristics of these citing purposes. To extract the cited structure of scientific literature, we analyze the paper structure by purpose and identify the citation points in each paper. To evaluate the proposed method, 17 years of proceedings of the VLDB (Very Large Data Bases) Conference and its "10 Year Award" papers are used. We believe this research provides a foothold for estimating papers that may have a significant impact on the academic field in the future.
- Published
- 2018
25. Comment on 'Automatic estimation of aquifer parameters using long-term water supply pumping and injection records': paper published in Hydrogeology Journal (2016) 24: 1443–1461, by Ning Luo and Walter A. Illman
- Author
-
Christopher J. Neville
- Subjects
Estimation ,Hydrology ,geography ,Hydrogeology ,geography.geographical_feature_category ,business.industry ,0208 environmental biotechnology ,Water supply ,Aquifer ,02 engineering and technology ,020801 environmental engineering ,Aquifer properties ,Term (time) ,Earth and Planetary Sciences (miscellaneous) ,business ,Geology ,Water Science and Technology - Published
- 2017
26. Estimation of the uncertainty of the measurement results of some trace levels elements in document paper samples using ICP-MS
- Author
-
Hassan Y. Aboul-Enein, Dana Elena Popa, Ion Tanase, Gabriela Elena Udristioiu, and Andrei A. Bunaciu
- Subjects
Estimation ,Propagation of uncertainty ,General Chemical Engineering ,Data quality ,Statistics ,Measurement uncertainty ,Sensitivity analysis ,Statistical dispersion ,General Chemistry ,Uncertainty analysis ,Mathematics ,TRACE (psycholinguistics) - Abstract
The measurement uncertainty characterizes the dispersion of the quantity values attributed to the measurand, and there are different approaches to uncertainty estimation. This study illustrates the application of the GUM (bottom-up) approach to estimating the uncertainty of measurement results for the quantitative determination of Al, Ba, Fe, Mg, Mn, Pb, Sr and Zn in document paper samples using ICP-MS. The measurement uncertainty estimation was done by identifying, quantifying and combining all the associated sources of uncertainty separately. The typical steps were followed: specifying the measurand; identifying the major sources of uncertainty; quantifying the uncertainty components; combining the significant uncertainty components; determining the expanded combined standard uncertainty; reviewing the estimates; and reporting the measurement uncertainty. For the eight trace elements mentioned, the combined standard uncertainties and the expanded uncertainties were determined. The relative measurement uncertainty values lay between 7.7% and 13.6%. In all five paper samples, homogeneous uncertainty values were obtained for each of the eight elements. To emphasize the contributions of the uncertainty sources, the percent contributions of the uncertainty components to the combined relative standard uncertainty were graphically represented for the elements determined by ICP-MS in paper samples. The previously validated method proved suitable for the intended purpose, and when the uncertainty of the measurement results is estimated, it becomes a significant tool for characterizing the elemental composition of document paper samples. Moreover, the applied approach to uncertainty estimation enables improved data quality and decision making.
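The GUM bottom-up combination the abstract describes — summing independent relative uncertainty components in quadrature and expanding with a coverage factor — can be sketched as follows. The component names and values are illustrative only, not taken from the paper:

```python
import math

# Hypothetical relative standard uncertainty components for one element
components = {"repeatability": 0.04, "calibration": 0.05, "recovery": 0.03}

# Combined relative standard uncertainty: quadrature sum of independent components
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (approx. 95 % confidence)
U = 2 * u_c
```

With these made-up components, `U` lands near 14 %, the same order of magnitude as the 7.7–13.6 % range the study reports.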
- Published
- 2015
27. About estimation of correctness of text reuse in scientific papers
- Author
-
Yu. V. Chekhovich and O. S. Belenkaya
- Subjects
Estimation ,Correctness ,Computer science ,business.industry ,Reuse ,Software engineering ,business - Published
- 2018
28. Shadow prices of environmental outputs and production efficiency of household-level paper recycling units in Vietnam
- Author
-
Nguyen Van Ha, Virginia Maclaren, and Shashi Kant
- Subjects
Paper recycling ,Estimation ,Economics and Econometrics ,Labour economics ,Linear programming ,restrict ,Shadow price ,Econometrics ,Economics ,Production (economics) ,Production efficiency ,Environmental quality ,General Environmental Science - Abstract
The production efficiency and shadow prices of three environmental outputs (BOD, COD, and SS) of 63 household-level paper-recycling units from a recycling craft village in Vietnam are assessed. A two-stage procedure, linear programming and stochastic estimation, is used to estimate an output distance function. Social capital as a production factor and environmental outputs are included in the output distance function. Results indicate that production efficiencies could potentially be improved by 28%. There is substantial variation in the shadow prices of environmental outputs among the production units of different types of paper products. Furthermore, the average shadow prices of the three environmental outputs are all positive. This indicates a potential for improving environmental quality through introducing pollution-prevention methods to paper-recycling production processes in Vietnam (e.g., recirculation of wastewater), and suggests that it may be inappropriate to restrict the shadow prices of environmental outputs to be non-positive for the analysis of some production processes.
- Published
- 2008
29. Estimating and decomposing the rate of technical change in the Swedish pulp and paper industry: A general index approach
- Author
-
Patrik Söderholm and Robert Lundmark
- Subjects
Estimation ,Economics and Econometrics ,Index (economics) ,Standard time ,Contrast (statistics) ,Management Science and Operations Research ,Relative price ,Pulp and paper industry ,General Business, Management and Accounting ,Industrial and Manufacturing Engineering ,Technical change ,Dummy variable ,Economics ,Panel data - Abstract
The purpose of this paper is to analyse the rate and the impacts of technical change in the Swedish pulp and paper industry. In contrast to earlier research on this industry we replace the standard time trend with time-specific dummy variables enabling the estimation and decomposing of a general index of technical change. The analysis is made within a Translog cost function model, which is estimated using a panel data set with observations across individual paper and board mills over the time period 1974–1994. Our results indicate that the highest rates of technical change have generally occurred during the latter part of this period. Pure technical change is the primary component that has directed technical change over the entire time period. We also find evidence of non-neutral technical change. Energy use has been stimulated by technical improvements while labour use has been discouraged. Also, technical change has had wastepaper and woodpulp using impacts. However, the magnitudes of these latter impacts are relatively small, implying that the increase in wastepaper use during the last decades has mainly been stimulated by relative price changes.
- Published
- 2004
30. Replicating Sachs and Warner’s Working Papers on the Resource Curse
- Author
-
Graham A. Davis
- Subjects
Estimation ,Resource (project management) ,Computer science ,Resource curse ,Development economics ,Value (economics) ,Sample (statistics) ,Replicate ,Development ,Data science ,Typographical error ,Replication (computing) - Abstract
This article reports on my attempt to replicate Sachs and Warner’s 1995 and 1997 resource curse working papers. The 1995 paper is not replicable for lack of a data archive. Pure replication of the 1997 paper is achieved. Statistical replication determines that the proposed institutional causes of the resource curse are not robust to country sample. Scientific replication shows that findings of a resource curse are not sensitive to different measures of resource intensiveness, though they are sensitive to estimation technique. Typographical errors in the published paper reveal the value of researchers making both their data and code available.
- Published
- 2013
31. Will there be a third COVID-19 wave? A SVEIRD model-based study of India’s situation
- Author
-
Dwarakesh Kannan, Rudra Banerjee, R. Gurusriram, Pritish Kumar Varadwaj, and Srijit Bhattacharjee
- Subjects
SEIRD ,Estimation ,Vaccination rate ,Original Paper ,2019-20 coronavirus outbreak ,History ,Coronavirus disease 2019 (COVID-19) ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,COVID-19 ,General Physics and Astronomy ,Development economics ,Pandemic ,SARS-CoV-II ,Mass vaccination ,Epidemics ,Third wave ,Model - Abstract
Since the first patient was detected in India in late February 2020, the SARS-CoV-II virus has wreaked havoc on India. After the first wave, India is now riding the second wave. As in European countries such as Italy and the UK, the second wave is more contagious, and at the time of writing, daily infections are as high as 400,000. Alarmingly, it is not uncommon for people to be infected multiple times. On the other hand, mass vaccination has started step by step. There is also a growing danger that a potential third wave is unavoidable, one which could even infect children and minors. In this situation, an estimation of the dynamics of SARS-CoV-II is necessary to combat the pandemic. We have used a modified SEIRD model that includes vaccination and repeat infection as well. We have studied India and 8 Indian states with varying SARS-CoV-II infection levels. We show that COVID-19 waves will recur from time to time, but their intensity will diminish with time. In the most likely scenario, our calculations show that COVID-19 will remain endemic for the foreseeable future unless we can increase our vaccination rate manifold.
- Published
- 2021
32. Heat waves: a hot topic in climate change research
- Author
-
Lutz Bornmann, Robin Haunschild, and Werner Marx
- Subjects
FOS: Computer and information sciences ,Estimation ,Original Paper ,Atmospheric Science ,Survivability ,FOS: Physical sciences ,Climate change ,Computer Science - Digital Libraries ,Scientific literature ,Heat wave ,Physics - Atmospheric and Oceanic Physics ,Geography ,Hot weather ,Atmospheric and Oceanic Physics (physics.ao-ph) ,Regional science ,Digital Libraries (cs.DL) ,Urban heat island ,High humidity - Abstract
Research on heat waves (periods of excessively hot weather, which may be accompanied by high humidity) is a newly emerging research topic within the field of climate change research with high relevance for the whole of society. In this study, we analyzed the rapidly growing scientific literature dealing with heat waves. No summarizing overview of this literature has been published hitherto. We developed a suitable search query to retrieve the relevant literature covered by the Web of Science (WoS) as completely as possible and to exclude irrelevant literature (n = 8,011 papers). The time-evolution of the publications shows that research dealing with heat waves is a highly dynamic research topic, doubling within about 5 years. An analysis of the thematic content reveals the most severe heat wave events of recent decades (1995 and 2003), the cities and countries/regions affected (United States, Europe, and Australia), and the ecological and medical impacts (drought, urban heat islands, excess hospital admissions, and mortality). Risk estimation and future strategies for adaptation to hot weather are major political issues. We identified 104 citation classics, which include fundamental early works of research on heat waves as well as more recent works (characterized by a relatively strong connection to climate change).
- Published
- 2021
33. Estimation of exogenous drivers to predict COVID-19 pandemic using a method from nonlinear control theory
- Author
-
Alexander Wasserburger, Lukas Böhler, Michael Bergmann, Christoph Hametner, Robert Kölbl, Stefan Jakubek, Zhang Peng Du, Martin Kozek, and Thomas Bachleitner-Hofmann
- Subjects
Estimation ,Data source ,Original Paper ,Coronavirus disease 2019 (COVID-19) ,Dynamical systems theory ,Computer science ,Epidemiological modelling ,Applied Mathematics ,Mechanical Engineering ,Psychological intervention ,COVID-19 ,Differential flatness ,Aerospace Engineering ,Ocean Engineering ,Nonlinear control ,Complement (complexity) ,Control and Systems Engineering ,SARS-CoV2 ,Dynamical systems ,Pandemic ,Econometrics ,Electrical and Electronic Engineering - Abstract
The currently ongoing COVID-19 pandemic confronts governments and their health systems with great challenges for disease management. Epidemiological models play a crucial role, thereby assisting policymakers to predict the future course of infections and hospitalizations. One difficulty with current models is the existence of exogenous and unmeasurable variables and their significant effect on the infection dynamics. In this paper, we show how a method from nonlinear control theory can complement common compartmental epidemiological models. As a result, one can estimate and predict these exogenous variables requiring the reported infection cases as the only data source. The method allows to investigate how the estimates of exogenous variables are influenced by non-pharmaceutical interventions and how imminent epidemic waves could already be predicted at an early stage. In this way, the concept can serve as an “epidemometer” and guide the optimal timing of interventions. Analyses of the COVID-19 epidemic in various countries demonstrate the feasibility and potential of the proposed approach. The generic character of the method allows for straightforward extension to different epidemiological models.
- Published
- 2021
34. Screening for Chemicals in Paper and Board Packaging for Food Use: Chemometric Approach and Estimation of Migration
- Author
-
Barbara Giussani, Luciano Piergiovanni, V. Guazzotti, and Sara Limbo
- Subjects
Estimation ,Food packaging ,Multivariate statistics ,Engineering ,business.industry ,Mechanical Engineering ,Forensic engineering ,General Materials Science ,General Chemistry ,Biochemical engineering ,business - Abstract
An analytical survey of 20 paper and board (PB) […] occasionally, migration estimates exceeded the specific migration limits. The chosen analytical methods coupled with a chemometric approach proved to be an effective way to describe the data; it may be concluded that only the simultaneous consideration of several chemicals with a multivariate approach allowed the investigated packaging materials to be distinguished.
- Published
- 2014
35. Estimation of cancer risk due to exposure to lead contamination in Joss paper
- Author
-
Beuy Joob and Viroj Wiwanitkit
- Subjects
0301 basic medicine ,Estimation ,Cancer Research ,business.industry ,MEDLINE ,Contamination ,lcsh:Neoplasms. Tumors. Oncology. Including cancer and carcinogens ,lcsh:RC254-282 ,03 medical and health sciences ,030104 developmental biology ,0302 clinical medicine ,Lead (geology) ,Oncology ,030220 oncology & carcinogenesis ,Environmental health ,Medicine ,business ,Cancer risk ,Letters to the Editor - Published
- 2017
36. Use of validated community-based trachoma trichiasis (TT) case finders to measure the total backlog and detect when elimination threshold is achieved: a TT methodology paper
- Author
-
John Sironka, Ernest Barasa, Gichangi M, Francis Kiio, Jefitha Karimurio, Alice Mwangi, Doris W. Njomo, Kefa Ronald, Catherine Kareko, and Hillary Rono
- Subjects
Male ,Trichiasis ,medicine.medical_specialty ,030231 tropical medicine ,Population ,TTall ,03 medical and health sciences ,Survey methodology ,0302 clinical medicine ,case finders ,TT15 ,Prevalence ,medicine ,Humans ,Mass Screening ,education ,Mass screening ,Trachoma ,Estimation ,education.field_of_study ,Data collection ,business.industry ,Research ,Trachoma trichiasis ,Public health ,General Medicine ,medicine.disease ,Health Surveys ,Kenya ,Surgery ,Community health ,030221 ophthalmology & optometry ,Female ,Public Health ,TT40 ,business ,Demography - Abstract
Introduction: The World Health Organization recommends that TT surveys be conducted in adults aged 15+ years (TT15 survey) and certifies elimination of TT as a public health problem when there is less than 1 unknown case per 1,000 people of all ages. There is no standard survey method to accurately confirm this elimination prevalence threshold of 0.1%, because rare conditions require large and expensive prevalence survey samples. The aim of this study was to develop an accurate operational research method to measure the total backlog of TT in people of all ages and detect when the elimination threshold is achieved. Methods: Between July and October 2016, an innovative Community-based Mapping, Mop-up and Follow-up (CMMF) approach to elimination of TT as a public health problem was developed and tested in the Esoit, Siana, Megwara and Naikara sub-locations in Narok County, Kenya. The county had ongoing community-based TT surgical camps and case finders. TT case finders were recruited from the existing pool of community health volunteers (CHVs) in the Community Health Strategy Initiative Programme of the Ministry of Health. They were trained, validated and supervised by experienced TT surgeons. Each case finder was allocated a population unit of 2 to 3 villages to conduct a de jure pre-survey census, examine all people in the unit and register those with TT (TTall survey). Identified cases were confirmed by TT surgeons prior to surgery. Operated patients were reviewed at 1 day, 2 weeks and 3-6 months. The case finders will also be used to identify and refer new and recurrent cases. People with other eye and medical conditions were treated and referred accordingly. Standardised data collection and computer-based data capture tools were used. Case finders kept registers with details of all persons with TT, those operated on and those who refused surgery (refusals). These details informed decisions and actions on follow-up and counselling.
Progress towards achievement of the elimination threshold was assessed by dividing the number of TT cases diagnosed by the total population in the population unit and multiplying by 1,000. Results: The Narok County Government adopted both the CMMF approach and the TTall survey method. All persons in 4,784 households in the four sub-locations were enumerated and examined. The total population projection was 29,548 and the pre-survey census counted 22,912 people. Fifty-three cases of TT were diagnosed. The prevalence was 0.23%, equivalent to 2.3 cases per thousand population of all ages. Prior to this study, the project needed to operate on at least 30 cases (excess cases) to achieve the elimination threshold of 1 case per 1,000 population. Conclusion: The total backlog of TT was confirmed, and the project is now justified in claiming to have eliminated TT as a public health problem in the study area. The TTall method may not be appropriate in settings with a high burden of TT. Nomadic migration affects estimation of population size. Non-trachomatous TT could not be ruled out.
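The threshold arithmetic reported above (53 diagnosed cases among 22,912 people censused) can be reproduced directly. The helper below is a sketch of the stated calculation, not the study's own code:

```python
def prevalence_per_1000(cases, population):
    """TT cases per 1,000 people of all ages in the population unit."""
    return 1000 * cases / population

cases, population = 53, 22912               # figures from the abstract
rate = prevalence_per_1000(cases, population)   # about 2.3 per 1,000 (0.23 %)

# Excess cases above the <1-per-1,000 elimination threshold: the backlog
# that must be operated on before elimination can be claimed.
excess = int(cases - population / 1000)         # about 30 excess cases
```

This matches the abstract's figures: a prevalence of roughly 2.3 per 1,000 and a backlog of about 30 cases to clear before the 1-per-1,000 threshold is met.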
- Published
- 2017
37. [Paper] Blind PSNR Estimation of Compressed Video Sequences Supported by Machine Learning
- Author
-
Naofumi Wada, Masahiro Wakabayashi, Jiro Katto, and Takahiro Kumekawa
- Subjects
Estimation ,business.industry ,Computer science ,Speech recognition ,Video sequence ,AC power ,Computer Graphics and Computer-Aided Design ,Support vector machine ,Signal Processing ,Media Technology ,Saliency map ,Computer vision ,Artificial intelligence ,business - Published
- 2014
38. Social and Emotional Competencies And Attitudes of Parents and Educators As Determinants of Abilities and Talents Perception of Preschool Children
- Author
-
Anela Hasanagić and Amina Odobašić
- Subjects
Estimation ,Socioemotional selectivity theory ,media_common.quotation_subject ,parents ,Sociodemographic data ,Developmental psychology ,Unpublished paper ,socio-emotional competencies ,children ,Expression (architecture) ,Perception ,AZ20-999 ,educators ,History of scholarship and learning. The humanities ,Psychology ,giftedness ,media_common - Abstract
To ensure that the process of giftedness development runs smoothly, it is necessary to build adequate socio-emotional competencies: the ability to use various social and emotional stimuli from the environment to achieve results that enable satisfying and competent participation in the groups, communities, and society to which an individual belongs. The goal of this research was to examine whether, and to what extent, the socio-emotional competencies of parents and kindergarten teachers are significant predictors of the perception of talents of preschool children. The sample consisted of 100 participants from Zeničko-Dobojski kanton: 75 parents and 25 educators. As instruments, we used a questionnaire of general sociodemographic data (SD questionnaire), the Giftedness Questionnaire (Von Krafft and Semke, 2008), and a questionnaire of socio-emotional competencies of educators (Jusufovic, unpublished paper). The results indicate that, among all socio-emotional competencies of parents, the only significant predictor is awareness of others, both for assessing the expression of one’s characteristics and for assessing the expression of talent; among socio-demographic variables, parents’ age is significant, but only for assessing the expression of talents (older parents perceive less giftedness). Furthermore, in the case of educators, non-violent communication alone is a significant predictor of the expression of one’s characteristics, while for the expression of talents the significant factors are non-violent communication, awareness of others, emotion regulation, self-esteem, and the total score of socio-emotional competencies. Among socio-demographic characteristics, working experience is an important predictor of perceiving talents.
In addition, there are statistically significant differences between parents and educators in the estimation of the expression of talent: educators are better at estimating the expression of talents.
- Published
- 2021
39. Analysis of Research Papers Published by the Korean Journal of Hospice and Palliative Care (The First Issue∼2012)
- Author
-
In Cheol Hwang, Hong Yup Ahn, and Kyung-Ah Kang
- Subjects
Estimation ,Medical education ,medicine.medical_specialty ,Palliative care ,Future studies ,Ethical issues ,business.industry ,Alternative medicine ,Sample size determination ,Multidisciplinary approach ,Family medicine ,medicine ,Healthcare industry ,business - Abstract
The purpose of this paper is to suggest a direction for future studies based on an analysis of the articles published in the Korean Journal of Hospice and Palliative Care from 1998 to 2012. A total of 240 articles (51 reviews, 189 original) were examined in three five-year groups. Categories of analysis included the authors' background (profession, region) and the general characteristics and qualitative aspects of the original papers (participants, topic, study design, data analysis, ethical consideration, multidisciplinary approach, research funds and sample size estimation). While the journal publishes more articles than before, this is mainly due to an increase in the number of review articles, not original articles. As for study topics, the healthcare industry and physical symptoms were most frequently studied. The disparity in authors' regional background is fading, and more articles are published by nurses than before. Moreover, more studies are funded, while fewer papers adopt a multidisciplinary approach or focus on caregivers. In terms of study design, the number of experimental and methodological studies has slightly increased. On the qualitative side, studies increasingly considered ethical issues and collected participation consent, but fewer studies reported an estimated sample size. In data analysis, post-adjustment comparison decreased, and new analytical methods are increasingly used. Our results indicate the need to conduct research with more extensive scientific data in various fields of hospice and palliative care.
- Published
- 2013
40. What Makes a Working Paper in Economics Publishable?: A Tale from the Scientific Periphery
- Author
-
Aurora A.C. Teixeira
- Subjects
Estimation ,Political science ,media_common.quotation_subject ,Scientific production ,Media Technology ,Institution ,Media studies ,Production (economics) ,Social science ,Publish or perish ,Education ,media_common - Abstract
Research on scientific production and publications in the field of economics has boomed in the last few years. However, hardly any attention has been dedicated to the production of working papers and the consequences they may have within the institutions where they are produced. This paper provides a detailed analysis of the working papers produced and published by an institution that is relatively peripheral in terms of its economics research output. It mainly explores the probability of these working papers being published in peer-reviewed journals. Using an extensive series of working papers produced between 1985 and the end of 2005, and estimating a logistic regression model, it was concluded that the probability of international publication increases significantly when the working paper is recent and co-written with a researcher from a foreign institution. Such evidence suggests that, for success in the ‘publish or perish’ world of scientific research, one has to be integrated into an international scientific network.
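The logistic regression the abstract describes can be sketched with made-up coefficients (the paper's actual estimates are not reported here); `recent` and `foreign_coauthor` are 0/1 indicators for the two predictors the abstract highlights:

```python
import math

def publish_prob(recent, foreign_coauthor,
                 b0=-1.5, b_recent=1.0, b_foreign=1.2):
    """Logistic model sketch: P(published in a peer-reviewed journal).

    All coefficients are hypothetical illustrations, chosen only so that
    'recent' and 'foreign co-author' raise the predicted probability,
    as the abstract reports.
    """
    z = b0 + b_recent * recent + b_foreign * foreign_coauthor
    return 1 / (1 + math.exp(-z))
```

Under these illustrative coefficients, a recent paper with a foreign co-author gets a predicted publication probability above one half, while an old, purely domestic paper stays well below it.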
- Published
- 2013
41. Mathematical modeling and estimation for next wave of COVID-19 in Poland
- Author
-
M. K. Arti and Antoni Wilinski
- Subjects
Estimation ,Original Paper ,Environmental Engineering ,Coronavirus disease 2019 (COVID-19) ,Computer science ,COVID-19 ,Mixture model ,Corona ,Gaussian Mixture Model ,Mathematical Modelling ,Environmental Chemistry ,Statistical physics ,Current (fluid) ,Prediction ,Safety, Risk, Reliability and Quality ,Pandemics ,General Environmental Science ,Water Science and Technology - Abstract
We investigate the problem of mathematically modeling the novel coronavirus (COVID-19) in Poland and try to predict the upcoming wave. A Gaussian mixture model is proposed to characterize the COVID-19 disease and to predict a new, future wave of COVID-19. This prediction is much needed to prepare medical capacity and to continue with upcoming programmes. Specifically, data on new confirmed COVID-19 cases per day are considered, and we then attempt to predict the data and its statistical behaviour. A close match between the actual data and the analytical data obtained with the Gaussian mixture model shows that it is a suitable model for representing new cases of COVID-19. In addition, it is assumed that there are N waves of COVID-19 and that information about each future wave is also present in the current and previous waves. Using this concept, predictions of a future wave can be made.
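A minimal sketch of the wave-as-Gaussian idea: daily new cases are modelled as a sum of Gaussian components, and a further wave is a component extrapolated from the fitted ones. All parameters below are invented for illustration, not fitted to Polish data:

```python
import math

def gaussian(t, amplitude, peak_day, width):
    """One epidemic 'wave' as a Gaussian bump over time (days)."""
    return amplitude * math.exp(-((t - peak_day) ** 2) / (2 * width ** 2))

def mixture(t, waves):
    """Daily new cases as a sum of Gaussian waves.

    Each wave is (amplitude, peak_day, width); a predicted future wave
    would simply be one more tuple extrapolated from the fitted ones.
    """
    return sum(gaussian(t, *w) for w in waves)

# Two hypothetical fitted waves peaking on day 100 and day 200
waves = [(30000, 100, 15), (25000, 200, 20)]
cases_day_150 = mixture(150, waves)  # predicted cases between the two peaks
```

In practice the amplitudes, peak days and widths would be fitted to the reported daily-case series (e.g. by least squares) before extrapolating the next component.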
- Published
- 2021
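The wave-as-Gaussian idea above can be illustrated with a two-component Gaussian mixture fitted by EM to a synthetic daily-case curve; the wave centres, widths and amplitudes are invented for the sketch, and the paper's actual fitting procedure may differ.

```python
import numpy as np

days = np.arange(200, dtype=float)
# Synthetic "two-wave" daily case counts (illustrative parameters)
cases = 1000 * np.exp(-0.5 * ((days - 50) / 10) ** 2) \
      + 1500 * np.exp(-0.5 * ((days - 140) / 15) ** 2)

# EM for a 2-component 1-D Gaussian mixture, weighting each day by its cases
w = cases / cases.sum()
mu, sd, pi = np.array([40.0, 160.0]), np.array([20.0, 20.0]), np.array([0.5, 0.5])
for _ in range(300):
    dens = pi * np.exp(-0.5 * ((days[:, None] - mu) / sd) ** 2) / sd
    resp = dens / dens.sum(axis=1, keepdims=True)   # responsibilities
    nk = (w[:, None] * resp).sum(axis=0)
    mu = (w[:, None] * resp * days[:, None]).sum(axis=0) / nk
    sd = np.sqrt((w[:, None] * resp * (days[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk

print(np.sort(mu))   # recovered wave centres, close to (50, 140)
```

Once the mixture components are recovered, a future wave could be extrapolated by assuming its parameters relate to those of the fitted components, which is the spirit of the prediction described in the abstract.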
42. A Wavelet Evaluation of Some Leading Business Cycle Indicators for the German Economy
- Author
-
Krüger, Jens J.
- Subjects
Estimation ,Economics and Econometrics ,Leading indicators ,Index (economics) ,E37 ,media_common.quotation_subject ,Wavelet analysis ,Phase difference ,Term (time) ,Interest rate ,Business cycle forecasting ,Wavelet ,Economic indicator ,Economics ,Econometrics ,Business cycle ,Stock market ,C49 ,Statistics, Probability and Uncertainty ,Business and International Management ,Finance ,Research Paper ,E32 ,media_common - Abstract
Leading indicators are important variables in business cycle forecasting. We use wavelet analysis to investigate the lead-lag stability of German leading indicators in time-frequency space. This method permits a time-varying relation of the leading indicators to the reference cycle while simultaneously allowing a focus on lead-lag stability at the specific business-cycle frequencies. In this way we analyze an index of new orders, a survey-based index of business expectations, an index of stock market returns and the interest-rate term spread. We confirm that most of these indicators do indeed lead the reference cycle most of the time, but the lead in months varies considerably over time and is subject to a great deal of estimation uncertainty.
- Published
- 2021
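The lead-lag measurement described above can be illustrated at a single fixed frequency with a complex Morlet wavelet: the phase difference between the wavelet coefficients of the indicator and the reference cycle converts into a lead in months. The 48-month cycle and 6-month lead are invented, and this is a far simpler setting than the paper's full time-frequency analysis.

```python
import numpy as np

def morlet_coeffs(x, scale, w0=6.0):
    """Convolve a series with a complex Morlet wavelet at one scale."""
    t = np.arange(-4 * scale, 4 * scale + 1)
    psi = np.exp(1j * w0 * t / scale) * np.exp(-t**2 / (2 * scale**2))
    return np.convolve(x, psi, mode="same")

period = 48                           # months per business cycle (illustrative)
scale = period * 6.0 / (2 * np.pi)    # Morlet scale-period relation for w0 = 6
months = np.arange(600)
ref = np.cos(2 * np.pi * months / period)          # reference cycle
ind = np.cos(2 * np.pi * (months + 6) / period)    # indicator leading by 6 months

Wr = morlet_coeffs(ref, scale)
Wi = morlet_coeffs(ind, scale)
# Average the cross-wavelet product away from the edges, then read off the phase
phase = np.angle(np.mean(Wi[200:400] * np.conj(Wr[200:400])))
est_lead = phase / (2 * np.pi) * period
print(est_lead)   # close to 6 months
```

In the paper's setting the same phase-difference computation is tracked over time and across scales, which is what reveals the instability of the leads.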
43. Estimation of final standings in football competitions with a premature ending: the case of COVID-19
- Author
-
Paolo Gorgi, Rutger Lit, Siem Jan Koopman, Econometrics and Data Science, and Tinbergen Institute
- Subjects
Statistics and Probability ,Estimation ,Original Paper ,Economics and Econometrics ,Schedule ,2019-20 coronavirus outbreak ,Coronavirus disease 2019 (COVID-19) ,Computer science ,Applied Mathematics ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,COVID-19 ,Football ,Sport statistics ,SDG 3 - Good Health and Well-being ,Ranking ,League table ,Modeling and Simulation ,Bivariate Poisson ,Paired-comparison models ,Econometrics ,Social Sciences (miscellaneous) ,Analysis - Abstract
We study an alternative approach to determining the final league table in football competitions with a premature ending. For several countries, a premature ending of the 2019/2020 football season occurred due to the COVID-19 pandemic. We propose a model-based method as a possible alternative to using the incomplete standings to determine the final table. This method measures the performance of the teams in the matches of the season that have been played and predicts the remaining non-played matches through a paired-comparison model. The main advantage of the method over the incomplete standings is that it accounts for the bias in the performance measure due to the schedule of the matches in a season. The resulting ranking of the teams based on our proposed method can therefore be regarded as fairer in this respect. A forecasting study based on historical data from seven of the main European competitions is used to validate the method. The empirical results suggest that the model-based approach produces more accurate predictions of the true final standings than those based on the incomplete standings.
- Published
- 2021
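A minimal version of the prediction step above, assuming independent Poisson goal counts rather than the paper's bivariate Poisson specification, converts given (hypothetical) team strengths into expected points for a non-played match:

```python
import numpy as np
from math import exp, factorial

def pois(k, lam):
    """Poisson probability mass P(X = k)."""
    return lam**k * exp(-lam) / factorial(k)

def expected_points(att_h, def_h, att_a, def_a, home_adv=0.25, max_goals=10):
    """Expected (home, away) league points for one fixture.

    Strengths and the home advantage are hypothetical inputs; in the paper
    they would be estimated from the matches already played.
    """
    lh = exp(att_h - def_a + home_adv)   # expected home goals
    la = exp(att_a - def_h)              # expected away goals
    ph = np.array([pois(k, lh) for k in range(max_goals + 1)])
    pa = np.array([pois(k, la) for k in range(max_goals + 1)])
    grid = np.outer(ph, pa)              # P(home scores i, away scores j)
    p_home = np.tril(grid, -1).sum()     # home scores strictly more
    p_draw = np.trace(grid)
    p_away = np.triu(grid, 1).sum()
    return 3 * p_home + p_draw, 3 * p_away + p_draw

print(expected_points(0.3, 0.1, -0.2, -0.1))
```

Summing expected points over all remaining fixtures, on top of the points already won, gives a schedule-aware final table of the kind the abstract describes.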
44. Evaluating the estimation of genetic correlation and heritability using summary statistics
- Author
-
Fredrick R. Schumacher and Ju Zhang
- Subjects
Genetic correlation ,Population ,Methods Paper ,Single-nucleotide polymorphism ,Biology ,Polymorphism, Single Nucleotide ,Heritability ,Quantitative Trait, Heritable ,Neoplasms ,Statistics ,Genetics ,Humans ,Generalizability theory ,Computer Simulation ,Genetic Predisposition to Disease ,education ,Evaluation ,Molecular Biology ,Estimation ,education.field_of_study ,Models, Genetic ,General Medicine ,Summary statistics ,Sample size determination ,Genetic Background ,Genome-Wide Association Study - Abstract
While novel statistical methods for quantifying the shared heritability of traits and diseases between ancestrally distinct populations have recently been proposed, a thorough evaluation of these approaches under differing circumstances remains elusive. Brown et al. (2016) proposed the method Popcorn to estimate the shared heritability, i.e. the genetic correlation, using only summary statistics. Here, we evaluate Popcorn under several parameters and circumstances: sample size, number of SNPs, sample size of the external reference panel, various population pairs, an inappropriate external reference panel, and the involvement of an admixed population. Our results determined the minimum sample size of the external reference panel, the summary statistics, and the number of SNPs required to accurately estimate both the genetic correlation and the heritability. Moreover, the number of individuals and SNPs required to produce accurate and stable estimates was directly proportional to the heritability in Popcorn. Misrepresentation of the reference panel overestimated the genetic correlation by 20% and the heritability by 60%. Lastly, applying Popcorn to homogeneous (EUR) and admixed (ASW) populations underestimated the genetic correlation by 15%. Although statistical approaches estimating the shared heritability between ancestral populations will provide novel etiologic insight, caution is required to ensure that results are based on an appropriate sample size, number of SNPs, and a reference panel generalizable to the discovery populations.
- Published
- 2021
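Under strong simplifying assumptions (independent SNPs, no linkage disequilibrium, no reference panel), the idea of estimating a genetic correlation from summary statistics alone can be sketched with a method-of-moments noise correction; this illustrates the general concept, not the Popcorn method itself, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
m, h2, rho = 5000, 0.5, 0.6       # SNPs, heritability, true genetic correlation
n1, n2 = 50_000, 50_000           # GWAS sample sizes for the two populations

sig2 = h2 / m                     # per-SNP true effect variance
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
beta = rng.standard_normal((m, 2)) @ L.T * np.sqrt(sig2)   # correlated true effects
# Summary-statistic estimates = true effects + sampling noise (var 1/n)
bhat = beta + rng.standard_normal((m, 2)) / np.sqrt([n1, n2])

# Method of moments: subtract the sampling-noise variance before normalizing
cov12 = np.mean(bhat[:, 0] * bhat[:, 1])
v1 = np.var(bhat[:, 0]) - 1 / n1
v2 = np.var(bhat[:, 1]) - 1 / n2
rg = cov12 / np.sqrt(v1 * v2)
print(rg)   # close to the assumed 0.6
```

The sketch also shows why sample size matters: when 1/n is large relative to the per-SNP effect variance, the denominator correction becomes unstable, mirroring the minimum-sample-size findings of the evaluation.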
45. Computational methods for exploiting image-based data in paper web profile control
- Author
-
Ohenoja, M. (Markku) and Leiviskä, K. (Kauko)
- Subjects
papermaking ,säätö ,estimation ,paperinvalmistus ,mittaus ,imaging ,kuvantava ,simulation ,scanning ,skannaava ,simulointi ,measurements ,control ,estimointi - Abstract
Sheet and film forming processes such as paper manufacturing pose a challenging monitoring and control problem, where quality variations are classified into machine direction (MD), cross-machine direction (CD) and residual variation. The measurements are typically collected with a scanning sensor that covers only a small part of the paper web, and therefore provides a very limited view of the paper web, setting performance limitations on the online monitoring and control. The development of cameras, light sources and computation hardware enable the consideration of utilizing in-use web inspection systems in paper machines to measure the paper web variations with a considerably higher resolution, sampling rate and coverage. The light transmittance images captured with this kind of system need, however, to be converted into a controllable quality property, such as basis weight, in order to utilize the new measurement information for control purposes. In this thesis, computational methods are identified and developed that are capable of combining light transmittance and scanning measurements, and can efficiently utilize the combined information for control purposes. The possible benefits gained with these image-based measurements in paper machine online monitoring and profile control are evaluated in a simulation environment. In a real paper machine, the benefits are ultimately dependent on the machine configuration and the nature of paper variations therein. It was found that with a suitable estimation method, light transmittance could increase the awareness of basis weight variations such as fast MD variation, tilted waves and dynamic CD variation patterns, which are practically undetectable using scanner-based measurement. The enhanced basis weight estimation enables a considerable improvement in the dynamic performance of profile controls. CD control was able to handle fast variations earlier classified as uncontrollable residual variation. 
In MD control, enhanced estimation enabled the development of a control strategy that led to improved reference tracking and disturbance rejection properties. Tiivistelmä: Paper manufacturing is one example of sheet- and film-forming processes, which are typically challenging in terms of process monitoring and control. Quality variations in these processes are classified into machine-direction (MD), cross-machine-direction (CD) and residual variation. On a paper machine, measurements are usually collected with a scanning sensor traversing the web, which provides only a very limited amount of information about the paper web and thereby limits the performance of online monitoring and control. The development of cameras and light sources, together with the growth in computing capacity, makes it possible to measure paper web variations at considerably higher resolution and sampling rates with web inspection systems that are already in use. However, the light transmittance information collected by a web inspection system must be converted into, for example, basis weight information, so that the new measurement information can be exploited in online process control with the existing actuators. In this work, computational methods have been identified and developed that can combine the imaging and scanning measurements and use this combined information for control purposes. The potential benefits of image-based measurement in online monitoring and profile control have been evaluated in a simulation environment. The benefits achievable on a real paper machine ultimately also depend on the machine configuration and the nature of the quality variations occurring there. The results show that, with the transmittance measurement and an efficient estimation method, it is possible to detect basis weight variations that cannot in practice be observed with the scanning measurement alone. The improved estimation performance also makes it possible to increase the dynamic performance of the profile controls. CD control could be extended to cover fast variations previously classified as residual variation. For MD control, a control strategy could be developed that improved both reference tracking and disturbance rejection.
- Published
- 2016
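One simple way to combine a fast, noisy image-based estimate with sparse, accurate scanner samples, chosen here purely for illustration, is a scalar Kalman filter; the noise levels and update schedule below are invented, and the thesis's estimation methods are more elaborate than this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
true_bw = 80 + np.cumsum(rng.normal(0, 0.05, T))   # drifting MD basis weight
camera = true_bw + rng.normal(0, 1.0, T)           # image-based: every step, noisy
scanner = true_bw + rng.normal(0, 0.1, T)          # scanner: accurate but sparse

x, P, q = 80.0, 1.0, 0.05**2                       # state, variance, process noise
est = []
for t in range(T):
    P += q                                         # random-walk prediction
    updates = [(camera[t], 1.0**2)]                # camera measurement every step
    if t % 30 == 0:
        updates.append((scanner[t], 0.1**2))       # scanner sample every 30 steps
    for z, r in updates:
        K = P / (P + r)                            # Kalman gain
        x = x + K * (z - x)
        P = (1 - K) * P
    est.append(x)
est = np.array(est)

# The fused estimate tracks the true weight far better than the camera alone
print(np.abs(est[50:] - true_bw[50:]).mean())
```

The same precision-weighting logic extends to profile (vector) estimates, which is the setting the thesis actually addresses.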
46. Comments on two papers concerning estimation of the parameters of the Pareto distribution in the presence of outliers
- Author
-
David P. M. Scollnik
- Subjects
Statistics and Probability ,Estimation ,symbols.namesake ,Maximum likelihood ,Outlier ,Econometrics ,symbols ,Estimator ,State (functional analysis) ,Pareto distribution ,Mathematics - Abstract
In this paper, we examine and correct various results relating to the estimation of a Pareto distribution in the presence of outliers, under a model introduced by Dixit and Jabbari Nooghabi (2011) [1] and further studied by Dixit and Jabbari Nooghabi (2011) [2]. In particular, Dixit and Jabbari Nooghabi (2011) [2] state that the maximum likelihood estimators for the parameters appearing in their model do not exist. We show that these estimators can in fact exist, and we present and illustrate a method for determining them when they do. Two numerical illustrations using actual insurance data are included.
- Published
- 2013
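For context, the classical maximum-likelihood estimators for an outlier-free Pareto sample, which the model under discussion generalizes, can be sketched as follows (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, xm, n = 2.5, 1.0, 10_000          # shape, scale, sample size (assumed)
# numpy's pareto() draws Lomax variates, i.e. Pareto(alpha, 1) minus 1
x = xm * (1 + rng.pareto(alpha, n))      # Pareto(alpha, xm) sample

# Classical MLEs: the scale is the sample minimum, the shape comes from
# the mean of the log-spacings above it
xm_hat = x.min()
alpha_hat = n / np.log(x / xm_hat).sum()
print(xm_hat, alpha_hat)                 # close to the assumed (1.0, 2.5)
```

The paper's point concerns the outlier-contaminated version of this likelihood, where the existence of the MLEs is more delicate than in this clean-sample sketch.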
47. Generating community measures of food purchasing activities using store-level electronic grocery transaction records: an ecological study in Montreal, Canada
- Author
-
Hiroshi Mamiya, Alexandra M. Schmidt, Yu Ma, David L. Buckeridge, and Erica E. M. Moodie
- Subjects
Estimation ,Canada ,medicine.medical_specialty ,Nutrition and Dietetics ,Public health ,Commerce ,Public Health, Environmental and Occupational Health ,Medicine (miscellaneous) ,Ecological study ,Consumer Behavior ,Purchasing ,Food Preferences ,Geography ,Diabetes Mellitus, Type 2 ,Sample size determination ,Environmental health ,Scale (social sciences) ,medicine ,Humans ,Electronics ,Neighbourhood (mathematics) ,Transaction data ,Research Paper - Abstract
Objective: Geographic measurement of diets is generally not available for areas smaller than a national or provincial (state) scale, as existing nutrition surveys cannot achieve the sample sizes needed for acceptable statistical precision in small geographic units such as city subdivisions. Design: Using geocoded Nielsen grocery transaction data collected from supermarket, supercentre and pharmacy chains, combined with a gravity model that transforms store-level sales into area-level purchasing, we developed small-area public health indicators of food purchasing for neighbourhood districts. We generated area-level indicators measuring per-resident purchasing quantity for soda, diet soda, flavoured (sugar-added) yogurt and plain yogurt. We then provided an illustrative public health application of these indicators as covariates in an ecological spatial regression model to estimate spatially correlated small-area risk of type 2 diabetes mellitus (T2D) obtained from public health administrative data. Setting: Greater Montreal, Canada, in 2012. Participants: Neighbourhood districts (n = 193). Results: The indicator of flavoured yogurt had a positive association with neighbourhood-level risk of T2D (1·08, 95 % credible interval (CI) 1·02, 1·14), while that of plain yogurt had a negative association (0·93, 95 % CI 0·89, 0·96). The indicator of soda had an inconclusive association, and that of diet soda was excluded due to collinearity with soda. The addition of the indicators also improved the model fit of the T2D spatial regression (Watanabe–Akaike information criterion = 1765 with the indicators, 1772 without). Conclusion: Store-level grocery sales data can be used to reveal micro-scale geographic disparities and trends in food selection that would be masked by traditional survey-based estimation.
- Published
- 2021
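The gravity-model step above, transforming store-level sales into area-level purchasing, can be sketched with an invented distance-decay attraction function; coordinates, populations, sales figures and the decay form are all synthetic assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(4)
stores = rng.uniform(0, 10, (5, 2))      # store coordinates (arbitrary units)
areas = rng.uniform(0, 10, (8, 2))       # neighbourhood-district centroids
pop = rng.integers(1_000, 5_000, 8)      # district populations
sales = rng.uniform(10_000, 50_000, 5)   # per-store soda sales (units)

# Gravity weights: attraction = population x exponential distance decay
d = np.linalg.norm(areas[:, None] - stores[None, :], axis=2)
w = pop[:, None] * np.exp(-d / 2.0)
w /= w.sum(axis=0)                       # each store's sales fully allocated

area_purchases = w @ sales               # units attributed to each district
per_capita = area_purchases / pop        # the per-resident indicator
print(per_capita)
```

Normalizing the weights per store guarantees that total allocated purchases equal total recorded sales, so the indicator conserves the observed transaction volume.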
48. Income Shocks and Out-of-Pocket Health Care Spending: Implications for Single-Mother Families
- Author
-
Irina B. Grafova, Alan C. Monheit, and Rizie Kumar
- Subjects
Estimation ,Economics and Econometrics ,Original Paper ,Chronic conditions ,Social Psychology ,Poverty ,business.industry ,Family health care spending ,Middle income ,Single mothers ,respiratory system ,Health care ,Demographic economics ,Business ,Out-of-pocket spending ,Medical prescription ,Medical Expenditure Panel Survey ,Social policy - Abstract
We examine how out-of-pocket health care spending by single-mother families responds to income losses. We use eleven two-year panels of the Medical Expenditure Panel Survey for the period 2004–2015 and apply the correlated random effects estimation approach. We categorize income in relation to the federal poverty line (FPL): poor or near-poor (less than 125% of the FPL); low income (125 to 199% of the FPL); middle income (200 to 399% of the FPL); and high income (400% of the FPL or more). Income losses among high-income single-mother families lead to declines in out-of-pocket spending on office-based care and emergency room care of $119–$138 and $30–$60, respectively. Among middle-income single-mother families, income losses lead to a $30 decline in out-of-pocket spending on family emergency room care and a $45–$91 decline in the mother's out-of-pocket spending on prescription medications. Further research should examine whether these declines compromise the health status of single-mother family members.
- Published
- 2021
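The income bands defined above can be written as a small helper function; the FPL dollar amount in the usage example is illustrative, not taken from the paper.

```python
def income_category(income, fpl):
    """Classify family income relative to the federal poverty line (FPL),
    using the bands from the study."""
    ratio = income / fpl
    if ratio < 1.25:
        return "poor or near-poor"       # less than 125% of FPL
    if ratio < 2.00:
        return "low income"              # 125 to 199% of FPL
    if ratio < 4.00:
        return "middle income"           # 200 to 399% of FPL
    return "high income"                 # 400% of FPL or more

# Illustrative usage with a hypothetical FPL value
print(income_category(30_000, 26_500))
```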
49. Survey Paper On Bandwidth Estimation For Video Streaming
- Author
-
Sumant Deo
- Subjects
Estimation ,Computer science ,Real-time computing ,0202 electrical engineering, electronic engineering, information engineering ,Bandwidth (computing) ,020206 networking & telecommunications ,02 engineering and technology ,Video streaming - Published
- 2016
50. Review of the paper 'What if the 25th October 2011 event that stroke Cinque Terre (Liguria) had happened in Genova, Italy? Flooding scenarios, hazard mapping and damages estimation'
- Author
-
Maria Carmen Llasat
- Subjects
Estimation ,Hazard mapping ,Geography ,Event (relativity) ,Flooding (psychology) ,medicine ,Damages ,medicine.disease ,Stroke ,Cartography - Published
- 2016