2,232 results for "SAMPLING errors"
Search Results
52. Advanced endoscopic imaging for detection of Barrett's esophagus.
- Author
-
Zilberstein, Netanel, Godbee, Michelle, Mehta, Neal A., and Waxman, Irving
- Subjects
- *
BARRETT'S esophagus , *SAMPLING errors , *DIAGNOSIS , *ARTIFICIAL intelligence - Abstract
Barrett's esophagus (BE) is the precursor to esophageal adenocarcinoma (EAC), and is caused by chronic gastroesophageal reflux. BE can progress over time from metaplasia to dysplasia, and eventually to EAC. EAC is associated with a poor prognosis, often due to advanced disease at the time of diagnosis. However, if BE is diagnosed early, pharmacologic and endoscopic treatments can prevent progression to EAC. The current standard of care for BE surveillance utilizes the Seattle protocol. Unfortunately, a sizable proportion of early EAC and BE-related high-grade dysplasia (HGD) are missed due to poor adherence to the Seattle protocol and sampling errors. New modalities using artificial intelligence (AI) have been proposed to improve the detection of early EAC and BE-related HGD. This review will focus on AI technology and its application to various endoscopic modalities such as high-definition white light endoscopy, narrow-band imaging, and volumetric laser endomicroscopy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
53. Assessing sampling and retrieval errors of GPROF precipitation estimates over the Netherlands.
- Author
-
Bogerd, Linda, Leijnse, Hidde, Overeem, Aart, and Uijlenhoet, Remko
- Subjects
- *
SAMPLING errors , *ARITHMETIC mean , *RADAR meteorology , *RADIOMETERS , *RADAR , *DETECTORS , *BRIGHTNESS temperature - Abstract
The Goddard Profiling algorithm (GPROF) converts radiometer observations from Global Precipitation Measurement (GPM) constellation satellites into precipitation estimates. Typically, high-quality ground-based estimates serve as a reference to evaluate GPROF's performance. To provide a fair comparison, the ground-based estimates are often spatially aligned to GPROF. However, GPROF combines observations from various sensors and channels, each associated with a distinct footprint. Consequently, uncertainties related to the representativeness of the sampled areas are introduced in addition to the uncertainty when converting brightness temperatures into precipitation intensities. The exact contribution of resampling precipitation estimates, required to spatially and temporally align different resolutions when combining or comparing precipitation observations, to the overall uncertainty remains unknown. Here, we analyze the current performance of GPROF over the Netherlands during a 4-year period (2017–2020) while investigating the uncertainty related to sampling. The latter is done by simulating the reference precipitation as satellite footprints that vary in size, geometry, and applied weighting technique. Only GPROF estimates based on observations from the conical-scanning radiometers of the GPM constellation are used. The reference estimates are gauge-adjusted radar precipitation estimates from two ground-based weather radars of the Royal Netherlands Meteorological Institute (KNMI). Echo top heights (ETHs) retrieved from the same radars are used to classify the precipitation as shallow, medium, or deep. Spatial averaging methods (Gaussian weighting vs. arithmetic mean) minimally affect the magnitude of the precipitation estimates. Footprint size has a larger impact but cannot explain all discrepancies between the ground- and satellite-based estimates. Additionally, the discrepancies between GPROF and the reference are largest for low ETHs, while the relative bias between the different footprint sizes and implemented weighting methods increases with increasing ETHs. Lastly, our results do not show a clear difference between coastal and land simulations. We conclude that the uncertainty introduced by merging different channels and sensors cannot fully explain the discrepancies between satellite- and ground-based precipitation estimates. Hence, uncertainties related to the retrieval algorithm and environmental conditions are found to be more prominent than resampling uncertainties, in particular for shallow and light precipitation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
54. Meta‐analysis in allergy—Statistical recommendations.
- Author
-
Ordak, Michal, Canonica, Giorgio Walter, Paoletti, Giovanni, Brussino, Luisa, Carvalho, Daniela, and Di Bona, Danilo
- Subjects
- *
ALLERGIES , *STANDARD deviations , *SAMPLING errors , *STATISTICAL errors - Abstract
Meta-analysis is a statistical procedure that combines the results of multiple scientific studies on the same topic. It allows for a more precise estimation of effects and identification of general trends in research. However, there are common errors that researchers make when conducting meta-analyses. These include misinterpreting the I-squared value, not properly examining outliers, using the wrong measures for effect size, not accounting for correlated observations, including studies with small sample sizes, and failing to conduct sensitivity and asymmetry analyses. These errors can lead to incorrect conclusions and have serious consequences in scientific research and clinical practice. It is recommended to work with a biostatistician to ensure the validity and appropriateness of statistical analysis methods used in meta-analyses. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
55. Statistical Methods in Health Disparity Research: Edited by J. Sunil Rao, Boca Raton, CRC Press, 2023, 298 pp., £120.00, ISBN 9780367635121.
- Author
-
Zulfaidil, Rumadaul, Dewi Syitra, Bunga, Esther Yolandyne, Safril, Pudjaprasetya, Sri Redjeki, and Djohan, Warsoma
- Subjects
- *
HEALTH equity , *HEALTH policy , *MULTILEVEL models , *ETHNICITY , *SAMPLING errors - Abstract
"Statistical Methods in Health Disparity Research" by J. Sunil Rao is a comprehensive book that explores statistical methods for estimating health disparities. The book covers various approaches, including non-model and linear-based models, as well as the application of machine learning. It emphasizes the importance of validating findings and discusses the advantages and limitations of each method. The book is divided into seven chapters, which cover topics such as basic concepts of health disparities, estimation of health disparities, domain-specific estimates, causality, moderation, and mediation, machine learning-based approaches, health disparity estimation under a precision medicine paradigm, and extended topics. While the book has some structural inconsistencies, it offers a diverse range of topics and provides in-depth explanations of complex concepts. It is recommended for students, professionals in public health or health policy, researchers, and practitioners working with vulnerable populations. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
56. Simultaneous selection and incorporation of consistent external aggregate information.
- Author
-
Huang, Yunxiang, Huang, Chiung‐Yu, and Kim, Mi‐Ok
- Subjects
- *
OPTIMIZATION algorithms , *ASYMPTOTIC normality , *SAMPLING errors , *REGULATION of body weight , *WEIGHT gain - Abstract
Interest has grown in synthesizing participant-level data of a study with relevant external aggregate information. Several efficient and flexible procedures have been developed under the assumption that the internal study and the external sources concern the same population. This homogeneity condition, albeit commonly imposed, is hard to check because the external information is available only in aggregate form. Bias may be introduced when the assumption is violated. In this article, we propose a penalized likelihood approach that avoids undesirable bias by simultaneously selecting and synthesizing consistent external aggregate information. The proposed approach provides a general framework which incorporates consistent external information from heterogeneous study populations as long as the conditional distribution of the dependent variable under investigation is the same and differences in the independent variable distributions are properly accounted for via a semi-parametric density ratio model. The proposed approach also properly accounts for the sampling errors in the external information. A two-step estimator and an optimization algorithm are proposed for computation. We establish the selection and estimation consistency and the asymptotic normality of the two-step estimator. The proposed approach is illustrated with an analysis of gestational weight gain management studies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
57. RRGA-Net: Robust Point Cloud Registration Based on Graph Convolutional Attention.
- Author
-
Qian, Jian and Tang, Dewen
- Subjects
- *
POINT cloud , *CONVOLUTIONAL neural networks , *SAMPLING errors - Abstract
The problem of registering point clouds in scenarios with low overlap is explored in this study. Previous methodologies depended on having a sufficient number of repeatable keypoints to extract correspondences, making them less effective in partially overlapping environments. In this paper, a novel learning network is proposed to optimize correspondences among sparse keypoints. Firstly, a multi-layer channel sampling mechanism is suggested to enhance the information in point clouds; keypoints are filtered and fused at multi-layer resolutions to form patches through feature weight filtering. Moreover, a template matching module is devised, comprising a self-attention mapping convolutional neural network and a cross-attention network. This module aims to match contextual features and refine the correspondence in overlapping areas of patches, ultimately enhancing correspondence accuracy. Experimental results demonstrate the robustness of our model across various datasets, including ModelNet40, 3DMatch, 3DLoMatch, and KITTI. Notably, our method excels in low-overlap scenarios, showcasing superior performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
58. Homogenization of daily temperatures using covariates and statistical learning—The case of parallel measurements.
- Author
-
de Valk, Cees and Brandsma, Theo
- Subjects
- *
STATISTICAL learning , *OCEAN temperature , *PHYSICAL constants , *SAMPLING errors , *TEMPERATURE , *HUMIDITY - Abstract
A data-driven method based on generalized additive modelling (GAM) has been developed for homogenizing daily minimum and maximum temperature (TN, TX) series using parallel measurements and covariates. The method is applied to two coastal and two inland stations in the Netherlands. Between 1950 and 1972, these stations were relocated from cities to airports, accompanied by parallel measurements of at least 5 years at the old and new sites. Separating these parallel measurements into training and test data, the method compares numerous models involving covariates like the wind vector, cloudiness, specific humidity and sea surface temperature, and selects a model for each station. The resulting models offer an improvement over models based on temperature and season only: seasonal dependence is largely replaced by dependence on physical quantities. However, quantitatively, the impact is not large in the cases studied. One of the reasons might be that some covariates have only been measured at specific times not coinciding with the occurrences of the temperature minima or maxima. Additional benefits of the method are robustness and estimation of the sampling error variance of the homogenized daily temperature values. [ABSTRACT FROM AUTHOR]
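The covariate-based correction idea can be sketched compactly. Below is a minimal illustration using scikit-learn's SplineTransformer as a stand-in for a full GAM; the covariates, coefficients, and synthetic data are invented for illustration and are not the authors' setup or code.

```python
# Sketch: learn a smooth covariate-based site correction from parallel
# measurements, in the spirit of GAM-based homogenization (illustrative only).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
# Hypothetical covariates during the parallel-measurement period
wind_u = rng.normal(0, 3, n)                              # zonal wind component
cloud = rng.uniform(0, 8, n)                              # cloudiness (okta)
tx_old = 15 + 8 * np.sin(rng.uniform(0, 2 * np.pi, n))    # TX at old site
# New-minus-old site difference depends smoothly on covariates plus noise
diff = 0.3 - 0.05 * wind_u + 0.04 * cloud + rng.normal(0, 0.4, n)

X = np.column_stack([wind_u, cloud, tx_old])
# Per-feature spline bases + linear fit = an additive (GAM-like) model
model = make_pipeline(SplineTransformer(degree=3, n_knots=6), LinearRegression())
model.fit(X, diff)

tx_homogenized = tx_old + model.predict(X)   # shift old series to the new site
resid = diff - model.predict(X)
print(f"residual std (proxy for sampling error): {resid.std():.3f}")
```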
- Published
- 2023
- Full Text
- View/download PDF
59. Gastro-Esophageal Junction Precancerosis: Histological Diagnostic Approach and Pathogenetic Insights.
- Author
-
Giacometti, Cinzia, Gusella, Anna, and Cassaro, Mauro
- Subjects
- *
ESOPHAGUS , *ADENOCARCINOMA , *GENETICS , *PATHOGENESIS , *INFLAMMATION , *WORK , *MOLECULAR pathology , *APOPTOSIS , *BARRETT'S esophagus , *MEDICAL protocols , *METAPLASIA , *GASTROESOPHAGEAL reflux , *SEVERITY of illness index , *SAMPLING errors , *CELL cycle , *CELLULAR signal transduction , *EXPERIENTIAL learning , *GENES , *GENOMICS , *EPITHELIAL cells , *HISTOLOGY , *GASTRIC mucosa , *ESOPHAGEAL tumors , *EPIGENOMICS , *DISEASE risk factors , *DISEASE complications - Abstract
Simple Summary: A diagnosis of Barrett's esophagus (BE) requires the macroscopic visualization of gastric-appearing mucosa in the esophagus and the identification of intestinal metaplasia on histologic examination. Histologic diagnosis of BE dysplasia can be challenging due to sampling error, pathologists' experience, interobserver variation, and difficulty in histologic interpretation: all these problems complicate patient management. In intestinal metaplasia, which occurs because of chronic gastro-esophageal reflux disease (GERD), the squamous epithelium converts to columnar epithelium, which is initially of the cardia type and devoid of goblet cells; it later develops goblet cell metaplasia and eventually dysplasia, which develops and progresses to adenocarcinoma because of the accumulation of multiple genetic and epigenetic alterations. Therefore, this review aims to provide an up-to-date and clear diagnostic approach to Barrett's esophagus and an overview of the pathogenetic and molecular mechanisms of dysplasia development. Barrett's esophagus (BE) was initially defined in the 1950s as the visualization of gastric-like mucosa in the esophagus. Over time, the definition has evolved to include the identification of goblet cells, which confirm the presence of intestinal metaplasia within the esophagus. Chronic gastro-esophageal reflux disease (GERD) is a significant risk factor for adenocarcinoma of the esophagus, as intestinal metaplasia can develop due to GERD. The development of adenocarcinomas related to BE progresses in sequence from inflammation to metaplasia, dysplasia, and ultimately carcinoma. In the presence of GERD, the squamous epithelium changes to columnar epithelium, which initially lacks goblet cells, but later develops goblet cell metaplasia and eventually dysplasia. The accumulation of multiple genetic and epigenetic alterations leads to the development and progression of dysplasia. The diagnosis of BE requires the identification of intestinal metaplasia on histologic examination, which has thus become an essential tool both in the diagnosis and in the assessment of dysplasia's presence and degree. The histologic diagnosis of BE dysplasia can be challenging due to sampling error, pathologists' experience, interobserver variation, and difficulty in histologic interpretation: all these problems complicate patient management. The development and progression of Barrett's esophagus (BE) depend on various molecular events that involve changes in cell-cycle regulatory genes, apoptosis, cell signaling, and adhesion pathways. In advanced stages, there are widespread genomic abnormalities with losses and gains in chromosome function, and DNA instability. This review aims to provide an updated and comprehensible diagnostic approach to BE based on the most recent guidelines available in the literature, and an overview of the pathogenetic and molecular mechanisms of its development. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
60. Neutrosophic Estimators in Two-Phase Survey Sampling.
- Author
-
Yadav, Vinay Kumar and Prasad, Shakti
- Subjects
- *
AMBIGUITY , *SAMPLING errors , *PARAMETER estimation , *GAUSSIAN distribution , *EARTH sciences - Abstract
Point estimates in survey sampling only provide a single value for the parameter being studied and are consequently vulnerable to changes caused by sampling error. In order to cope with ambiguity, indeterminacy, and uncertainty in data, Florentin Smarandache's neutrosophic technique, which generates interval estimates with high probability, offers a helpful solution. To estimate the neutrosophic population mean of the studied variable, this research provides new neutrosophic factor type exponential estimators using well-known neutrosophic auxiliary parameters. For the first-degree of approximation, the study derives the bias and Mean Squared Error (MSE) of the proposed estimators. Characterising constants have neutrosophic optimal values, and for these optimum values, the least value of the neutrosophic MSE is obtained. Notably, the proposed neutrosophic estimators outperform the corresponding adapted classical estimators, whose estimated intervals fall under the minimal MSE and lie within the estimated intervals of the proposed neutrosophic estimators. The theoretical results are supported by empirical data from real data sets acquired by the "Ministry of Earth Sciences" and the "India Meteorological Department (IMD), Pune, India," as well as simulated data sets produced via the Neutrosophic Normal Distribution. The estimator with the lowest MSE is suggested for practical applications across many domains, providing greater accuracy and reliability in parameter estimation when utilising the neutrosophic methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
61. A sequential sampling approach for discriminating log-normal, Weibull, and log-logistic distributions.
- Author
-
Paul, Biplab, De, Shyamal K., and Kundu, Debasis
- Subjects
- *
DISCRIMINATION against overweight persons , *ERROR probability , *SAMPLE size (Statistics) , *SAMPLING errors - Abstract
Log-normal, Weibull, and log-logistic distributions are widely used in modeling nonnegative skewed data. We develop sequential methodologies to discriminate between any two of these three distributions as well as to discriminate among all three. These methods are extended to discriminate M (≥ 2) distributions from location-scale, log-location-scale and regular families of distributions. Discriminating three or more distributions having similar shapes often requires a large sample size. Sequential procedures allow early stopping, which in turn reduces the sample size needed for discrimination. The proposed methods yield high probabilities of correct selection that are shown to converge to 1 asymptotically. The asymptotic behavior of the expected sample size and error probabilities is studied as the stopping boundaries tend to infinity. An extensive simulation study validates the finite-sample performance of the proposed procedures, which require significantly fewer samples on average. These methods are applied to three benchmark datasets on cancer trials and are shown to select the correct model with high probability. [ABSTRACT FROM AUTHOR]
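The core (non-sequential) discrimination step is a comparison of maximized log-likelihoods across the candidate families; the paper wraps this step in a sequential stopping rule. Here is a minimal sketch on simulated data, not the benchmark cancer-trial datasets.

```python
# Sketch: pick the family with the largest maximized log-likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.5, sigma=0.6, size=200)   # truth: log-normal

candidates = {
    "log-normal": stats.lognorm,
    "Weibull": stats.weibull_min,
    "log-logistic": stats.fisk,     # scipy's name for the log-logistic
}
loglik = {}
for name, dist in candidates.items():
    params = dist.fit(x, floc=0)    # fix location at 0 for positive data
    loglik[name] = np.sum(dist.logpdf(x, *params))

print(max(loglik, key=loglik.get), loglik)
```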
- Published
- 2023
- Full Text
- View/download PDF
62. On the mathematical theory of ensemble (linear-Gaussian) Kalman–Bucy filtering.
- Author
-
Bishop, Adrian N. and Del Moral, Pierre
- Subjects
- *
KALMAN filtering , *FILTERS & filtration , *GRANULAR flow , *CONTINUOUS-time filters , *SAMPLING errors - Abstract
The purpose of this review is to present a comprehensive overview of the theory of ensemble Kalman–Bucy filtering for continuous-time, linear-Gaussian signal and observation models. We present a system of equations that describe the flow of individual particles and the flow of the sample covariance and the sample mean in continuous-time ensemble filtering. We consider these equations and their characteristics in a number of popular ensemble Kalman filtering variants. Given these equations, we study their asymptotic convergence to the optimal Bayesian filter. We also study in detail some non-asymptotic time-uniform fluctuation, stability, and contraction results on the sample covariance and sample mean (or sample error track). We focus on testable signal/observation model conditions, and we accommodate fully unstable (latent) signal models. We discuss the relevance and importance of these results in characterising the filter's behaviour, e.g., its signal-tracking performance, and we contrast these results with those in classical studies of stability in Kalman–Bucy filtering. We also provide a novel (and negative) result proving that the bootstrap particle filter cannot track even the most basic unstable latent signal, in contrast with the ensemble Kalman filter (and the optimal filter). We provide intuition for how the main results extend to nonlinear signal models and comment on their consequences for some typical filter behaviours seen in practice, e.g., catastrophic divergence. [ABSTRACT FROM AUTHOR]
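For orientation, a single analysis step of a perturbed-observation ensemble Kalman filter for a linear-Gaussian observation model can be sketched as follows; the dimensions and matrices are illustrative and not taken from the review.

```python
# Sketch of one ensemble Kalman analysis step (perturbed-observation variant).
import numpy as np

rng = np.random.default_rng(2)
d, m, N = 4, 2, 50                      # state dim, obs dim, ensemble size
H = rng.normal(size=(m, d))             # linear observation operator
R = 0.5 * np.eye(m)                     # observation noise covariance
X = rng.normal(size=(d, N))             # forecast ensemble (columns = members)
y = rng.normal(size=m)                  # observation

xbar = X.mean(axis=1, keepdims=True)
A = (X - xbar) / np.sqrt(N - 1)         # normalized ensemble anomalies
P = A @ A.T                             # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain from samples

# Perturbed observations keep the analysis ensemble spread consistent
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
Xa = X + K @ (Y - H @ X)                # analysis ensemble
print(Xa.mean(axis=1))                  # analysis (sample) mean
```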
- Published
- 2023
- Full Text
- View/download PDF
63. Variable-sample method for the computation of stochastic Nash equilibrium.
- Author
-
Zhang, Dali, Ji, Lingyun, Zhao, Sixiang, and Wang, Lizhi
- Subjects
- *
NASH equilibrium , *SAMPLING errors , *MONTE Carlo method , *SAMPLE size (Statistics) , *POINT set theory - Abstract
This article proposes a variable-sample method for the computation of stochastic stable Nash equilibrium, in which the objective functions are approximated, in each iteration, by the sample average approximation with different sample sizes. We start by investigating the contraction mapping properties under the variable-sample framework. Under some moderate conditions, it is shown that the accumulation points attained from the algorithm satisfy the first-order equilibrium conditions with probability one. Moreover, we use the asymptotic unbiasedness condition to prove the convergence of the accumulation points of the algorithm into the set of fixed points and prove the finite termination property of the algorithm. We also verify that the algorithm converges to the equilibrium even if the optimization problems in each iteration are solved inexactly. In the numerical tests, we comparatively analyze the accuracy error and the precision error of the estimators with different sample size schedules with respect to the sampling loads and the computational times. The results validate the effectiveness of the algorithm. [ABSTRACT FROM AUTHOR]
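The variable-sample idea, redrawing a fresh, larger sample at each iteration and applying a sample-average step, can be illustrated on a toy two-player quadratic game; this is a sketch of the general scheme under invented cost functions, not the authors' algorithm or test problems.

```python
# Sketch: variable-sample best-response iteration with a growing sample size.
import numpy as np

rng = np.random.default_rng(3)
c = 0.5                                  # coupling; |c/2| < 1 => contraction
theta = np.array([1.0, -2.0])            # true means of the random costs
x = np.zeros(2)

N = 8
for k in range(12):
    N = int(N * 1.5)                     # growing sample-size schedule N_k
    xi = rng.normal(theta, 1.0, size=(N, 2))
    mu = xi.mean(axis=0)                 # SAA of E[xi] with N_k fresh samples
    # Best responses for f_i = E[(x_i - xi_i)^2] + c * x_i * x_{-i}
    x = np.array([mu[0] - c * x[1] / 2, mu[1] - c * x[0] / 2])

print(x)   # approaches the equilibrium of the expected game as N_k grows
```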
- Published
- 2023
- Full Text
- View/download PDF
64. Multiple Change Point Detection in Reduced Rank High Dimensional Vector Autoregressive Models.
- Author
-
Bai, Peiliang, Safikhani, Abolfazl, and Michailidis, George
- Subjects
- *
CHANGE-point problems , *AUTOREGRESSIVE models , *LOW-rank matrices , *VECTOR autoregression model , *SPARSE matrices , *SAMPLING errors - Abstract
We study the problem of detecting and locating change points in high-dimensional Vector Autoregressive (VAR) models, whose transition matrices exhibit low rank plus sparse structure. We first address the problem of detecting a single change point using an exhaustive search algorithm and establish a finite sample error bound for its accuracy. Next, we extend the results to the case of multiple change points that can grow as a function of the sample size. Their detection is based on a two-step algorithm, wherein, in the first step, an exhaustive search for a candidate change point is employed over overlapping windows, and subsequently a backward elimination procedure is used to screen out redundant candidates. The two-step strategy yields consistent estimates of the number and the locations of the change points. To reduce computation cost, we also investigate conditions under which a surrogate VAR model with a weakly sparse transition matrix can accurately estimate the change points and their locations for data generated by the original model. This work also addresses and resolves a number of novel technical challenges posed by the nature of the VAR models under consideration. The effectiveness of the proposed algorithms and methodology is illustrated on both synthetic and two real datasets. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
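A minimal sketch of the exhaustive single-change-point search, shown for a simple mean-shift model rather than the paper's reduced-rank VAR setting:

```python
# Sketch: exhaustive search for one change point by minimizing split SSE.
import numpy as np

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)])

def sse(seg):
    return np.sum((seg - seg.mean()) ** 2)

n = len(y)
# Evaluate the split cost at every admissible candidate point
costs = [sse(y[:t]) + sse(y[t:]) for t in range(10, n - 10)]
t_hat = 10 + int(np.argmin(costs))
print("estimated change point:", t_hat)   # near the true value 120
```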
- Published
- 2023
- Full Text
- View/download PDF
65. Convergence acceleration of preconditioned conjugate gradient solver based on error vector sampling for a sequence of linear systems.
- Author
-
Iwashita, Takeshi, Ikehara, Kota, Fukaya, Takeshi, and Mifune, Takeshi
- Subjects
- *
CONJUGATE gradient methods , *SAMPLING errors , *EIGENVALUES , *EIGENVECTORS , *LINEAR systems - Abstract
In this article, we focus on solving a sequence of linear systems that have identical (or similar) coefficient matrices. For this type of problem, we investigate subspace correction (SC) and deflation methods, which use an auxiliary matrix (subspace) to accelerate the convergence of the iterative method. In practical simulations, these acceleration methods typically work well when the range of the auxiliary matrix contains eigenspaces corresponding to small eigenvalues of the coefficient matrix. We develop a new algebraic auxiliary matrix construction method based on error vector sampling in which eigenvectors with small eigenvalues are efficiently identified in the solution process. We use the generated auxiliary matrix for convergence acceleration in the following solution step. Numerical tests confirm that both SC and deflation methods with the auxiliary matrix can accelerate the solution process of the iterative solver. Furthermore, we examine the applicability of our technique to the estimation of the condition number of the coefficient matrix. We also present the algorithm of the preconditioned conjugate gradient method with condition number estimation. [ABSTRACT FROM AUTHOR]
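The subspace-correction idea can be sketched as a coarse solve over an auxiliary basis W followed by a standard iterative solve; here W is a random orthonormal basis for illustration, whereas the paper constructs it from sampled error vectors so that it spans eigenvectors with small eigenvalues.

```python
# Sketch: subspace-corrected initial guess, then conjugate gradients.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(5)
n, k = 200, 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)                     # SPD coefficient matrix
b = rng.normal(size=n)

W = np.linalg.qr(rng.normal(size=(n, k)))[0]    # auxiliary subspace basis
coarse = np.linalg.solve(W.T @ A @ W, W.T @ b)  # coarse (subspace) solve
x0 = W @ coarse                                 # subspace-corrected start

x, info = cg(A, b, x0=x0)
print(info, np.linalg.norm(A @ x - b))          # 0 means converged
```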
- Published
- 2023
- Full Text
- View/download PDF
66. Optimization of sampling conditions to minimize sampling errors of both PM2.5 mass and its semi-volatile inorganic ion concentrations.
- Author
-
Thi-Cuc Le, Barhate, Pallavi Gajanan, Kai-Jing Zhen, Mishra, Manisha, Pui, David Y. H., and Chuen-Jinn Tsai
- Subjects
- *
SAMPLING errors , *PRESSURE drop (Fluid dynamics) , *HUMIDITY , *INORGANIC compounds , *HUMIDITY control , *IONS , *BLOOD substitutes - Abstract
The accurate measurement of PM2.5 and its inorganic matters (IMs) is crucial for compliance monitoring and understanding particle formation. However, semi-volatile IMs (SVIMs) like NH4+, NO3-, and Cl- tend to evaporate from particles, causing sampling artifacts. The evaporation loss occurs due to many factors, making quantitative prediction difficult. This study aimed to investigate the evaporation loss of SVIMs in PM2.5 under different sampling conditions. In the field tests, when a normal single Teflon filter (STF) sampler, which is like a Federal Reference Method (FRM) sampler, was used to sample PM2.5 at ambient conditions, a significant SVIM evaporation loss was observed, resulting in negative biases for total IMs (-25.68 ± 3.25%) and PM2.5 concentrations (-9.87 ± 4.27%). But if PM2.5 was sampled by a chilled Teflon filter sampler (CTF) at 4 °C following aerosol dehumidification, so that relative humidity (RH) was controlled to within the 10-20% range (RHd), evaporation loss was minimized with a bias of <±10% for both total IMs and PM2.5, based on the reference data. When RHd is below 10%, both IMs and PM2.5 are under-measured, but only PM2.5 is over-measured when RHd is >20%. A model considering predictable saturation ratios for NH4+, NO3-, and Cl- under various pressure drop, temperature and RH conditions was developed to accurately predict the actual concentrations of PM2.5 and its SVIMs for the STF. Additionally, the ISORROPIA-II model predicted SVIMs effectively for the CTF. In summary, using the CTF at optimized sampling conditions can achieve accurate measurement of both SVIM and PM2.5 concentrations simultaneously. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
67. Prediction of cytology–histology discrepancy when Bethesda cytology reports benign results for thyroid nodules in women: with special emphasis on pregnancy.
- Author
-
Firat, Aysun and Unal, Ethem
- Subjects
- *
THYROID nodules , *CYTOLOGY , *THYROIDECTOMY , *HISTOLOGY , *NEEDLE biopsy , *SAMPLING errors - Abstract
Objectives: The benign category of the Bethesda classification is generally well known to carry a false-negative rate of 0–3%. The current study was designed to investigate the rate of false-negative cytology in patients who underwent thyroidectomy for presumably benign thyroid diseases. Predictive risk factors for false results and malignancy were evaluated along with cytology–histology discrepant cases. Materials and methods: Females who underwent thyroidectomy between May 2014 and December 2022 were included. Demographics, ultrasound (US) features, fine-needle aspiration (FNA) diagnosis, surgical indications and outcomes, final histology reports, risk factors, and malignancy rate were recorded. Cytology–histology discrepant cases were further evaluated for interpretation errors and risk factors. Statistical analyses were performed using Fisher's exact and Mann–Whitney U tests. Results: Of 581 women with a benign thyroid disease who underwent thyroidectomy, 91 were diagnosed with incidental carcinoma (15.6%) and most were T1a (4.9 ± 2.7 mm, 95.6%). Final histology reports revealed mostly papillary carcinoma (93.4%). Predictors of malignancy such as age, family history, previous radiation exposure, and iodine-deficient diet did not help in risk stratification (p > 0.05, for each). However, FNA taken during pregnancy was determined as a risk factor (n = 7, 7.6%, p < 0.05) since it may cause a delay in diagnosis. Cytology–histology discrepant cases were seen to be mostly due to sampling errors (45%, p < 0.05), followed by misinterpretations (37.3%, p < 0.05). There was no reason for discrepancy in 17.5%, and this was linked to the inherent nature of thyroid nodules with overlapping cytologic features. The best identifiable risk factor for misinterpretation was pregnancy as well (n = 5, 14.7%, p < 0.05). Conclusions: The risk of malignancy in a presumably benign thyroid disease should not be ignored. Radiology–cytology correlation by an experienced dedicated team may help in decreasing sampling errors. Physiologic changes caused by pregnancy may shade malignant transformation in thyrocytes, and it would be appropriate to be cautious about benign FNA taken during this period. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
68. Integrated community models: A framework combining multispecies data sources to estimate the status, trends and dynamics of biodiversity.
- Author
-
Zipkin, Elise F., Doser, Jeffrey W., Davis, Courtney L., Leuenberger, Wendy, Ayebare, Samuel, and Davis, Kayla L.
- Subjects
- *
NUMBERS of species , *ENDANGERED species , *DATA integration , *SPECIES distribution , *SAMPLING errors , *BIODIVERSITY - Abstract
Data deficiencies among rare or cryptic species preclude assessment of community-level processes using many existing approaches, limiting our understanding of the trends and stressors for large numbers of species. Yet evaluating the dynamics of whole communities, not just common or charismatic species, is critical to understanding and predicting the responses of biodiversity to ongoing environmental pressures. A recent surge in both public science and government-funded data collection efforts has led to a wealth of biodiversity data. However, these data collection programmes use a wide range of sampling protocols (from unstructured, opportunistic observations of wildlife to well-structured, design-based programmes) and record information at a variety of spatiotemporal scales. As a result, available biodiversity data vary substantially in quantity and information content, which must be carefully reconciled for meaningful ecological analysis. Hierarchical modelling, including single-species integrated models and hierarchical community models, has improved our ability to assess and predict biodiversity trends and processes. Here, we highlight the emerging 'integrated community modelling' framework that combines both data integration and community modelling to improve inferences on species- and community-level dynamics. We illustrate the framework with a series of worked examples. Our three case studies demonstrate how integrated community models can be used to extend the geographic scope when evaluating species distributions and community-level richness patterns; discern population and community trends over time; and estimate demographic rates and population growth for communities of sympatric species. We implemented these worked examples using multiple software methods through the R platform via packages with formula-based interfaces and through development of custom code in JAGS, NIMBLE and Stan. Integrated community models provide an exciting approach to model biological and observational processes for multiple species using multiple data types and sources simultaneously, thus accounting for uncertainty and sampling error within a unified framework. By leveraging the combined benefits of both data integration and community modelling, integrated community models can produce valuable information about both common and rare species as well as community-level dynamics, allowing for holistic evaluation of the effects of global change on biodiversity. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
69. Common Sampling Errors in Research Studies.
- Author
-
Spolarich, Ann E.
- Subjects
- *
EXPERIMENTAL design , *SAMPLE size (Statistics) , *ORAL hygiene , *SOCIAL media , *SAMPLING errors , *SURVEYS , *HEALTH , *INFORMATION resources , *DENTAL research - Abstract
Proper sample selection is based on the study purpose, research question(s), and study design. Investigators must use care to select a sample population that is representative of the source and target populations. Well-defined inclusion and exclusion criteria serve as guidance when screening potential candidates for eligibility for participation in a study. Sampling and non-sampling errors may influence study outcomes and the generalizability of results. The purpose of this short report is to review common sampling errors made when designing a study and when reporting study outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
70. Correcting prevalence estimation for biased sampling with testing errors.
- Author
-
Zhou, Lili, Díaz‐Pachón, Daniel Andrés, Zhao, Chen, Rao, J. Sunil, and Hössjer, Ola
- Subjects
- *
SAMPLING errors , *ERROR rates , *COVID-19 - Abstract
Sampling for prevalence estimation of infection is subject to bias by both oversampling of symptomatic individuals and error-prone tests. This results in naïve estimators of prevalence (i.e., the proportion of observed infected individuals in the sample) that can be very far from the true proportion of infected individuals. In this work, we present a method of prevalence estimation that reduces both the effect of bias due to testing errors and oversampling of symptomatic individuals, eliminating it altogether in some scenarios. Moreover, this procedure considers stratified errors in which tests have different error rate profiles for symptomatic and asymptomatic individuals. This results in easily implementable algorithms, for which code is provided, that produce better prevalence estimates than other methods (in terms of reducing and/or removing bias), as demonstrated by formal results, simulations, and on COVID-19 data from the Israeli Ministry of Health. [ABSTRACT FROM AUTHOR]
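As background, the classical Rogan–Gladen correction inverts the effect of imperfect sensitivity and specificity on an observed prevalence; the paper's method extends this kind of correction to biased (symptom-driven) sampling and stratified error rates, so the sketch below is the simpler textbook baseline with made-up numbers.

```python
# Sketch: Rogan-Gladen correction of an observed prevalence for test errors.
import numpy as np

def corrected_prevalence(p_obs, sensitivity, specificity):
    """Invert E[p_obs] = se * p + (1 - sp) * (1 - p) for the true p."""
    p = (p_obs + specificity - 1) / (sensitivity + specificity - 1)
    return float(np.clip(p, 0.0, 1.0))   # clip to the valid range

print(corrected_prevalence(p_obs=0.12, sensitivity=0.85, specificity=0.97))
```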
- Published
- 2023
- Full Text
- View/download PDF
71. Handling missing within‐study correlations in the evaluation of surrogate endpoints.
- Author
-
Collier, Willem, Haaland, Benjamin, Inker, Lesley, and Greene, Tom
- Subjects
- *
SAMPLING errors , *PERCEIVED quality , *STATISTICAL models - Abstract
Rigorous evaluation of surrogate endpoints is performed in a trial‐level analysis in which the strength of the association between treatment effects on the clinical and surrogate endpoints is quantified across a collection of previously conducted trials. To reduce bias in measures of the performance of the surrogate, the statistical model must account for the sampling error in each trial's estimated treatment effects and their potential correlation. Unfortunately, these within‐study correlations can be difficult to obtain, especially for meta‐analysis of published trial results where individual patient data is not available. As such, these terms are frequently partially or completely missing in the analysis. We show that improper handling of these missing terms can meaningfully alter the perceived quality of the surrogate and we introduce novel strategies to handle the missingness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
72. An application of stereo matching algorithm based on transfer learning on robots in multiple scenes.
- Author
-
Bi, Yuanwei, Li, Chuanbiao, Tong, Xiangrong, Wang, Guohui, and Sun, Haiwei
- Subjects
- *
BINOCULAR vision , *ROBOT vision , *ROBOTS , *SAMPLING errors , *ALGORITHMS , *AUTONOMOUS robots , *MOBILE robots - Abstract
Robot vision technology based on binocular vision holds tremendous potential for development in various fields, including 3D scene reconstruction, target detection, and autonomous driving. However, current binocular vision methods used in robotics engineering have limitations such as high costs, complex algorithms, and low reliability of the generated disparity map in different scenes. To overcome these challenges, a cross-domain stereo matching algorithm for binocular vision based on transfer learning is proposed in this paper, named the Cross-Domain Adaptation and Transfer Learning Network (Ct-Net), which has shown valuable results in multiple robot scenes. First, this paper introduces a General Feature Extractor to extract rich general feature information for domain-adaptive stereo matching tasks. Then, a feature adapter is used to adapt the general features to the stereo matching network. Furthermore, a Domain Adaptive Cost Optimization Module is designed to optimize the matching cost. A disparity score prediction module was also embedded to adaptively adjust the search range of disparity and optimize the cost distribution. The overall framework was trained using a phased strategy, and ablation experiments were conducted to verify the effectiveness of the training strategy. Compared with the prototype PSMNet on the KITTI 2015 benchmark, the 3PE-fg of Ct-Net in all regions and non-occluded regions decreased by 19.3 and 21.1% respectively; meanwhile, on the Middlebury dataset, the proposed algorithm improves the sample error rate by at least 28.4% (on the Staircase sample). The quantitative and qualitative results obtained from Middlebury, Apollo, and other datasets demonstrate that Ct-Net significantly improves the cross-domain performance of stereo matching. Stereo matching experiments in real-world scenes have shown that it can effectively address visual tasks in multiple scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
73. Smaller-the-better-type six sigma product index.
- Author
-
Chen, Kuen-Suan, Hsieh, Tsung-Hua, Yu, Chun-Min, and Yao, Kai-Chao
- Subjects
- *
SIX Sigma , *MACHINE tool industry , *SAMPLING errors , *MACHINE tools , *CONFIDENCE intervals , *COMMERCIAL product testing - Abstract
According to several studies, many important machine tool parts have essential smaller-the-better-type quality characteristics. The six sigma quality index of the smaller-the-better type offers accurate measurement of the process yield and the six sigma quality level. In this paper, we first proposed a six sigma product index by integrating all evaluation indicators for products that contain several quality characteristics of the smaller-the-better type. Next, we derived the confidence interval of this six sigma product index and developed an evaluation model for product quality. When a product passes the evaluation of this model, not only can it be guaranteed that the product reaches the required quality level, but also a high rate of product yield can be ensured. In addition, we also created a product improvement testing model, which can avoid missing opportunities for improvement in the process to ensure improvement effects. This complete evaluation and improvement model is applicable to the entire machine tool industry chain. It can not only increase the product value of the machine tool industry chain but also decrease environmental pollution caused by rework or scrap, which is beneficial to companies to enhance their image of fulfilling social responsibilities. Apart from the above advantages, the model formed in this paper is based on confidence intervals, thereby reducing the chance of misjudgment resulting from sampling error. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
74. Monte Carlo drift correction – quantifying the drift uncertainty of global climate models.
- Author
-
Grandey, Benjamin S., Koh, Zhi Yang, Samanta, Dhrubajyoti, Horton, Benjamin P., Dauwels, Justin, and Chew, Lock Yue
- Subjects
- *
CLIMATE change models , *THERMAL expansion , *TIME series analysis , *SAMPLING errors , *RESEARCH personnel - Abstract
Global climate models are susceptible to drift, causing spurious trends in output variables. Drift is often corrected using data from a control simulation. However, internal climate variability within the control simulation introduces uncertainty to the drift correction process. To quantify this drift uncertainty, we develop a probabilistic technique: Monte Carlo drift correction (MCDC). MCDC samples the standard error associated with drift in the control time series. We apply MCDC to an ensemble of global climate models from the Coupled Model Intercomparison Project Phase 6 (CMIP6). We find that drift correction partially addresses a problem related to drift: energy leakage. Nevertheless, the energy balance of several models remains suspect. We quantify the drift uncertainty of global quantities associated with the Earth's energy balance and thermal expansion of the ocean. When correcting drift in a cumulatively integrated energy flux, we find that it is preferable to integrate the flux before correcting the drift: an alternative method would be to correct the bias before integrating the flux, but this alternative method amplifies the drift uncertainty. Assuming that drift is linear likely leads to an underestimation of drift uncertainty. Time series with weak trends may be especially susceptible to drift uncertainty: for historical thermosteric sea level rise since the 1850s, the drift uncertainty can range from 3 to 24 mm, which is of comparable magnitude to the impact of omitting volcanic forcing in control simulations. Derived coefficients – such as the ocean's expansion efficiency of heat – can also be susceptible to drift uncertainty. When evaluating and analysing global climate model data that are susceptible to drift, researchers should consider drift uncertainty. [ABSTRACT FROM AUTHOR]
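A minimal sketch of the Monte Carlo drift-correction idea, assuming a linear drift for simplicity (an assumption the abstract notes likely understates the uncertainty): fit the control-run trend, then propagate the slope's standard error by sampling many plausible drifts. The series below are synthetic.

```python
# Sketch: sample the drift slope's standard error to get a trend uncertainty.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(500)                                   # model years
control = 0.002 * t + rng.normal(0, 0.1, t.size)     # drifting control run
forced = 0.002 * t + 0.001 * t + rng.normal(0, 0.1, t.size)  # drift + signal

# OLS drift estimate and its standard error from the control run
X = np.column_stack([np.ones_like(t), t])
beta, res, *_ = np.linalg.lstsq(X, control, rcond=None)
sigma2 = res[0] / (t.size - 2)
se_slope = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])

# Monte Carlo: one drift-corrected series per sampled slope
slopes = rng.normal(beta[1], se_slope, size=1000)
corrected = forced[None, :] - slopes[:, None] * t[None, :]
trend_samples = [np.polyfit(t, c, 1)[0] for c in corrected]
print(np.percentile(trend_samples, [2.5, 97.5]))     # drift-uncertainty range
```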
- Published
- 2023
- Full Text
- View/download PDF
75. BRASS: Permutation methods for binary traits in genetic association studies with structured samples.
- Author
-
Mbatchou, Joelle, Abney, Mark, and McPeek, Mary Sara
- Subjects
- *
BRASS , *GENETIC algorithms , *GENOME-wide association studies , *PERMUTATIONS , *FALSE positive error , *DOGS , *SAMPLING errors - Abstract
In genetic association analysis of complex traits, permutation testing can be a valuable tool for assessing significance when the distribution of the test statistic is unknown or not well-approximated. This commonly arises, e.g., in tests of gene-set, pathway or genome-wide significance, or when the statistic is formed by machine learning or data adaptive methods. Existing applications include eQTL mapping, association testing with rare variants, inclusion of admixed individuals in genetic association analysis, and epistasis detection among many others. For genetic association testing in samples with population structure and/or relatedness, use of naive permutation can lead to inflated type 1 error. To address this in quantitative traits, the MVNpermute method was developed. However, for association mapping of a binary trait, the relationship between the mean and variance makes both naive permutation and the MVNpermute method invalid. We propose BRASS, a permutation method for binary traits, for use in association mapping in structured samples. In addition to modeling structure in the sample, BRASS allows for covariates, ascertainment and simultaneous testing of multiple markers, and it accommodates a wide range of test statistics. In simulation studies, we compare BRASS to other permutation and resampling-based methods in a range of scenarios that include population structure, familial relatedness, ascertainment and phenotype model misspecification. In these settings, we demonstrate the superior control of type 1 error by BRASS compared to the other 6 methods considered. We apply BRASS to assess genome-wide significance for association analyses in domestic dog for elbow dysplasia (ED) and idiopathic epilepsy (IE). For both traits we detect previously identified associations, and in addition, for ED, we detect significant association with a SNP on chromosome 35 that was not detected by previous analyses, demonstrating the potential of the method. Author summary: To determine whether genetic association with a trait is significant, permutation methods are an attractive and popular approach when analytic methods based on distributional assumptions are not available, e.g., when applying machine learning or data adaptive methods, or when performing a multiple testing correction, e.g., to assess region-wide or genome-wide significance in association mapping studies. Existing applications include eQTL mapping, association testing with rare variants, inclusion of admixed individuals in genetic association analysis, and detection of genetic interaction among many others. However, when there is population structure in the sample, naive permutation of the data can lead to inflated significance of the association results. For continuous traits, linear mixed-model based approaches have been proposed for permutation-based tests that can also adjust for sample structure; however, these do not remain valid when applied to binary traits, as key features of binary data are not well accounted for. We propose BRASS, a permutation-based testing method for binary data that incorporates important characteristics of binary data in the trait model, can accommodate relevant covariates and ascertainment, and adjusts for the presence of structure in the sample.
In simulations, we demonstrate the superior control of type 1 error by BRASS compared to other methods, and we apply BRASS in the context of correcting for multiple testing in two genome-wide association studies in domestic dog: one for elbow dysplasia and one for idiopathic epilepsy. [ABSTRACT FROM AUTHOR]
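For contrast, the naive permutation baseline that BRASS improves on simply shuffles the binary phenotype and recomputes an association statistic; with population structure or relatedness this baseline inflates type 1 error. A toy sketch with an invented statistic and simulated unstructured data:

```python
# Sketch: naive permutation p-value for a binary trait (the baseline that
# fails under population structure, which BRASS is designed to handle).
import numpy as np

rng = np.random.default_rng(7)
n = 500
genotype = rng.binomial(2, 0.3, n)       # SNP coded 0/1/2
phenotype = rng.binomial(1, 0.2, n)      # binary trait

def stat(g, y):
    return abs(np.corrcoef(g, y)[0, 1])  # simple allelic association statistic

observed = stat(genotype, phenotype)
perms = np.array([stat(genotype, rng.permutation(phenotype))
                  for _ in range(2000)])
p_value = (1 + np.sum(perms >= observed)) / (1 + perms.size)
print(p_value)
```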
- Published
- 2023
- Full Text
- View/download PDF
76. Quantized minimum error entropy with fiducial points for robust regression.
- Author
-
Zheng, Yunfei, Wang, Shiyuan, and Chen, Badong
- Subjects
- *
ENTROPY , *REGRESSION analysis , *SAMPLING errors , *SIGNAL processing , *INSTRUCTIONAL systems - Abstract
Minimum error entropy with fiducial points (MEEF) has received a lot of attention due to its outstanding performance in curbing the negative influence of non-Gaussian noise in the fields of machine learning and signal processing. However, the estimate of the information potential of MEEF involves a double summation over all available error samples, which can result in a large computational burden in many practical scenarios. In this paper, an efficient quantization method is therefore adopted to represent the primary set of error samples with a smaller subset, generating a quantized MEEF (QMEEF). Some basic properties of QMEEF are presented and proved from theoretical perspectives. In addition, we have applied this new criterion to train a class of linear-in-parameters models, including the commonly used linear regression model, random vector functional link network, and broad learning system as special cases. Experimental results on various datasets are reported to demonstrate the desirable performance of the proposed methods on regression tasks with contaminated data. [ABSTRACT FROM AUTHOR]
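The computational point can be sketched with the plain MEE information potential: quantization replaces one loop over all N error samples with a loop over M << N codewords. The online quantizer and its threshold below are illustrative assumptions; the fiducial-point term and model training are omitted.

```python
# Sketch: quantizing the double sum in the MEE information potential.
import numpy as np

def gauss(x, sigma=1.0):
    return np.exp(-x**2 / (2 * sigma**2))

def quantize(errors, eps=0.2):
    """Online quantizer: merge each error into the nearest codeword."""
    codebook, counts = [], []
    for e in errors:
        if codebook:
            j = int(np.argmin(np.abs(np.array(codebook) - e)))
            if abs(codebook[j] - e) <= eps:
                counts[j] += 1
                continue
        codebook.append(e)
        counts.append(1)
    return np.array(codebook), np.array(counts)

rng = np.random.default_rng(8)
e = rng.standard_t(df=3, size=2000)      # heavy-tailed error samples

V_full = gauss(e[:, None] - e[None, :]).mean()                # O(N^2)
c, m = quantize(e)
V_q = (m * gauss(e[:, None] - c[None, :])).sum() / e.size**2  # O(N*M)
print(V_full, V_q, len(c))               # close values, far fewer codewords
```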
- Published
- 2023
- Full Text
- View/download PDF
77. GEOMETRIC ERGODICITY FOR HAMILTONIAN MONTE CARLO ON COMPACT MANIFOLDS.
- Author
-
Takeda, Kota and Sakajo, Takashi
- Subjects
- *
MARKOV chain Monte Carlo , *SAMPLING errors , *INVARIANT measures - Abstract
We consider a Markov chain Monte Carlo method, known as Hamiltonian Monte Carlo (HMC), on compact manifolds in Euclidean space. It utilizes Hamiltonian dynamics to generate samples approximating a target distribution in high dimensions efficiently. The efficiency of HMC is characterized by its convergence property, called geometric ergodicity. This property is important for generating weakly correlated samples. It also plays a crucial role in establishing the error estimate for the quadrature of bounded functions by HMC sampling, referred to as the Hoeffding-type inequality. While geometric ergodicity has been proved for HMC on Euclidean space, it has not been established on manifolds. In this paper, we prove the geometric ergodicity of HMC on compact manifolds. As an example to confirm the efficiency of the proposed HMC method, we consider a sampling problem associated with the N-vortex problem on the unit sphere, which is a statistical model of two-dimensional turbulence. We apply HMC to approximate the statistical quantities with respect to the invariant measure of the N-vortex problem, called the Gibbs measure. We observe the organization of large vortex structures as seen in two-dimensional turbulence. [ABSTRACT FROM AUTHOR]
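For orientation, a standard HMC transition on Euclidean space, with leapfrog integration and a Metropolis correction, is sketched below; the paper's compact-manifold setting additionally requires constrained or geodesic integrators, which this sketch does not implement. The standard-normal target is illustrative.

```python
# Sketch: one HMC transition (leapfrog + Metropolis accept/reject).
import numpy as np

rng = np.random.default_rng(9)

def U(q):              # potential = -log target (standard normal here)
    return 0.5 * np.dot(q, q)

def grad_U(q):
    return q

def hmc_step(q, step=0.1, n_leap=20):
    p = rng.normal(size=q.shape)                 # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step * grad_U(q_new)          # leapfrog half step
    for _ in range(n_leap - 1):
        q_new += step * p_new
        p_new -= step * grad_U(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * grad_U(q_new)
    # Metropolis correction preserves the target distribution exactly
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if rng.random() < np.exp(-dH) else q

q = np.zeros(2)
samples = []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples, axis=0), np.var(samples, axis=0))  # near 0 and 1
```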
- Published
- 2023
- Full Text
- View/download PDF
78. Replies to comments on "Which method delivers greater signal‐to‐noise ratio: Structural equation modelling or regression analysis with weighted composites?" by Yuan and Fang (2023).
- Author
-
Yuan, Ke‐Hai and Fang, Yongfei
- Subjects
- *
STRUCTURAL equation modeling , *REGRESSION analysis , *SIGNAL-to-noise ratio , *ERRORS-in-variables models , *LATENT variables , *PATH analysis (Statistics) , *SAMPLING errors - Abstract
When the indicators do not have predefined metrics, we can rewrite equation (8) as equation (9), where the weights are arbitrary positive numbers. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
79. Temperature, resources and predation interact to shape phytoplankton size–abundance relationships at a continental scale.
- Author
-
Gjoni, Vojsava, Glazier, Douglas S., Wesner, Jeff S., Ibelings, Bastiaan W., and Thomas, Mridul K.
- Subjects
- *
PHYTOPLANKTON , *BODY size , *GLOBAL warming , *PREDATION , *TEMPERATURE , *SAMPLING errors - Abstract
Aim: Communities contain more individuals of small species and fewer individuals of large species. According to the 'metabolic theory of ecology', the relationship of log mean abundance with log mean body size across communities should exhibit a slope of −3/4 that is invariant across environmental conditions. Here, we investigate whether this slope is indeed invariant or changes systematically across gradients in temperature, resource availability and predation pressure. Location: 1048 lakes across the USA. Time Period: 2012. Major Taxa Studied: Phytoplankton. Results: We found that the size–abundance relationship across all sampled phytoplankton communities was significantly lower than −3/4 and near −1 overall. More importantly, we found strong evidence that the environment affects the slope: it varies between −0.33 and −0.93 across interacting gradients of temperature, resource (phosphorus) supply and zooplankton predation pressure. Therefore, phytoplankton communities have orders of magnitude more small or large cells depending on environmental conditions across geographical locations. Conclusion: Our results emphasise the importance of environmental effects on macroecological patterns that arise through physiological and ecological processes. An investigation of the mechanisms linking individual energetic constraints and macroecological patterns would allow us to predict how global warming and changes in nutrients will alter large-scale ecological patterns in the future. [ABSTRACT FROM AUTHOR]
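The size–abundance slope itself is simply the fitted slope of log mean abundance on log mean body size across communities. A toy version on synthetic communities (not the 1048-lake dataset, and ignoring the paper's treatment of environmental interactions):

```python
# Sketch: estimate the size-abundance slope by log-log regression.
import numpy as np

rng = np.random.default_rng(10)
log_size = rng.uniform(-2, 2, 300)                 # log mean cell size
log_abund = 5 - 0.75 * log_size + rng.normal(0, 0.5, 300)

slope, intercept = np.polyfit(log_size, log_abund, 1)
print(f"size-abundance slope: {slope:.2f}")        # metabolic theory: -3/4
```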
- Published
- 2023
- Full Text
- View/download PDF
80. Assessing replicability with the sceptical p-value: Type-I error control and sample size planning.
- Author
-
Micheloud, Charlotte, Balabdaoui, Fadoua, and Held, Leonhard
- Subjects
- *
SAMPLE size (Statistics) , *SAMPLING errors , *EXPERIMENTAL economics , *ERROR rates , *FALSE positive error , *PRODUCTION planning - Abstract
We study a statistical framework for replicability based on a recently proposed quantitative measure of replication success, the sceptical p-value. A recalibration is proposed to obtain exact overall Type-I error control if the effect is null in both studies and additional bounds on the partial and conditional Type-I error rate, which represent the case where only one study has a null effect. The approach avoids the double dichotomization for significance of the two-trials rule and has larger project power to detect existing effects over both studies in combination. It can also be used for power calculations and requires a smaller replication sample size than the two-trials rule for already convincing original studies. We illustrate the performance of the proposed methodology in an application to data from the Experimental Economics Replication Project. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
82. Convergence of turbulence statistics: random error of central moments computed from correlated data.
- Author
-
Belanger, Randy, Lavoie, Philippe, and Zingg, David W.
- Subjects
- *
TURBULENCE , *GAUSSIAN distribution , *TURBULENT boundary layer , *SAMPLING errors , *TURBULENT jets (Fluid dynamics) - Abstract
A new formula is derived for the random error of sample central moments from correlated data which does not assume an underlying distribution and is accurate to leading order in the number of sample elements. Central moments, being important quantities in turbulence research, require accurate error estimation. Many approaches have been followed in the past for estimating the random errors of central moments from correlated data. These include: simple extensions of the formula for independent data, using the formula for the random error of generic averages, assuming an underlying normal distribution, and using block bootstraps. All of these approaches are compared with the present formula using datasets from a turbulent boundary layer, freestream grid turbulence, and a turbulent round jet. For even-order sample central moments, many of the existing approaches perform well with differences of <15%. However, for odd-order sample central moments, only the block bootstrap methodology performs similarly well. For the same sample central moments, the other methods differ by as much as 200–1000%. [ABSTRACT FROM AUTHOR]
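One of the compared baselines, the block bootstrap, is easy to sketch for the random error of a sample central moment from correlated data; the AR(1) surrogate series and block length below are illustrative choices, not the paper's turbulence datasets.

```python
# Sketch: moving-block bootstrap for the random error of a central moment.
import numpy as np

rng = np.random.default_rng(11)
# AR(1) series as a stand-in for correlated turbulence data
n, phi = 20000, 0.9
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

def central_moment(a, k):
    return np.mean((a - a.mean()) ** k)

L = 200                                   # block length >> correlation time
starts = np.arange(n - L + 1)
m3_boot = []
for _ in range(500):
    blocks = rng.choice(starts, size=n // L)
    resampled = np.concatenate([x[s:s + L] for s in blocks])
    m3_boot.append(central_moment(resampled, 3))
print("random error of m3:", np.std(m3_boot))
```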
- Published
- 2023
- Full Text
- View/download PDF
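Of the approaches compared in the abstract above, the block bootstrap is the simplest to sketch: resampling blocks of consecutive samples preserves the correlation structure when estimating the random error of a sample central moment. A minimal sketch using an AR(1) series as a stand-in for correlated turbulence data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated data: AR(1) process standing in for a turbulence time series.
n, phi = 20000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

def central_moment(data, order):
    return np.mean((data - data.mean()) ** order)

def block_bootstrap_error(data, order, block_len=200, n_boot=500):
    """Random error (std dev) of a sample central moment via moving-block bootstrap."""
    n = len(data)
    n_blocks = n // block_len
    starts = np.arange(n - block_len + 1)
    estimates = []
    for _ in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        resampled = np.concatenate([data[s:s + block_len] for s in chosen])
        estimates.append(central_moment(resampled, order))
    return np.std(estimates, ddof=1)

for order in (2, 3, 4):
    err = block_bootstrap_error(x, order)
    print(f"order {order}: moment {central_moment(x, order):+.3f}, random error {err:.3f}")
```

The block length should exceed the integral time scale of the data so that resampled blocks are approximately independent.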
83. Improved gray correlation analysis and combined prediction model for aviation accidents.
- Author
-
Su, Siyu, Sun, Youchao, Peng, Chong, and Guo, Yuanyuan
- Subjects
- *
STATISTICAL correlation , *AIRCRAFT accidents , *PREDICTION models , *AERONAUTICAL safety measures , *REGRESSION trees , *SAMPLING errors , *AIRPLANE takeoff - Abstract
Purpose: The purpose of this paper is to identify the key influencing factors of aviation accidents and to predict the aviation accidents caused by those factors. Design/methodology/approach: This paper proposes an improved gray correlation analysis (IGCA) to analyse the relationship between aviation accidents and influencing factors and to find the critical causes of aviation accidents. The optimal varying weight combination model (OVW-CM) is constructed from gradient boosted regression trees (GBRT), extreme gradient boosting (XGBoost) and support vector regression (SVR) to predict aviation accidents due to the critical factors. Findings: Global aviation accident data from 1919 to 2020 are selected as the experimental data. Based on IGCA, airplane, takeoff/landing and unexpected results are the leading causes of aviation accidents. GBRT, XGBoost, SVR, an equal-weight combination model (EQ-CM), a variance-covariance combination model (VCW-CM) and OVW-CM are then used to predict aviation accidents caused by each of these factors. The experimental results show that OVW-CM has a better prediction effect, with higher prediction accuracy and stability than the other models. Originality/value: Unlike traditional gray correlation analysis (GCA), IGCA weights the sample by distance analysis to more objectively reflect the degree of influence of different factors on aviation accidents. OVW-CM is built by minimizing the combined prediction error at sample points and assigns different weights to different individual models at different moments, which makes full use of the advantages of each model and yields higher prediction accuracy. The model parameters of GBRT, XGBoost and SVR are optimized by the particle swarm algorithm. The study can guide the analysis and prediction of aviation accidents and provide a scientific basis for aviation safety management. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
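The paper's IGCA adds distance-based sample weighting, which is not reproduced here; the sketch below implements only the standard gray correlation analysis it builds on, with the customary resolution coefficient ρ = 0.5 and hypothetical accident and factor series:

```python
import numpy as np

def gray_correlation(reference, factors, rho=0.5):
    """Standard gray relational grades of each factor series w.r.t. a reference series."""
    ref = reference / reference.mean()                    # mean-normalize to remove scale
    fac = factors / factors.mean(axis=1, keepdims=True)
    delta = np.abs(fac - ref)                             # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)    # relational coefficients
    return xi.mean(axis=1)                                # relational grade per factor

# Hypothetical yearly accident counts and three candidate factor series.
accidents = np.array([30.0, 28, 35, 33, 25, 22, 20])
factors = np.array([
    [12.0, 11, 15, 14, 10, 9, 8],   # e.g. airplane-related causes
    [7.0, 7, 9, 8, 6, 5, 5],        # e.g. takeoff/landing causes
    [3.0, 4, 3, 4, 3, 3, 2],        # e.g. unexpected results
])
for i, grade in enumerate(gray_correlation(accidents, factors)):
    print(f"factor {i + 1}: relational grade {grade:.3f}")
```

Factors with higher relational grades track the reference series more closely and would be ranked as more critical causes.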
84. Information theoretic perspective on sample complexity.
- Author
-
Pereg, Deborah
- Subjects
- *
SUPERVISED learning , *DISTRIBUTION (Probability theory) , *STATISTICAL learning , *SAMPLING errors , *INFORMATION theory , *SAMPLE size (Statistics) - Abstract
The statistical supervised learning framework assumes an input–output set with a joint probability distribution that is reliably represented by the training dataset. The learning system is then required to output a prediction rule learned from the training dataset's input–output pairs. In this work, we investigate the relationship between the sample complexity, the empirical risk and the generalization error based on the asymptotic equipartition property (AEP) (Shannon, 1948). We provide theoretical guarantees for reliable learning under the information-theoretic AEP, with respect to the generalization error and the sample size in different settings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
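The AEP underlying this analysis is easy to verify numerically: for an i.i.d. source, the per-symbol log-probability of a long sequence concentrates around the entropy H(X). A minimal illustration with a Bernoulli source (not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.2  # Bernoulli source parameter
entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # H(X) in bits

for n in (100, 1000, 10000):
    x = rng.random(n) < p                       # sample a length-n sequence
    k = x.sum()                                 # number of ones
    log_prob = k * np.log2(p) + (n - k) * np.log2(1 - p)
    print(f"n={n:6d}: -log2 p(x^n)/n = {-log_prob / n:.4f}  (H = {entropy:.4f})")
```

As n grows, the normalized log-probability converges to H(X), which is the concentration property the paper's sample-complexity guarantees rest on.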
85. Design of generalized fuzzy multiple deferred state sampling plan for attributes.
- Author
-
Thomas, Julia T. and Kumar, Mahesh
- Subjects
- *
ACCEPTANCE sampling , *SAMPLING (Process) , *SAMPLING errors , *SAMPLE size (Statistics) - Abstract
In industrial quality inspection, acceptance sampling plans have proved economically viable, but the unpredictability of a plan's characteristics makes conventional acceptance sampling plans less reliable. This study suggests the generalized fuzzy multiple deferred state sampling plan (GFMDSSP) for attributes, which accounts for the difficulty of calculating the precise percentage of defectives in a batch. The plan is designed to minimize the average sample size, and its performance measures are derived. Existing fuzzy acceptance sampling plans for attributes are analysed, and an important conclusion is drawn regarding the effectiveness of the proposed scheme. Analysis of the impact of inspection errors on the sampling process reveals a decline in plan acceptance standards as inspection errors escalate. Finally, numerical examples are provided to support the findings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
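The GFMDSSP itself is not reproduced here; as a baseline for the kind of performance measures the abstract refers to, the sketch below computes points on the operating characteristic (OC) curve of an ordinary single sampling plan, in which a lot is accepted if at most c defectives appear in a sample of n items:

```python
from scipy.stats import binom

def oc_curve(n, c, fractions):
    """Acceptance probability of a single sampling plan (n, c) at given defect fractions."""
    return {p: binom.cdf(c, n, p) for p in fractions}

# Hypothetical plan: sample 50 items, accept the lot if at most 2 are defective.
for p, pa in oc_curve(n=50, c=2, fractions=[0.01, 0.02, 0.05, 0.10]).items():
    print(f"defect fraction {p:.2f}: acceptance probability {pa:.3f}")
```

Fuzzy plans such as the GFMDSSP generalize this by treating the defect fraction p as imprecise, producing a band of OC curves instead of a single one.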
86. Evolutionary rescue under demographic and environmental stochasticity.
- Author
-
Xu, Kuangyi, Vision, Todd J., and Servedio, Maria R.
- Subjects
- *
SAMPLING errors , *PROBABILITY theory - Abstract
Populations suffer two types of stochasticity: demographic stochasticity, from sampling error in offspring number, and environmental stochasticity, from temporal variation in the growth rate. By modelling evolution through phenotypic selection following an abrupt environmental change, we investigate how genetic and demographic dynamics, as well as effects on population survival of the genetic variance and of the strength of stabilizing selection, differ under the two types of stochasticity. We show that population survival probability declines sharply with stronger stabilizing selection under demographic stochasticity, but declines more continuously when environmental stochasticity is strengthened. However, the genetic variance that confers the highest population survival probability differs little under demographic and environmental stochasticity. Since the influence of demographic stochasticity is stronger when population size is smaller, a slow initial decline of genetic variance, which allows quicker evolution, is important for population persistence. In contrast, the influence of environmental stochasticity is population‐size‐independent, so higher initial fitness becomes important for survival under strong environmental stochasticity. The two types of stochasticity interact in a more than multiplicative way in reducing the population survival probability. Our work suggests the importance of explicitly distinguishing and measuring the forms of stochasticity during evolutionary rescue. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
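The two types of stochasticity contrasted in the abstract above can be separated in a toy simulation: demographic stochasticity as Poisson sampling of offspring number, environmental stochasticity as lognormal variation in the growth rate. The sketch below estimates survival probability under each source and their combination; all parameters are illustrative, and the paper's phenotypic evolution component is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

def survival_prob(n0=50, r=1.02, sigma_env=0.0, demographic=True,
                  generations=100, n_reps=2000):
    """Fraction of replicate populations that avoid extinction."""
    survived = 0
    for _ in range(n_reps):
        n = n0
        for _ in range(generations):
            # lognormal(-s^2/2, s) has mean 1, so the expected growth rate stays r.
            growth = r * rng.lognormal(-sigma_env**2 / 2, sigma_env) if sigma_env else r
            # Demographic stochasticity: Poisson sampling of offspring number.
            n = rng.poisson(n * growth) if demographic else n * growth
            if n < 1:          # populations below one individual are treated as extinct
                break
        survived += n >= 1
    return survived / n_reps

print("demographic only:   ", survival_prob(sigma_env=0.0, demographic=True))
print("environmental only: ", survival_prob(sigma_env=0.3, demographic=False))
print("both combined:      ", survival_prob(sigma_env=0.3, demographic=True))
```

Comparing the combined survival probability with the product of the two single-source probabilities illustrates the more-than-multiplicative interaction the abstract reports.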
87. Estimating reliabilities and correcting for sampling error in indices of within-person dynamics derived from intensive longitudinal data.
- Author
-
Schneider, Stefan and Junghaenel, Doerte U.
- Subjects
- *
PANEL analysis , *SAMPLING errors , *MULTILEVEL models , *INTRACLASS correlation , *STANDARD deviations - Abstract
Psychology has witnessed a dramatic increase in the use of intensive longitudinal data (ILD) to study within-person processes, accompanied by a growing number of indices used to capture individual differences in within-person dynamics (WPD). The reliability of WPD indices is rarely investigated and reported in empirical studies. Unreliability in these indices can bias parameter estimates and yield erroneous conclusions. We propose an approach to (a) estimate the reliability and (b) correct for sampling error of WPD indices using "Level-1 variance-known" (V-known) multilevel models (Raudenbush & Bryk, 2002). When WPD indices are calculated for each individual, the sampling variance of the observed WPD scores is typically falsely assumed to be zero. V-known models replace this "zero" with an approximate sampling variance fixed at Level 1 to estimate the true variance of the index at Level 2, following random effects meta-analysis principles. We demonstrate how V-known models can be applied to a broad range of emotion dynamics commonly derived from ILD, including indices of the average level (mean), variability (intraindividual standard deviation), instability (probability of acute change), bipolarity (correlation), differentiation (intraclass correlation), inertia (autocorrelation), and relative variability (relative standard deviation) of emotions. A simulation study shows the usefulness of V-known models to recover the true reliability of these indices. Using a 21-day diary study, we illustrate the implementation of the proposed approach to obtain reliability estimates and to correct for unreliability of WPD indices in real data. The techniques may facilitate psychometrically sound inferences from WPD indices in this burgeoning research area. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
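The core idea can be sketched for the simplest WPD index, the person mean: the variance of observed person means equals the true between-person variance plus the average sampling variance, and reliability is the ratio of true to total variance. A method-of-moments sketch on simulated diary data (the paper's V-known multilevel models handle this more rigorously and cover many more indices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a diary study: 200 people, 21 days, true person means vary between people.
n_people, n_days = 200, 21
true_means = rng.normal(0, 1.0, n_people)                            # between-person SD 1
data = true_means[:, None] + rng.normal(0, 2.0, (n_people, n_days))  # within-person SD 2

obs_means = data.mean(axis=1)
samp_var = data.var(axis=1, ddof=1) / n_days        # sampling variance of each mean
total_var = obs_means.var(ddof=1)
true_var = max(total_var - samp_var.mean(), 0.0)    # method-of-moments estimate

reliability = true_var / total_var
print(f"estimated reliability of person means: {reliability:.3f}")
print(f"theoretical: {1.0 / (1.0 + 4.0 / 21):.3f}")  # 1 / (1 + sigma_w^2 / (n * sigma_b^2))
```

Replacing the falsely assumed "zero" sampling variance with an explicit estimate is exactly the correction the V-known approach formalizes at Level 1 of the multilevel model.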
88. Concordance of HER2 Status between Core Needle Biopsy and Surgical Resected Specimens of Breast Cancer: A Narrative Review.
- Author
-
Maryam, Maghbool and Babak, Samizadeh
- Subjects
- *
CORE needle biopsy , *BREAST cancer , *SAMPLING errors , *CANCER patients - Abstract
Background & Objective: HER2, a molecular biomarker, is routinely evaluated in breast cancer patients to guide therapeutic decisions. However, the concordance of HER2 status between core needle biopsy (CNB) and surgical specimen (SS) samples is not always high, potentially affecting the accuracy of diagnosis and treatment. This study aims to review recent studies assessing the agreement of HER2 status between CNB and SS samples in breast cancer patients. Materials & Methods: A literature search was conducted in the PubMed, Scopus, and Google Scholar databases from January 2018 to August 2023 using the keywords: concordance, core needle biopsy, resection, HER2 status. Ten articles meeting the inclusion criteria were selected for this review. Results: The results demonstrated variable concordance rates of HER2 status between CNB and SS samples, ranging from 83.3% to 99.5%. The primary factors influencing discordance were tumor heterogeneity, preoperative treatment, sampling error, and differing testing methods. Discordance was more prevalent in HER2-negative and HER2-low tumors compared to HER2-positive tumors. Conclusion: The concordance of HER2 status between CNB and SS samples is generally high but not perfect. Therefore, retesting HER2 status on SS samples is recommended to ensure optimal treatment decisions for breast cancer patients. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
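Concordance figures like those reviewed here come from cross-tabulating the two sampling methods; because percent agreement ignores chance agreement, Cohen's kappa is often reported alongside it. A minimal sketch with a hypothetical CNB-versus-surgical-specimen table:

```python
import numpy as np

# Hypothetical 2x2 table: rows = CNB HER2 status, columns = surgical specimen status.
#                 SS positive  SS negative
table = np.array([[45,           5],     # CNB positive
                  [ 8,         142]])    # CNB negative

n = table.sum()
agreement = np.trace(table) / n                        # observed concordance rate
expected = (table.sum(1) * table.sum(0)).sum() / n**2  # agreement expected by chance
kappa = (agreement - expected) / (1 - expected)
print(f"concordance: {agreement:.1%}, Cohen's kappa: {kappa:.3f}")
```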
89. Convolutional Neural Network‐Based Adaptive Localization for an Ensemble Kalman Filter.
- Author
-
Wang, Zhongrui, Lei, Lili, Anderson, Jeffrey L., Tan, Zhe‐Min, and Zhang, Yi
- Subjects
- *
LOCALIZATION (Mathematics) , *KALMAN filtering , *CONVOLUTIONAL neural networks , *STATISTICAL correlation , *SAMPLING errors - Abstract
Flow-dependent background error covariances estimated from short-term ensemble forecasts suffer from sampling errors due to limited ensemble sizes. Covariance localization is often used to mitigate the sampling errors, especially for high dimensional geophysical applications. Most applied localization methods, empirical or adaptive ones, multiply the Kalman gain or background error covariances by a distance-dependent parameter, which is a simple linear filtering model. Here two localization methods based on convolutional neural networks (CNNs) learning from paired data sets are proposed. The CNN-based localization function (CLF) aims to minimize the sampling error of the estimated Kalman gain, and the CNN-based empirical localization function (CELF) aims to minimize the posterior error of state variables. These two CNN-based localization methods can provide localization functions that are nonlinear, spatially and temporally adaptive, and non-symmetric with respect to displacement, without requiring any prior assumptions for the localization functions. Results using the Lorenz05 model show that CLF and CELF can better capture the structures of the Kalman gain than the best Gaspari and Cohn (GC) localization function and the adaptive reference localization method. For both perfect- and imperfect-model experiments, CLF produces smaller errors of the Kalman gain, prior and posterior than the best GC and reference localization, especially for spatially averaged observations. Without model error, CELF has smaller prior and posterior errors than the best GC and reference localization for spatially averaged observations, while with model error, CELF has smaller prior and posterior errors than the best GC and reference localization for single-point observations. Plain Language Summary: Ensemble Kalman filters have been widely used for high-dimensional geophysical applications, with the advantage of providing flow-dependent background error covariances based on short-term ensemble forecasts. Due to the massive computational costs of advancing ensemble simulations, limited ensemble members are commonly adopted, which results in sample-estimated background error covariances contaminated by spurious noisy correlations. To remedy the sampling error, covariance localization, which tapers the observation impact on state variables with distance, is commonly used. There are pre-defined localization functions with tuning parameters and adaptive localization functions that are often based on correlation statistics. Here two purely data-driven localization methods based on convolutional neural networks are proposed. These newly proposed localization functions are spatially and temporally adaptive, non-symmetric with respect to displacement, and capture the structures of the Kalman gain better than the empirical and adaptive localization methods. When applied in cycling assimilation, the CNN-based localization methods can produce improved analyses and forecasts. Key Points: Two CNN-based localization methods are proposed to minimize sampling errors of the estimated Kalman gain or posterior errors of state variables. CNN-based localizations are adaptive in space and time, non-symmetric with respect to displacement, and able to fit nonlinear functions. CNN-based localizations effectively represent the Kalman gain and lead to improved analyses and forecasts in cycling assimilations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
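The GC baseline the CNN methods are compared against is the Gaspari and Cohn (1999) fifth-order piecewise rational function, applied as a Schur (element-wise) product to taper spurious long-range covariances in a small-ensemble estimate. A minimal sketch of that baseline (not the paper's CLF/CELF):

```python
import numpy as np

rng = np.random.default_rng(5)

def gaspari_cohn(dist, c):
    """Gaspari & Cohn (1999) fifth-order localization taper with support halfwidth c."""
    r = np.abs(dist) / c
    taper = np.zeros_like(r)
    near = r <= 1
    far = (r > 1) & (r <= 2)
    taper[near] = (-0.25 * r[near]**5 + 0.5 * r[near]**4 + 0.625 * r[near]**3
                   - 5/3 * r[near]**2 + 1)
    taper[far] = (r[far]**5 / 12 - 0.5 * r[far]**4 + 0.625 * r[far]**3
                  + 5/3 * r[far]**2 - 5 * r[far] + 4 - 2 / (3 * r[far]))
    return taper

# True covariance: Gaussian decay on a 40-variable ring; estimate from 20 members.
nx, n_ens = 40, 20
grid = np.arange(nx)
dist = np.minimum(np.abs(grid[:, None] - grid[None, :]),
                  nx - np.abs(grid[:, None] - grid[None, :]))
true_cov = np.exp(-(dist / 4.0) ** 2)
ens = rng.multivariate_normal(np.zeros(nx), true_cov, size=n_ens)
sample_cov = np.cov(ens, rowvar=False)

loc_cov = sample_cov * gaspari_cohn(dist, c=8.0)   # Schur-product localization
for name, cov in [("raw", sample_cov), ("localized", loc_cov)]:
    print(f"{name}: mean abs covariance error {np.abs(cov - true_cov).mean():.3f}")
```

The tapered estimate is typically much closer to the true covariance; the CNN-based CLF and CELF replace this fixed, symmetric taper with learned, adaptive ones.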
90. Biconvex Clustering.
- Author
-
Chakraborty, Saptarshi and Xu, Jason
- Subjects
- *
K-nearest neighbor classification , *SAMPLING errors , *HEURISTIC - Abstract
Convex clustering has recently garnered increasing interest due to its attractive theoretical and computational properties, but its merits become limited in the face of high-dimensional data. In such settings, pairwise affinity terms that rely on k-nearest neighbors become poorly specified and Euclidean measures of fit provide weaker discriminating power. To surmount these issues, we propose to modify the convex clustering objective so that feature weights are optimized jointly with the centroids. The resulting problem becomes biconvex, and as such remains well-behaved statistically and algorithmically. In particular, we derive a fast algorithm with closed form updates and convergence guarantees, and establish finite-sample bounds on its prediction error. Under interpretable regularity conditions, the error bound analysis implies consistency of the proposed estimator. Biconvex clustering performs feature selection throughout the clustering task: as the learned weights change the effective feature representation, pairwise affinities can be updated adaptively across iterations rather than precomputed within a dubious feature space. We validate the contributions on real and simulated data, showing that our method effectively addresses the challenges of dimensionality while reducing dependence on carefully tuned heuristics typical of existing approaches. for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
91. Identifying Bias in Social and Health Research: Measurement Invariance and Latent Mean Differences Using the Alignment Approach.
- Author
-
Tsaousis, Ioannis and Jaffari, Fathima M.
- Subjects
- *
LATENT variables , *FACTOR structure , *COGNITIVE Abilities Test , *MEASUREMENT errors , *PUBLIC health research , *SAMPLING errors , *COGNITIVE structures - Abstract
When comparison among groups is of major importance, it is necessary to ensure that the measuring tool exhibits measurement invariance. This means that it measures the same construct in the same way for all groups. Otherwise, the test produces measurement error and bias toward a particular group of respondents. In this study, a new approach to examining measurement invariance was applied, one designed specifically for settings involving a large number of group comparisons: the alignment approach. We used this approach to examine whether the factor structure of a cognitive ability test exhibited measurement invariance across the 26 universities of the Kingdom of Saudi Arabia. The results indicated that the P-GAT subscales were invariant across the 26 universities. Moreover, the aligned factor mean values were estimated, and factor mean comparisons of every group's mean with all the other group means were conducted. The findings from this study showed that the alignment procedure is a valuable method to assess measurement invariance and latent mean differences when a large number of groups are involved. This technique provides an unbiased statistical estimation of group means, with significance tests between group pairs that adjust for sampling errors and missing data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
92. Intraoperative frozen section in gynaecology cancers with special reference to ovarian tumours: time to "unfreeze" the pitfalls in the path of the Derby horse of Oncology.
- Author
-
Begum, Dimpy, Barmon, Debabrata, Baruah, Upasana, Ahmed, Shiraj, Gupta, Sakshi, and Bassetty, Karthik Chandra
- Subjects
- *
GYNECOLOGY , *OVARIAN diseases , *HORSES , *SAMPLING errors , *SENSITIVITY & specificity (Statistics) , *OVARIAN cancer - Abstract
Purpose: In an oncological setup, the role of frozen section biopsy is undeniable. It serves as an important tool for the surgeon's intraoperative decision making, but the diagnostic reliability of the intraoperative frozen section may vary from institute to institute. Surgeons should be well aware of the accuracy of frozen section reports in their setup to enable them to take decisions based on the report. This is why we conducted a retrospective study at Dr B. Borooah Cancer Institute, Guwahati, Assam, India, to determine our institutional frozen section accuracy. Methods: The study was conducted from 1st January 2017 to 31st December 2022. All gynaecology oncology patients who were operated on during the study period and had an intraoperative frozen section done were included. Patients with an incomplete or missing final histopathological report (HPR) were excluded. Frozen section and final histopathology reports were compared, and discordant cases were analysed by degree of discordance. Results: For benign ovarian disease, the IFS accuracy, sensitivity and specificity were 96.7%, 100% and 93%, respectively. For borderline ovarian disease, they were 96.7%, 80% and 97.6%, respectively. For malignant ovarian disease, they were 95.4%, 89.1% and 100%, respectively. Sampling error was the most common cause of discordance. Conclusion: The intraoperative frozen section may not have 100% diagnostic accuracy, but it remains the running horse of our oncological institute. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
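Accuracy figures like these follow directly from the frozen-section versus final-histopathology cross-tabulation. A minimal sketch with hypothetical counts for one diagnostic category:

```python
# Hypothetical counts: frozen section vs. final histopathology for malignant disease.
tp, fn = 82, 10   # malignant on final HPR: called malignant / missed on frozen section
fp, tn = 0, 128   # benign on final HPR: falsely called malignant / correctly benign

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + fp + tn)
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"accuracy {accuracy:.1%}")
```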
93. Research on the Methods of Predicting Compressor Characteristic Curve.
- Author
-
Hao, Xuedi, Zhang, Zeyuan, Chi, Jinling, and He, Yangxue
- Subjects
- *
RESEARCH methodology , *ELLIPTIC equations , *SAMPLING errors , *GAS turbines , *COMPRESSORS , *PARAMETER estimation , *ARTIFICIAL neural networks , *FORECASTING , *SAMPLE size (Statistics) - Abstract
Compressors are one of the three major components of gas turbines, and their characteristic curves are used to analyze off-design performance. How to infer the characteristic curve from different data is an important research topic. In this paper, the PG9351FA gas turbine is taken as the research object. Two methods, artificial neural networks and parameter estimation, are used to predict its characteristic curve, and the prediction accuracy and application conditions of the two methods are discussed. The article compares the two methods from the perspectives of regression on known-speed characteristic curves and inference of unknown-speed characteristic curves, analyzes the impact of sample size and sample error on their results, and quantifies the error through statistical measures such as the mean square deviation of the data. The application scope and conditions of the different methods are provided. The results show that the neural network method has high accuracy and is suitable for predicting both known- and unknown-speed characteristic curves when sufficient data are available, but not for predicting unknown side curves. The elliptic equation fitting method based on parameter estimation has slightly lower accuracy for the nearly vertical parts of the compressor characteristic curve, but it is an effective and reliable way to infer the characteristic curve from a small amount of data. The modulization method based on parameter estimation has high accuracy and is applicable to estimating the complete characteristic curve from partial data of a known curve. The application scope and conditions of these methods are determined, providing a reference for engineering practice. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
94. New Lomax-G family of distributions: Statistical properties and applications.
- Author
-
Sapkota, Laxmi Prasad, Kumar, Vijay, Gemeay, Ahmed M., Bakr, M. E., Balogun, Oluwafemi Samson, and Muse, Abdisalam Hassan
- Subjects
- *
PROBABILITY theory , *HAZARD function (Statistics) , *INFERENTIAL statistics , *GOODNESS-of-fit tests , *RESEARCH personnel , *SAMPLING errors , *MAXIMUM likelihood statistics - Abstract
This research article introduces a new family of distributions developed using the innovative beta-generated transformation technique. Among these distributions, the focus is on the inverse exponential power distribution, which exhibits unique reverse-J, inverted bathtub, or monotonically increasing hazard functions. This paper thoroughly investigates the distribution's key characteristics and utilizes the maximum likelihood estimation method to determine its associated parameters. To assess the accuracy of the estimation procedure, the researchers conducted a simulation experiment, revealing diminishing biases and mean square errors with increasing sample sizes, even when working with small samples. Moreover, the practical applicability of the proposed distribution is demonstrated by analyzing real-world COVID-19 and medical datasets. The article establishes that the proposed model outperforms existing models by using model selection criteria and conducting goodness-of-fit test statistics. The potential applications of this research extend to various fields where modeling and analyzing hazard functions or survival data are crucial. Additionally, the study contributes to advancing probability theory and statistical inferences. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
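The paper's inverse exponential power member is not reproduced here, but the workflow it relies on, maximum likelihood estimation followed by model-selection criteria, can be sketched with the baseline Lomax distribution available in SciPy, comparing AIC against an exponential fit on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
data = stats.lomax.rvs(c=2.5, scale=3.0, size=500, random_state=rng)

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

# Fit Lomax (shape c, scale) and exponential by maximum likelihood; fix loc at 0.
c_hat, loc, scale_hat = stats.lomax.fit(data, floc=0)
ll_lomax = stats.lomax.logpdf(data, c_hat, loc, scale_hat).sum()

loc_e, scale_e = stats.expon.fit(data, floc=0)
ll_expon = stats.expon.logpdf(data, loc_e, scale_e).sum()

print(f"Lomax:       c={c_hat:.2f}, scale={scale_hat:.2f}, AIC={aic(ll_lomax, 2):.1f}")
print(f"Exponential: scale={scale_e:.2f}, AIC={aic(ll_expon, 1):.1f}")
```

The model with the lower AIC is preferred, which is the same model-selection logic the abstract applies to the COVID-19 and medical datasets.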
95. Building Up of Fuzzy Evaluation Model of Life Performance Based on Type-II Censored Data.
- Author
-
Chiou, Kuo-Ching
- Subjects
- *
CENSORING (Statistics) , *SAMPLING errors , *DISTRIBUTION (Probability theory) , *PROBABILITY density function , *PRODUCT life cycle , *SEMICONDUCTOR industry - Abstract
The semiconductor industry is a rapidly growing sector. As collection technologies for production data continue to improve and the Internet of Things matures, production data analysis improves, thus accelerating progress towards smart manufacturing. This not only enhances process quality, but also increases product lifetime and reliability. Under the assumption of an exponential distribution, the ratio of lifetime to warranty has been proposed as a lifetime performance index for electronic products. Because the index involves unknown parameters, using point estimates to assess lifetime performance may cause misjudgment due to sampling errors. In addition, cost and time limitations often lead to small sample sizes that can affect the results of the analysis. Type-II censored data are widely applied in production and manufacturing engineering. Thus, this paper proposes an unbiased and consistent estimator of lifetime performance based on type-II censored data. The 100(1 − α)% confidence interval of the proposed index is derived from its probability density function. Overly small sample sizes not only make interval estimates of the lifetime performance index for electronic products too wide, but also increase sampling errors, which distort the estimation and test results. We therefore used the aforementioned interval to construct a fuzzy test model for the assessment of product lifetime, helping manufacturers evaluate the performance of product life cycles more prudently and precisely. A numerical example illustrates the applicability of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
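Under an exponential lifetime with type-II censoring (testing n units and stopping at the r-th failure), the total time on test T gives the maximum likelihood estimate of the mean lifetime, and 2T/θ follows a chi-square distribution with 2r degrees of freedom, which yields the exact confidence interval such fuzzy tests are built on. A sketch of that classical interval with hypothetical failure data (the fuzzy membership construction itself is not reproduced):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical type-II censored test: n units, stop at the r-th failure (hours).
n = 20
failures = np.array([55.0, 120.0, 180.0, 310.0, 450.0, 520.0, 610.0, 730.0])
r = len(failures)

# Total time on test: observed failure times plus censored units run to the last failure.
T = failures.sum() + (n - r) * failures[-1]
theta_hat = T / r                        # MLE of the mean lifetime

alpha = 0.05                             # 2T/theta ~ chi-square with 2r df
lower = 2 * T / chi2.ppf(1 - alpha / 2, 2 * r)
upper = 2 * T / chi2.ppf(alpha / 2, 2 * r)
print(f"mean-lifetime MLE {theta_hat:.0f} h, 95% CI ({lower:.0f}, {upper:.0f}) h")

warranty = 1000.0                        # illustrative warranty period (hours)
print(f"lifetime performance index estimate: {theta_hat / warranty:.2f}")
```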
96. How Sampling Errors in Covariance Estimates Cause Bias in the Kalman Gain and Impact Ensemble Data Assimilation.
- Author
-
Hodyss, Daniel and Morzfeld, Matthias
- Subjects
- *
SAMPLING errors , *KALMAN filtering , *NUMERICAL weather forecasting , *NONLINEAR equations - Abstract
Localization is the key component to the successful application of ensemble data assimilation (DA) to high-dimensional problems in the geosciences. We study the impact of sampling error and its amelioration through localization using both analytical development and numerical experiments. Specifically, we show how sampling error in covariance estimates accumulates and spreads throughout the entire domain during the computation of the Kalman gain. This results in a bias, which is the dominant issue in unlocalized ensemble DA, and, surprisingly, we find that it depends directly on the number of independent observations but only indirectly on the state dimension. Our derivations and experiments further make it clear that an important aspect of localization is a significant reduction of bias in the Kalman gain, which in turn leads to an increased accuracy of ensemble DA. We illustrate our findings on a variety of simplified linear and nonlinear test problems, including a cycling ensemble Kalman filter applied to the Lorenz-96 model. Significance Statement: The dampening of long-range correlations has been the key to the success of ensemble data assimilation in global numerical weather prediction. In this paper, we show how noise in covariance estimates propagates through the state estimation process and corrupts state estimates. We show that this noise results in a bias and that this bias depends on the number of observations and not, as might be expected, on the state dimension. We go on to show how dampening long-range covariances through a process referred to as "localization" helps to mitigate the detrimental effects of this sampling noise. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
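The bias analyzed in the abstract above can be reproduced in a few lines: the Kalman gain is a nonlinear function of the background error variance, so plugging in a noisy small-ensemble variance estimate yields a gain that is biased even though the variance estimate itself is unbiased. A scalar Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(9)

p_true, r_obs = 1.0, 1.0             # true background and observation error variances
k_true = p_true / (p_true + r_obs)   # true scalar Kalman gain

for n_ens in (5, 10, 20, 80, 320):
    # Unbiased sample variance of an n-member ensemble, over many repeated trials.
    ens = rng.normal(0.0, np.sqrt(p_true), size=(20000, n_ens))
    p_hat = ens.var(axis=1, ddof=1)
    k_hat = p_hat / (p_hat + r_obs)  # gain is a nonlinear function of the noisy variance
    print(f"N={n_ens:4d}: mean gain {k_hat.mean():.4f} "
          f"(true {k_true:.4f}, bias {k_hat.mean() - k_true:+.4f})")
```

Because k = p/(p + r) is concave in p, Jensen's inequality makes the mean estimated gain systematically low, and the bias shrinks as ensemble size grows, consistent with the paper's sampling-error argument.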
97. Challenges for Inline Observation Error Estimation in the Presence of Misspecified Background Uncertainty.
- Author
-
Walsworth, Andrew, Poterjoy, Jonathan, and Satterfield, Elizabeth
- Subjects
- *
KALMAN filtering , *SAMPLING errors , *PRICE inflation - Abstract
For data assimilation to provide faithful state estimates for dynamical models, specifications of observation uncertainty need to be as accurate as possible. Innovation-based methods, such as Desroziers diagnostics, are commonly used to estimate observation uncertainty, but such methods can depend greatly on the prescribed background uncertainty. For ensemble data assimilation, this uncertainty comes from statistics calculated from ensemble forecasts, which require inflation and localization to address undersampling. In this work, we use an ensemble Kalman filter (EnKF) with a low-dimensional Lorenz model to investigate the interplay between the Desroziers method and inflation. Two inflation techniques are used for this purpose: 1) a rigorously tuned fixed multiplicative scheme and 2) an adaptive state-space scheme. We document how inaccuracies in observation uncertainty affect errors in EnKF posteriors and study the combined impacts of misspecified initial observation uncertainty, sampling error, and model error on Desroziers estimates. We find that whether observation uncertainty is over- or underestimated greatly affects the stability of data assimilation and the accuracy of Desroziers estimates and that preference should be given to initial overestimates. Inline Desroziers estimates tend to remove the dependence between ensemble spread–skill and the initially prescribed observation error. In addition, we find that the inclusion of model error introduces spurious correlations in observation uncertainty estimates. Further, we note that the adaptive inflation scheme is less robust than fixed inflation at mitigating multiple sources of error. Last, sampling error strongly exacerbates existing sources of error and greatly degrades EnKF estimates, which translates into biased Desroziers estimates of observation error covariance. Significance Statement: To generate accurate predictions of various components of the Earth system, numerical models require an accurate specification of state variables at the current time. This step weighs our current state estimate probabilistically against information provided by environmental measurements of the true state. Various strategies exist for estimating uncertainty in observations within this framework, but they are sensitive to a host of assumptions, which are investigated in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
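The Desroziers diagnostic discussed throughout estimates the observation error variance from the product of background innovations (y − Hx_b) and analysis residuals (y − Hx_a); the estimate is exact only when the prescribed background uncertainty matches the truth. A scalar sketch showing how a misspecified background variance distorts the estimate:

```python
import numpy as np

rng = np.random.default_rng(13)

b_true, r_true = 2.0, 1.0                     # true background / observation variances
n = 200000
truth = rng.normal(0.0, 1.0, n)
x_b = truth + rng.normal(0.0, np.sqrt(b_true), n)   # background with error variance b
y = truth + rng.normal(0.0, np.sqrt(r_true), n)     # observations with error variance r

for b_assumed in (0.5, 2.0, 8.0):             # prescribed background uncertainty
    k = b_assumed / (b_assumed + r_true)      # gain built from the assumed variance
    x_a = x_b + k * (y - x_b)                 # analysis
    d_b, d_a = y - x_b, y - x_a               # innovation and analysis residual
    r_est = np.mean(d_a * d_b)                # Desroziers estimate of r
    print(f"assumed b={b_assumed:3.1f}: Desroziers r estimate {r_est:.3f} "
          f"(true {r_true})")
```

Only the correctly specified case recovers r; over- and underestimated background variances bias the diagnostic low and high respectively, which is the dependence the paper investigates.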
98. On Robustness of Individualized Decision Rules.
- Author
-
Qi, Zhengling, Pang, Jong-Shi, and Liu, Yufeng
- Subjects
- *
OPTIMIZATION algorithms , *VALUE at risk , *SAMPLING errors , *INDIVIDUALIZED medicine , *RISK-taking behavior - Abstract
With the emergence of precision medicine, estimating optimal individualized decision rules (IDRs) has attracted tremendous attention in many scientific areas. Most existing literature has focused on finding optimal IDRs that can maximize the expected outcome for each individual. Motivated by complex individualized decision making procedures and the popular conditional value at risk (CVaR) measure, we propose a new robust criterion to estimate optimal IDRs in order to control the average lower tail of the individuals' outcomes. In addition to improving the individualized expected outcome, our proposed criterion takes risks into consideration, and thus the resulting IDRs can prevent adverse events. The optimal IDR under our criterion can be interpreted as the decision rule that maximizes the "worst-case" scenario of the individualized outcome when the underlying distribution is perturbed within a constrained set. An efficient non-convex optimization algorithm is proposed with convergence guarantees. We investigate theoretical properties for our estimated optimal IDRs under the proposed criterion such as consistency and finite sample error bounds. Simulation studies and a real data application are used to further demonstrate the robust performance of our methods. Several extensions of the proposed method are also discussed. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
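The CVaR measure at the heart of this proposal is the average of the worst α-fraction of outcomes, so ranking rules by CVaR rather than by mean is what guards against adverse events. A minimal sketch comparing two hypothetical decision rules (the authors' optimization algorithm is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(17)

def cvar(outcomes, alpha=0.1):
    """Average of the worst alpha-fraction of outcomes (larger outcomes are better)."""
    cutoff = np.quantile(outcomes, alpha)
    return outcomes[outcomes <= cutoff].mean()

# Hypothetical outcomes under two treatment rules: B has a higher mean but a heavy
# lower tail of adverse events; A is safer in the worst cases.
rule_a = rng.normal(1.0, 1.0, 50000)
rule_b = np.where(rng.random(50000) < 0.05,
                  rng.normal(-6.0, 1.0, 50000),   # rare adverse outcomes
                  rng.normal(1.4, 1.0, 50000))

for name, y in [("rule A", rule_a), ("rule B", rule_b)]:
    print(f"{name}: mean {y.mean():+.2f}, 10% CVaR {cvar(y):+.2f}")
```

A mean-maximizing criterion would pick rule B; the CVaR criterion prefers rule A, illustrating the robustness trade-off the abstract describes.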
99. High‐frequency square‐wave voltage injection position sensorless control method using single current sensor.
- Author
-
Lin, Zhichen, Chen, Wei, Yan, Yan, Wang, Zhiqiang, Shi, Tingna, and Xia, Changliang
- Subjects
- *
SQUARE waves , *PERMANENT magnet motors , *VOLTAGE , *SAMPLING errors , *POSITION sensors , *SENSOR placement - Abstract
The high-frequency (HF) square-wave voltage injection position sensorless control method for the interior permanent magnet synchronous motor (IPMSM) is widely utilised in the zero and low speed range due to its good dynamic performance and easy implementation. However, this method relies on the sampling accuracy of current sensors for rotor position estimation. To overcome this restriction, an HF square-wave voltage injection position sensorless control method for the IPMSM using a single current sensor (SCS) is proposed. First, the impact of current sampling errors on HF square-wave voltage injection position sensorless control is analysed, and it is concluded that the scaling errors of current sensors cause the estimated position to oscillate at twice the fundamental frequency. Based on this conclusion, phase current reconstruction with the SCS is adopted to avoid the impact of scaling errors on rotor position estimation. To reconstruct the phase currents containing the HF component, each PWM cycle is divided into two parts: a sampling stage and an injection stage. In this way, the impact of HF square-wave voltage injection on current reconstruction is avoided, and rotor position estimation is then realised. Experiments performed on a 20-kW IPMSM platform verify the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
100. Remote sensing of salmonid spawning sites in freshwater ecosystems: The potential of low-cost UAV data.
- Author
-
Ponsioen, Lieke, Kapralova, Kalina H., Holm, Fredrik, and Hennig, Benjamin D.
- Subjects
- *
REMOTE sensing , *FRESH water , *SAMPLING errors , *WATER depth , *EMBRYOLOGY , *DRONE aircraft - Abstract
Salmonids are especially vulnerable during their embryonic development, but monitoring of their spawning grounds is rare and often relies on manual counting of their nests (redds). This method, however, is prone to sampling errors resulting in over- or underestimations of redd counts. Salmonid spawning habitat in shallow water areas can be distinguished by its visible reflection, which makes standard unmanned aerial vehicles (UAVs) a viable option for mapping it. Here, we aimed to develop a standardised approach to detecting salmonid spawning habitat that is easy and low-cost. We used a semi-automated approach, applying supervised classification techniques to UAV-derived RGB imagery from two contrasting lakes in Iceland. For both lakes, six endmember classes were obtained with high accuracy. Most importantly, producer's and user's accuracy for classifying spawning redds was >90% after applying post-classification improvements in both study areas. What we are proposing here is an entirely new approach for monitoring spawning habitats, one that addresses some of the major shortcomings of the widely used redd count method, e.g. collecting and analysing large amounts of data cost- and time-efficiently, limiting observer bias, and allowing precise quantification over different temporal and spatial scales. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
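Supervised classification of UAV RGB imagery like this reduces to pixel-level learning from labelled endmember samples, with producer's and user's accuracy corresponding to per-class recall and precision. A minimal sketch with a random-forest classifier on synthetic pixels; the class names and spectral values are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(21)

# Synthetic labelled RGB pixels for three illustrative endmember classes.
def pixels(mean, n=500):
    return np.clip(rng.normal(mean, 12.0, size=(n, 3)), 0, 255)

X = np.vstack([pixels([80, 90, 60]),      # substrate
               pixels([150, 150, 140]),   # spawning redd (cleaned gravel)
               pixels([40, 70, 110])])    # deeper water
y = np.repeat(["substrate", "redd", "water"], 500)

idx = rng.permutation(len(y))
train, test = idx[:1200], idx[1200:]
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[train], y[train])

# recall = producer's accuracy, precision = user's accuracy per class.
print(classification_report(y[test], clf.predict(X[test])))
```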