104 results for "68–95–99.7 rule"
Search Results
2. Multi-infusion with integrated multiple pressure sensing allows earlier detection of line occlusions
- Author
-
Frank Doesburg, Roy Oelen, Maurits H. Renes, Daan J Touw, Maarten W. N. Nijsten, Pedro M Lourenço, Pharmaceutical Analysis, Critical care, Anesthesiology, Peri-operative and Emergency medicine (CAPE), Medicinal Chemistry and Bioanalysis (MCB), Groningen Research Institute for Asthma and COPD (GRIAC), Biopharmaceuticals, Discovery, Design and Delivery (BDDD), and Microbes in Health and Disease (MHD)
- Subjects
medicine.medical_specialty ,Computer applications to medicine. Medical informatics ,Multi-infusion ,R858-859.7 ,Health Informatics ,Co-occlusion ,Logistic regression ,Reduction (complexity) ,ALARM ,Internal medicine ,Occlusion ,Pressure ,medicine ,Humans ,Infusion ,Mathematics ,Research ,Health Policy ,68–95–99.7 rule ,Pressure sensor ,Regression ,Computer Science Applications ,Pharmaceutical Preparations ,Line (geometry) ,Cardiology ,Equipment Failure ,Infusion pumps ,Intravenous ,Algorithms - Abstract
Background Occlusions of intravenous (IV) tubing can prevent vital and time-critical medication or solutions from being delivered into the bloodstream of patients receiving IV therapy. At low flow rates (≤ 1 ml/h) the alarm delay (time to an alert to the user) can be up to 2 h using conventional pressure threshold algorithms. In order to reduce alarm delays we developed and evaluated the performance of two new real-time occlusion detection algorithms and one co-occlusion detector that determines the correlation in trends in pressure changes for multiple pumps. Methods Bench-tested experimental runs were recorded in triplicate at rates of 1, 2, 4, 8, 16, and 32 ml/h. Each run consisted of 10 min of non-occluded infusion followed by a period of occluded infusion of 10 min or until a conventional occlusion alarm at 400 mmHg occurred. The first algorithm based on binary logistic regression attempts to detect occlusions based on the pump’s administration rate Q(t) and pressure sensor readings P(t). The second algorithm continuously monitored whether the actual variation in the pressure exceeded a threshold of 2 standard deviations (SD) above the baseline pressure. When a pump detected an occlusion using the SD algorithm, a third algorithm correlated the pressures of multiple pumps to detect the presence of a shared occlusion. The algorithms were evaluated using 6 bench-tested baseline single-pump occlusion scenarios, 9 single-pump validation scenarios and 7 multi-pump co-occlusion scenarios (i.e. with flow rates of 1 + 1, 1 + 2, 1 + 4, 1 + 8, 1 + 16, and 1 + 32 ml/h respectively). Alarm delay was the primary performance measure. Results In the baseline single-pump occlusion scenarios, the overall mean ± SD alarm delay of the regression and SD algorithms were 1.8 ± 0.8 min and 0.4 ± 0.2 min, respectively. Compared to the delay of the conventional alarm this corresponds to a mean time reduction of 76% (P = 0.003) and 95% (P = 0.001), respectively. In the validation scenarios the overall mean ± SD alarm delay of the regression and SD algorithms were respectively 1.8 ± 1.6 min and 0.3 ± 0.2 min, corresponding to a mean time reduction of 77% and 95%. In the multi-pump scenarios a correlation > 0.8 between multiple pump pressures after initial occlusion detection by the SD algorithm had a mean ± SD alarm delay of 0.4 ± 0.2 min. In 2 out of the 9 validation scenarios an occlusion was not detected by the regression algorithm before a conventional occlusion alarm occurred. Otherwise no occlusions were missed. Conclusions In single pumps, both the regression and SD algorithm considerably reduced alarm delay compared to conventional pressure limit-based detection. The SD algorithm appeared to be more robust than the regression algorithm. For multiple pumps the correlation algorithm reliably detected co-occlusions. The latter may be used to localize the segment of tubing in which the occlusion occurs. Trial registration Not applicable.
- Published
- 2021
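A minimal illustration of the 2-SD pressure-threshold detector and the pump-pressure correlation check described in entry 2 above. This is a sketch, not the authors' algorithm: the baseline window, the requirement of several consecutive exceedances, and all names below are assumptions.

```python
import numpy as np

def sd_occlusion_alarm(pressure, baseline_n=60, k=2.0, run=5):
    """Alarm index once `run` consecutive samples exceed the baseline mean
    pressure by more than k standard deviations; None if no alarm."""
    base = pressure[:baseline_n]
    mu, sd = base.mean(), base.std(ddof=1)
    above = pressure > mu + k * sd
    count = 0
    for i in range(baseline_n, len(pressure)):
        count = count + 1 if above[i] else 0
        if count >= run:
            return i
    return None

def co_occlusion(p1, p2, threshold=0.8):
    """Declare a shared (co-)occlusion when two pump pressures are strongly
    correlated after an initial SD-algorithm alarm."""
    return np.corrcoef(p1, p2)[0, 1] > threshold

# toy example: flat baseline, then a pressure ramp simulating an occlusion
t = np.arange(300)
p1 = 50 + np.random.normal(0, 0.5, t.size) + np.where(t > 150, 0.8 * (t - 150), 0)
p2 = 48 + np.random.normal(0, 0.5, t.size) + np.where(t > 150, 0.6 * (t - 150), 0)
print(sd_occlusion_alarm(p1), co_occlusion(p1[150:], p2[150:]))
```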
3. Healthy and clinical meta-data and aggregated mini-mental status exam scores for the Persian speaking population
- Author
-
Seyed Reza Alvani, Meena Imma Saleh, Seyed Mehrshad Parvin Hosseini, Lama R. Alameddine, and Amir Ramezani
- Subjects
education.field_of_study ,Population ,68–95–99.7 rule ,Cognition ,language.human_language ,Standard deviation ,language ,Normative ,Cognitive decline ,education ,Psychology ,Mini-Mental Status Exam ,General Psychology ,Persian ,Clinical psychology - Abstract
The Mini-Mental Status Examination (MMSE) is a widely used cognitive screening measure. The MMSE is used across diverse cultures, yet multiple factors may impact test performance, interpretation, and normative statistics. The current study examines factors specific to Iranians that influence performance on the Persian MMSE. A literature review compiled studies of the Persian MMSE administered to both healthy and clinical groups. Out of 1008 articles found, 45 met inclusion criteria. Meta-analysis of aggregate data was used to develop global means, standard deviations, and cutoff scores for both clinical and healthy groups. Iranian MMSE normative mean and standard deviation values were 27 and 2.2, respectively. Iranian MMSE clinical mean and standard deviation values were 22 and 5.7, respectively. An MMSE cut-off score of 22.6, or any score below 23 (e.g.
- Published
- 2021
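The 22.6 cut-off reported in entry 3 above follows from the normative mean minus two standard deviations; a one-line check of that arithmetic (values taken from the abstract):

```python
# cut-off = normative mean - 2 * SD, using the values reported in the abstract above
normative_mean, normative_sd = 27.0, 2.2
print(normative_mean - 2 * normative_sd)   # 22.6 -> scores below ~23 flagged
```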
4. Everything in Moderation: A Proposed Improvement to Variance Calculation for Visualizing Latent Endogenous Moderation
- Author
-
Matthew A. Diemer and Michael Frisby
- Subjects
Sociology and Political Science ,Simple (abstract algebra) ,Modeling and Simulation ,68–95–99.7 rule ,Econometrics ,General Decision Sciences ,Variance (accounting) ,Moderation ,General Economics, Econometrics and Finance ,Structural equation modeling ,Mathematics ,Visualization - Abstract
Despite the sacrifices to precision, researchers frequently visualize moderation by using the simple slopes approach with moderator endpoints of ± 1 and 2 standard deviations. Yet in structural equ...
- Published
- 2021
5. Modernization of bone age assessment: comparing the accuracy and reliability of an artificial intelligence algorithm and shorthand bone age to Greulich and Pyle
- Author
-
Hayley Eng, Anthony Cooper, Mina Gerges, and Harpreet Chhina
- Subjects
Male ,Intraclass correlation ,Radiography ,Standard deviation ,030218 nuclear medicine & medical imaging ,law.invention ,03 medical and health sciences ,0302 clinical medicine ,Artificial Intelligence ,law ,Age Determination by Skeleton ,Humans ,Medicine ,Radiology, Nuclear Medicine and imaging ,Reliability (statistics) ,Stopwatch ,030203 arthritis & rheumatology ,business.industry ,68–95–99.7 rule ,Reproducibility of Results ,Bone age ,Gold standard (test) ,Shorthand ,Female ,Artificial intelligence ,business ,Algorithm - Abstract
Greulich and Pyle (GP) is one of the most common methods to determine bone age from hand radiographs. In recent years, new methods such as the shorthand bone age (SBA) method and automated artificial intelligence algorithms have been developed to increase the efficiency of bone age analysis. The aim of this study is to evaluate the accuracy and reliability of these two methods and examine whether the reduction in analysis time compromises their efficacy. Two hundred thirteen males and 213 females had their bone age determined by two separate raters using the SBA and GP methods. Three weeks later, the two raters repeated the analysis of the radiographs. The raters timed themselves using an online stopwatch. De-identified radiographs were securely uploaded to an automated algorithm developed by a group of radiologists in Toronto. The gold standard was determined to be the radiology report attached to each radiograph, written by experienced radiologists using GP. Intraclass correlation between each method and the gold standard fell within the range of 0.8–0.9, highlighting significant agreement. Most of the comparisons showed a statistically significant difference between the new methods and the gold standard; however, the difference may not be clinically significant as it ranges between 0.25 and 0.5 years. A bone age is considered clinically abnormal if it falls outside 2 standard deviations of the chronological age; the standard deviations are calculated and provided in the GP atlas. The shorthand bone age method and the automated algorithm produced values that are in agreement with the gold standard while reducing analysis time.
- Published
- 2020
6. Improvement of the Ushakov bound
- Author
-
Hidekazu Tanaka and Kensho Kobayashi
- Subjects
Statistics and Probability ,Combinatorics ,68–95–99.7 rule ,Upper and lower bounds ,Unimodal distribution ,Computer Science::Databases ,Computer Science::Cryptography and Security ,Mathematics - Abstract
Considering a discrete unimodal distribution, an upper bound on a tail probability about a mode is suggested, which can be shown to be sharper than both the Bienaymé–Chebyshev bound and the Ushakov...
- Published
- 2020
7. Investigation of the effect of climate change on heat waves
- Author
-
Rebwar Dara, Safieh Javadinejad, and Forough Jafary
- Subjects
Maximum temperature ,Thermal ,68–95–99.7 rule ,Climate change ,Environmental science ,Climate model ,General Medicine ,Thermal wave ,Heat wave ,Atmospheric sciences - Abstract
The purpose of this research is to identify the heat waves of the South Sea of Iran and compare present and future conditions. To reach this goal, 35 years of average daily temperature data were used. To predict future heat waves, maximum temperature data from four models of the CMIP5 series under the RCP 8.5 scenario were used for the period 2040-2074. Artificial neural networks were applied to the output of the climate models, and the Fumiaki index was used to identify the thermal waves. Using a MATLAB program, days whose temperature exceeded the mean by more than 2 standard deviations were identified as thermal waves. The results show that short-term heat waves are the most likely to occur. Heat waves in the base period show a significant but weak trend, with their frequency increasing in recent years. In the period from 2040 to 2074, the frequency of thermal waves has a significant decreasing trend, but usually with low coefficients. However, for some stations from 2040 to 2074, the frequency of predicted heat waves increased.
- Published
- 2020
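A minimal sketch of the 2-standard-deviation heat-wave flagging described in entry 7 above, using a day-of-year climatology as the reference. The Fumiaki index, the neural-network post-processing and the actual station data are not reproduced; all names and parameters are assumptions.

```python
import numpy as np
import pandas as pd

def flag_heatwave_days(tmax):
    """Flag days whose temperature anomaly (relative to the day-of-year
    climatology) exceeds 2 standard deviations of the anomalies."""
    clim = tmax.groupby(tmax.index.dayofyear).transform("mean")
    anom = tmax - clim
    return tmax.index[anom > 2 * anom.std()]

# synthetic 35-year daily series: seasonal cycle plus noise
dates = pd.date_range("1980-01-01", periods=365 * 35, freq="D")
tmax = pd.Series(30 + 8 * np.sin(2 * np.pi * dates.dayofyear / 365)
                 + np.random.normal(0, 2, dates.size), index=dates)
print(len(flag_heatwave_days(tmax)))   # roughly the warmest ~2% of anomaly days
```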
8. Basics on Categorizing Travel-Time-Based Degrees of Satisfaction Using Triangular Fuzzy-Membership Functions
- Author
-
Rohini Kanthi, Akash Anand, Varghese George, M. S. Padmashree, and Moduga Tagore
- Subjects
Normal distribution ,Revealed preference ,68–95–99.7 rule ,Statistics ,Mode (statistics) ,Context (language use) ,Fuzzy logic ,Defuzzification ,Standard deviation ,Mathematics - Abstract
The travel desires of trip-makers in urban activity centres depend mainly on the location of residential areas, proximity to various activity centres, household characteristics, and socio-economic factors that influence the choice of travel modes. Decision-making with regard to the choice of a particular mode of travel is fuzzy in nature, and seldom follows a rigid rule-based approach. In this context, the fuzzy-logic approach was considered since it can handle the inherent randomness in mode-choice decision-making. The present study focuses on the application of this technique using revealed preference survey data collected through CES and MVA Systra, later compiled and corrected in various stages at NITK. The difference between the actual travel time by a particular mode and the theoretical travel time based on average vehicular speeds was used as an important indicator in determining the degrees of satisfaction of the trip-maker. This indicator was computed and fitted using a normal distribution. It was assumed that indicator values between µ-3σ and µ could be considered for the category of satisfied trip-makers according to the three sigma rule, where µ is the mean indicator value and σ represents the standard deviation. The computed values of the indicators were used in classifying the data into 6 categories of degrees of satisfaction that formed the basic framework for modelling using the fuzzy-logic technique. This paper aims at understanding the basic mathematical computations involved in defuzzification using the centroid method for triangular membership functions, and provides a comparison with results obtained using MATLAB.
- Published
- 2020
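A small sketch of the two building blocks named in entry 8 above: a triangular membership function and centroid (centre-of-gravity) defuzzification. The axis, the category limits and all names below are illustrative assumptions, not the study's calibrated functions.

```python
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular membership function: 0 at a and c, 1 at the apex b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def centroid_defuzzify(x, mu):
    """Centre-of-gravity defuzzification on a uniform grid: sum(x*mu)/sum(mu)."""
    return np.sum(x * mu) / np.sum(mu)

# travel-time-difference axis (minutes, illustrative) and one membership function
x = np.linspace(-30.0, 30.0, 601)
mu = tri_membership(x, -10.0, -2.0, 6.0)     # a "satisfied" category, skewed left
print(round(centroid_defuzzify(x, mu), 2))   # centroid of the triangle, (a+b+c)/3 = -2.0
```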
9. Digital neuropsychological test performance in a large sample of uninjured collegiate athletes
- Author
-
Sabrina M. Todaro, Jessica Saalfield, Fiona N. Conway, Scott A. Weismiller, Kelsey L. Piersol, Carrie Esopenko, Marsha E. Bates, Jennifer F. Buckman, Elisabeth A. Wilde, and Kyle Brostrand
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,biology ,Athletes ,Trail Making Test ,68–95–99.7 rule ,Sample (statistics) ,Neuropsychological test ,Audiology ,Standard score ,biology.organism_classification ,Article ,Cognitive test ,Neuropsychology and Physiological Psychology ,Developmental and Educational Psychology ,medicine ,Normative ,Psychology - Abstract
Digital neuropsychological test batteries are popular in college athletics; however, well-validated digital tests that are short and portable are needed to expand the feasibility of performing cognitive testing quickly, reliably, and outside standard clinical settings. This study assessed performance on digital versions of Trail Making Test (dTMT) and a modified Symbol Digit Modalities Test (dSDMT) in uninjured collegiate athletes (n = 537; 47% female) using the C3Logix baseline assessment module. Time to complete (dTMT) and the number of correct responses (dSDMT) were computed, transformed into z scores, and compared to age-matched normative data from analogous paper-and-pencil tests. Overall sample performance was compared to normative sample performance using Cohen's d. Sample averages on the dTMT, Part A, and dSDMT were similar to published norms; 97 and 92% of z scores fell within 2 standard deviations of normative means, respectively. The sample averaged faster completion times on dTMT, Part B than published norms, although 98% of z scores were within 2 standard deviations of the normative means. Brief, digitized tests may be useful in populations and testing environments when longer cognitive test batteries are impractical. Future studies should assess the ability of these tests to detect clinically relevant changes following a suspected head injury.
- Published
- 2021
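A hedged sketch of the z-score comparison described in entry 9 above: raw scores are standardized against a normative mean and SD, and the share falling within ±2 SD is reported. The normative values and simulated scores below are placeholders, not the study's data.

```python
import numpy as np

def z_scores(raw, norm_mean, norm_sd):
    """Standardize raw test scores against a normative mean and SD."""
    return (np.asarray(raw, dtype=float) - norm_mean) / norm_sd

# placeholder normative values and simulated completion times (seconds)
norm_mean, norm_sd = 25.0, 8.0
times = np.random.normal(24.0, 7.5, 537)
z = z_scores(times, norm_mean, norm_sd)
within_2sd = np.mean(np.abs(z) <= 2) * 100
print(f"{within_2sd:.1f}% of z scores within 2 SD of the normative mean")
```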
10. Set of indicators for dependability evaluation of gas compression units
- Subjects
Sample size determination ,68–95–99.7 rule ,Statistics ,Outlier ,Dependability ,Availability factor ,Lorenz curve ,Confidence interval ,Standard deviation ,Mathematics - Abstract
The paper is dedicated to the improvement of the evaluation methods of one of the most important operating characteristics of gas compression units (GCUs), i.e. dependability, under the conditions of decreasing pipeline utilization rate. Currently, the dependability of units is characterized by a set of parameters based on the identification of the time spent by a unit in a certain operational state. The paper presents the primary findings regarding the dependability coefficients of GPA-Ts-18 units, 41 of which are operated in multi-yard compressor stations (CSs) of one of Gazprom's subsidiaries. The dependability indicators (technical state coefficient, availability coefficient, operational availability coefficient) identified as part of the research are given as well. GCUs were classified into groups depending on the coefficient values. The feasibility of using integral indicators in the analysis of GCU groups' dependability was examined. It was proposed to use confidence intervals for identification of the integral level of dependability of the operated GCU stock and the ways of maintaining the operability of units under the conditions of decreasing main gas pipeline utilization rate. The Gini index was suggested for the purpose of generalized estimation of GCU groups' dependability. It is shown that the advantage of the Gini coefficient is that it allows taking into account the ranks of the analyzed features in groups. The graphic interpretation of the findings was executed with a Lorenz curve. The paper implements the sigma rule that characterizes the probability of the actual coefficient value being within the confidence interval, i.e. prediction limits (upper and lower) within which the actual values will fall with a given probability. The confidence intervals were identified by the type of coefficient distribution and the standard deviation, σ. A histogram of an interval range of the technical utilization coefficient distribution is given as an example. Testing of the hypothesis of the distribution type at confidence level 0.95 showed that the distribution of coefficients is normal. Using the method of moments, the mathematical expectation and mean square deviation for the distribution of the values of each type of dependability indicator were established. Using the sigma rule, all extreme outliers among the GCUs in terms of the level of the factor attribute were excluded from the body of input data; all units whose factor attribute value did not fall in the interval were excluded. According to the three sigma rule, 3 and 2 GCUs did not fall in the confidence interval (µ±3σ) in terms of the utilization factor and availability factor, respectively. The analysis of the causes of the low availability coefficients of these GCUs showed that the units had spent long periods in maintenance. The paper sets forth summary data on the maximum allowable value of the Gini index of dependability coefficients (CTU, CA, COA) depending on the sample size (the complete sample of 41 units and samples with the interval of 1, 2, 3 sigma). For higher values of the Gini index it is recommended to apply measures to individual units in order to improve the dependability of the operated GCU stock.
- Published
- 2018
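A hedged sketch of two generic ingredients named in entry 10 above, three-sigma screening of coefficient values and a Gini index over a group of units; it is not the authors' procedure, and the illustrative availability values are invented.

```python
import numpy as np

def three_sigma_filter(values, k=3.0):
    """Keep only values inside mean +/- k*SD (sigma-rule screening of outliers)."""
    v = np.asarray(values, dtype=float)
    mu, sd = v.mean(), v.std(ddof=1)
    return v[np.abs(v - mu) <= k * sd]

def gini(values):
    """Gini index of inequality for non-negative values (0 = perfect equality)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    cum = np.cumsum(v)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

availability = np.random.uniform(0.90, 0.99, 41)   # invented coefficients for 41 units
print(len(three_sigma_filter(availability)), round(gini(availability), 3))
```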
11. An algorithm to detect non-background signals in greenhouse gas time series from European tall tower and mountain stations
- Author
-
Michal Heliasz, Michel Ramonet, Richard Engelen, Philippe Ciais, Martin Steinbacher, Jérôme Tarniewicz, L. Rivier, Dagmar Kubistin, Ivan Mammarella, Sébastien Conil, Meelis Mölder, Alex Resovsky, Jennifer Müller-Williams, Matthias Lindauer, Laboratoire des Sciences du Climat et de l'Environnement [Gif-sur-Yvette] (LSCE), Université de Versailles Saint-Quentin-en-Yvelines (UVSQ)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Institut national des sciences de l'Univers (INSU - CNRS)-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS), ICOS-RAMCES (ICOS-RAMCES), Université de Versailles Saint-Quentin-en-Yvelines (UVSQ)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Institut national des sciences de l'Univers (INSU - CNRS)-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS)-Université de Versailles Saint-Quentin-en-Yvelines (UVSQ)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Institut national des sciences de l'Univers (INSU - CNRS)-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS), ICOS-ATC (ICOS-ATC), Swiss Federal Laboratories for Materials Science and Technology [Dübendorf] (EMPA), Helsingin yliopisto = Helsingfors universitet = University of Helsinki, Lund University [Lund], Meteorologisches Observatorium Hohenpeißenberg (MOHp), Deutscher Wetterdienst [Offenbach] (DWD), European Centre for Medium-Range Weather Forecasts (ECMWF), Institut national des sciences de l'Univers (INSU - CNRS)-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Université de Versailles Saint-Quentin-en-Yvelines (UVSQ), Institut national des sciences de l'Univers (INSU - CNRS)-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Université de Versailles Saint-Quentin-en-Yvelines (UVSQ)-Institut national des sciences de l'Univers (INSU - CNRS)-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Université de Versailles Saint-Quentin-en-Yvelines (UVSQ), University of Helsinki, Institute for Atmospheric and Earth System Research (INAR), Department of Physics, Micrometeorology and biogeochemical cycles, and Doctoral Programme in Atmospheric Sciences
- Subjects
Atmospheric Science ,010504 meteorology & atmospheric sciences ,Environmental engineering ,Residual ,Atmospheric sciences ,114 Physical sciences ,01 natural sciences ,Carbon cycle ,03 medical and health sciences ,Earthwork. Foundations ,Seasonal adjustment ,Physics::Atmospheric and Oceanic Physics ,030304 developmental biology ,0105 earth and related environmental sciences ,Polynomial regression ,[SDU.OCEAN]Sciences of the Universe [physics]/Ocean, Atmosphere ,0303 health sciences ,[STAT.AP]Statistics [stat]/Applications [stat.AP] ,TA715-787 ,68–95–99.7 rule ,Filter (signal processing) ,TA170-171 ,Annual cycle ,NITROGEN ,JUNGFRAUJOCH ,13. Climate action ,Greenhouse gas ,Environmental science - Abstract
We present a statistical framework to identify regional signals in station-based CO2 time series with minimal local influence. A curve-fitting function is first applied to the detrended time series to derive a harmonic describing the annual CO2 cycle. We then combine a polynomial fit to the data with a short-term residual filter to estimate the smoothed cycle and define a seasonally adjusted noise component, equal to 2 standard deviations of the smoothed cycle about the annual cycle. Spikes in the smoothed daily data which surpass this ±2σ threshold are classified as anomalies. Examining patterns of anomalous behavior across multiple sites allows us to quantify the impacts of synoptic-scale atmospheric transport events and better understand the regional carbon cycling implications of extreme seasonal occurrences such as droughts.
- Published
- 2021
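A minimal sketch of the general idea in entry 11 above: fit an annual harmonic plus trend to daily CO2, then flag days whose residual exceeds 2 standard deviations. The actual ICOS curve-fitting, smoothing and residual filtering are not reproduced; all names and the synthetic series are assumptions.

```python
import numpy as np

def co2_anomaly_flags(day, co2, k=2.0):
    """Fit a linear trend plus a first-harmonic annual cycle to daily CO2,
    then flag days whose residual exceeds k standard deviations."""
    t = np.asarray(day, dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / 365.25),
                         np.cos(2 * np.pi * t / 365.25)])
    beta, *_ = np.linalg.lstsq(X, co2, rcond=None)
    resid = co2 - X @ beta
    return np.abs(resid) > k * resid.std(ddof=1)

days = np.arange(3 * 365)
co2 = 410 + 0.006 * days + 6 * np.cos(2 * np.pi * days / 365.25) \
      + np.random.normal(0, 0.8, days.size)
co2[400] += 8                               # inject a synthetic "transport event" spike
print(co2_anomaly_flags(days, co2)[400])    # the injected spike is flagged
```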
12. Statistical Methods for Estimating the Pipelines Reliability
- Author
-
Asaf Hajiyev, Yasin Rustamov, and Narmina Abdullayeva
- Subjects
Pipeline transport ,Mathematical optimization ,Computer science ,Differential equation ,Numerical analysis ,68–95–99.7 rule ,Pipeline (software) ,Energy (signal processing) ,Reliability (statistics) ,Confidence and prediction bands - Abstract
The problem of estimating the reliability of energy pipelines has a complicated structure and is attractive from both a theoretical and a practical point of view. The process of energy transportation through a pipeline can be described by a differential equation, but its solution by analytical or even numerical methods faces some difficulties. One of the effective methods of investigation is collecting data and carrying out their statistical analysis. The process of energy transportation through a pipeline also depends on many parameters, and their number can increase over time. In the paper, different approaches for investigating such problems are introduced. A method for estimating the main parameters and constructing a confidence band for an unknown function describing the behavior of energy transportation through a pipeline is suggested. Numerical examples demonstrating the theoretical results are given.
- Published
- 2020
13. USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS
- Author
-
J C Sáez, J B Blázquez, G Benito, and J Quiñones
- Subjects
Gaussian ,Probability density function ,Radiation Dosage ,01 natural sciences ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,symbols.namesake ,0302 clinical medicine ,Radiation Monitoring ,0103 physical sciences ,Applied mathematics ,Radiology, Nuclear Medicine and imaging ,Probability ,Mathematics ,Models, Statistical ,Radiation ,Hermite polynomials ,010304 chemical physics ,Radiological and Ultrasound Technology ,68–95–99.7 rule ,Public Health, Environmental and Occupational Health ,General Medicine ,Skewness ,Radiological weapon ,symbols ,Algorithms - Abstract
The most interesting events in a radiological monitoring network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which are then no longer Gaussian. Within this work, the probability of a dose more than 2 standard deviations above the mean is proposed as a surveillance quantity for higher doses. This probability is estimated by reconstructing the PDF with Hermite polynomials. The result is that the probability is ~6 ± 1%, much greater than the 2.5% corresponding to a Gaussian PDF, which may be of interest in the design of alarm levels for higher doses.
- Published
- 2018
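A hedged sketch of a Gram-Charlier (Hermite-polynomial) reconstruction of a skewed PDF and a numerical estimate of the probability of exceeding the mean by more than 2 standard deviations. The paper's exact expansion and the H*(10) records are not reproduced; the toy dose data below are invented.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gram_charlier_tail(sample, z0=2.0):
    """Estimate P(X > mean + z0*SD) from a Gram-Charlier A expansion
    built from the sample skewness and excess kurtosis."""
    x = np.asarray(sample, dtype=float)
    z = (x - x.mean()) / x.std(ddof=1)
    skew = np.mean(z ** 3)
    exkurt = np.mean(z ** 4) - 3.0
    # coefficients of 1 + (skew/6)*He3 + (exkurt/24)*He4 in the He basis
    coeffs = [1.0, 0.0, 0.0, skew / 6.0, exkurt / 24.0]
    grid = np.linspace(z0, 8.0, 4000)
    phi = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
    pdf = phi * hermeval(grid, coeffs)
    return np.sum(pdf) * (grid[1] - grid[0])    # simple rectangle-rule integral

doses = np.random.gamma(shape=9.0, scale=0.01, size=5000)   # right-skewed toy record
print(round(gram_charlier_tail(doses), 4))   # above the ~2.3% of a pure Gaussian tail
```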
14. THE USE OF STATISTICAL CRITERIA FOR EVALUATION TEST OF DATA ON PROPERTIES OF INORGANIC SUBSTANCES
- Author
-
V. A. Dudarev and I. D. Tarasenko
- Subjects
three sigma rule ,Basis (linear algebra) ,Series (mathematics) ,68–95–99.7 rule ,Sampling (statistics) ,Chauvenet's criterion ,outlier ,Test (assessment) ,lcsh:Chemistry ,Chemistry ,lcsh:QD1-999 ,Statistics ,Outlier ,methods of mathematical statistics ,statistical test ,chauvenet's criterion ,grubbs criterion ,QD1-999 ,Mathematics ,Statistical hypothesis testing - Abstract
Three statistical criteria were compared in the article on the basis of a sample of chemical data. The significance level was taken as 0.05, since this value is most often used in technical calculations. The data were evaluated for one-sided outliers of the variational series using the statistical criteria. With Grubbs' criterion, no outliers were found on either the right or the left. Further examination of the data with the three-sigma rule also showed no outliers. The last criterion used to detect errors was Chauvenet's criterion; when the data were checked with this criterion, one error on the right was detected and no outliers were found on the left. According to the generalized results, namely a vote among the three statistical criteria, it can be concluded that the variational series belongs to the same general population. The criteria presented in the article can be applied to the analysis of any data and to drawing conclusions from them when errors are found in a sample.
- Published
- 2017
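A small sketch of two of the screening rules discussed in entry 14 above, the three-sigma rule and Chauvenet's criterion, applied to an invented series (Grubbs' test and the study's chemical data are not reproduced). It illustrates how Chauvenet's criterion can flag a point that the three-sigma rule retains, as in the article.

```python
import math
import numpy as np

def three_sigma_outliers(x):
    """Indices of points lying outside mean +/- 3*SD."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std(ddof=1)
    return np.nonzero(np.abs(x - mu) > 3 * sd)[0]

def chauvenet_outliers(x):
    """Chauvenet's criterion: reject a point if N * P(|Z| >= |z_i|) < 0.5."""
    x = np.asarray(x, dtype=float)
    mu, sd, n = x.mean(), x.std(ddof=1), x.size
    z = np.abs(x - mu) / sd
    tail_prob = np.array([math.erfc(zi / math.sqrt(2)) for zi in z])  # two-sided
    return np.nonzero(n * tail_prob < 0.5)[0]

data = np.array([10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 9.8, 12.9])  # invented series
print(three_sigma_outliers(data), chauvenet_outliers(data))
```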
15. Identifying outliers of non-Gaussian groundwater state data based on ensemble estimation for long-term trends
- Author
-
Il-Moon Chung, Sungwook Choung, Weon Shik Han, Eungyu Park, Kue Young Kim, and Jina Jeong
- Subjects
Computer science ,Gaussian ,0208 environmental biotechnology ,68–95–99.7 rule ,02 engineering and technology ,computer.software_genre ,Regression ,020801 environmental engineering ,symbols.namesake ,Statistics ,Outlier ,symbols ,Range (statistics) ,Anomaly detection ,Median absolute deviation ,Data mining ,computer ,Water Science and Technology ,Quantile - Abstract
A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods – the three sigma rule (3σ), interquartile range (IQR), and median absolute deviation (MAD) – that take advantage of the ensemble regression method are proposed by considering the non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to the other two methods. However, the IQR method shows a limitation in that it identifies excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
- Published
- 2017
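A hedged sketch of the three detectors named in entry 15 above (3σ rule, IQR, MAD) applied to a heavy-tailed residual series; the ensemble-regression step of the paper is omitted and the residuals are simulated.

```python
import numpy as np

def outliers_3sigma(r):
    mu, sd = r.mean(), r.std(ddof=1)
    return np.abs(r - mu) > 3 * sd

def outliers_iqr(r, k=1.5):
    q1, q3 = np.percentile(r, [25, 75])
    iqr = q3 - q1
    return (r < q1 - k * iqr) | (r > q3 + k * iqr)

def outliers_mad(r, k=3.0):
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    return np.abs(r - med) > k * 1.4826 * mad   # 1.4826 rescales MAD to an SD for Gaussian data

residuals = np.random.standard_t(df=3, size=500)   # heavy-tailed, non-Gaussian residuals
for f in (outliers_3sigma, outliers_iqr, outliers_mad):
    print(f.__name__, int(f(residuals).sum()))
```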
16. Comparison of threshold of toxicological concern (TTC) values to oral reference dose (RfD) values
- Author
-
Ly Ly Pham, Susan J. Borghoff, and Chad M. Thompson
- Subjects
Prioritization ,Databases, Factual ,Administration, Oral ,Class iii ,010501 environmental sciences ,Toxicology ,030226 pharmacology & pharmacy ,01 natural sciences ,Risk Assessment ,Hazardous Substances ,03 medical and health sciences ,0302 clinical medicine ,Statistics ,Low exposure ,Humans ,cardiovascular diseases ,Oral toxicity ,United States Environmental Protection Agency ,0105 earth and related environmental sciences ,Mathematics ,Reference dose ,No-Observed-Adverse-Effect Level ,Dose-Response Relationship, Drug ,68–95–99.7 rule ,General Medicine ,Emergency situations ,United States - Abstract
Thousands of chemicals have limited or no hazard data readily available to characterize human risk. The threshold of toxicological concern (TTC) constitutes a science-based tool for screening-level risk-based prioritization of chemicals with low exposure. Herein we compare TTC values to more rigorously derived reference dose (RfD) values for 288 chemicals in the U.S. Environmental Protection Agency's (US EPA) Integrated Risk Information System (IRIS) database. Using the Cramer decision tree and the Kroes tiered decision tree approaches to determine TTC values, the TTC for the majority of these chemicals was determined to be lower than the corresponding RfD value. The ratio log10(RfD/TTC) was used to measure the differences between these values, and the mean ratio for the substances evaluated was ~0.74 and ~0.79 for the Cramer and Kroes approach, respectively, when considering the Cramer Classes only. These data indicate that the RfD values for Cramer Class III compounds were, on average, ~6-fold higher than their TTC value. These analyses indicate that provisional oral toxicity values might be estimated from TTCs in data-poor or emergency situations; moreover, RfD values that are well below TTC values (e.g., 2 standard deviations below the log10(Ratio)) might be overly conservative and targets for re-evaluation.
- Published
- 2019
17. Kullback-Leibler distance-based enhanced detection of incipient anomalies
- Author
-
Fouzi Harrou, Ying Sun, and Muddu Madakyaru
- Subjects
0209 industrial biotechnology ,Engineering ,Kullback–Leibler divergence ,General Chemical Engineering ,Energy Engineering and Power Technology ,02 engineering and technology ,Management Science and Operations Research ,computer.software_genre ,Residual ,Industrial and Manufacturing Engineering ,Synthetic data ,020901 industrial engineering & automation ,Chart ,0202 electrical engineering, electronic engineering, information engineering ,Safety, Risk, Reliability and Quality ,Divergence (statistics) ,business.industry ,68–95–99.7 rule ,Statistical process control ,Control and Systems Engineering ,020201 artificial intelligence & image processing ,Anomaly detection ,Data mining ,business ,computer ,Food Science - Abstract
Accurate and effective anomaly detection and diagnosis of modern engineering systems by monitoring processes ensure reliability and safety of a product while maintaining desired quality. In this paper, an innovative method based on Kullback-Leibler divergence for detecting incipient anomalies in highly correlated multivariate data is presented. We use a partial least square (PLS) method as a modeling framework and a symmetrized Kullback-Leibler distance (KLD) as an anomaly indicator, where it is used to quantify the dissimilarity between current PLS-based residual and reference probability distributions obtained using fault-free data. Furthermore, this paper reports the development of two monitoring charts based on the KLD. The first approach is a KLD-Shewhart chart, where the Shewhart monitoring chart with a three sigma rule is used to monitor the KLD of the response variables residuals from the PLS model. The second approach integrates the KLD statistic into the exponentially weighted moving average monitoring chart. The performance of the PLS-based KLD anomaly-detection methods is illustrated and compared to that of conventional PLS-based anomaly detection methods. Using synthetic data and simulated distillation column data, we demonstrate the greater sensitivity and effectiveness of the developed method over the conventional PLS-based methods, especially when data are highly correlated and small anomalies are of interest. Results indicate that the proposed chart is a very promising KLD-based method because KLD-based charts are, in practice, designed to detect small shifts in process parameters.
- Published
- 2016
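A minimal sketch of a symmetrized Kullback-Leibler distance between two univariate Gaussians combined with a Shewhart-style three-sigma limit, the pairing described in entry 17 above; the PLS modelling, the EWMA chart and all parameter choices below are assumptions.

```python
import numpy as np

def sym_kld_gaussian(mu1, s1, mu2, s2):
    """Symmetrized KL distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    kl12 = np.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5
    kl21 = np.log(s1 / s2) + (s2 ** 2 + (mu2 - mu1) ** 2) / (2 * s1 ** 2) - 0.5
    return kl12 + kl21

def shewhart_upper_limit(kld_history):
    """Classical three-sigma upper control limit from fault-free KLD values."""
    return np.mean(kld_history) + 3 * np.std(kld_history, ddof=1)

rng = np.random.default_rng(0)
# KLD statistic on 20 fault-free windows of 100 residuals each
fault_free = [sym_kld_gaussian(0.0, 1.0, w.mean(), w.std(ddof=1))
              for w in np.split(rng.normal(0.0, 1.0, 2000), 20)]
ucl = shewhart_upper_limit(fault_free)
shifted = rng.normal(0.4, 1.0, 100)          # window with a modest mean shift
print(sym_kld_gaussian(0.0, 1.0, shifted.mean(), shifted.std(ddof=1)) > ucl)
```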
18. Fixed-Altitude Stair-Climbing Test Replacing the Conventional Symptom-Limited Test. A Pilot Study
- Author
-
Marcelo F. Jiménez, María Rodríguez, M. Teresa Gómez, Gonzalo Varela, and Nuria M. Novoa
- Subjects
medicine.medical_specialty ,Coefficient of determination ,business.industry ,Concordance ,68–95–99.7 rule ,Repeated measures design ,Regression analysis ,General Medicine ,Surgery ,Test (assessment) ,Linear regression ,Statistics ,medicine ,Climb ,business ,human activities - Abstract
Introduction The objective of this study was to investigate whether a patient's maximum capacity is comparable in 2 different stair-climbing tests, allowing the simplest to be used in clinical practice. Method Prospective, observational study of repeated measures on 33 consecutive patients scheduled for lung resection. Stair-climbing tests were: the standard test (climb to 27 m) and the alternative fixed-altitude test (climb to 12 m). In both cases, heart rate and oxygen saturation were monitored before and after the test. The power output of stair-climbing for each test (Watt1 for the standard and Watt2 for the fixed-altitude test) was calculated using the following equation: Power (W)=weight (kg)*9.8*height (m)/time (s). Concordance between tests was evaluated using a regression model and the residuals were plotted against Watt1. Finally, power output values were analyzed using a Bland–Altman plot. Results Twenty-one male and 12 female patients (mean age 63.2±11.2) completed both tests. Only 12 patients finished the standard test, while all finished the fixed-altitude test. Mean power output values were Watt1: 184.1±65 and Watt2: 214.5±75.1. The coefficient of determination (R2) in the linear regression was 0.67. No fixed bias was detected after plotting the residuals. The Bland–Altman plot showed that 32 out of 33 values were within 2 standard deviations of the differences between methods. Conclusions The results of this study show a reasonable level of concordance between both stair-climbing tests. The standard test can be replaced by the fixed-altitude test up to 12 m.
- Published
- 2015
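The power-output equation quoted in entry 18 above, Power (W) = weight (kg) * 9.8 * height (m) / time (s), as a small worked example (the patient values are illustrative):

```python
def climb_power_watts(weight_kg, height_m, time_s):
    """Stair-climbing power output: P = m * g * h / t (equation from the abstract)."""
    return weight_kg * 9.8 * height_m / time_s

# illustrative patient: 75 kg climbing the 12 m fixed-altitude test in 40 s
print(round(climb_power_watts(75, 12, 40), 1))   # 220.5 W, near the reported Watt2 mean
```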
19. Evaluation of the Generalizability of the Number of Abnormal Scores and the Overall Test Battery Mean as Measures of Performance Validity to a Different Test Battery
- Author
-
Jessica H. Stenclik, Graham M. Silk-Eglit, Andrea S. Miele, Robert J. McCaffrey, and Julie K. Lynch
- Subjects
Adult ,Male ,Battery (electricity) ,Test battery ,medicine.medical_specialty ,Psychometrics ,Sample (statistics) ,Neuropsychological Tests ,Audiology ,Sensitivity and Specificity ,Developmental psychology ,Disability Evaluation ,Current sample ,Reference Values ,Developmental and Educational Psychology ,medicine ,Humans ,Generalizability theory ,Data Curation ,68–95–99.7 rule ,Neuropsychology ,Reproducibility of Results ,Middle Aged ,Test (assessment) ,Neuropsychology and Physiological Psychology ,Brain Injuries ,Female ,Cognition Disorders ,Psychology - Abstract
Davis, Axelrod, McHugh, Hanks, and Millis (2013) documented that in a battery of 25 tests, producing 15, 10, and 5 abnormal scores at 1, 1.5, and 2 standard deviations below the norm-referenced mean, respectively, and an overall test battery mean (OTBM) of T ≤ 38 accurately identifies performance invalidity. However, generalizability of these findings to other samples and test batteries remains unclear. This study evaluated the use of abnormal scores and the OTBM as performance validity measures in a different sample that was administered a 25-test battery that minimally overlapped with Davis et al.'s test battery. Archival analysis of 48 examinees with mild traumatic brain injury seen for medico-legal purposes was conducted. Producing 18 or more, 7 or more, and 5 or more abnormal scores at 1, 1.5, and 2 standard deviations below the norm-referenced mean, respectively, and an OTBM of T ≤ 40 most accurately classified examinees; however, using Davis et al.'s proposed cutoffs in the current sample maintained specificity at or near acceptable levels. Due to convergence across studies, producing ≥5 abnormal scores at 2 standard deviations below the norm-referenced mean is the most appropriate cutoff for clinical implementation; however, for batteries consisting of a different quantity of tests than 25, an OTBM of T ≤ 38 is more appropriate.
- Published
- 2015
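A hedged sketch of the decision rule summarized in entry 19 above: count abnormal scores at 1, 1.5 and 2 SD below the norm-referenced mean and compute the overall test battery mean (OTBM). The cut-offs are quoted from the abstract; the T-score battery is simulated.

```python
import numpy as np

def abnormal_score_counts(t_scores, mean=50.0, sd=10.0):
    """Count T scores falling 1, 1.5 and 2 SD below the norm-referenced mean."""
    t = np.asarray(t_scores, dtype=float)
    return {k: int(np.sum(t <= mean - k * sd)) for k in (1.0, 1.5, 2.0)}

battery = np.random.normal(43, 9, 25)       # simulated 25-test battery (T scores)
counts = abnormal_score_counts(battery)
otbm = battery.mean()
# cut-offs reported in the abstract for the current sample: >=18, >=7, >=5 and OTBM <= 40
invalid = counts[1.0] >= 18 or counts[1.5] >= 7 or counts[2.0] >= 5 or otbm <= 40
print(counts, round(otbm, 1), invalid)
```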
20. ROC surface assessment of the ANB angle and Wits appraisal's diagnostic performance with a statistically derived 'gold standard': does normalizing measurements have any merit?
- Author
-
Annemarie M Kuijpers-Jagtman, Ellen A. BeGole, and Hans L L Wellens
- Subjects
Male ,Adolescent ,Cephalometry ,Orthodontics ,Mandible ,030218 nuclear medicine & medical imaging ,Young Adult ,03 medical and health sciences ,0302 clinical medicine ,Statistical significance ,Statistics ,Maxilla ,Humans ,Superimposition ,Child ,Mathematics ,Principal Component Analysis ,Receiver operating characteristic ,68–95–99.7 rule ,Lateral cephalograms ,030206 dentistry ,Gold standard (test) ,Sample mean and sample covariance ,Reconstructive and regenerative medicine Radboud Institute for Health Sciences [Radboudumc 10] ,ROC Curve ,Principal component analysis ,Female ,Anatomic Landmarks ,Malocclusion - Abstract
Objective: To assess the ANB angle's and Wits appraisal's diagnostic performance using an extended version of Receiver Operating Characteristic (ROC) analysis, which renders ROC surfaces. These were calculated for both the conventional and normalized cephalometric tests (calculated by exchanging the patient's reference landmarks with those of the Procrustes superimposed sample mean shape). The required 'gold standard' was derived statistically, by applying generalized Procrustes superimposition (GPS) and principal component analysis (PCA) to the digitized landmarks, and ordering patients based upon their PC2 scores. Methods: Digitized landmarks of 200 lateral cephalograms (107 males, mean age: 12.8 years, SD: 2.2; 93 females, mean age: 13.2 years, SD: 1.7) were subjected to GPS and PCA. Upon calculating the conventional and normalized ANB and Wits values, ROC surfaces were constructed by varying not just the cephalometric test's cut-off value within each ROC curve, but also the gold standard cut-off value over different ROC curves in 220 steps between -2 and 2 standard deviations along PC2. The volume under the resulting ROC surfaces (VUS) served as a measure of overall diagnostic performance. The statistical significance of the volume differences was determined using permutation tests (1000 rounds, with replacement). Results: The diagnostic performance of the conventional ANB and Wits was remarkably similar for Class I/II (81.1 and 80.75% VUS, respectively, P > 0.05). Normalizing the measurements improved all VUS highly significantly (91 and 87.2%, respectively, P < 0.001). Conclusion: The conventional ANB and Wits do not differ in their diagnostic performance. Normalizing the measurements does seem to have some merit.
- Published
- 2017
21. Characterisation of a smartphone image sensor response to direct solar 305nm irradiation at high air masses
- Author
-
Damien P. Igoe, Joanna Turner, Alfio V. Parisi, and Abdurazaq Amar
- Subjects
Environmental Engineering ,Coefficient of determination ,Data collection ,010504 meteorology & atmospheric sciences ,business.industry ,media_common.quotation_subject ,010401 analytical chemistry ,68–95–99.7 rule ,Solar zenith angle ,Solar maximum ,01 natural sciences ,Pollution ,0104 chemical sciences ,Optics ,Sky ,Robustness (computer science) ,Environmental Chemistry ,Environmental science ,Image sensor ,business ,Waste Management and Disposal ,0105 earth and related environmental sciences ,media_common ,Remote sensing - Abstract
This research reports, for the first time, the sensitivity, properties and response of a smartphone image sensor used to characterise the photobiologically important direct UVB solar irradiances at 305 nm in clear sky conditions at high air masses. Solar images taken from Autumn to Spring were analysed using a custom Python script, written to develop and apply an adaptive threshold to mitigate the effects of both noise and hot-pixel aberrations in the images. The images were taken in an unobstructed area, observing from a solar zenith angle as high as 84° (air mass = 9.6) to local solar maximum (down to a solar zenith angle of 23°) to fully develop the calibration model, in temperatures that varied from 2 °C to 24 °C. The mean ozone thickness throughout all observations was 281 ± 18 DU (to 2 standard deviations). A Langley plot was used to confirm that there were constant atmospheric conditions throughout the observations. The quadratic calibration model developed shows a strong correlation between the red colour channel from the smartphone and the Microtops measurements of the direct sun 305 nm UV, with a coefficient of determination of 0.998 and very low standard errors. Validation of the model verified the robustness of the method, with an average discrepancy of only 5% between smartphone-derived and Microtops-observed direct solar irradiances at 305 nm. The results demonstrate the effectiveness of using the smartphone image sensor as a means to measure photobiologically important solar UVB radiation. The use of ubiquitous portable technologies, such as smartphones and laptop computers, to perform data collection and analysis of solar UVB observations is an example of how scientific investigations can be performed by citizen-science individuals and groups, communities and schools.
- Published
- 2016
22. Developing an Advanced PM2.5 Exposure Model in Lima, Peru
- Author
-
Yang Liu, Jianzhao Bi, Odon Sánchez, Nadia N. Hansel, Qingyang Xiao, Kyle Steenland, William Checkley, Bryan N. Vu, and Gustavo F. Gonzales
- Subjects
Topography ,Accuracy and precision ,PM2.5 ,010504 meteorology & atmospheric sciences ,Mean squared error ,Science ,Decision trees ,air pollution ,Air pollution ,Weather forecasting ,WRF-chem ,010501 environmental sciences ,Lima ,Atmospheric sciences ,computer.software_genre ,complex mixtures ,01 natural sciences ,Article ,remote sensing ,Image resolution ,Machine learning ,Peru ,Solar radiation ,Relative humidity ,0105 earth and related environmental sciences ,MAIAC AOD ,Learning systems ,purl.org/pe-repo/ocde/ford#1.05.00 [https] ,Cloud fraction ,68–95–99.7 rule ,Remote sensing ,Random forests ,Albedo ,machine learning ,13. Climate action ,Weather Research and Forecasting Model ,Land use ,PM 2.5 ,General Earth and Planetary Sciences ,Environmental science ,computer ,random forest ,Random forest - Abstract
It is well recognized that exposure to fine particulate matter (PM2.5) affects health adversely, yet few studies from South America have documented such associations due to the sparsity of PM2.5 measurements. Lima's topography and aging vehicular fleet result in severe air pollution, with a limited number of monitors to effectively quantify PM2.5 levels for epidemiologic studies. We developed an advanced machine learning model to estimate daily PM2.5 concentrations at a 1 km2 spatial resolution in Lima, Peru from 2010 to 2016. We combined aerosol optical depth (AOD), meteorological fields from the European Centre for Medium-Range Weather Forecasts (ECMWF), parameters from the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem), and land use variables to fit a random forest model against ground measurements from 16 monitoring stations. Overall cross-validation R2 (and root mean square prediction error, RMSE) for the random forest model was 0.70 (5.97 μg/m3). Mean PM2.5 for ground measurements was 24.7 μg/m3 while mean estimated PM2.5 was 24.9 μg/m3 in the cross-validation dataset. The mean difference between ground and predicted measurements was −0.09 μg/m3 (Std.Dev. = 5.97 μg/m3), with 94.5% of observations falling within 2 standard deviations of the difference, indicating good agreement between ground measurements and predicted estimates. Surface downwards solar radiation, temperature, relative humidity, and AOD were the most important predictors, while percent urbanization, albedo, and cloud fraction were the least important predictors. Comparison of monthly mean measurements between ground and predicted PM2.5 shows good precision and accuracy from our model. Furthermore, mean annual maps of PM2.5 show consistently lower concentrations on the coast and higher concentrations in the mountains, resulting from prevailing coastal winds blowing from the Pacific Ocean in the west. Our model allows for construction of long-term historical daily PM2.5 measurements at 1 km2 spatial resolution to support future epidemiological studies.
- Published
- 2019
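A hedged sketch of the modelling and evaluation pattern in entry 22 above, a random forest scored by cross-validated R2 and RMSE; the synthetic predictors merely stand in for the study's AOD, meteorological and land-use inputs, and the study's spatial cross-validation design is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))                    # stand-ins for AOD, meteorology, land use
y = 25 + 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2] * X[:, 3] + rng.normal(0, 2, n)  # "PM2.5"

model = RandomForestRegressor(n_estimators=200, random_state=0)
y_cv = cross_val_predict(model, X, y, cv=10)   # 10-fold cross-validated predictions

print("CV R2:", round(r2_score(y, y_cv), 2))
print("RMSE :", round(mean_squared_error(y, y_cv) ** 0.5, 2))
```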
23. Evaluating field shape descriptors for estimating off-target application area in agricultural fields
- Author
-
Joe D. Luck, Scott A. Shearer, Rodrigo Sinaidi Zandonadi, and Timothy S. Stombaugh
- Subjects
Engineering ,business.industry ,68–95–99.7 rule ,Linear model ,Forestry ,Horticulture ,computer.software_genre ,Field (computer science) ,Computer Science Applications ,Power (physics) ,Reduction (complexity) ,Software ,Headland (agriculture) ,Data mining ,Precision agriculture ,business ,Agronomy and Crop Science ,computer - Abstract
The decision to adopt different precision agriculture (PA) technologies can be difficult for producers due to the high cost and complexity encountered with different systems. Thus, a method of estimating the potential savings from specific technologies under given operating conditions would considerably contribute to the decision-making process. For instance, the benefits of automatic section control can depend on the field size, shape, and field operation patterns. Thus, knowing the potential reduction in overlapped areas for a specific field would be desirable in order to justify the acquisition of such a technology package. Researchers have reported computational methods for estimating off-target application areas based on actual field boundaries. While the reported methods are valid, they require a considerable amount of computational power to execute lengthy routines written in dedicated software. The goal of this study was to develop a simplified approach for estimating off-target application areas in agricultural fields considering the combined effects of field shape, field size, and implement width. The results revealed that the descriptor headland area (H) over field area (A) presented the best relationship with off-target application. A linear model for predicting average off-target application was fitted based on H/A and presented estimation errors within ±6.7% at 2 standard deviations.
- Published
- 2013
24. Evaluation of a Kinematically-Driven Finite Element Footstrike Model
- Author
-
Tim Lucas, Heiko Schlarb, Andy R. Harland, Iain G. Hannah, and Daniel Stephen Price
- Subjects
Male ,0206 medical engineering ,Finite Element Analysis ,Biophysics ,Video Recording ,02 engineering and technology ,Kinematics ,Motion capture ,Sensitivity and Specificity ,Running ,03 medical and health sciences ,Young Adult ,0302 clinical medicine ,Center of pressure (terrestrial locomotion) ,Pressure ,Humans ,Orthopedics and Sports Medicine ,Force platform ,Boundary value problem ,Ground reaction force ,Simulation ,Mathematics ,business.industry ,Foot ,Rehabilitation ,68–95–99.7 rule ,Structural engineering ,Equipment Design ,020601 biomedical engineering ,Finite element method ,Biomechanical Phenomena ,Shoes ,business ,030217 neurology & neurosurgery - Abstract
A dynamic finite element model of a shod running footstrike was developed and driven with 6-degree-of-freedom foot segment kinematics determined from a motion capture running trial. Quadratic tetrahedral elements were used to mesh the footwear components, with material models determined from appropriate mechanical tests. Model outputs were compared with experimental high-speed video (HSV) footage, vertical ground reaction force (GRF), and center of pressure (COP) excursion to determine whether such an approach is appropriate for the development of athletic footwear. Although unquantified, good visual agreement with the HSV footage was observed, but significant discrepancies were found between the model and experimental GRF and COP readings (9% and 61% of model readings outside of the mean experimental reading ± 2 standard deviations, respectively). Model output was also found to be highly sensitive to input kinematics, with a 120% increase in maximum GRF observed when translating the force platform 2 mm vertically. While representing an alternative approach to existing dynamic finite element footstrike models, loading highly representative of an experimental trial was not found to be achievable when employing exclusively kinematic boundary conditions. This significantly limits the usefulness of employing such an approach in the footwear development process.
- Published
- 2016
25. Abnormalities in Diffusional Kurtosis Metrics Related to Head Impact Exposure in a Season of High School Varsity Football
- Author
-
Jens H. Jensen, Jillian E. Urban, Daryl A. Rosenbaum, Elizabeth M. Davenport, Kalyna Apkarian, Youngkyoo Jung, Alexander K. Powers, Christopher T. Whitlow, Eliza Szuch, Joseph A. Maldjian, Joel D. Stitzel, Gerard A. Gioia, and Mark A. Espeland
- Subjects
Male ,Adolescent ,Football ,computer.software_genre ,03 medical and health sciences ,0302 clinical medicine ,Voxel ,Linear regression ,Concussion ,Covariate ,Statistics ,medicine ,Humans ,Diffusion Kurtosis Imaging ,Brain Concussion ,Schools ,business.industry ,68–95–99.7 rule ,030229 sport sciences ,Original Articles ,medicine.disease ,White Matter ,Diffusion Magnetic Resonance Imaging ,Athletes ,Kurtosis ,Neurology (clinical) ,Seasons ,business ,computer ,030217 neurology & neurosurgery - Abstract
The purpose of this study was to determine whether the effects of cumulative head impacts during a season of high school football produce changes in diffusional kurtosis imaging (DKI) metrics in the absence of clinically diagnosed concussion. Subjects were recruited from a high school football team and were outfitted with the Head Impact Telemetry System (HITS) during all practices and games. Biomechanical head impact exposure metrics were calculated, including: total impacts, summed acceleration, and Risk Weighted Cumulative Exposure (RWE). Twenty-four players completed pre- and post-season magnetic resonance imaging, including DKI; players who experienced clinical concussion were excluded. Fourteen subjects completed pre- and post-season Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT). DKI-derived metrics included mean kurtosis (MK), axial kurtosis (K axial), and radial kurtosis (K radial), and white matter modeling (WMM) parameters included axonal water fraction, tortuosity of the extra-axonal space, extra-axonal diffusivity (De axial and radial), and intra-axonal diffusivity (Da). These metrics were used to determine the total number of abnormal voxels, defined as 2 standard deviations above or below the group mean. Linear regression analysis revealed a statistically significant relationship between RWE combined probability (RWECP) and MK. Secondary analysis of other DKI-derived and WMM metrics demonstrated statistically significant linear relationships with RWECP after covariate adjustment. These results were compared with the results of DTI-derived metrics from the same imaging sessions in this exact same cohort. Several of the DKI-derived scalars (Da, MK, K axial, and K radial) explained more variance, compared with RWECP, suggesting that DKI may be more sensitive to subconcussive head impacts. No significant relationships between DKI-derived metrics and ImPACT measures were found. It is important to note that the pathological implications of these metrics are not well understood. In summary, we demonstrate a single season of high school football can produce DKI measurable changes in the absence of clinically diagnosed concussion.
- Published
- 2016
26. Redundancy Elimination in Video Summarization
- Author
-
Hrishikesh Bhaumik, Susanta Chakraborty, and Siddhartha Bhattacharyya
- Subjects
Computer science ,business.industry ,05 social sciences ,68–95–99.7 rule ,020206 networking & telecommunications ,Pattern recognition ,02 engineering and technology ,Automatic summarization ,Redundancy (information theory) ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Artificial intelligence ,Precision and recall ,business ,050107 human factors - Abstract
Video summarization is a task which aims at presenting the contents of a video to the user in a succinct manner so as to reduce retrieval and browsing time. At the same time, sufficient coverage of the contents is to be ensured. A trade-off between conciseness and coverage has to be reached, as these properties conflict with each other. Various feature descriptors have been developed which can be used for redundancy removal in the spatial and temporal domains. This chapter provides an insight into the various strategies for redundancy removal. A method for intra-shot and inter-shot redundancy removal for static video summarization is also presented. High values of precision and recall illustrate the efficacy of the proposed method on a dataset consisting of videos with varied characteristics.
- Published
- 2016
27. It's time to move on from the bell curve
- Author
-
Lawrence R. Robinson
- Subjects
Percentile ,Physiology ,Statistics as Topic ,De Moivre's formula ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,symbols.namesake ,0302 clinical medicine ,Reference Values ,Physiology (medical) ,Humans ,Association (psychology) ,Coin flipping ,Electrodiagnosis ,05 social sciences ,68–95–99.7 rule ,Gauss ,050301 education ,Repeated measures design ,Electrophysiology ,symbols ,Normative ,Neurology (clinical) ,Psychology ,0503 education ,Mathematical economics ,030217 neurology & neurosurgery - Abstract
The bell curve was first described in the 18th century by de Moivre and Gauss to depict the distribution of binomial events, such as coin tossing, or repeated measures of physical objects. In the 19th and 20th centuries, the bell curve was appropriated, or perhaps misappropriated, to apply to biologic and social measures across people. For many years we used it to derive reference values for our electrophysiologic studies. There is, however, no reason to believe that electrophysiologic measures should approximate a bell-curve distribution, and empiric evidence suggests they do not. The concept of using mean ± 2 standard deviations should be abandoned. Reference values are best derived by using non-parametric analyses, such as percentile values. This proposal aligns with the recommendation of the recent normative data task force of the American Association of Neuromuscular & Electrodiagnostic Medicine and follows sound statistical principles. Muscle Nerve 56: 859-860, 2017.
- Published
- 2017
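A small sketch contrasting mean ± 2 SD reference limits with non-parametric percentile limits on a right-skewed sample, the point argued in entry 27 above (the latency data are invented).

```python
import numpy as np

def reference_limits(values):
    """Compare parametric (mean +/- 2 SD) and non-parametric (percentile) limits."""
    v = np.asarray(values, dtype=float)
    parametric = (v.mean() - 2 * v.std(ddof=1), v.mean() + 2 * v.std(ddof=1))
    nonparametric = tuple(np.percentile(v, [2.5, 97.5]))
    return parametric, nonparametric

# right-skewed toy "latency" data: mean - 2 SD can even go negative,
# while the 2.5th/97.5th percentiles stay inside the observed range
latencies = np.random.lognormal(mean=1.2, sigma=0.6, size=1000)
print(reference_limits(latencies))
```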
28. High-burn up 10×10 100%MOX ABWR core physics analysis with APOLLO2.8 and TRIPOLI-4.5 codes
- Author
-
Nicolas Huot, Nicolas Thiollay, Patrick Blaise, Philippe Fougeras, and A. Santamarina
- Subjects
Physics ,Experimental uncertainty analysis ,Nuclear Energy and Engineering ,Method of characteristics ,Lattice (order) ,Nuclear engineering ,68–95–99.7 rule ,Nuclear data ,MOX fuel ,Analysis method ,Burnup - Abstract
Within the framework of several extensive experimental core physics programs conducted between 1996 and 2008 by CEA and the Japan Nuclear Energy Safety Organization (JNES), the FUBILA experiment was carried out in the French EOLE facility between 2005 and 2006 to obtain valuable data for the validation of core analysis methods for full-MOX advanced BWR and high-burnup BWR cores. During this experimental campaign, a particular FUBILA 10 × 10 Advanced BWR configuration devoted to the validation of high-burnup 100% MOX BWR bundles was built. It is characterized by an assembly-average total Pu enrichment of 10.6 wt.% and an in-channel void fraction of 40%, representative of hot full-power conditions at core mid-plane and an average discharge burnup of 65 GWd/t. This paper details the validation work performed with the TRIPOLI-4.5 continuous-energy Monte Carlo code and the APOLLO2.8/CEA2005V4 deterministic code package for the interpretation of this 10 × 10 high-burnup configuration. The APOLLO2.8/CEA2005V4 package relies on the deterministic lattice transport code APOLLO2.8, based on the Method of Characteristics (MOC), and its new CEA2005V4 multigroup library based on the latest JEFF-3.1.1 nuclear data file, which was also processed for the TRIPOLI-4.5 code. The results obtained for critical mass and radial pin-by-pin power distributions are presented. For critical mass, the calculation-to-experiment difference (C − E) on keff ranges from 300 pcm for TRIPOLI to 600 pcm for APOLLO2.8 in its Optimized BWR Scheme (OBS) in 26 groups. For pin-by-pin radial power distributions, all codes give acceptable results, with maximum discrepancies on C/E − 1 of the order of 3–4% for very heterogeneous bundles where Pmax/Pmin reaches 4.2. These values are within 2 standard deviations of the experimental uncertainty. These results demonstrate the capability of both codes and calculation schemes to accurately predict advanced high-burnup 100% MOX BWR key neutron parameters.
- Published
- 2010
29. Comparison of image registration performed with MV cone beam CT and CT on rails and Syngo™ Adaptive Targeting software
- Author
-
Paweł F. Kukołowicz, Sylwia Zielińska-Dąbrowska, and Piotr Czebek-Szebek
- Subjects
Reproducibility ,Cancer Research ,Computer science ,business.industry ,68–95–99.7 rule ,Image registration ,accuracy of image registration ,CT on rails ,Anatomical sites ,Software ,Oncology ,Radiology Nuclear Medicine and imaging ,cone-beam CT ,MV Cone Beam CT ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Anthropomorphic phantom ,Artificial intelligence ,business ,Cone beam ct - Abstract
Background: Minimization of geometric errors in treatment delivery is essential in modern conformal and intensity-modulated techniques. Aim: In this paper two Siemens systems, MVision megavoltage cone beam CT and CTVision (CT on rails), are compared. Material and Methods: The reproducibility and uncertainty of the image registration procedure performed with the Adaptive Targeting (AT) software were evaluated. Both systems were evaluated by simulating the clinical situation with an anthropomorphic phantom at three anatomical sites: head & neck, thorax and pelvis. Results: The results for the two methods of image registration, manual and automatic, were evaluated separately. The manual procedure was performed by two users, one more and one less experienced. Conclusions: The MVision system, CTVision, and the Therapist Adaptive software ensure image registration with an uncertainty of about 2.0 mm (2 standard deviations). With the automatic registration method, better reproducibility of image registration was obtained for MVision; for CTVision, the necessity of manually identifying the machine isocentre made the registration less reproducible. For MVision, the automatic method was more reproducible than the manual one (smaller dispersion of results), whereas for CTVision similar results were obtained for both registration methods. For manual registration, slightly better reproducibility was obtained for CT data acquired at 2 mm slice thickness and 2 mm slice separation than for data acquired at 5 mm slice thickness and 5 mm slice separation. Similar results were obtained for manual registration performed by the more and the less experienced user.
- Published
- 2009
- Full Text
- View/download PDF
30. Selecting an Appropriate Scan Rate: The '.65 Rule'
- Author
-
Heidi Horstmann Koester, Ed LoPresti, and Richard C. Simpson
- Subjects
Adult ,Male ,Horizontal scan rate ,Education, Continuing ,Computer science ,business.industry ,Coefficient of variation ,Rehabilitation ,68–95–99.7 rule ,Word error rate ,Physical Therapy, Sports Therapy and Rehabilitation ,Middle Aged ,Pennsylvania ,Self-Help Devices ,USable ,Standard deviation ,User-Computer Interface ,Software ,Assistive technology ,Task Performance and Analysis ,Statistics ,Humans ,Disabled Persons ,Female ,business - Abstract
Investigators have discovered that the ratio between a user's reaction time and an appropriate scan rate for that user is approximately .65, which we refer to as "the .65 rule." As part of a larger effort to develop software that automatically adapts the configuration of switch access software, data were collected comparing subject performance with a scan rate chosen using the .65 rule and a scan rate chosen by the user. Analysis of the data indicates that for many people, the .65 rule produces a scan rate that is approximately the same as the average switch press time plus 2 standard deviations. Further analysis demonstrates a relationship between the coefficient of variation (the standard deviation divided by the mean) and error rate. If accurate information is available about the mean, standard deviation, and distribution of a client's switch press time, a scan rate can be chosen that will yield a specific error level. If a rigorous statistical approach is impractical, the .65 rule will generally yield a usable scan rate based on mean press time alone.
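A short sketch contrasting the two ways of choosing a scan period mentioned above: the .65 rule applied to mean switch press time, and mean press time plus 2 standard deviations. The press times are invented; for this particular sample the two values come out close, consistent with the observation in the abstract.

```python
import numpy as np

press_times = np.array([0.45, 0.80, 0.55, 0.90, 0.60, 0.70, 0.50, 0.85])  # seconds, hypothetical

mean_press = press_times.mean()
sd_press = press_times.std(ddof=1)

scan_rule_065 = mean_press / 0.65           # ".65 rule": press time / scan period is about .65
scan_mean_2sd = mean_press + 2 * sd_press   # mean press time plus 2 standard deviations

print(f".65 rule scan period    : {scan_rule_065:.2f} s")
print(f"mean + 2 SD scan period : {scan_mean_2sd:.2f} s")
```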
- Published
- 2007
31. Development of the Listening in Spatialized Noise-Sentences Test (LISN-S)
- Author
-
Sharon Cameron and Harvey Dillon
- Subjects
Male ,Time Factors ,Psychometrics ,Speech Reception Threshold Test ,Hearing Tests ,68–95–99.7 rule ,Regression analysis ,Intelligibility (communication) ,Standard deviation ,Speech and Hearing ,Standard error ,Otorhinolaryngology ,Child, Preschool ,Statistics ,Speech Perception ,Humans ,Female ,Analysis of variance ,Cues ,Child ,Noise ,Psychology - Abstract
The goals of this research were to develop and evaluate a new version of the Listening in Spatialized Noise Test (LISN; Cameron, Dillon, & Newall, 2006a) by incorporating a simplified and more objective response protocol to make the test suitable for assessing the ability of children as young as 5 yr to understand speech in background noise. The LISN-Sentences test (LISN-S; Cameron & Dillon, Reference Note 1) produces a three-dimensional auditory environment under headphones and is presented by using a personal computer. A simple repetition response protocol is used to determine speech reception thresholds (SRTs) for sentences presented in competing speech under various conditions. In four LISN-S conditions, the maskers are manipulated with respect to location (0 degrees versus +/-90 degrees azimuth) and vocal quality of the speaker(s) of the stories (same as, or different than, the speaker of the target sentences). Performance is measured as two SRT measures and three "advantage" measures. These advantage measures represent the benefit in decibels gained when either talker, spatial, or both talker and spatial cues combined, are incorporated in the maskers. This use of difference scores minimizes the effects of between-listener variation in factors such as linguistic skills and general cognitive ability on LISN-S performance. An initial experiment was conducted to determine the relative intelligibility of the sentences used in the test. Up to 30 sentences were presented adaptively to 24 children ages 8 to 9 yr to estimate the SRT (eSRT). Fifty sentences each were then presented at each participant's eSRT, eSRT +2 dB, and eSRT -2 dB. Psychometric functions were fitted and the sentences were adjusted in amplitude for equal intelligibility. After adjustment, intelligibility increased across sentences by approximately 17% for each 1 dB increase in signal-to-noise ratio (SNR). A second experiment was conducted to gather normative data on the LISN-S from 82 children with normal hearing, ages 5 to 11 yr. For the 82 children in the normative data study, regression analysis showed that there was a strong trend of decreasing SRT and increasing advantage as age increased across all LISN-S performance measures. Analysis of variance revealed that significant differences in performance were most pronounced between the 5-yr-olds and the other age groups on the LISN-S measures that assess the ability to use spatial cues to understand speech in background noise, suggesting that binaural processing skills are still developing at age 5 yr. Inter-participant variation in performance on the various SRT and advantage measures was minimal for all groups, including the 5- and 6-yr-olds who exhibited standard deviations ranging from only 1.0 dB to 1.8 dB across measures. The intra-participant standard error ranged from 0.6 dB to 2.0 dB across age groups and conditions. Total time taken to administer all four LISN-S conditions was on average 12 minutes. The LISN-S provides a quick, objective method of measuring a child's ability to understand speech in background noise. The small degree of inter- and intra-participant variation in the 5- and 6-yr-old children suggests that the test is capable of assessing auditory processing in this age group. However, because there appears to be a strong developmental curve in binaural processing skills in the 5-yr-olds, it is suggested that the LISN-S be used clinically with children from 6 yr of age.
Cut-off scores, defined as 2 standard deviations below the age-adjusted mean, were calculated for each performance measure for children ages 6 to 11 yr. These scores, which represent the level below which performance on the LISN-S is considered to be outside normal limits, will be used in future studies with children with suspected central auditory processing disorder.
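One plausible way to compute such age-adjusted cut-offs is to regress the performance measure on age and place the cut-off 2 standard deviations (of the residuals) below the predicted mean; the normative values below and the use of the residual SD are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

# Hypothetical normative data: age (years) and a spatial-advantage score (dB).
ages = np.array([6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11], dtype=float)
scores = np.array([9.5, 10.2, 10.8, 11.4, 11.9, 12.3, 12.8, 13.1, 13.6, 13.9, 14.2, 14.6])

slope, intercept = np.polyfit(ages, scores, deg=1)
residual_sd = np.std(scores - (slope * ages + intercept), ddof=2)  # ddof=2: two fitted parameters

def cutoff(age, n_sd=2.0):
    """Age-adjusted mean minus 2 SD: below this, performance is considered outside normal limits."""
    return slope * age + intercept - n_sd * residual_sd

print(f"cut-off at age 7 : {cutoff(7):.1f} dB")
print(f"cut-off at age 10: {cutoff(10):.1f} dB")
```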
- Published
- 2007
32. Nutrient Intake Values (NIVs): A Recommended Terminology and Framework for the Derivation of Values
- Author
-
Janet C King, Hester H. Vorster, and Daniel Tomé
- Subjects
030309 nutrition & dietetics ,Geography, Planning and Development ,Population ,Nutrient intake ,Biology ,Risk Assessment ,Standard deviation ,Nutrition Policy ,Terminology ,Set (abstract data type) ,03 medical and health sciences ,0302 clinical medicine ,Nutrient ,Terminology as Topic ,Statistics ,Humans ,Nutritional Physiological Phenomena ,030212 general & internal medicine ,education ,0303 health sciences ,education.field_of_study ,Nutrition and Dietetics ,business.industry ,Health Policy ,68–95–99.7 rule ,Environmental resource management ,Nutritional Requirements ,Nutrition Assessment ,business ,Risk assessment ,Food Science - Abstract
Although most countries and regions around the world set recommended nutrient intake values for their populations, there is no standardized terminology or framework for establishing these standards. Different terms used for various components of a set of dietary standards are described in this paper and a common set of terminology is proposed. The recommended terminology suggests that the set of values be called nutrient intake values (NIVs) and that the set be composed of three different values. The average nutrient requirement (ANR) reflects the median requirement for a nutrient in a specific population. The individual nutrient level (INLx) is the recommended level of nutrient intake for all healthy people in the population, which is set at a certain level x above the mean requirement. For example, a value set at 2 standard deviations above the mean requirement would cover the needs of 98% of the population and would be INL98. The third component of the NIVs is an upper nutrient level (UNL), which is the highest level of daily nutrient intake that is likely to pose no risk of adverse health effects for almost all individuals in a specified life-stage group. The proposed framework for deriving a set of NIVs is based on a statistical approach for determining the midpoint of a distribution of requirements for a set of nutrients in a population (the ANR), the standard deviation of the requirements, and an individual nutrient level that assures health at some point above the mean, e.g., 2 standard deviations. Ideally, a second set of distributions of risk of excessive intakes is used as the basis for a UNL.
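The INL98 arithmetic is simply a point 2 standard deviations above the average requirement; a minimal sketch, assuming the requirement distribution is approximately normal (as the framework itself does) and using invented numbers.

```python
from statistics import NormalDist

anr = 100.0      # average nutrient requirement (ANR), hypothetical units/day
req_sd = 15.0    # standard deviation of requirements across the population

inl98 = anr + 2 * req_sd                          # individual nutrient level set 2 SD above the ANR
coverage = NormalDist(mu=anr, sigma=req_sd).cdf(inl98)

print(f"INL98 = {inl98:.0f} units/day")
print(f"fraction of the population whose requirement is covered: {coverage:.3f}")  # ~0.977, i.e. ~98%
```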
- Published
- 2007
33. Do you reckon it's normally distributed?
- Author
-
Michael Wagner, Julien Farlin, and Marius Majewsky
- Subjects
Normal distribution ,Environmental Engineering ,0504 sociology ,05 social sciences ,Statistics ,68–95–99.7 rule ,050401 social sciences methods ,Environmental Chemistry ,050109 social psychology ,0501 psychology and cognitive sciences ,Pollution ,Waste Management and Disposal ,Standard deviation - Published
- 2015
34. It's not all in the numbers
- Author
-
Mark I. Travin
- Subjects
Background subtraction ,medicine.diagnostic_test ,business.industry ,68–95–99.7 rule ,Spherical coordinate system ,Sampling (statistics) ,030204 cardiovascular system & hematology ,030218 nuclear medicine & medical imaging ,law.invention ,03 medical and health sciences ,Myocardial perfusion imaging ,0302 clinical medicine ,law ,Medicine ,Radiology, Nuclear Medicine and imaging ,Tomography ,Polar coordinate system ,Cardiology and Cardiovascular Medicine ,Nuclear medicine ,business ,Gamma camera - Abstract
Quantitative analysis is an integral part of nuclear cardiology. It is a major strength and a key advantage that separates this technique from other noninvasive imaging methods. Nuclear cardiology is "inherently quantitative" in that image displays depict the number of counts detected by a gamma camera, reflecting the amount of radiotracer in the entity being imaged. Perhaps the earliest use of quantitative measurements was by Parker et al who, in contrast to a previously reported geometric method, measured background-corrected 99mTc-labeled human serum albumin ventricular blood pool counts from electrocardiographically gated end-diastolic and end-systolic digital planar left anterior oblique images to derive a counts-based left ventricular ejection fraction (LVEF). Shortly after the introduction of thallium-201 (201Tl) for myocardial perfusion imaging (MPI), efforts to apply quantitation began. As the clinical potential of MPI became recognized, methods to quantitate visual regional tracer uptake were developed. While a multiple-slice profile method was considered, circumferential count profile techniques became preferred. Building on these, Garcia et al developed a comprehensive computerized space/time quantitation method. After applying proximity-weighted interpolative background subtraction, for each of three standard planar views, 60 radii spaced at 6° intervals originating from the visually determined center of the ventricle were created, with maximal ventricular wall counts along each radius plotted as a function of angular coordinates aligned in reference to the apex, normalized to the maximum value. Portions of profiles from patients with suspected disease that were >2 standard deviations (SD) below "normal" patient (<1% likelihood of coronary disease) means were considered abnormal. It was recognized from the outset that profile curves were representations of relative rather than absolute tracer uptake, and thus limited in the ability to detect balanced flow reduction. The advent of single photon emission computed tomography (SPECT) prompted efforts to quantitate the three-dimensional distribution of radiotracer, with generation of two-dimensional polar coordinate maps (i.e., "bullseye" displays), thereby enhancing conceptualization of defect distribution and percentages (%) of abnormal myocardium. Maximal-count normalized circumferential profiles were generated from short-axis (SA) slices from the most apical to the most basal cuts (analogous to the planar technique), but also from the apical profiles (60°–120°) of the vertical long-axis (VLA) slices, placed at the center of the plot to represent the apex, with the SA profiles mapped as increasingly larger circles toward the base. Similar to planar techniques, data from low CAD-likelihood cases were used such that patient counts >2.5 SD below the mean were considered abnormal. Sophistication increased after the advent of technetium-99m (99mTc) tracers in that geometric methods were modified such that sampling of SA slices used cylindrical coordinates and sampling of the apical region used spherical coordinates, ensuring perpendicular myocardial radial sampling at all points to more accurately measure tracer distribution.
With the availability of increased computer power, Germano et al developed an alternative algorithm that replaced circumferential profiles with true three-dimensional sampling, in which the generated sampling rays assessed counts through the entire myocardial thickness from the endocardial to the epicardial surface ("whole myocardium sampling") rather than using only maximal counts. A total perfusion defect (TPD) parameter was derived, designed to combine ...
- Published
- 2015
35. Solution of Linear and Non Linear Regression Problem by K Nearest Neighbour Approach: By Using Three Sigma Rule
- Author
-
Tarun Kumar
- Subjects
Polynomial regression ,Supervisor ,business.industry ,68–95–99.7 rule ,Pattern recognition ,Function (mathematics) ,k-nearest neighbors algorithm ,ComputingMethodologies_PATTERNRECOGNITION ,Function approximation ,Artificial intelligence ,business ,Cluster analysis ,Nonlinear regression ,Algorithm ,Mathematics - Abstract
K Nearest Neighbor (KNN) is one of the simplest methods for classification as well as regression problems, which is the reason it is widely adopted. KNN is a supervised method that produces estimates based on the values of neighbors. Although KNN came into existence in the decade of the 1990s, it still demands improvements depending on the domain in which it is being used. Researchers have now devised methods in which multiple techniques are combined in some order such that the advantages of each technique cover the weaknesses of the others; for example, KNN-kernel based algorithms are being used for clustering. Despite the heavy applicability of KNN to classification problems, it is not used nearly as much for function estimation. This paper is an attempt at using KNN for function estimation. The approach is developed for linear as well as nonlinear regression problems. We assume that the supervisor data given are reliable. We consider two-dimensional data to illustrate the idea, which is equally applicable to n-dimensional data for some large but finite n.
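A sketch of KNN used for function estimation, with a three-sigma filter applied to the supervisor (training) data beforehand, which is one plausible reading of how the three-sigma rule enters; the data, the rough polynomial pre-fit, and the exact place the filter is applied are assumptions rather than the paper's method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
x = rng.uniform(0, 2 * np.pi, size=300)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)
y[::40] += 3.0  # inject a few gross errors into the supervisor data

# Three-sigma rule: drop training points whose residual from a rough fit exceeds 3 SD.
rough = np.poly1d(np.polyfit(x, y, deg=5))(x)
resid = y - rough
keep = np.abs(resid - resid.mean()) <= 3 * resid.std(ddof=1)

knn = KNeighborsRegressor(n_neighbors=7, weights="distance")
knn.fit(x[keep].reshape(-1, 1), y[keep])

x_test = np.array([[1.0], [3.0], [5.0]])
print(knn.predict(x_test))  # should be close to sin(1), sin(3), sin(5)
```

Distance weighting lets nearby neighbours dominate the estimate, which tends to help for smooth target functions like this one.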
- Published
- 2015
36. Erfassung von Aufmerksamkeitsdefiziten bei Patienten mit obstruktivem Schlafapnoesyndrom mit unterschiedlichen Fahrsimulationsprogrammen [Assessment of attention deficits in patients with obstructive sleep apnea syndrome using different driving simulation programs]
- Author
-
Karl-Heinz Rühle, Winfried Randerath, and A. Büttner
- Subjects
Pulmonary and Respiratory Medicine ,medicine.medical_specialty ,business.industry ,68–95–99.7 rule ,Mean value ,Neuropsychology ,Driving simulator ,Poison control ,CarSim ,Audiology ,medicine.disease ,Obstructive sleep apnea ,medicine ,business ,Normal range ,Simulation - Abstract
Patients with obstructive sleep apnea syndrome suffer from reduced continuous attention due to neuropsychological deficits. Among other means, driving simulator programs are employed for registration and objectification of these deficits as well as for monitoring the course of therapy. While Steer Clear by Findley et al. and the driving simulator Carda (Randerath et al.) represent pure continuous-attention tests, the new driving simulator test Carsim measures attention interactively and continuously, so that more complex functions are recorded. We therefore investigated Carda and Carsim to study the different characteristics of the two methods. For this purpose, 105 OSAS patients were tested on both driving simulators, recording the error rate in Carda and the duration of tracking deviations in Carsim. We defined the normal range as the mean +/- 2 standard deviations from our earlier publications in healthy persons without sleep disorders. With Carda, the error rate exceeded the normal range in 10 of 105 patients (9.5%); with Carsim, the frequency of tracking deviations exceeded the normal range in 49 of 105 patients (46.7%). The incidence of deviation from normal was significantly higher with Carsim testing. With additional Carda testing, the proportion of pathological cases increased from 46.7% to 51.4%. The tests characterize different components of neuropsychological deficits. Driving simulators with tracking tasks detect neuropsychological deficits in a higher percentage of patients than those measuring only reaction components.
- Published
- 2003
37. Quantitative EEG Normative Databases: Validation and Clinical Correlation
- Author
-
Rebecca A. Walker, Carl J. Biver, D. North, Richard T. Curtin, and Robert W. Thatcher
- Subjects
Artifact (error) ,medicine.diagnostic_test ,Database ,Logarithm ,Gaussian ,68–95–99.7 rule ,Electroencephalography ,computer.software_genre ,Clinical Psychology ,symbols.namesake ,Neuropsychology and Physiological Psychology ,Metric (mathematics) ,Statistics ,medicine ,Range (statistics) ,symbols ,Normative ,Psychology ,computer - Abstract
SUMMARY: The quantitative digital electroencephalogram (QEEG) was recorded from 19 scalp locations from 625 screened and evaluated normal individuals ranging in age from two months to 82 years. After editing to remove artifact, one-year to five-year groupings were selected to produce different average age groups. Estimates of gaussian distributions and logarithmic transforms of the digital EEG were used to establish approximate gaussian distributions when necessary for different variables and age groupings. The sensitivity of the lifespan database was determined by gaussian cross-validation for any selection of age range, in which the average percentage of Z-scores beyond ± 2 standard deviations (SD) equals approximately 2.3% and the average percentage beyond ± 3 SD equals approximately 0.13%. It was hypothesized that gaussian cross-validation of Z-scores provides a common metric by which the statistical sensitivity of any normative database for any age grouping can be calculated. This theory was tested by comp...
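The quoted percentages are the one-sided tail areas of the 68–95–99.7 rule; a quick check, assuming the Z-scores are standard normal after the log transforms described above.

```python
from statistics import NormalDist

z = NormalDist()          # standard normal, as assumed after log-transforming the EEG variables

tail_2sd = 1 - z.cdf(2)   # expected fraction of Z-scores beyond +2 SD (one tail)
tail_3sd = 1 - z.cdf(3)   # expected fraction beyond +3 SD (one tail)

print(f"beyond 2 SD (one tail): {tail_2sd:.2%}")   # ~2.3%
print(f"beyond 3 SD (one tail): {tail_3sd:.3%}")   # ~0.13%
```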
- Published
- 2003
38. Reliability and Significance of Measurements of a-Wave Latency in Rats
- Author
-
Byron L. Lam, Eriko Fujiwara, Mu Liu, D. I. Hamasaki, Jean-Marie A. Parel, Hui Qiu, and G Inana
- Subjects
Millisecond ,medicine.medical_specialty ,Time Factors ,genetic structures ,medicine.diagnostic_test ,Electrodiagnosis ,Coefficient of variation ,68–95–99.7 rule ,Reproducibility of Results ,Dark Adaptation ,General Medicine ,Stimulus (physiology) ,Audiology ,Retina ,Rats ,Ophthalmology ,Time course ,Electroretinography ,Reaction Time ,medicine ,Animals ,Rats, Long-Evans ,Erg ,Mathematics - Abstract
Purpose: To determine whether measurements of the a-wave latency of the electroretinogram (ERG) can be made as reliably as that of the implicit time (IT) in rats. In addition, to determine the relationship between the potential level selected for the latency and the baseline potential level. Methods: ERGs, elicited by different stimulus intensities, were recorded from Long-Evans rats. The a-wave latency was determined by measuring the time between the stimulus onset and the beginning of the negative-going a-wave, and the IT was measured as the time between the stimulus onset and the peak of the a-wave. To test the reliability of the measurements of the latency, the a-wave latency and the IT were measured by three independent observers for the same 15 ERGs. Results: The mean a-wave latency was approximately 14 milliseconds, and the mean a-wave implicit time was approximately 36 milliseconds. The mean of the a-wave latency and the IT, as measured by the three observers, were within 1 millisecond of each other. The coefficient of variation was as good for the latency as for the IT of the a-wave. The potential level selected for the latency was lower than the mean baseline potential level by 1 to 2 standard deviations. Conclusions: Selection of the a-wave latencies can be made as reliably as that for the IT. Because the a-wave latency is not affected by the activity of the second order neurons, the latency is a better measure than the IT of the time course of the a-wave.
- Published
- 2002
39. Application of computational statistical physics to scale invariance and universality in economic phenomena
- Author
-
Luís A. Nunes Amaral, Parameswaran Gopikrishnan, H. E. Stanley, Vasiliki Plerou, and Michael A. Salinger
- Subjects
Hardware and Architecture ,68–95–99.7 rule ,General Physics and Astronomy ,Probability distribution ,Probability density function ,Symmetry breaking ,Statistical physics ,Scale invariance ,Power law ,Scaling ,Universality (dynamical systems) ,Mathematics - Abstract
This paper discusses some of the similarities between work being done by economists and by computational physicists seeking to contribute to economics. We also mention some of the differences in the approaches taken and seek to justify these different approaches by developing the argument that by approaching the same problem from different points of view, new results might emerge. In particular, we review two such new results. Specifically, we discuss two newly-discovered scaling results that appear to be "universal", in the sense that they hold for widely different economies as well as for different time periods: (i) the fluctuation of price changes of any stock market is characterized by a probability density function (PDF), which is a simple power law with exponent 4 extending over 10² standard deviations (a factor of 10⁸ on the y-axis); this result is analogous to the Gutenberg–Richter power law describing the histogram of earthquakes of a given strength; (ii) for a wide range of economic organizations, the histogram that shows how the size of an organization is inversely correlated to fluctuations in its size follows a power law with an exponent of approximately 0.2. Neither of these two new empirical laws has a firm theoretical foundation. We also discuss results that are reminiscent of phase transitions in spin systems, where the divergent behavior of the response function at the critical point (zero magnetic field) leads to large fluctuations. We discuss a curious "symmetry breaking" for values of a demand-fluctuation measure above a certain threshold; this measure is defined to be the local first moment of the probability distribution of demand, i.e., of the difference between the number of shares traded in buyer-initiated and seller-initiated trades. This feature is qualitatively identical to the behavior of the probability density of the magnetization for fixed values of the inverse temperature. © 2002 Published by Elsevier Science B.V.
- Published
- 2002
40. Precision of estimates of mean and peak spinal loads in lifting
- Author
-
Marco J.M. Hoozemans, Jaap H. van Dieën, Allard J. van der Beek, Margriet G. Mullender, Orale Celbiologie (OUD, ACTA), Public and occupational health, CCA - Cancer Treatment and quality of life, and Kinesiology
- Subjects
Male ,Analysis of Variance ,Flexion angle ,Lift (data mining) ,Rehabilitation ,68–95–99.7 rule ,Biomedical Engineering ,Biophysics ,Sagittal plane ,Spine ,Biomechanical Phenomena ,Weight-Bearing ,medicine.anatomical_structure ,Lumbar ,Statistics ,medicine ,Statistical precision ,Humans ,Orthopedics and Sports Medicine ,SDG 7 - Affordable and Clean Energy ,Low Back Pain ,Mathematics - Abstract
A bootstrap procedure was used to determine the statistical precision of estimates of mean and peak spinal loads during lifting as function of the numbers of subjects and measurements per subject included in a biomechanical study. Data were derived from an experiment in which 10 subjects performed 360 lifting trials each. The maximum values per lift of the lumbar flexion angle, L5S1 sagittal plane moment, and L5S1 compression force were determined. From the data set thus compiled, 3000 samples were randomly drawn for each combination of number of subjects and number of measurements considered. The coefficients of variation of mean and peak (defined as mean plus 2 standard deviations) spinal loads across these samples were calculated. The coefficients of variation of the means of the three parameters of spinal load decreased as a linear function of the number of subjects to a power of about -0.48 and number of measurements to a power of about -0.06, while the corresponding powers for peak loads were about -0.44 and -0.11. © 2002 Elsevier Science Ltd. All rights reserved.
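A compact sketch of the bootstrap idea: resample subjects and lifts, recompute the mean and the peak (mean plus 2 standard deviations) load each time, and take the coefficient of variation of those estimates across bootstrap samples. The load values, dimensions, and resampling details below are invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical L5S1 compression forces (N): 10 subjects x 360 lifts each, with a per-subject offset.
loads = rng.normal(loc=3400, scale=450, size=(10, 360)) + rng.normal(scale=300, size=(10, 1))

def bootstrap_cv(data, n_subj, n_meas, n_boot=3000):
    """CV of the estimated mean and peak (mean + 2 SD) load across bootstrap samples."""
    means, peaks = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        subj = rng.integers(0, data.shape[0], size=n_subj)            # resample subjects
        cols = rng.integers(0, data.shape[1], size=(n_subj, n_meas))  # resample lifts per subject
        sample = data[subj[:, None], cols]
        means[b] = sample.mean()
        peaks[b] = sample.mean() + 2 * sample.std(ddof=1)
    return means.std(ddof=1) / means.mean(), peaks.std(ddof=1) / peaks.mean()

cv_mean, cv_peak = bootstrap_cv(loads, n_subj=5, n_meas=20)
print(f"CV of mean load estimate: {cv_mean:.3f}, CV of peak load estimate: {cv_peak:.3f}")
```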
- Published
- 2002
41. Does Assessment Type Matter? A Measurement Invariance Analysis of Online and Paper and Pencil Assessment of the Community Assessment of Psychic Experiences (CAPE)
- Author
-
Jim van Os, Cécile Henquet, Marloes Vleeschouwer, Willemijn A. van Gastel, C. D. Schubart, Inez Myin-Germeys, Marco P. Boks, Manon H.J. Hillegers, Eske M. Derks, Psychiatrie & Neuropsychologie, RS: MHeNs - R2 - Mental Health, Obstetrics and Gynaecology, Amsterdam Neuroscience, Amsterdam Public Health, and Adult Psychiatry
- Subjects
Questionnaires ,Male ,Health Screening ,Psychometrics ,Non-Clinical Medicine ,Social and Behavioral Sciences ,Surveys and Questionnaires ,Statistics ,Psychology ,Child ,Psychiatry ,Multidisciplinary ,Middle Aged ,Test (assessment) ,Mental Health ,Medicine ,Female ,Metric (unit) ,Public Health ,Research Article ,Test Evaluation ,Adult ,Paper ,medicine.medical_specialty ,Adolescent ,Science ,education ,Sample (statistics) ,Sensitivity and Specificity ,Young Adult ,Diagnostic Medicine ,medicine ,Humans ,Psychological testing ,Measurement invariance ,Statistical Methods ,Health Care Quality ,Pencil (mathematics) ,Internet ,Psychological Tests ,business.industry ,68–95–99.7 rule ,Reproducibility of Results ,Communication in Health Care ,Psychotic Disorders ,business ,Mathematics - Abstract
Background: The psychometric properties of an online test are not necessarily identical to those of its paper and pencil original. The aim of this study is to test whether the factor structure of the Community Assessment of Psychic Experiences (CAPE) is measurement invariant with respect to online vs. paper and pencil assessment. Method: The factor structure of CAPE items assessed by paper and pencil (N = 796) was compared with the factor structure of CAPE items assessed via the Internet (N = 21,590) using formal tests of Measurement Invariance (MI). The effect size was calculated by estimating the Signed Item Difference in the Sample (SIDS) index and the Signed Test Difference in the Sample (STDS) for a hypothetical subject who scores 2 standard deviations above average on the latent dimensions. Results: The more restricted Metric Invariance model showed a significantly worse fit than the less restricted Configural Invariance model (χ²(23) = 152.75, p < .001). Conclusions: Our findings did not support measurement invariance with respect to assessment method. Because of the small effect sizes, the measurement differences between the online assessed CAPE and its paper and pencil original can be neglected without major consequences for research purposes. However, a person with a high vulnerability for psychotic symptoms would score 4.80 points lower on the total scale if the CAPE is assessed online compared with paper and pencil assessment. Therefore, for clinical purposes, one should be cautious with online assessment of the CAPE.
- Published
- 2014
42. Analysis of the theoretical bias in dark matter direct detection
- Author
-
Riccardo Catena
- Subjects
Physics ,Coupling constant ,Physics beyond the Standard Model ,68–95–99.7 rule ,Dark matter ,FOS: Physical sciences ,Astronomy and Astrophysics ,Parameter space ,dark matter theory ,dark matter experiments ,Standard deviation ,Momentum ,High Energy Physics - Phenomenology ,High Energy Physics - Phenomenology (hep-ph) ,Effective field theory ,Statistical physics - Abstract
Fitting the model "A" to dark matter direct detection data, when the model that underlies the data is "B", introduces a theoretical bias in the fit. We perform a quantitative study of the theoretical bias in dark matter direct detection, with a focus on assumptions regarding the dark matter interactions, and velocity distribution. We address this problem within the effective theory of isoscalar dark matter-nucleon interactions mediated by a heavy spin-1 or spin-0 particle. We analyze 24 benchmark points in the parameter space of the theory, using frequentist and Bayesian statistical methods. First, we simulate the data of future direct detection experiments assuming a momentum/velocity dependent dark matter-nucleon interaction, and an anisotropic dark matter velocity distribution. Then, we fit a constant scattering cross section, and an isotropic Maxwell-Boltzmann velocity distribution to the simulated data, thereby introducing a bias in the analysis. The best fit values of the dark matter particle mass differ from their benchmark values up to 2 standard deviations. The best fit values of the dark matter-nucleon coupling constant differ from their benchmark values up to several standard deviations. We conclude that common assumptions in dark matter direct detection are a source of potentially significant bias., Comment: 22 pages, 7 figures, replaced to match the published version
- Published
- 2014
- Full Text
- View/download PDF
43. Sensitivity and specificity of WAIS–III/WMS–III demographically corrected factor scores in neuropsychological assessment
- Author
-
Michael J. Taylor and Robert K. Heaton
- Subjects
medicine.medical_specialty ,Echoic memory ,medicine.diagnostic_test ,Psychometrics ,Working memory ,General Neuroscience ,68–95–99.7 rule ,Wechsler Adult Intelligence Scale ,Audiology ,Developmental psychology ,Psychiatry and Mental health ,Clinical Psychology ,Visual memory ,medicine ,Normative ,Neurology (clinical) ,Neuropsychological assessment ,Psychology - Abstract
This study explored the neurodiagnostic utility of 6 factor scores identified by recent exploratory and confirmatory factor analyses of the WAIS–III and WMS–III: Verbal Comprehension, Perceptual Organization, Processing Speed, Working Memory, Auditory Memory and Visual Memory. Factor scores were corrected for age, education, sex and ethnicity to minimize their influences on diagnostic accuracy. Cut-offs at 1, 1.5 and 2 standard deviations (SDs) below the standardization sample mean were applied to data from the overlapping test normative samples (N = 1073) and 6 clinical samples described in the WAIS–III/WMS–III Technical Manual (N = 126). The analyses suggest that a 1 SD cut-off yields the most balanced levels of sensitivity and specificity; more strict (1.5 or 2 SD) cut-offs generally result in trading modest gains in specificity for larger losses in sensitivity. Finally, using combinations of WAIS–III/WMS–III factors together as test batteries, we explored the sensitivity and specificity implications of varying diagnostic decision rules (e.g., 1 vs. 2 impaired factors = “impairment”). For most of the disorders considered here, even a small (e.g., 3 factor) WAIS–III/WMS–III battery provides quite good overall diagnostic accuracy. (JINS, 2001, 7, 867–874.)
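The sensitivity/specificity trade-off across cut-offs can be illustrated with simulated factor scores; the normative mean of 100, SD of 15, and the roughly 1 SD patient deficit below are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
normals = rng.normal(loc=100, scale=15, size=1000)   # normative sample
patients = rng.normal(loc=85, scale=15, size=200)    # clinical sample, ~1 SD below the norm

for n_sd in (1.0, 1.5, 2.0):
    cutoff = 100 - n_sd * 15
    sensitivity = np.mean(patients < cutoff)    # impaired cases correctly flagged
    specificity = np.mean(normals >= cutoff)    # normal cases correctly passed
    print(f"cut-off {n_sd:>3} SD: sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Stricter cut-offs raise specificity but lose sensitivity, which is the pattern the study reports.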
- Published
- 2001
44. Similarities and differences between physics and economics
- Author
-
Luís A. Nunes Amaral, Vasiliki Plerou, Xavier Gabaix, Parameswaran Gopikrishnan, and H. E. Stanley
- Subjects
Statistics and Probability ,Critical point (thermodynamics) ,68–95–99.7 rule ,Exponent ,Stock market ,Condensed Matter Physics ,Mathematical economics ,Scaling ,Power law ,Mathematics - Abstract
In this opening talk, we discuss some of the similarities between work being done by economists, and by physicists seeking to contribute to economics. We also mention some of the differences in the approaches taken, and justify these different approaches by developing the argument that by approaching the same problem from different points of view new results might emerge. In particular, we review some recent results, for example the finding that there are two new universal scaling models in economics: (i) the fluctuation of price changes of any stock market is characterized by a PDF which is a simple power law with exponent 4 that extends over 10² standard deviations (a factor of 10⁸ on the y-axis); (ii) for a wide range of economic organizations, the histogram showing how the size of an organization is inversely correlated to fluctuations in its size follows a power law with an exponent ≈ 1/6. Neither of these two new laws has a firm theoretical foundation. We also discuss results that are reminiscent of phase transitions in spin systems, where the divergent behavior of the response function at the critical point (zero magnetic field) leads to large fluctuations. © 2001 Published by Elsevier Science B.V.
- Published
- 2001
45. Detection of Inadequate Effort on Neuropsychological Testing A Meta-Analytic Review of Selected Procedures
- Author
-
David T. R. Berry, Stephen A Orey, Chad D. Vickery, Monica J. Harris, and Tina Hanlon Inman
- Subjects
medicine.diagnostic_test ,Psychometrics ,68–95–99.7 rule ,General Medicine ,Neuropsychological test ,medicine.disease ,Standard deviation ,Test (assessment) ,Developmental psychology ,Psychiatry and Mental health ,Clinical Psychology ,Lie detection ,Neuropsychology and Physiological Psychology ,Malingering ,Statistics ,medicine ,Neuropsychological assessment ,Psychology - Abstract
Thirty-two studies of commonly researched neuropsychological malingering tests were meta-analytically reviewed to evaluate their effectiveness in discriminating between honest responders and dissimulators. Overall, studies using the Digit Memory Test (DMT), Portland Digit Recognition Test (PDRT), 15-Item Test, 21-Item Test, and the Dot Counting Test had average effect sizes indicating that dissimulators obtain scores that are approximately 1.1 standard deviations below those of honest responders. The DMT separated the means of groups of honest and dissimulating responders by approximately 2 standard deviations, whereas the 21-Item Test and the PDRT separated the groups by nearly 1.5 and 1.25 standard deviations, respectively. The 15-Item Test and the Dot Counting Test were less effective, separating group means by approximately 3/4 of a standard deviation. Although the DMT, PDRT, 15-, and 21-Item Tests all demonstrated very high specificity rates, at the level of individual classification, the DMT had the highest sensitivity and overall hit-rate parameters. The PDRT and 15-Item Test demonstrated moderate sensitivity, whereas the 21-Item Test demonstrated poor sensitivity. The less than perfect sensitivities of all the measures included in this review argue against their use in isolation as malingering screening devices.
- Published
- 2001
46. Post‐tonsillectomy bleeding: How much is too much?
- Author
-
Brian W. Blakley
- Subjects
medicine.medical_specialty ,business.industry ,General surgery ,medicine.medical_treatment ,68–95–99.7 rule ,MEDLINE ,Postoperative Hemorrhage ,Bleed ,Confidence interval ,Standard deviation ,Surgery ,Tonsillectomy ,Otorhinolaryngology ,medicine ,Humans ,business ,Complication ,Quality assurance - Abstract
Complication rates become important in discussions for informed surgical consent and for quality assurance purposes. In an attempt to quantify literature-based rates for post-tonsillectomy bleeding, a MEDLINE search was carried out. Of 4,610 papers 63 reported post-tonsillectomy bleeding rates. The weighted mean, standard deviation and 95% confidence intervals were calculated for those papers. The mean (4.5%) plus 2 standard deviations (9.4%) suggests a maximum "expected" sustained bleeding rate of 13.9%. In the literature, which should reflect optimum results, there were 3 reports of bleed rates in the 18-20% range. These data may be useful for quality assurance committees and individual clinicians.
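The abstract's figure is a weighted mean across studies plus 2 (weighted) standard deviations; a minimal sketch with invented per-study rates and sample sizes standing in for the 63 papers.

```python
import numpy as np

# Hypothetical per-study bleeding rates (%) and sample sizes.
rates = np.array([3.1, 4.8, 5.5, 2.9, 6.2, 4.0])
n = np.array([220, 180, 90, 310, 150, 400])

w_mean = np.average(rates, weights=n)                 # weighted mean rate
w_sd = np.sqrt(np.average((rates - w_mean) ** 2, weights=n))
upper = w_mean + 2 * w_sd                             # maximum "expected" rate

print(f"weighted mean rate: {w_mean:.1f}%  upper expected rate (mean + 2 SD): {upper:.1f}%")
```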
- Published
- 2009
47. ULNAR NERVE MOTOR CONDUCTION TO THE ABDUCTOR DIGITI MINIMI
- Author
-
Ralph M. Buschbacher
- Subjects
medicine.medical_specialty ,Percentile ,education.field_of_study ,business.industry ,Rehabilitation ,68–95–99.7 rule ,Elbow ,Population ,Repeated measures design ,Physical Therapy, Sports Therapy and Rehabilitation ,Audiology ,Nerve conduction velocity ,medicine.anatomical_structure ,Amplitude ,Physical therapy ,Medicine ,business ,Ulnar nerve ,education - Abstract
Ulnar motor study to the abductor digiti minimi is commonly performed, but a more extensive database of normative values using modern electrodiagnostic and statistical techniques and temperature control is needed for this test. Demographic subgroups of age, gender, and height should be evaluated using a large subject population to determine whether separate normal ranges should be created for subsets of the general population. In this study, 248 volunteers were tested to measure ulnar motor latency, amplitude, area, duration, and nerve conduction velocity. Side-to-side and distal-to-proximal variability was analyzed. A repeated measures analysis of variance was performed with the waveform measures as the dependent variables and age, gender, and height as independent variables. None of the results were found to vary significantly (at the P < or = 0.01 level) with the subjects' physical characteristics, and thus, the data for all subjects were pooled to create a normative database. The normal range was derived as mean +/- 2 standard deviations and at the 97th (third) percentile of observed values. Mean latency was 3.0 +/- 0.3 ms, and amplitude was 11.6 +/- 2.1 mV. Mean nerve conduction velocity was 61 m/s across all segments tested. The upper limit of normal side-to-side variability (mean + 2 standard deviations) for latency was 0.6 ms; for amplitude, it was 3.6 mV. The upper limit of normal drop in conduction velocity across the elbow was 15 m/s (at the 97th percentile). Additional data are presented for all variables measured, as well as for side-to-side variability and distal-to-proximal change.
- Published
- 1999
48. MEDIAN NERVE MOTOR CONDUCTION TO THE ABDUCTOR POLLICIS BREVIS
- Author
-
Ralph M. Buschbacher
- Subjects
medicine.medical_specialty ,Percentile ,business.industry ,Rehabilitation ,68–95–99.7 rule ,Repeated measures design ,Physical Therapy, Sports Therapy and Rehabilitation ,Audiology ,Median nerve ,Nerve conduction velocity ,Surgery ,Amplitude ,Sample size determination ,Medicine ,Latency (engineering) ,business - Abstract
The median motor conduction study to the abductor pollicis brevis is one of the most commonly performed electrodiagnostic studies, yet there is a need for a more comprehensive normative database for this test. Demographic subgroups of age, gender, and height need to be evaluated with a large enough sample size using modern statistical and electrodiagnostic techniques. In this study, 249 subjects were tested and the following were recorded: latency, amplitude, area, duration, and nerve conduction velocity (NCV). A repeated measures analysis of variance was performed with the waveform measures as the dependent variables and age, gender, and height as the independent variables. Factors that were significant at the P < or = 0.01 level were used to create separate normative databases. Gender was found to be associated with different results for latency and NCV. Age was found to be associated with different results for latency, amplitude, area, and NCV. Once these statistically significant factors were determined, Tukey adjusted pair-wise comparisons of least squares means were used to collapse categories (by decade for age) that were not significantly different from each other at the P < or = 0.05 level. Categories for measures that differed by clinically insignificant amounts (defined as 0.2 ms or less for time measures, 2 m/s or less for NCV, or 5% or less for amplitude and area) were combined as well. Side-to-side and proximal-to-distal differences were analyzed. The normal range was derived as mean +/- 2 standard deviations and at the 97th (third) percentiles of observed values. The findings are presented in this article but include a mean latency of 3.7 +/- 0.5 ms, a mean amplitude of 10.2 +/- 3.6 mV, and a mean nerve conduction velocity of 57 +/- 5 m/s. Subgroupings based on demographic characteristics, percentile distributions, side-to-side, and proximal-to-distal variations are presented.
- Published
- 1999
49. Using radial basis function neural networks to recognize shifts in correlated manufacturing process parameters
- Author
-
Deborah F. Cook and Chih-Chou Chiu
- Subjects
Engineering ,Artificial neural network ,business.industry ,68–95–99.7 rule ,Process (computing) ,Statistical process control ,computer.software_genre ,Industrial and Manufacturing Engineering ,Standard deviation ,Data set ,Process control ,Control chart ,Artificial intelligence ,Data mining ,business ,computer - Abstract
Traditional statistical process control (SPC) techniques of control charting are not applicable in many process industries because data from these facilities are autocorrelated. Therefore, the reduction in process variability obtained through the use of SPC techniques has not been realized in process industries. Techniques are needed to serve the same function as SPC control charts, that is, to identify process shifts in correlated parameters. Radial basis function neural networks were developed to identify shifts in process parameter values from papermaking and viscosity data sets available in the literature. Time series residual control charts were also developed for the data sets. Networks were successful at separating data that were shifted 1.5 and 2 standard deviations from nonshifted data for both the papermaking and viscosity parameter values. The network developed on the basis of the papermaking data set was also able to separate shifts of 1 standard deviation from nonshifted data. The SPC control charts were not able to identify the same process shifts. The radial basis function neural networks can be used to identify shifts in process parameters, thus allowing improved process control in manufacturing processes that generate correlated process data.
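A sketch in the spirit of the approach above: generate windows of autocorrelated (AR(1)) process data with and without a mean shift of about 2 standard deviations, and train a classifier with a radial basis function kernel to separate them. An SVM with an RBF kernel stands in for the paper's radial basis function network, and the AR(1) model, window length, and shift size are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def ar1_window(n=20, phi=0.7, shift=0.0):
    """One window of an AR(1) process, optionally with its mean shifted by `shift` window SDs."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x + shift * np.std(x, ddof=1)

X = np.array([ar1_window() for _ in range(400)] +
             [ar1_window(shift=2.0) for _ in range(400)])
y = np.array([0] * 400 + [1] * 400)          # 0 = in control, 1 = shifted ~2 SD

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
test = np.array([ar1_window(), ar1_window(shift=2.0)])
print(clf.predict(test))                     # ideally [0 1]
```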
- Published
- 1998
50. [Untitled]
- Author
-
Martial G. Bourassa, Michael R. Buchanan, Leonard Schwartz, Charles M. Peniston, and Stephanie J. Brister
- Subjects
medicine.medical_specialty ,Reproducibility ,medicine.diagnostic_test ,business.industry ,Coefficient of variation ,68–95–99.7 rule ,Interobserver reproducibility ,Hematology ,Surgery ,Paired samples ,Bleeding time ,medicine ,Cardiology and Cardiovascular Medicine ,Nuclear medicine ,business ,Normal range ,Biological variability - Abstract
The bleeding time is a readily and easily performed clinical test with immediate results, but there is a degree of subjectivity in its performance and interpretation. We performed a study on 27 volunteers designed to determine the normal range, interobserver reproducibility, and biological variability of the test. Bleeding times in these normal subjects ranged from as low as 129 seconds to as high as 803 seconds. The interobserver variability was 106 seconds (2 standard deviations of the mean of the differences of paired results of repeated measurements), and the coefficient of variation was 18%. For bleeding times taken on the same subjects 6 weeks apart, when the same nurse performed the test at both visits, the difference was 150 seconds (2 standard deviations of the mean of the differences of paired samples) and the coefficient of variation was 27%, and they were essentially the same if a different nurse performed the tests at each visit. There is a wide range in the bleeding times among subjects. However, within individuals there is little biological variability, and most of the difference over time is due to interobserver variability. This suggests that changes in bleeding time are clinically useful in predicting platelet responsiveness in individual patients.
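A sketch of the two reproducibility figures reported above: interobserver variability taken as 2 standard deviations of the paired differences, and an approximate coefficient of variation from the same pairs; the paired bleeding times below are invented.

```python
import numpy as np

# Hypothetical paired bleeding times (seconds): observer A and observer B on the same subjects.
obs_a = np.array([310, 420, 260, 515, 380, 290, 450, 335])
obs_b = np.array([295, 455, 240, 540, 410, 310, 430, 360])

diff = obs_a - obs_b
interobserver_variability = 2 * diff.std(ddof=1)   # 2 SD of the paired differences

pair_mean = (obs_a + obs_b) / 2
pair_sd = np.abs(diff) / np.sqrt(2)                # SD of a duplicate pair of measurements
cv = np.mean(pair_sd / pair_mean)                  # approximate CV from duplicate measurements

print(f"interobserver variability: {interobserver_variability:.0f} s, CV: {cv:.0%}")
```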
- Published
- 1998