157 results on '"Charles C Taylor"'
Search Results
52. Using small bias nonparametric density estimators for confidence interval estimation
- Author
-
Marco Di Marzio and Charles C. Taylor
- Subjects
Statistics and Probability ,Bootstrapping (electronics) ,Kernel (statistics) ,Statistics ,Econometrics ,Nonparametric statistics ,Estimator ,Statistics, Probability and Uncertainty ,U-statistic ,Confidence interval ,CDF-based nonparametric confidence interval ,Multivariate kernel density estimation ,Mathematics - Abstract
Confidence intervals for densities built on the basis of standard nonparametric theory are doomed to have poor coverage rates due to bias. Studies on coverage improvement exist, but reasonably behaved interval estimators are needed. We explore the use of small bias kernel-based methods to construct confidence intervals, in particular using a geometric density estimator that seems better suited for this purpose.
- Published
- 2009
- Full Text
- View/download PDF
53. Maximum likelihood estimation using composite likelihoods for closed exponential families
- Author
-
Kanti V. Mardia, Charles C. Taylor, Gareth Hughes, and John T. Kent
- Subjects
Statistics and Probability ,Pseudolikelihood ,Restricted maximum likelihood ,Applied Mathematics ,General Mathematics ,Normalizing constant ,Bivariate von Mises distribution ,Maximum likelihood sequence estimation ,Agricultural and Biological Sciences (miscellaneous) ,Statistics::Computation ,Exponential family ,Expectation–maximization algorithm ,Statistics ,Statistics::Methodology ,Computer Science::Symbolic Computation ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,Likelihood function ,Mathematics - Abstract
In certain multivariate problems the full probability density has an awkward normalizing constant, but the conditional and/or marginal distributions may be much more tractable. In this paper we investigate the use of composite likelihoods instead of the full likelihood. For closed exponential families, both are shown to be maximized by the same parameter values for any number of observations. Examples include log-linear models and multivariate normal models. In other cases the parameter estimate obtained by maximizing a composite likelihood can be viewed as an approximation to the full maximum likelihood estimate. An application is given to an example in directional data based on a bivariate von Mises distribution. Copyright 2009, Oxford University Press.
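The coincidence of the composite and full maximum likelihood estimates can be checked numerically in a toy case. The sketch below is not from the paper; the bivariate normal mean with known correlation `rho` is an assumed illustration. It maximizes the product of the two full conditionals and compares the result with the full MLE, which here is simply the sample mean:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
rho = 0.7  # known correlation; only the mean vector is estimated
x = rng.multivariate_normal([1.0, -2.0], [[1.0, rho], [rho, 1.0]], size=300)

def neg_composite_ll(mu):
    # Composite likelihood = product of the two full conditionals:
    # x1 | x2 ~ N(mu1 + rho*(x2 - mu2), 1 - rho^2), and symmetrically.
    r1 = x[:, 0] - mu[0] - rho * (x[:, 1] - mu[1])
    r2 = x[:, 1] - mu[1] - rho * (x[:, 0] - mu[0])
    return (r1 ** 2 + r2 ** 2).sum() / (2 * (1 - rho ** 2))

mu_cl = minimize(neg_composite_ll, x0=[0.0, 0.0]).x
mu_full = x.mean(axis=0)  # full MLE of the mean is the sample mean
```

For this closed exponential family the composite-likelihood gradient vanishes at the sample mean, so the two estimates agree up to optimizer tolerance.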
- Published
- 2009
- Full Text
- View/download PDF
54. On boosting kernel regression
- Author
-
Marco Di Marzio and Charles C. Taylor
- Subjects
Statistics and Probability ,Analysis of covariance ,Boosting (machine learning) ,Iterative method ,Applied Mathematics ,Estimator ,Cross-validation ,Kernel method ,Statistics ,Kernel regression ,Applied mathematics ,Statistics, Probability and Uncertainty ,Smoothing ,Mathematics - Abstract
In this paper we propose a simple multistep regression smoother which is constructed in an iterative manner, by learning the Nadaraya-Watson estimator with L2 boosting. We find, in both theoretical analysis and simulation experiments, that the bias converges exponentially fast, while the variance diverges exponentially slowly. The first boosting step is analyzed in more detail, giving asymptotic expressions as functions of the smoothing parameter, and relationships with previous work are explored. Practical performance is illustrated by both simulated and real data.
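As a rough illustration of the idea (a minimal sketch, not the authors' code; the bandwidth `h`, the sine test function, and the step count are assumptions), L2 boosting repeatedly smooths the current residuals with the Nadaraya-Watson estimator and accumulates the fits, which reduces the bias of an oversmoothed estimate:

```python
import numpy as np

def nw_smoother(x_train, y_train, x_eval, h):
    """Nadaraya-Watson estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def l2_boost_nw(x, y, h, n_steps):
    """L2 boosting: repeatedly smooth the residuals and accumulate."""
    fit = np.zeros_like(y)
    for _ in range(n_steps):
        fit = fit + nw_smoother(x, y - fit, x, h)  # refit current residuals
    return fit

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 100)

fit1 = l2_boost_nw(x, y, h=0.2, n_steps=1)  # plain (oversmoothed) NW fit
fit5 = l2_boost_nw(x, y, h=0.2, n_steps=5)  # boosted: lower bias
mse1 = np.mean((np.sin(2 * np.pi * x) - fit1) ** 2)
mse5 = np.mean((np.sin(2 * np.pi * x) - fit5) ** 2)
```

With a deliberately large bandwidth, the one-step fit attenuates the signal heavily, and a few boosting iterations recover much of the amplitude before the variance growth becomes noticeable.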
- Published
- 2008
- Full Text
- View/download PDF
55. A multivariate von Mises distribution with applications to bioinformatics
- Author
-
Kanti V. Mardia, Gareth Hughes, Harshinder Singh, and Charles C. Taylor
- Subjects
Statistics and Probability ,Multivariate statistics ,Univariate ,Multivariate normal distribution ,Bivariate analysis ,Conditional probability distribution ,Wald test ,Statistics::Computation ,Statistics ,von Mises distribution ,Statistics::Methodology ,Applied mathematics ,Statistics, Probability and Uncertainty ,Marginal distribution ,Mathematics - Abstract
Motivated by problems of modelling torsional angles in molecules, Singh, Hnizdo & Demchuk (2002) proposed a bivariate circular model which is a natural torus analogue of the bivariate normal distribution and a natural extension of the univariate von Mises distribution to the bivariate case. The authors present here a multivariate extension of the bivariate model of Singh, Hnizdo & Demchuk (2002). They study the conditional distributions and investigate the shapes of marginal distributions for a special case. The methods of moments and pseudo-likelihood are considered for the estimation of parameters of the new distribution. The authors investigate the efficiency of the pseudo-likelihood approach in three dimensions. They illustrate their methods with protein data of conformational angles.
- Published
- 2008
- Full Text
- View/download PDF
56. Simulating Correlated Marked-point Processes
- Author
-
Chong-Yu Xu, R.J. Fowell, Peter A. Dowd, Kanti V. Mardia, and Charles C. Taylor
- Subjects
Statistics and Probability ,Correlation ,Computer science ,Simulated annealing ,Statistics ,Inference ,Marked point process ,Statistics, Probability and Uncertainty ,Algorithm ,Point process - Abstract
The area of marked-point processes is well developed but simulation is still a challenging problem when mark correlations are to be included. In this paper we propose the use of simulated annealing to incorporate the spatial mark correlation into the simulations of correlated marked-point processes. Such a simulation has wide applications in areas such as inference and goodness-of-fit investigations of proposed models. The technique is applied to a forest dataset for which the results are extremely encouraging.
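A toy version of the idea can be sketched as follows (an assumed implementation, not the authors' algorithm: the energy function, cooling schedule, and mark-swap proposal are all illustrative choices). Marks are permuted over fixed locations by simulated annealing until the empirical mark correlation at a chosen range approaches a target value:

```python
import numpy as np

def mark_corr(marks, close):
    # Empirical mark correlation over neighbour pairs within range r
    mm = marks - marks.mean()
    return np.outer(mm, mm)[close].mean() / mm.var()

def anneal_marks(points, marks, target, r, n_iter, rng):
    """Permute marks over fixed locations by simulated annealing so the
    mark correlation at range r approaches `target`."""
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    close = (d > 0) & (d < r)
    m = marks.copy()
    e = best_e = (mark_corr(m, close) - target) ** 2
    best_m = m.copy()
    for it in range(n_iter):
        temp = 1.0 / (1 + it)                    # simple cooling schedule
        i, j = rng.integers(0, len(m), size=2)
        m[i], m[j] = m[j], m[i]                  # propose a mark swap
        e_new = (mark_corr(m, close) - target) ** 2
        if e_new < e or rng.random() < np.exp((e - e_new) / temp):
            e = e_new                            # accept the swap
            if e < best_e:
                best_e, best_m = e, m.copy()
        else:
            m[i], m[j] = m[j], m[i]              # reject: undo the swap
    return best_m, best_e

rng = np.random.default_rng(6)
pts = rng.uniform(0, 1, (50, 2))
marks = rng.normal(size=50)
new_marks, err = anneal_marks(pts, marks, target=0.3, r=0.2, n_iter=3000, rng=rng)
```

Because only swaps are proposed, the marginal mark distribution is preserved exactly while the spatial arrangement of marks is tuned.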
- Published
- 2007
- Full Text
- View/download PDF
57. Parametric circular-circular regression and diagnostic analysis
- Author
-
Charles C. Taylor and Orathai Polsen
- Subjects
Diagnostic analysis ,von Mises distribution ,Applied mathematics ,Regression ,Parametric statistics ,Mathematics - Published
- 2015
- Full Text
- View/download PDF
58. Hierarchical Bayesian modelling of spatial age-dependent mortality
- Author
-
Ian L. Dryden, N. Miklós Arató, and Charles C. Taylor
- Subjects
Statistics and Probability ,Markov chain ,Applied Mathematics ,Posterior probability ,Markov chain Monte Carlo ,Conditional probability distribution ,Markov model ,Binomial distribution ,Computational Mathematics ,symbols.namesake ,Metropolis–Hastings algorithm ,Computational Theory and Mathematics ,Prior probability ,Statistics ,Econometrics ,symbols ,Quantitative Biology::Populations and Evolution ,Mathematics - Abstract
Hierarchical Bayesian modelling is considered for the number of age-dependent deaths in different geographic regions. The model uses a conditional binomial distribution for the number of age-dependent deaths, a new family of zero mean Gaussian Markov random field models for incorporating spatial correlations between neighbouring regions, and an intrinsic Gaussian model for including correlations between age-dependent mortality rates. Age-dependent mortality rates are estimated for each region, and approximate credibility intervals based on summaries of samples from the posterior distribution are obtained from Markov chain Monte Carlo simulation. The consequent maps of mortality rates are less variable and smoother than those which would be obtained from naive estimates, and various inferences may be drawn from the results. The prior spatial model includes some of the common conditional autoregressive spatial models used in epidemiology, and so model uncertainty in this family can be accounted for. The methodology is illustrated with an actuarial data set of age-dependent deaths in 150 geographic regions of Hungary. Sensitivity to the prior distributions is discussed, as well as relative risks for certain covariates (males in towns, females in towns, males in villages, females in villages).
- Published
- 2006
- Full Text
- View/download PDF
59. On boosting kernel density methods for multivariate data: density estimation and classification
- Author
-
Charles C. Taylor and Marco Di Marzio
- Subjects
Computer Science::Machine Learning ,Statistics and Probability ,Boosting (machine learning) ,business.industry ,Computer science ,Kernel density estimation ,Online machine learning ,Pattern recognition ,Semi-supervised learning ,Machine learning ,computer.software_genre ,Ensemble learning ,Multivariate kernel density estimation ,Unsupervised learning ,Artificial intelligence ,Gradient boosting ,Statistics, Probability and Uncertainty ,business ,computer - Abstract
Statistical learning is emerging as a promising field where a number of algorithms from machine learning are interpreted as statistical methods and vice-versa. Due to good practical performance, boosting is one of the most studied machine learning techniques.
- Published
- 2005
- Full Text
- View/download PDF
60. Kernel density classification and boosting: an L2 analysis
- Author
-
M. Di Marzio and Charles C. Taylor
- Subjects
Statistics and Probability ,business.industry ,Kernel density estimation ,Pattern recognition ,Multivariate kernel density estimation ,Theoretical Computer Science ,Kernel method ,Computational Theory and Mathematics ,Variable kernel density estimation ,Kernel embedding of distributions ,Polynomial kernel ,Radial basis function kernel ,Kernel regression ,Artificial intelligence ,Statistics, Probability and Uncertainty ,business ,Mathematics - Abstract
Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods when the final goal is classification. A relative newcomer to the classification portfolio is "boosting", and this paper proposes an algorithm for boosting kernel density classifiers. We note that boosting is closely linked to a previously proposed method of bias reduction in kernel density estimation and indicate how it will enjoy similar properties for classification. We show that boosting kernel classifiers reduces the bias whilst only slightly increasing the variance, with an overall reduction in error. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
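The baseline kernel density classifier discussed here (before any boosting) can be sketched in a few lines; the Gaussian kernel, the bandwidths, and the two-normal-populations example are assumptions for illustration:

```python
import numpy as np

def kde(x_train, x_eval, h):
    """Gaussian kernel density estimate at the points x_eval."""
    z = (x_eval[:, None] - x_train[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(x_train) * h * np.sqrt(2 * np.pi))

def kde_classify(x0, x1, x_new, h0, h1):
    """Assign x_new to class 1 when the class-1 density estimate is larger."""
    return (kde(x1, x_new, h1) > kde(x0, x_new, h0)).astype(int)

rng = np.random.default_rng(1)
x0 = rng.normal(-1.0, 1.0, 200)  # training sample, class 0
x1 = rng.normal(+1.0, 1.0, 200)  # training sample, class 1
x_test = np.concatenate([rng.normal(-1, 1, 500), rng.normal(1, 1, 500)])
y_test = np.repeat([0, 1], 500)
pred = kde_classify(x0, x1, x_test, h0=0.4, h1=0.4)
err = np.mean(pred != y_test)
```

Tuning `h0` and `h1` by minimizing a cross-validated classification error, rather than integrated squared error of each density, is exactly the kind of question the abstract raises.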
- Published
- 2005
- Full Text
- View/download PDF
61. Boosted Regression Estimates of Spatial Data: Pointwise Inference
- Author
-
Marco Di Marzio and Charles C. Taylor
- Subjects
Pointwise ,Statistics and Probability ,Boosting (machine learning) ,General Mathematics ,Statistics ,Econometrics ,Nonparametric statistics ,Estimator ,Inference ,Spatial analysis ,Cross-validation ,Regression ,Mathematics - Abstract
In this study simple nonparametric techniques have been adopted to estimate the trend surface of the Swiss rainfall data. In particular we employed the Nadaraya-Watson smoother and, in addition, a version of it adapted by boosting. We have also explored the use of the Nadaraya-Watson estimator for the construction of pointwise confidence intervals. Overall, boosting does seem to improve the estimate, as in previous examples, and the results indicate that cross-validation can be successfully used for parameter selection on real datasets. In addition, our estimators compare favorably with most of the techniques previously used on this dataset.
- Published
- 2005
- Full Text
- View/download PDF
62. Non-Stationary Spatiotemporal Analysis of Karst Water Levels
- Author
-
J. Kovács, Charles C. Taylor, Ian L. Dryden, and L. Márkus
- Subjects
Statistics and Probability ,Data set ,Covariance function ,Kriging ,Stochastic modelling ,Econometrics ,Estimator ,Applied mathematics ,Hydrograph ,Statistics, Probability and Uncertainty ,Covariance ,Cross-validation ,Mathematics - Abstract
Summary. We consider non-stationary spatiotemporal modelling in an investigation into karst water levels in western Hungary. A strong feature of the data set is the extraction of large amounts of water from mines, which caused the water levels to reduce until about 1990 when the mining ceased, and then the levels increased quickly. We discuss some traditional hydrogeological models which might be considered to be appropriate for this situation, and various alternative stochastic models. In particular, a separable space–time covariance model is proposed which is then deformed in time to account for the non-stationary nature of the lagged correlations between sites. Suitable covariance functions are investigated and then the models are fitted by using weighted least squares and cross-validation. Forecasting and prediction are carried out by using spatiotemporal kriging. We assess the performance of the method with one-step-ahead forecasting and make comparisons with naïve estimators. We also consider spatiotemporal prediction at a set of new sites. The new model performs favourably compared with the deterministic model and the naïve estimators, and the deformation by time shifting is worthwhile.
- Published
- 2005
- Full Text
- View/download PDF
63. Chain plot: a tool for exploiting bivariate temporal structures
- Author
-
András Zempléni and Charles C. Taylor
- Subjects
Statistics and Probability ,Probability plot ,Partial residual plot ,Applied Mathematics ,Bivariate analysis ,Probability plot correlation coefficient plot ,Computational Mathematics ,Exploratory data analysis ,Computational Theory and Mathematics ,Chain (algebraic topology) ,Bivariate data ,Statistics ,Q–Q plot ,Algorithm ,Mathematics - Abstract
In this paper we present a graphical tool useful for visualizing the cyclic behaviour of bivariate time series. We investigate its properties and link it to the asymmetry of the two variables concerned. We also suggest adding approximate confidence bounds to the points on the plot and investigate the effect of lagging on the chain plot. We conclude our paper with some standard Fourier analysis, relating and comparing this to the chain plot.
- Published
- 2004
- Full Text
- View/download PDF
64. Boosting kernel density estimates: A bias reduction technique?
- Author
-
Marco Di Marzio and Charles C. Taylor
- Subjects
Statistics and Probability ,Boosting (machine learning) ,Applied Mathematics ,General Mathematics ,Statistics ,Kernel density estimation ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,Agricultural and Biological Sciences (miscellaneous) ,Bias reduction ,Mathematics - Abstract
Summary. This paper proposes an algorithm for boosting kernel density estimates. We show that boosting is closely linked to a previously proposed method of bias reduction and indicate how it should enjoy similar properties. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
- Published
- 2004
- Full Text
- View/download PDF
65. Bayesian Texture Segmentation of Weed and Crop Images Using Reversible Jump Markov Chain Monte Carlo Methods
- Author
-
Mark R. Scarr, Charles C. Taylor, and Ian L. Dryden
- Subjects
Statistics and Probability ,Random field ,Markov random field ,business.industry ,Posterior probability ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Reversible-jump Markov chain Monte Carlo ,Mixture model ,Markov model ,Computer Science::Graphics ,Metropolis–Hastings algorithm ,Computer Science::Computer Vision and Pattern Recognition ,Prior probability ,Statistics ,Artificial intelligence ,Statistics, Probability and Uncertainty ,business ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
Summary. A Bayesian method for segmenting weed and crop textures is described and implemented. The work forms part of a project to identify weeds and crops in images so that selective crop spraying can be carried out. An image is subdivided into blocks and each block is modelled as a single texture. The number of different textures in the image is assumed unknown. A hierarchical Bayesian procedure is used where the texture labels have a Potts model (colour Ising Markov random field) prior and the pixels within a block are distributed according to a Gaussian Markov random field, with the parameters dependent on the type of texture. We simulate from the posterior distribution by using a reversible jump Metropolis–Hastings algorithm, where the number of different texture components is allowed to vary. The methodology is applied to a simulated image and then we carry out texture segmentation on the weed and crop images that motivated the work.
- Published
- 2003
- Full Text
- View/download PDF
66. Nonparametric regression for spherical data
- Author
-
Charles C. Taylor, Agnese Panzera, and Marco Di Marzio
- Subjects
Statistics and Probability ,Polynomial regression ,Polynomial ,Mathematical optimization ,Dimension (vector space) ,Statistics, Probability and Uncertainty ,Local polynomial fitting ,Spherical-linear regression ,Spherical-spherical regression ,Regression ,Nonparametric regression ,Curse of dimensionality ,Interpretability ,Mathematics ,Parametric statistics - Abstract
We develop nonparametric smoothing for regression when both the predictor and the response variables are defined on a sphere of arbitrary dimension. A local polynomial fitting approach is pursued, which retains all the advantages in terms of rate optimality, interpretability, and ease of implementation widely observed in the standard setting. Our estimates have a multi-output nature, meaning that each coordinate is separately estimated, within a scheme of a regression with a linear response. The main properties include linearity and rotational equivariance. This research was motivated by the fact that very few models describe this kind of regression. Existing methods are not widely applicable, since they are parametric in nature and require the prediction and response spaces to have the same dimension, along with a nonrandom design. Our approach does not suffer from these limitations. Real-data case studies and simulation experiments are used to illustrate the effectiveness of the method.
- Published
- 2014
67. The K-Function for Nearly Regular Point Processes
- Author
-
Charles C. Taylor, Ian L. Dryden, and Rahman Farnoosh
- Subjects
Statistics and Probability ,Biometry ,Movement ,Gaussian ,Equilateral triangle ,Models, Biological ,General Biochemistry, Genetics and Molecular Biology ,Square (algebra) ,Point process ,Regular grid ,Combinatorics ,symbols.namesake ,Animals ,Computer Simulation ,Mathematics ,Models, Statistical ,General Immunology and Microbiology ,Estimation theory ,Applied Mathematics ,Chlamydomonas ,Mathematical analysis ,Estimator ,General Medicine ,Grid ,symbols ,General Agricultural and Biological Sciences - Abstract
Summary. We propose modeling a nearly regular point pattern by a generalized Neyman-Scott process in which the offspring are Gaussian perturbations from a regular mean configuration. The mean configuration of interest is an equilateral grid, but our results can be used for any stationary regular grid. The case of uniformly distributed points is first studied as a benchmark. By considering the square of the interpoint distances, we can evaluate the first two moments of the K-function. These results can be used for parameter estimation, and simulations are used both to verify the theory and to assess the accuracy of the estimators. The methodology is applied to an investigation of regularity in plumes observed from swimming microorganisms.
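The contrast between a nearly regular pattern and complete spatial randomness is easy to see with a naive K-function estimator (a sketch with no edge correction; the grid size, perturbation scale, and range r are assumed values, and this is not the authors' estimator):

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley K estimator (no edge correction)."""
    n = len(points)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)            # exclude self-pairs
    return area * (d < r).sum() / (n * (n - 1))

rng = np.random.default_rng(2)
# Regular 10x10 unit-spaced grid with Gaussian perturbations (sd = 0.05)
gx, gy = np.meshgrid(np.arange(10.0), np.arange(10.0))
grid = np.column_stack([gx.ravel(), gy.ravel()])
regular = grid + rng.normal(0, 0.05, grid.shape)
uniform = rng.uniform(0, 10, (100, 2))     # CSR benchmark, same intensity

r = 0.5
k_reg = ripley_k(regular, r, area=100.0)
k_csr = ripley_k(uniform, r, area=100.0)
# Under CSR, K(r) is roughly pi*r^2; the nearly regular pattern has
# almost no interpoint distances below 0.5, so its K(0.5) is near zero.
```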
- Published
- 2001
- Full Text
- View/download PDF
68. Transformation- and label-invariant neural network for the classification of landmark data
- Author
-
Charles C. Taylor, Kanti V. Mardia, and R. Southworth
- Subjects
Statistics and Probability ,Landmark ,Artificial neural network ,business.industry ,Computer science ,Simulated data ,Pattern recognition ,Artificial intelligence ,Statistics, Probability and Uncertainty ,Invariant (mathematics) ,business - Abstract
One method of expressing coarse information about the shape of an object is to describe the shape by its landmarks, which can be taken as meaningful points on the outline of an object. We consider a situation in which we want to classify shapes into known populations based on their landmarks, invariant to the location, scale and rotation of the shapes. A neural network method for transformation-invariant classification of landmark data is presented. The method is compared with the (non-transformation-invariant) complex Bingham rule; the two techniques are tested on two sets of simulated data, and on data that arise from mice vertebrae. Despite the obvious advantage of the complex Bingham rule because of information about rotation, the neural network method compares favourably.
- Published
- 2000
- Full Text
- View/download PDF
69. On bias in maximum likelihood estimators
- Author
-
Charles C. Taylor, Harry Southworth, and Kanti V. Mardia
- Subjects
Statistics and Probability ,Mean squared error ,Sample size determination ,Applied Mathematics ,Statistics ,Concentration parameter ,Score ,Estimator ,Cauchy distribution ,Statistics, Probability and Uncertainty ,Likelihood function ,Scale parameter ,Mathematics - Abstract
It is well known that maximum likelihood estimators are often biased, and it is of use to estimate the expected bias so that we can reduce the mean square errors of our parameter estimates. Expressions for estimating the bias in maximum likelihood estimates have been given by Cox and Hinkley (1974, Theoretical Statistics, Chapman & Hall, London). In this paper, a new simple expression is derived for estimating the first-order bias in maximum likelihood estimates. This is then applied to certain specific models; namely, estimating the bias in the concentration parameter in the von Mises–Fisher distribution – for which there are some ad hoc results in the literature (Best and Fisher, 1981, Comm. Statist. Ser. B. Simulation Comput. 31, 493–502) – and for the scale parameter in the Cauchy distribution. In both problems, the first-order bias is found to be linear in the parameter and the sample size. Simulations are used to verify our theoretical results and to give some indication of how large the parameter and the sample size need to be for our estimates of bias to hold well.
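The flavour of the result, a first-order bias that scales with the parameter and shrinks with the sample size, can be checked by simulation in a case with a closed-form answer. The example below uses the exponential rate (an assumed illustration, not the von Mises or Cauchy cases treated in the paper), where the MLE 1/x̄ has exact bias λ/(n − 1):

```python
import numpy as np

# For X_1,...,X_n ~ Exponential(rate lam), the MLE lam_hat = 1/xbar
# satisfies E[lam_hat] = n*lam/(n-1), so bias = lam/(n-1): linear in
# the parameter and of order 1/n in the sample size.
rng = np.random.default_rng(3)
lam, n, reps = 2.0, 20, 200_000
samples = rng.exponential(1.0 / lam, size=(reps, n))
lam_hat = 1.0 / samples.mean(axis=1)
bias_mc = lam_hat.mean() - lam   # Monte Carlo estimate of the bias
bias_theory = lam / (n - 1)      # exact first-order-dominated bias
```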
- Published
- 1999
- Full Text
- View/download PDF
70. Procrustes Shape Analysis of Planar Point Subsets
- Author
-
Ian L. Dryden, M. R. Faghihi, and Charles C. Taylor
- Subjects
Statistics and Probability ,Delaunay triangulation ,Gaussian ,Mathematical analysis ,Isotropy ,Covariance ,Equilateral triangle ,Combinatorics ,symbols.namesake ,symbols ,Statistics, Probability and Uncertainty ,Statistic ,Mathematics ,Shape analysis (digital geometry) ,Central limit theorem - Abstract
Summary. Consider a set of points in the plane randomly perturbed about a mean configuration by Gaussian errors. In this paper a Procrustes statistic based on the shapes of subsets of the points is studied, and its approximate distribution is found for small variations. We derive various properties of the distribution including the first two moments, a central limit result and a scaled χ² approximation. We concentrate on the independent isotropic Gaussian error case, although the results are valid for general covariance structures. We investigate triangle subsets in detail and in particular the situation where the population mean is regular (i.e. a Delaunay triangulation of the mean of the process is comprised of equilateral triangles of the same size). We examine the variance of the statistic for differently shaped regions and provide an asymptotic result for general shaped regions. The results are applied to an investigation of regularity in human muscle fibre cross-sections.
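A minimal numerical illustration of the setting (using SciPy's `procrustes` helper; the triangle mean configuration and error standard deviation are assumptions) perturbs an equilateral mean shape twice and computes the Procrustes disparity between the two configurations:

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(4)
# Mean configuration: vertices of an equilateral triangle
mean_cfg = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
# Two independent isotropic Gaussian perturbations of the mean shape
cfg_a = mean_cfg + rng.normal(0, 0.02, mean_cfg.shape)
cfg_b = mean_cfg + rng.normal(0, 0.02, mean_cfg.shape)
# Disparity: residual sum of squares after optimally translating,
# scaling and rotating cfg_b onto cfg_a
_, _, disparity = procrustes(cfg_a, cfg_b)

# For contrast, a degenerate (collinear) configuration is far in shape space
cfg_far = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
_, _, disparity_far = procrustes(cfg_a, cfg_far)
```

Small isotropic perturbations give a small disparity, which is the regime in which the paper's approximate distribution applies.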
- Published
- 1997
- Full Text
- View/download PDF
71. Classification and kernel density estimation
- Author
-
Charles C. Taylor
- Subjects
Mean squared error ,Variable kernel density estimation ,Computer science ,Kernel density estimation ,Estimator ,Word error rate ,Bayes error rate ,Astronomy and Astrophysics ,Algorithm ,Smoothing ,Weighting - Abstract
The method of kernel density estimation can be readily used for the purposes of classification, and an easy-to-use package (ALLOC80) is now in wide circulation. It is known that this method performs well (at least in relative terms) in the case of bimodal, or heavily skewed distributions. In this article we first review the method, and describe the problem of choosing h, an appropriate smoothing parameter. We point out that the usual approach of choosing h to minimize the asymptotic integrated mean squared error is not entirely appropriate, and we propose an alternative estimate of the classification error rate, which is the target of interest. Unfortunately, it seems that analytic results are hard to come by, but simulations indicate that the proposed estimator has smaller mean squared error than the usual cross-validation estimate of error rate. A second topic which we briefly explore is that of classification of drifting populations. In this case, we outline two general approaches to updating a classifier based on new observations. One of these approaches is limited to parametric classifiers; the other relies on weighting of observations, and is more generally applicable. We use an example from the credit industry as well as some simulated data to illustrate the methods.
- Published
- 1997
- Full Text
- View/download PDF
72. Boosting Kernel Estimators
- Author
-
Marco Di Marzio and Charles C. Taylor
- Subjects
Majority rule ,Boosting (machine learning) ,Polynomial kernel ,Variable kernel density estimation ,Kernel density estimation ,Estimator ,Statistical model ,Gradient boosting ,Algorithm ,Mathematics - Abstract
A boosting algorithm [1, 2] can be seen as a way to improve the fit of statistical models. Typically, M predictions are obtained by applying a base procedure, called a weak learner, to M reweighted samples; specifically, in each reweighted sample an individual weight is assigned to each observation. Finally, the output is obtained by aggregation through majority voting. Boosting is a sequential ensemble scheme, in the sense that the weight of an observation at step m depends (only) on step m − 1. Clearly, a specific boosting scheme is obtained once we choose a loss function, which drives the data re-weighting mechanism, and a weak learner.
- Published
- 2012
- Full Text
- View/download PDF
73. Matching markers and unlabeled configurations in protein gels
- Author
-
Kanti V. Mardia, Charles C. Taylor, and Emma M. Petty
- Subjects
Statistics and Probability ,High probability ,Electrophoresis ,FOS: Computer and information sciences ,Computer science ,business.industry ,Pattern recognition ,shape ,Statistics - Applications ,Western Blots ,Modeling and Simulation ,Expectation–maximization algorithm ,Applications (stat.AP) ,Artificial intelligence ,Statistics, Probability and Uncertainty ,business ,Shape analysis (digital geometry) - Abstract
Unlabeled shape analysis is a rapidly emerging and challenging area of statistics. This has been driven by various novel applications in bioinformatics. We consider here the situation where two configurations are matched under various constraints, namely, the configurations have a subset of manually located "markers" with high probability of matching each other while a larger subset consists of unlabeled points. We consider a plausible model and give an implementation using the EM algorithm. The work is motivated by a real experiment of gels for renal cancer and our approach allows for the possibility of missing and misallocated markers. The methodology is successfully used to automatically locate and remove a grossly misallocated marker within the given data set. (Published in the Annals of Applied Statistics, http://dx.doi.org/10.1214/12-AOAS544, by the Institute of Mathematical Statistics.)
- Published
- 2012
- Full Text
- View/download PDF
74. Comparative trials in classification of image data
- Author
-
Charles C. Taylor and R. J. Henery
- Subjects
Statistics and Probability ,Pixel ,Artificial neural network ,Computer science ,Bayesian probability ,Decision tree ,Linear discriminant analysis ,computer.software_genre ,Statistical classification ,ComputingMethodologies_PATTERNRECOGNITION ,Kernel method ,Data mining ,Statistics, Probability and Uncertainty ,Focus (optics) ,computer - Abstract
In this paper, we describe some results of an ESPRIT project known as StatLog whose purpose is the comparison of classification algorithms. We give a brief summary of some of the algorithms in the project: discriminant analysis; nearest neighbours; decision trees; neural net methods; SMART; kernel methods and other Bayesian approaches. We focus on data sets derived from images, ranging from raw pixel data to features and summaries extracted from such data.
- Published
- 1994
- Full Text
- View/download PDF
75. A Note on Density Estimation for Circular Data
- Author
-
Charles C. Taylor, Marco Di Marzio, and Agnese Panzera
- Subjects
Product (mathematics) ,Kernel (statistics) ,Kernel density estimation ,von Mises yield criterion ,Estimator ,Applied mathematics ,Torus ,Density estimation ,Smoothing ,Mathematics - Abstract
We discuss kernel density estimation for data lying on the d-dimensional torus (d ≥ 1). We consider a specific class of product kernels, and formulate exact and asymptotic L2 properties for the estimators equipped with these kernels. We also obtain the optimal smoothing for the case when the kernel is defined by the product of von Mises densities. A brief simulation study illustrates the main findings.
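For d = 1 (the circle) the product kernel reduces to a single von Mises kernel, and the estimator can be sketched directly; the concentration value κ and the data-generating distribution below are assumptions for illustration:

```python
import numpy as np
from scipy.special import i0

def vonmises_kde(theta_data, theta_eval, kappa):
    """Circular KDE: average of von Mises densities centred at the data.
    The concentration kappa plays the role of an inverse bandwidth."""
    num = np.exp(kappa * np.cos(theta_eval[:, None] - theta_data[None, :]))
    return num.mean(axis=1) / (2 * np.pi * i0(kappa))

rng = np.random.default_rng(5)
data = rng.vonmises(mu=0.0, kappa=4.0, size=300)
grid = np.linspace(-np.pi, np.pi, 400, endpoint=False)
dens = vonmises_kde(data, grid, kappa=20.0)
# The estimate is a genuine density on the circle: non-negative and
# integrating to one over any interval of length 2*pi.
integral = dens.sum() * (2 * np.pi / 400)
```

For the d-dimensional torus the paper's product-kernel construction multiplies one such factor per coordinate.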
- Published
- 2011
- Full Text
- View/download PDF
76. A comparison of block and semi-parametric bootstrap methods for variance estimation in spatial statistics
- Author
-
Mohsen Mohammadzadeh, N. Iranpanah, and Charles C. Taylor
- Subjects
Statistics and Probability ,Statistics::Theory ,Estimation theory ,Applied Mathematics ,Bootstrap aggregating ,Estimator ,Semiparametric model ,Computational Mathematics ,Computational Theory and Mathematics ,Kriging ,Statistics ,Statistics::Methodology ,Block size ,Spatial analysis ,Block (data storage) ,Mathematics - Abstract
Efron (1979) introduced the bootstrap method for independent data, but it cannot be easily applied to spatial data because of their dependence. For spatial data that are correlated in terms of their locations in the underlying space, the moving block bootstrap method is usually used to estimate the precision measures of the estimators. The precision of the moving block bootstrap estimators depends on the block size, which is difficult to select, and the moving block bootstrap also tends to underestimate the variance. In this paper, the semi-parametric bootstrap, which uses an estimate of the spatial correlation structure, is first used to estimate the precision measures of estimators in spatial data analysis. We then compare the semi-parametric bootstrap with the moving block bootstrap for variance estimation of estimators in a simulation study. Finally, we use the semi-parametric bootstrap to analyse the coal-ash data.
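The moving block bootstrap that serves as the comparison method can be sketched for the sample mean of a correlated series (an assumed one-dimensional AR(1) illustration, not the paper's spatial setting; the block length and series parameters are arbitrary choices):

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot, rng):
    """Bootstrap the sample mean of a correlated series by resampling
    overlapping blocks of consecutive observations."""
    n = len(x)
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    n_blocks = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), n_blocks)
        means[b] = np.concatenate(blocks[idx])[:n].mean()  # glue, trim to n
    return means

rng = np.random.default_rng(6)
# AR(1) series: positive correlation means iid resampling understates
# the variance of the sample mean
x = np.empty(500)
x[0] = rng.normal()
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.normal()

se_block = moving_block_bootstrap(x, block_len=25, n_boot=2000, rng=rng).std()
se_iid = np.array([rng.choice(x, 500).mean() for _ in range(2000)]).std()
```

The block version preserves within-block dependence and so gives a noticeably larger (more honest) standard error than the iid bootstrap, while still somewhat underestimating the truth, which is the shortcoming the semi-parametric alternative targets.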
- Published
- 2011
77. Growth and development of human muscle: A quantitative morphological study of whole vastus lateralis from childhood to adult age
- Author
-
Charles C. Taylor, Michael Sjöström, Jan Lexell, and Ann-Sofie Nordlund
- Subjects
Muscle tissue ,medicine.medical_specialty ,education.field_of_study ,Physiology ,Vastus lateralis muscle ,Population ,Anatomy ,Biology ,Adult age ,Cellular and Molecular Neuroscience ,medicine.anatomical_structure ,Endocrinology ,Human muscle ,Physiology (medical) ,Internal medicine ,Peripheral nervous system ,medicine ,Neurology (clinical) ,Fiber ,education ,Myofibril - Abstract
The mechanisms underlying the increase in volume of muscle tissue, and the functional development of muscle fibers from childhood through adolescence to adult age, have been studied. Cross sections of autopsied whole vastus lateralis muscle from 22 previously physically healthy males, 5 to 37 years of age, were prepared enzyme histochemically (myofibrillar ATPase) and examined morphometrically. The data obtained on muscle cross-sectional area, size, total number, and proportion of type 1 (slow-twitch) and type 2 (fast-twitch) fibers were analyzed using linear regression techniques. The results show that the increase in muscle cross-sectional area from childhood to adult age is caused by an increase in mean fiber size. This is accompanied by a functional development of the fiber population: the proportion of type 2 fibers increases significantly from the age of 5 (approx. 35%) to the age of 20 (approx. 50%), which, in the absence of any discernible effect on the total number of fibers, is most likely caused by a transformation of type 1 to type 2 fibers.
- Published
- 1992
- Full Text
- View/download PDF
78. A morphometrical comparison of right and left whole human vastus lateralis muscle: how to reduce sampling errors in biopsy techniques
- Author
-
Jan Lexell and Charles C. Taylor
- Subjects
Adult ,Male ,medicine.diagnostic_test ,Physiology ,Vastus lateralis muscle ,business.industry ,Biopsy ,Muscles ,Significant difference ,Reproducibility of Results ,Fiber size ,Sampling error ,Systematic sampling ,General Medicine ,Anatomy ,Functional Laterality ,Maximum difference ,medicine ,Humans ,business ,Left vastus lateralis - Abstract
In studies of the effects of different training programmes, one muscle (most commonly the vastus lateralis) is used for the experiment while the contralateral muscle serves as a control, with muscle biopsies taken from both sides. In order to increase the reliability of such studies, the sources and the magnitude of the sampling errors in the biopsy techniques need to be assessed in detail. In this study, cross-sections of whole right and left vastus lateralis muscle from six young sedentary right-handed men were prepared, and the total number and size of fibres and the proportion of the different fibre types were calculated. A significant difference (P < 0.05 to P < 0.001) between the right and the left muscle was found for at least one of the three variables in each of the six men, but there was no systematic difference and, therefore, no significant right-left difference for the whole group. The maximum difference between the right and the left side was 25% for the mean fibre size and 5% for the fibre type proportion; these differences are much smaller than the known variation within individual muscles. In conclusion, any study involving biopsies from both the right and the left vastus lateralis may use either muscle for the experiment, with the contralateral muscle as a control, without incurring systematic sampling error; the errors involved in taking small samples from each muscle are much more important to control and to reduce.
- Published
- 1991
- Full Text
- View/download PDF
79. Evidence of fibre hyperplasia in human skeletal muscles from healthy young men?
- Author
-
Anders Eriksson, Michael Sjöström, Charles C. Taylor, and Jan Lexell
- Subjects
Adult ,Male ,Physiology ,Functional Laterality ,Physiology (medical) ,Humans ,Regeneration ,Medicine ,Orthopedics and Sports Medicine ,Adenosine triphosphatase ,Hyperplasia ,business.industry ,Muscles ,Significant difference ,Public Health, Environmental and Occupational Health ,Right lower leg ,Mean age ,General Medicine ,Anatomy ,medicine.disease ,Adaptation, Physiological ,Anterior tibialis ,Laterality ,business ,Myofibril - Abstract
Cross-sections (thickness 10 microns) of whole autopsied left and right anterior tibialis muscles of seven young previously healthy right-handed men (mean age 23 years, range 18-32 years) were prepared for light-microscope enzyme histochemistry. Muscle cross-sectional area and total number of fibres, mean fibre size (indirectly determined) and proportion of the different fibre types (type 1 and type 2, on the basis of myofibrillar adenosine triphosphatase characteristics) in each muscle cross-section were determined. The analysis showed that the cross-sectional area of the left muscle was significantly larger (P < 0.05), and the total number of fibres significantly higher (P < 0.05), than for the corresponding right muscle. There was no significant difference in the mean fibre size or the proportion of the two fibre types. The results imply that long-term asymmetrical low-level daily demands on muscles of the left and the right lower leg in right-handed individuals provide enough stimuli to induce an enlargement of the muscles on the left side, and that this enlargement is due to an increase in the number of muscle fibres (fibre hyperplasia). Calculations based on the data also explain why the underlying process of hyperplasia is difficult, or even impossible, to detect in standard muscle biopsies.
- Published
- 1991
- Full Text
- View/download PDF
80. Estimating the Dimension of a Fractal
- Author
-
Charles C. Taylor and James R. Taylor
- Subjects
Statistics and Probability ,010102 general mathematics ,01 natural sciences ,010104 statistics & probability ,Box counting ,Fractal ,Dimension (vector space) ,Complete information ,Statistics ,Statistical analysis ,Limit (mathematics) ,0101 mathematics ,Algorithm ,Mathematics - Abstract
We suggest refinements of the box-counting method which address the obvious problems caused by the incomplete information and inaccessibility of the limit. A method for the statistical analysis of these corrected data is developed and tested on simulated and real data.
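For orientation, the uncorrected box-counting estimate that the paper refines can be sketched as follows: count the occupied boxes N(eps) at several scales and regress log N(eps) on log(1/eps). This is the naive version, without the paper's corrections; the scales and test data are illustrative.

```python
import math

def box_counting_dimension(points, scales):
    # points: iterable of (x, y) pairs; scales: box side lengths eps.
    # Fit the slope of log N(eps) against log(1/eps) by least squares.
    xs, ys = [], []
    for eps in scales:
        boxes = {(math.floor(px / eps), math.floor(py / eps)) for px, py in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Points sampled densely along a straight line segment should yield an estimated dimension close to 1.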
- Published
- 1991
- Full Text
- View/download PDF
81. A generative, probabilistic model of local protein structure
- Author
-
Anders Krogh, Kanti V. Mardia, Jesper Ferkinghoff-Borg, Thomas Hamelryck, Wouter Boomsma, and Charles C. Taylor
- Subjects
Models, Molecular ,Multidisciplinary ,Theoretical computer science ,Models, Statistical ,Continuous modelling ,Computer science ,Amino Acid Motifs ,Probabilistic logic ,Proteins ,Statistical model ,Protein structure prediction ,Biological Sciences ,Bioinformatics ,Prime (order theory) ,Generative model ,Fragment (logic) ,Generative grammar - Abstract
Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence–structure correlations in the native state. Our method represents a significant theoretical and practical improvement over the widely used fragment assembly technique by avoiding the drawbacks associated with a discrete and nonprobabilistic approach.
- Published
- 2008
82. Orthogonal series estimators and cross-validation
- Author
-
Charles C. Taylor
- Subjects
Statistics and Probability ,Polynomial ,Mathematical optimization ,Series (mathematics) ,Applied Mathematics ,Estimator ,Probability density function ,Variance (accounting) ,Expected value ,Cross-validation ,Rate of convergence ,Modeling and Simulation ,Applied mathematics ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
The use of cross-validation is considered in conjunction with orthogonal series estimators for a probability density function. We attempt to establish a data-based procedure which will select both the optimal choice of series and the best trade-off between squared bias and variance, i.e. the series length. Although the expected value of the estimator looks promising, the rate of convergence is very slow. Simulations illustrate the theoretical results.
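For reference, a basic orthogonal series estimator of this kind (a cosine basis on [0, 1], with the series length m fixed by hand rather than chosen by cross-validation as studied in the paper) can be written as:

```python
import math

def cosine_series_density(x, data, m):
    # Orthogonal (cosine) series estimate of a density on [0, 1]:
    # f(x) = 1 + sum_{j=1}^{m} theta_j * sqrt(2) * cos(j*pi*x),
    # with each coefficient theta_j estimated by a sample average.
    n = len(data)
    f = 1.0
    for j in range(1, m + 1):
        theta_j = sum(math.sqrt(2.0) * math.cos(j * math.pi * t) for t in data) / n
        f += theta_j * math.sqrt(2.0) * math.cos(j * math.pi * x)
    return f
```

Note that, unlike a kernel estimate, the truncated series can dip below zero, although it still integrates to one over [0, 1].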
- Published
- 1990
- Full Text
- View/download PDF
83. Protein bioinformatics and mixtures of bivariate von Mises distributions for angular data
- Author
-
Kanti V. Mardia, Ganesh Subramaniam, and Charles C. Taylor
- Subjects
Statistics and Probability ,Likelihood Functions ,Models, Statistical ,General Immunology and Microbiology ,Myoglobin ,Protein Conformation ,Applied Mathematics ,Directional statistics ,Computational Biology ,Proteins ,Multivariate normal distribution ,General Medicine ,Bivariate analysis ,Bioinformatics ,General Biochemistry, Genetics and Molecular Biology ,Protein Structure, Secondary ,Malate Dehydrogenase ,Expectation–maximization algorithm ,von Mises distribution ,von Mises yield criterion ,Marginal distribution ,General Agricultural and Biological Sciences ,Algorithms ,Mathematics ,Ramachandran plot - Abstract
A fundamental problem in bioinformatics is to characterize the secondary structure of a protein, which has traditionally been carried out by examining a scatterplot (Ramachandran plot) of the conformational angles. We examine two natural bivariate von Mises distributions—referred to as Sine and Cosine models—which have five parameters and, for concentrated data, tend to a bivariate normal distribution. These are analyzed and their main properties derived. Conditions on the parameters are established which result in bimodal behavior for the joint density and the marginal distribution, and we note an interesting situation in which the joint density is bimodal but the marginal distributions are unimodal. We carry out comparisons of the two models, and it is seen that the Cosine model may be preferred. Mixture distributions of the Cosine model are fitted to two representative protein datasets using the expectation maximization algorithm, which results in an objective partition of the scatterplot into a number of components. Our results are consistent with empirical observations; new insights are discussed.
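The Sine model's density can be written down directly up to its normalising constant. The sketch below (illustrative, not the authors' code) approximates that constant by brute-force quadrature on the torus; the parameter values in the example are arbitrary.

```python
import math

def make_sine_model(mu, nu, k1, k2, lam, grid=100):
    # Bivariate von Mises "Sine model" on the torus, with exponent
    # k1*cos(phi-mu) + k2*cos(psi-nu) + lam*sin(phi-mu)*sin(psi-nu).
    # The normalising constant is approximated on a grid x grid mesh.
    def expo(a, b):
        return math.exp(k1 * math.cos(a - mu) + k2 * math.cos(b - nu)
                        + lam * math.sin(a - mu) * math.sin(b - nu))
    step = 2.0 * math.pi / grid
    const = step * step * sum(
        expo(i * step, j * step) for i in range(grid) for j in range(grid))
    return lambda phi, psi: expo(phi, psi) / const
```

The returned density peaks at the mean direction (mu, nu) and integrates to one over the torus (up to quadrature error).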
- Published
- 2007
84. Air ionisation and colonisation/infection with methicillin-resistant Staphylococcus aureus and Acinetobacter species in an intensive care unit
- Author
-
Clive B. Beggs, Neil J. Todd, Judith K. Donnelly, Stephen G. Dean, P. Andrew Sleigh, Judith Thornton, Kevin G. Kerr, Andleeb Qureshi, and Charles C. Taylor
- Subjects
Meticillin ,Static Electricity ,Air Microbiology ,Critical Care and Intensive Care Medicine ,medicine.disease_cause ,law.invention ,Microbiology ,law ,Intensive care ,medicine ,Humans ,Prospective Studies ,Antibacterial agent ,Aerosols ,Cross Infection ,Infection Control ,Chi-Square Distribution ,Cross-Over Studies ,biology ,business.industry ,Acinetobacter ,Staphylococcal Infections ,biology.organism_classification ,Methicillin-resistant Staphylococcus aureus ,Intensive care unit ,Colonisation ,Intensive Care Units ,Staphylococcus aureus ,Methicillin Resistance ,business ,Plastics ,medicine.drug ,Acinetobacter Infections - Abstract
To determine the effect of negative air ions on colonisation/infection with methicillin-resistant Staphylococcus aureus (MRSA) and Acinetobacter species in an intensive care unit, a prospective single-centre cross-over study was conducted in an adult general intensive care unit, covering 201 patients whose stay on the unit exceeded 48 hours. Six negative air ionisers were installed on the unit but were not operational for the first 5 months of the study (control period); the devices were then operational for the following 5.5 months. Thirty and 13 patients were colonised/infected with MRSA and Acinetobacter spp., respectively, over the 10.5 months. No change in MRSA colonisation/infection was observed compared with the 5-month control period, but Acinetobacter cases were reduced from 11 to 2 (p = 0.007). Ionisers may have a role in the prevention of Acinetobacter infections.
- Published
- 2004
85. Assessment of cooked alpaca and llama meats from the statistical analysis of data collected using an 'electronic nose'
- Author
-
Karen Neely, Charles C. Taylor, Olivia Prosser, and Paul F. Hamlyn
- Subjects
Veterinary medicine ,Geography ,Electronic nose ,otorhinolaryngologic diseases ,food and beverages ,Statistical analysis ,Cooked food ,Artificial nose ,Chemical sensor ,Production chain ,Food Science ,Plastic bag - Abstract
As part of an EU-funded project to assist in developing the production chain of meat from camelids in South America we have investigated the possibility of using an electronic nose to distinguish between the different types of meat of commercial interest. On-site monitoring of freshly cooked camelid meat using a Bloodhound electronic nose has been carried out in Peru and Bolivia. Sampling was carried out using inert, collapsible plastic bags. Linear discriminant analysis of data generated by the electronic nose classified the samples of meat. Some problems experienced in analysing the data relating to sample size are discussed.
- Published
- 2000
86. Knowledge based geometric object recognition
- Author
-
Kanti V. Mardia, Charles C. Taylor, J.D. Burrows, and R.J. Morris
- Subjects
Channel (digital image) ,business.industry ,Template matching ,Cognitive neuroscience of visual object recognition ,Pattern recognition ,Geometric shape ,Object (computer science) ,Object detection ,Computer vision ,Geometric primitive ,Artificial intelligence ,Multiple edges ,business ,Mathematics - Abstract
The task of recognising rigid objects in an image can be greatly eased by exploiting particular geometric features of the objects. Rather than trying to match the whole object, we can match just these particular features. A considerable speed advantage can be obtained over techniques like template matching (Burrows et al., 1995), as there are efficient algorithms for detecting various geometric primitives. This approach can also cope with small differences in the actual composition of the objects, as a geometric description can ignore these differences. Similarly, the algorithm can be made robust with respect to clutter and occlusion/obscurement problems. We concentrate on detecting detonators in two-channel X-ray images; a selection of dummy detonators is presented. We exploit two particular geometric features: each detonator contains at least one dark object, which we call a cold spot, and each detonator is approximately a cylinder, which we can represent as two parallel edges lying on either side of the cold spot.
- Published
- 1997
- Full Text
- View/download PDF
87. An understanding of muscle fibre images
- Author
-
M. R. Faghihi, Ian L. Dryden, and Charles C. Taylor
- Subjects
Delaunay triangulation ,business.industry ,media_common.quotation_subject ,Isotropy ,Pattern recognition ,Geometry ,Equilateral triangle ,Normal muscle ,Test statistic ,Artificial intelligence ,Cluster analysis ,business ,Random variable ,Normality ,Mathematics ,media_common - Abstract
Images of muscle biopsies reveal a mosaic pattern of two (slow-twitch and fast-twitch) fibre-types. An analysis of such images can indicate some neuromuscular disorder. We briefly review some methods which analyse the arrangement of the fibres (e.g. clustering of fibre type) and the fibre sizes. The proposed methodology uses the cell centres as a set of landmarks from which a Delaunay triangulation is created. The shapes of these (correlated) triangles are then used in a test statistic, to ascertain normality of a muscle. Our “normal muscle” model supposes that the fibres are hexagonal (so that the triangulation is made up of equilateral triangles) with a perturbation of specified isotropic variance of the fibre centres. We obtain the distribution of the test statistic as an approximate function of a χ2 random variable, so that a formal test can be carried out.
- Published
- 1995
- Full Text
- View/download PDF
88. Statistical methods in learning
- Author
-
Bob Henery, Ross D. King, Rafael Molina, Alistair Sutherland, and Charles C. Taylor
- Subjects
Cart ,Computer science ,business.industry ,Pattern recognition ,Quadratic classifier ,Generalization error ,Backpropagation ,PEARL (programming language) ,ComputingMethodologies_PATTERNRECOGNITION ,Hide node ,Artificial intelligence ,Polytree ,business ,K nearest neighbour ,computer ,computer.programming_language - Abstract
In this paper we describe an ESPRIT project known as 'StatLog' whose purpose is the comparison of learning algorithms. We give a brief summary of some of the algorithms in the project: linear and quadratic discriminant analysis, k nearest neighbour, CART, backpropagation, SMART, ALLOC80 and Pearl's polytree algorithm. We discuss the results obtained for two datasets, one of handwritten digits and the other of vehicle silhouettes.
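As an illustration of the simplest of the algorithms compared, a k nearest neighbour classifier fits in a few lines. This is a generic sketch under Euclidean distance, unrelated to the project's actual implementations; the toy training data are made up.

```python
def knn_classify(train, query, k=3):
    # train: list of (feature_vector, label); classify query by majority
    # vote among the k nearest training points (squared Euclidean distance).
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy two-class data: cluster "a" near the origin, cluster "b" near (5, 5).
train = [((0.0, 0.0), "a"), ((0.0, 1.0), "a"), ((1.0, 0.0), "a"),
         ((5.0, 5.0), "b"), ((5.0, 6.0), "b"), ((6.0, 5.0), "b")]
```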
- Published
- 1993
- Full Text
- View/download PDF
89. Machine Learning and Statistics: The Interface
- Author
-
Michael J. Turmon, Gholamreza Nakhaeizadeh, and Charles C. Taylor
- Subjects
Statistics and Probability ,business.industry ,Computer science ,Algorithmic learning theory ,Decision tree learning ,ID3 algorithm ,Decision tree ,Linear classifier ,Decision rule ,Quadratic classifier ,Machine learning ,computer.software_genre ,Artificial intelligence ,Statistics, Probability and Uncertainty ,Statistical theory ,business ,computer - Abstract
Contents: Statistical Properties of Tree-Based Approaches to Classification; The Decision Tree Algorithm CAL5 Based on a Statistical Approach to its Splitting Algorithm; Probabilistic Symbolic Classifiers: An Empirical Comparison from a Statistical Perspective; A Multistrategy Approach to Learning Multiple Dependent Concepts; Quality of Decision Rules - Definition and Classification Schemes for Multiple Rules; DIPOL - A Hybrid Piecewise Linear Classifier; Combining Classification Procedures; Distance-based Decision Trees; Learning Fuzzy Controllers from Examples; Some Developments in Statistical Credit Scoring; Combination of Statistical and Other Learning Methods to Predict Financial Time Series.
- Published
- 1998
- Full Text
- View/download PDF
90. Machine Learning, Neural, and Statistical Classification
- Author
-
John F. Elder IV, Donald Michie, David J. Spiegelhalter, and Charles C. Taylor
- Subjects
Statistics and Probability ,Statistics, Probability and Uncertainty - Published
- 1996
- Full Text
- View/download PDF
91. Variability in muscle fibre areas in whole human quadriceps muscle: how to reduce sampling errors in biopsy techniques
- Author
-
Charles C. Taylor and Jan Lexell
- Subjects
Adult ,Male ,Adolescent ,medicine.diagnostic_test ,Histocytochemistry ,Physiology ,Vastus lateralis muscle ,business.industry ,Biopsy ,Muscles ,Statistics as Topic ,Quadriceps muscle ,Sampling error ,Microtomy ,General Medicine ,Anatomy ,Human muscle ,Reference Values ,Reference values ,Humans ,Medicine ,Muscle fibre ,Nuclear medicine ,business - Abstract
A single biopsy is a poor estimator of the muscle fibre cross-sectional area (CSA) for a whole human muscle because of the large variability in fibre area within a muscle. To determine how the sampling errors in biopsy techniques can be reduced, data on the CSA of type 1 and type 2 fibres obtained from cross-sections of whole vastus lateralis muscle of young men have been analysed statistically. To obtain a good estimate of the mean fibre CSA in a biopsy, measuring all fibres in that biopsy gives the best result. To obtain a good estimate of the mean fibre CSA for a whole muscle, the number of biopsies has a much greater influence on the sampling error than the number of fibres measured in each biopsy, but the number of biopsies needed to obtain a given sampling error can vary by a factor of two. If the fibre CSA in three or more biopsies is measured, it is sufficient to measure only 25 fibres in each biopsy. If fewer than three biopsies are taken, there is no worthwhile reduction in sampling error when more than 100 fibres are measured. To determine the mean fibre CSA for a whole group of individuals, our preference is to maximize the number of individuals and take only single biopsies. In conclusion, to determine the mean fibre CSA for this particular muscle with a certain precision, we suggest analysis of three biopsies, taken from different depths of the muscle, and measurement of 25 fibres in each biopsy.
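The trade-off described here can be illustrated with a simple two-level components-of-variance model: with k biopsies of m fibres each, the variance of the whole-muscle mean is sigma_between^2/k + sigma_within^2/(k*m), so extra biopsies shrink both terms while extra fibres per biopsy only shrink the second. The variance components below are hypothetical, not values from the paper.

```python
def mean_fibre_area_se(sigma_between, sigma_within, n_biopsies, fibres_per_biopsy):
    # Standard error of the estimated whole-muscle mean fibre area under a
    # two-level (between-biopsy / within-biopsy) components-of-variance model.
    var = (sigma_between ** 2 / n_biopsies
           + sigma_within ** 2 / (n_biopsies * fibres_per_biopsy))
    return var ** 0.5
```

With hypothetical components, three biopsies of 25 fibres give a smaller standard error than one biopsy of 100 fibres, matching the abstract's point that the number of biopsies matters more than the fibre count per biopsy.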
- Published
- 1989
- Full Text
- View/download PDF
92. Multiscale sources of spatial variation in soil. III. Improved methods for fitting the nested model to one-dimensional semivariograms
- Author
-
P. A. Burrough and Charles C. Taylor
- Subjects
Mathematics (miscellaneous) ,Hydrogeology ,Fractal ,Series (mathematics) ,Statistics ,Earth and Planetary Sciences (miscellaneous) ,Degrees of freedom (statistics) ,Spatial variability ,Generalized least squares ,Variogram ,Nested set model ,Mathematics - Abstract
The previous paper in this series presented a one-dimensional stochastic nested model to account for superimposed sources of soil variation at various scales. This paper shows how the nested model can be fitted to experimental data using weighted or generalized least-squares methods that account for correlations between consecutive terms that had previously been neglected. This paper also presents a method of estimating “effective degrees of freedom” for each sampling interval and thus for estimating 90% confidence limits for the semivariogram of the nested model.
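For orientation, the one-dimensional experimental semivariogram to which such nested models are fitted can be computed as follows. This is a generic sketch for a regularly spaced transect, not the paper's weighted/generalized least-squares fitting procedure.

```python
def empirical_semivariogram(z, max_lag):
    # One-dimensional empirical semivariogram for a regularly spaced
    # transect: gamma(h) = mean of (z[i+h] - z[i])^2 / 2 at each lag h.
    gamma = {}
    n = len(z)
    for h in range(1, max_lag + 1):
        diffs = [(z[i + h] - z[i]) ** 2 for i in range(n - h)]
        gamma[h] = 0.5 * sum(diffs) / len(diffs)
    return gamma
```

A strictly alternating series 0, 1, 0, 1, … makes the behaviour easy to check: consecutive values always differ (gamma(1) = 0.5) while values two steps apart are identical (gamma(2) = 0).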
- Published
- 1986
- Full Text
- View/download PDF
93. Bootstrap choice of the smoothing parameter in kernel density estimation
- Author
-
Charles C. Taylor
- Subjects
Statistics and Probability ,Mean squared error ,Applied Mathematics ,General Mathematics ,Kernel density estimation ,Probability density function ,Density estimation ,Agricultural and Biological Sciences (miscellaneous) ,Kernel method ,Mean integrated squared error ,Resampling ,Statistics ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,Algorithm ,Smoothing ,Mathematics - Abstract
Cross-validation based on integrated squared error has already been applied to the choice of smoothing parameter in the kernel method of density estimation. In this paper, an alternative resampling plan, based on the bootstrap, is proposed in an attempt to estimate mean integrated squared error. This leads to a further data-based choice of smoothing parameter. The two methods are compared and some simulations and examples demonstrate the relative merits. For large samples, the bootstrap performs better than cross-validation for many distributions.
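The idea can be caricatured with a brute-force Monte Carlo version: for each candidate bandwidth, average the integrated squared difference between KDEs of bootstrap resamples and the KDE of the original data, and pick the minimiser. The paper derives a more refined estimator; the candidate grid, replication counts, and quadrature below are arbitrary choices.

```python
import math, random

def gauss_kde(x, data, h):
    # Gaussian kernel density estimate at point x with bandwidth h.
    c = 1.0 / (len(data) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - t) / h) ** 2) for t in data)

def bootstrap_mise(data, h, n_boot=20, grid=50, seed=0):
    # Monte Carlo estimate of MISE(h): mean integrated squared difference
    # between a resampled KDE and the original-data KDE.
    rng = random.Random(seed)
    lo, hi = min(data) - 1.0, max(data) + 1.0
    xs = [lo + (hi - lo) * (i + 0.5) / grid for i in range(grid)]
    ref = [gauss_kde(x, data, h) for x in xs]
    total = 0.0
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]
        ise = sum((gauss_kde(x, boot, h) - r) ** 2 for x, r in zip(xs, ref))
        total += ise * (hi - lo) / grid
    return total / n_boot

def choose_bandwidth(data, candidates):
    # Pick the candidate bandwidth minimising the bootstrap MISE estimate.
    return min(candidates, key=lambda h: bootstrap_mise(data, h))
```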
- Published
- 1989
- Full Text
- View/download PDF
94. Akaike's information criterion and the histogram
- Author
-
Charles C. Taylor
- Subjects
Statistics and Probability ,Bayesian information criterion ,Applied Mathematics ,General Mathematics ,Histogram ,Statistics ,Statistics, Probability and Uncertainty ,Akaike information criterion ,Stepwise regression ,General Agricultural and Biological Sciences ,Agricultural and Biological Sciences (miscellaneous) ,Class (biology) ,Mathematics - Abstract
By interpreting the histogram as a step-function, we explore the use of Akaike's information criterion in an automatic procedure to determine the histogram class width. We obtain an asymptotic relationship and present some results from a small simulation study.
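A minimal version of such a procedure treats the histogram with m equal-width classes as a step-function density and penalises its m - 1 free cell probabilities. This is an illustrative sketch of the idea rather than the paper's exact criterion.

```python
import math

def histogram_aic(data, m):
    # AIC for the histogram density with m equal-width classes over
    # [min(data), max(data)]: -2 log-likelihood + 2 (m - 1).
    lo, hi = min(data), max(data)
    width = (hi - lo) / m
    counts = [0] * m
    for t in data:
        counts[min(int((t - lo) / width), m - 1)] += 1
    n = len(data)
    # Density in class j is counts[j] / (n * width); empty classes contribute 0.
    loglik = sum(c * math.log(c / (n * width)) for c in counts if c > 0)
    return -2.0 * loglik + 2.0 * (m - 1)

def choose_bins(data, max_bins=20):
    # Class width follows from the chosen number of classes.
    return min(range(1, max_bins + 1), key=lambda m: histogram_aic(data, m))
```

On evenly spread data the penalty dominates and a single class is chosen; on clearly clustered data the likelihood term rewards a finer partition.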
- Published
- 1987
- Full Text
- View/download PDF
95. What is the cause of the ageing atrophy?
- Author
-
Jan Lexell, Michael Sjöström, and Charles C. Taylor
- Subjects
Fiber type ,Vastus lateralis muscle ,Physiology ,Skeletal muscle ,Anatomy ,Biology ,Thigh ,Amyotrophy ,medicine.disease ,Atrophy ,medicine.anatomical_structure ,Neurology ,Ageing ,medicine ,Neurology (clinical) ,Fiber - Abstract
In order to study the effects of increasing age on the human skeletal muscle, cross-sections (15 micron) of autopsied whole vastus lateralis muscle from 43 previously physically healthy men between 15 and 83 years of age were prepared and examined. The data obtained on muscle area, total number, size, proportion and distribution of type 1 (slow-twitch) and type 2 (fast-twitch) fibers were analysed using multivariate regression. The results show that the ageing atrophy of this muscle begins around 25 years of age and thereafter accelerates. This is caused mainly by a loss of fibers, with no predominant effect on any fiber type, and to a lesser extent by a reduction in fiber size, mostly of type 2 fibers. The results also suggest the occurrence of several other age-related adaptive mechanisms which could influence fiber sizes and fiber number, as well as enzyme histochemical fiber characteristics.
- Published
- 1988
- Full Text
- View/download PDF
96. A new method for unfolding sphere size distributions
- Author
-
Charles C. Taylor
- Subjects
Matrix (mathematics) ,Histology ,Distribution (mathematics) ,Opacity ,Quartic function ,Mathematical analysis ,Kernel density estimation ,Statistical parameter ,Piecewise ,Probability density function ,Pathology and Forensic Medicine ,Mathematics - Abstract
Spherical particles are embedded in an opaque matrix. Circular profiles with radii X1, …, Xn are then observed from a cross-section. To estimate the distribution of the particle sizes a two-stage method is proposed. The probability density of the profile radii is statistically estimated by a smooth, piecewise quartic polynomial, and then an inversion formula is used. The method has some advantages over existing techniques in that the estimate is continuous, is quite straightforward computationally and it involves a less subjective choice of statistical parameters. In the case of several types of distributions, the procedure performed well for simulated data.
- Published
- 1983
- Full Text
- View/download PDF
97. Analysis of sampling errors in biopsy techniques using data from whole muscle cross sections
- Author
-
Michael Sjöström, Charles C. Taylor, and Jan Lexell
- Subjects
Adult ,Male ,Muscle biopsy ,medicine.diagnostic_test ,Fiber type ,Adolescent ,Physiology ,business.industry ,Muscles ,Biopsy, Needle ,Statistics as Topic ,Sampling error ,Anatomy ,Biology ,Physiology (medical) ,Biopsy ,medicine ,Humans ,Sampling (medicine) ,Fiber ,Nuclear medicine ,business ,Young male - Abstract
Because of the large variability in the proportion of fiber types within a whole muscle, a single biopsy is a poor estimator of the fiber type proportion for the whole muscle. Data on the proportions of type I and II fibers, obtained from cross sections of whole human muscles (vastus lateralis) from young male individuals, have therefore been analyzed statistically in order to determine the sampling errors involved in muscle biopsy techniques. For the purpose of obtaining a good estimate of the fiber type proportion in a biopsy, counting all fibers is of great benefit compared with counting only half of them. The number of biopsies required to achieve a given sampling error of the mean proportion of fiber types in the whole muscle can vary by a factor of six. If fewer than three biopsies are taken from a muscle, there is a substantial reduction in sampling error when each biopsy contains at least 600 fibers; with more than three biopsies, there is only a small gain in sampling more than 150 fibers. The precision of the estimate of the mean proportion of fiber types for a group increases with both the number of biopsies per individual and the number of individuals. In conclusion, for the muscle in this study, complete counting of three biopsies, each with more than 150 fibers, sampled from different depths of the muscle, is recommended.
- Published
- 1985
98. Variability in muscle fibre areas in whole human quadriceps muscle. How much and why?
- Author
-
Jan Lexell and Charles C. Taylor
- Subjects
Adenosine Triphosphatases ,Adult ,Male ,education.field_of_study ,Physiology ,Vastus lateralis muscle ,Histocytochemistry ,Muscles ,Population ,Quadriceps muscle ,Small sample ,Anatomy ,Biology ,Myofibrils ,Reference Values ,Needle biopsy ,Humans ,Muscle fibre ,Myofibril ,education ,Fibre type - Abstract
To determine the variability in fibre areas in the human vastus lateralis muscle, cross-sections (15 microns) of whole autopsied muscles from eight young men have been prepared, and the cross-sectional area (CSA) of 375 type 1 and 375 type 2 fibres has been measured in five different regions throughout each muscle. The CSA of both fibre types varied significantly within all muscle cross-sections. Fibres in the deep parts of the muscle were larger than superficially. There was a significant correlation between the CSA of the two fibre types within each region: if a fibre of a given type was small, or large, the other fibre type was also small, or large. The CSA of type 2 fibres was larger than the CSA of type 1 fibres in 26 of the 40 regions: regions with type 1 fibres larger than type 2 fibres were mostly (71%) found deep in the muscle. The standard deviation of the CSA of type 1 fibres was significantly larger than for type 2 fibres in 35 of the 40 regions. In conclusion, the CSA of the different fibre types in the vastus lateralis of young men varies non-randomly. The pattern of variation, both throughout the muscle and in small sample regions, supports the general opinion that the functional demands placed on the fibre population are an important factor in the development of the fibre properties.
- Published
- 1989
99. Statistical approaches to image restoration
- Author
-
A. N. Walder, Kanti V. Mardia, J.D. Burrows, J.A. Little, and Charles C. Taylor
- Subjects
Point spread function ,Deblurring ,Pixel ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Thresholding ,Image (mathematics) ,Geography ,Computer vision ,Artificial intelligence ,business ,Spatial analysis ,Image restoration - Abstract
Our work has been based on X-ray images of suitcases typical of those which can be seen at an airport. Specifically we have been trying to locate components, such as a detonator, wires and (less successfully) plastic explosive, which would constitute a bomb. Standard thresholding techniques to extract lead solder, for example, do not use any spatial information. By using prior knowledge, a number of contextual techniques have been explored. Our first step is to use a deblurring algorithm on the image; it is convolved with a blurred point spread function and then deconvolved with a prior encouraging neighbouring pixels to be alike. The results obtained always seem to improve the visual quality of images.
100. Letter to the Editor
- Author
-
Charles C. Taylor
- Subjects
Ophthalmology ,Rehabilitation - Published
- 1970
- Full Text
- View/download PDF