268 results for "univariate"
Search Results
2. The cost-effectiveness of transcatheter aortic valve implantation: exploring the Italian National Health System perspective and different patient risk groups
- Author
-
G. Barbieri, F Meucci, Giuseppe Turchetti, F Saia, Valentina Lorenzoni, P Candolfi, S Berti, G L Martinelli, and A G Cerillo
- Subjects
Cost-effectiveness, Cost-Benefit Analysis, Economics, Econometrics and Finance (miscellaneous), Time horizon, Transcatheter Aortic Valve Replacement, Indirect costs, Aortic valve replacement, Humans, Stroke, Health care economics and organizations, Heart Valve Prosthesis Implantation, Transcatheter aortic valve implantation, Health economics, Health Policy, Aortic stenosis, Univariate, Aortic Valve Stenosis, Quality-adjusted life year, Treatment Outcome, Italy, Emergency medicine, Quality-Adjusted Life Years - Abstract
Objectives To assess the cost-effectiveness (CE) of transcatheter aortic valve implantation (TAVI) in Italy, considering patient groups with different surgical risk. Methods A Markov model with a 1-month cycle length, comprising eight health states defined by the New York Heart Association (NYHA) functional classes I–IV, with and without stroke, plus death, was used to estimate the CE of TAVI for intermediate-risk, high-risk and inoperable patients, with surgical aortic valve replacement or medical treatment as the comparator according to the patient group. The Italian National Health System perspective and a 15-year time horizon were adopted. In the base-case analysis, effectiveness data were retrieved from published efficacy data and total direct costs (euros) were estimated from national tariffs. A scenario analysis using a micro-costing approach to estimate procedural costs was also considered. The incremental cost-effectiveness ratio (ICER) was expressed both as cost per life year gained (LYG) and as cost per quality-adjusted life year (QALY). All outcomes and costs were discounted at 3% per annum. Univariate and probabilistic sensitivity analyses (PSA) were performed to assess the robustness of the results. Results Over the 15-year time horizon, the higher acquisition costs for TAVI were partially offset in all risk groups by its effectiveness and safety profile. ICERs were €8338/QALY, €11,209/QALY and €10,133/QALY for intermediate-risk, high-risk and inoperable patients, respectively. ICER values were slightly higher in the scenario analysis. PSA suggested consistency of results. Conclusions TAVI would be considered cost-effective at frequently cited willingness-to-pay thresholds; further studies could clarify the CE of TAVI in real-life scenarios.
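The core ICER arithmetic described in this abstract (incremental discounted costs over incremental discounted QALYs, at 3% per annum) can be sketched as below; all cost and QALY figures in the snippet are made-up placeholders, not values from the study.

```python
# Sketch of an incremental cost-effectiveness ratio (ICER) with 3% annual
# discounting over a 15-year horizon. All numbers are hypothetical.

def discounted_total(yearly_values, rate=0.03):
    """Sum a stream of yearly values, discounting year t by (1 + rate)^t."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(yearly_values))

def icer(costs_new, qalys_new, costs_old, qalys_old, rate=0.03):
    """Incremental cost per QALY gained of the new strategy vs the comparator."""
    delta_cost = discounted_total(costs_new, rate) - discounted_total(costs_old, rate)
    delta_qaly = discounted_total(qalys_new, rate) - discounted_total(qalys_old, rate)
    return delta_cost / delta_qaly

# 15-year horizon: hypothetical yearly costs (EUR) and QALYs per patient
tavi_costs = [30000] + [1500] * 14   # high acquisition cost, lower follow-up
savr_costs = [22000] + [1800] * 14
tavi_qalys = [0.80] * 15
savr_qalys = [0.72] * 15

print(round(icer(tavi_costs, tavi_qalys, savr_costs, savr_qalys)))
```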
- Published
- 2021
3. Spectral methods for imputation of missing air quality data
- Author
-
Moshenberg, Shai, Lerner, Uri, and Fishbain, Barak
- Published
- 2015
- Full Text
- View/download PDF
4. Approximation of Irregular Geometric Data by Locally Calculated Univariate Cubic L1 Spline Fits
- Author
-
Wang, Ziteng, Lavery, John, and Fang, Shu-Cherng
- Published
- 2014
- Full Text
- View/download PDF
5. Cryptographic Applications of Capacity Theory: On the Optimality of Coppersmith’s Method for Univariate Polynomials
- Author
-
Brett Hemenway, Zachary Scherr, Nadia Heninger, and Ted Chinburg
- Subjects
Discrete mathematics, Polynomial, Univariate, Auxiliary function, Prime factor, Algebraic curve, Lattice reduction, Coppersmith's method, Monic polynomial, Mathematics - Abstract
We draw a new connection between Coppersmith’s method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith’s method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith’s bound for univariate polynomials is optimal in the sense that there are no auxiliary polynomials of the type he used that would allow finding roots of size \(N^{1/d+\epsilon }\) for any monic degree-d polynomial modulo N. Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of improvements for special cases or even superpolynomial-time improvements to Coppersmith’s bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size \(N^{1/d+\epsilon }\) unless N has a very small prime factor.
- Published
- 2016
- Full Text
- View/download PDF
6. A Neural Network-Based Forecasting Model for Univariate Sales Forecasting
- Author
-
Zhaoxia Guo
- Subjects
Sales forecasting, Artificial neural network, Computer science, Univariate, Training, Machine learning, Sample size determination, Artificial intelligence, Probabilistic forecasting, Time series - Abstract
This chapter addresses the time series forecasting performance of sparsely connected neural networks (SCNNs). A novel type of SCNN, based on Apollonian networks, is presented. On three types of publicly available benchmark data, a series of experiments is conducted to compare the forecasting performance of the proposed SCNNs, randomly connected SCNNs and traditional feed-forward neural networks. The comparison shows that, in terms of training speed and forecasting accuracy, the proposed networks give the best time series forecasting performance and the traditional networks the worst. The performance of the proposed SCNNs is evaluated further with different training sample sizes and training accuracy measures. The experimental results indicate that larger training sample sizes do not necessarily give better forecasts, while forecasts based on the training accuracy measures MAD and MAPE are generally superior to those based on MSE and MASE.
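The training accuracy measures this chapter compares (MAD, MAPE, MSE, MASE) are standard; a minimal sketch of how each is computed, with illustrative series values:

```python
# Standard forecast accuracy measures; `y`/`yhat` are illustrative data only.

def mad(actual, forecast):
    """Mean absolute deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error (actual values must be nonzero)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mase(actual, forecast):
    """Mean absolute scaled error: MAD scaled by the naive one-step forecast."""
    naive_mad = sum(abs(actual[t] - actual[t - 1])
                    for t in range(1, len(actual))) / (len(actual) - 1)
    return mad(actual, forecast) / naive_mad

y = [112, 118, 132, 129, 121]
yhat = [110, 120, 130, 131, 119]
print(mad(y, yhat), mse(y, yhat))
```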
- Published
- 2016
- Full Text
- View/download PDF
7. Solving Linear Equations Modulo Unknown Divisors: Revisited
- Author
-
Liqiang Peng, Rui Zhang, Yao Lu, and Dongdai Lin
- Subjects
Discrete mathematics, Divisor, Modulo, Univariate, Integer, Prime, Cryptanalysis, Linear equation, Mathematics - Abstract
We revisit the problem of finding small solutions to a collection of linear equations modulo an unknown divisor p of a known composite integer N. At CaLC 2001, Howgrave-Graham introduced an efficient algorithm for solving univariate linear equations; since then, two forms of multivariate generalizations have been considered in the context of cryptanalysis: modular multivariate linear equations, by Herrmann and May (Asiacrypt '08), and simultaneous modular univariate linear equations, by Cohn and Heninger (ANTS '12). These algorithms have many important applications in cryptanalysis, such as the factoring-with-known-bits problem, fault attacks on RSA signatures, and analysis of the approximate GCD problem. In this paper, by introducing multiple parameters, we propose several generalizations of the above equations. The motivation behind these extensions is that some attacks on RSA variants can be reduced to solving these generalized equations, to which previous algorithms do not apply. We present new approaches to solve them; compared with previous methods, our new algorithms are more flexible and especially suitable for some cases. Applying our algorithms, we obtain the best analytical/experimental results for some attacks on RSA and its variants. Specifically:
- We improve May's results (PKC '04) on the small secret exponent attack on the RSA variant with moduli N = p^r q (r ≥ 2).
- We experimentally improve Boneh et al.'s algorithm (Crypto '98) for factoring N = p^r q (r ≥ 2) with known bits.
- We significantly improve the Jochemsz-May attack (Asiacrypt '06) on Common Prime RSA.
- We extend Nitaj's result (Africacrypt '12) on weak encryption exponents of RSA and CRT-RSA.
- Published
- 2015
- Full Text
- View/download PDF
8. Testing a Statistical Hypothesis
- Author
-
Thorsten Dickhaus and Vladimir Spokoiny
- Subjects
Normal distribution, Alternative hypothesis, Likelihood-ratio test, Univariate, Pearson's chi-squared test, Applied mathematics, Null hypothesis, Statistical hypothesis testing, Mathematics, Univariate normal distribution - Abstract
This chapter deals with parametric test theory for a sample of independent and identically distributed observables. First, we introduce basic notions like null hypothesis, alternative hypothesis, errors of the first and of the second kind, significance level, and power. Then, we develop the theory of likelihood ratio tests, starting with the Neyman–Pearson lemma for testing simple hypotheses and some extensions to test problems with composite hypotheses. Based on exact distribution theory, Z-tests, t-tests, and chi-square tests are constructed for the parameters of a univariate normal distribution. Finally, likelihood ratio tests for general univariate exponential families are discussed and illustrated by means of several examples.
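As a small illustration of the univariate constructions described here, the one-sample t-statistic for the mean of a normal sample can be computed as follows (sample values are illustrative):

```python
# One-sample t-statistic for H0: mean = mu0, from a univariate normal sample.
import math

def t_statistic(sample, mu0):
    """t = sqrt(n) * (xbar - mu0) / s, with s the sample standard deviation."""
    n = len(sample)
    xbar = sum(sample) / n
    s2 = sum((v - xbar) ** 2 for v in sample) / (n - 1)  # unbiased variance
    return math.sqrt(n) * (xbar - mu0) / math.sqrt(s2)

x = [5.1, 4.9, 5.3, 5.0, 5.2]
print(t_statistic(x, mu0=5.0))
```

Under the null hypothesis this statistic follows a t-distribution with n - 1 degrees of freedom, which is what the chapter's exact distribution theory provides.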
- Published
- 2014
- Full Text
- View/download PDF
9. Seemingly Unrelated Regressions
- Author
-
Badi H. Baltagi
- Subjects
Multivariate statistics, Statistics, Univariate, Single equation, Seemingly unrelated regressions, Mathematics - Abstract
When asked “How did you get the idea for SUR?”, Zellner responded: “On a rainy night in Seattle in about 1956 or 1957, I somehow got the idea of algebraically writing a multivariate regression model in single-equation form. When I figured out how to do that, everything fell into place, because then many univariate results could be carried over to apply to the multivariate system, and the analysis of the multivariate system is much simplified notationally, algebraically and conceptually.” Read the interview of Professor Arnold Zellner by Rossi (1989, p. 292).
- Published
- 2014
- Full Text
- View/download PDF
10. An Extension and Efficient Calculation of the Horner’s Rule for Matrices
- Author
-
Katsuyoshi Ohara, Akira Terui, and Shinichi Tajima
- Subjects
Discrete mathematics, Matrix, Degree, Univariate, Symbolic computation, Extension, Algorithm, Mathematics - Abstract
We propose an efficient method for calculating “matrix polynomials” by extending Horner's rule for univariate polynomials. We extend Horner's rule by partitioning it by a given degree to reduce the number of matrix-matrix multiplications. With this extension, we show that matrix polynomials can be calculated more efficiently than with the naive Horner's rule. An implementation of our algorithm is available in the computer algebra system Risa/Asir, and our experiments have demonstrated that, at a suitable degree of partitioning, the new algorithm needs significantly less computing time and much less memory than the naive Horner's rule. Furthermore, experiments show that the new algorithm is effective for matrix polynomials not only over multiple-precision integers but also over fixed-precision (IEEE standard) floating-point numbers.
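The partitioned Horner evaluation this abstract describes can be sketched as below. This is a Paterson-Stockmeyer-style toy version in plain Python with hypothetical helper names, not the Risa/Asir implementation; precomputing A, ..., A^(k-1) and one extra power A^k lets the Horner loop run over degree-k blocks, so the number of matrix-matrix products drops.

```python
# Evaluate p(A) = c_0 I + c_1 A + ... + c_n A^n with Horner's rule on
# degree-k blocks. Plain lists stand in for a real matrix type.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(c, X):
    return [[c * x for x in row] for row in X]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matrix_poly(coeffs, A, k=2):
    """Evaluate sum coeffs[i] * A^i; coeffs are listed lowest degree first."""
    n = len(A)
    powers = [identity(n)]                  # I, A, ..., A^(k-1)
    for _ in range(k - 1):
        powers.append(matmul(powers[-1], A))
    Ak = matmul(powers[-1], A)              # A^k, the Horner "base"
    blocks = [coeffs[i:i + k] for i in range(0, len(coeffs), k)]

    def block_value(block):                 # sum block[j] * A^j, j < k
        acc = scale(block[0], powers[0])
        for j in range(1, len(block)):
            acc = madd(acc, scale(block[j], powers[j]))
        return acc

    result = block_value(blocks[-1])
    for block in reversed(blocks[:-1]):     # Horner step in A^k
        result = madd(matmul(result, Ak), block_value(block))
    return result

A = [[1, 1], [0, 1]]
print(matrix_poly([1, 2, 3, 4], A, k=2))   # I + 2A + 3A^2 + 4A^3
```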
- Published
- 2014
- Full Text
- View/download PDF
11. Non-uniform Interpolatory Subdivision Based on Local Interpolants of Minimal Degree
- Author
-
Jörg Peters and Kestutis Karčiauskas
- Subjects
Spline, Univariate, Applied mathematics, Subdivision, Discrete curvature, Mathematics - Abstract
This paper presents new univariate linear non-uniform interpolatory subdivision constructions that yield high smoothness, C^3 and C^4, and are based on least-degree spline interpolants. This approach is motivated by evidence, partly presented here, that constructions based on high-degree local interpolants fail to yield satisfactory shape, especially for sparse, non-uniform samples. While this improves on earlier schemes, a broad consideration of alternatives yields two technically simpler constructions with comparable shape and smoothness: careful pre-processing of sparse, non-uniform samples, and interlaced fitting with splines of increasing smoothness. We briefly compare these solutions to recent non-linear interpolatory subdivision schemes.
- Published
- 2014
- Full Text
- View/download PDF
12. A Statistical Model for Higher Order DPA on Masked Devices
- Author
-
Pei Luo, Liwei Zhang, A. Adam Ding, and Yunsi Fei
- Subjects
Computer science, Univariate, Statistical model, Cryptography, Masking, Formal proof, Power analysis, Side channel attack, Block cipher - Abstract
A popular and effective countermeasure to protect block cipher implementations against differential power analysis (DPA) attacks is to mask the internal operations of the cryptographic algorithm with random numbers. While the masking technique resists first-order univariate DPA attacks, higher-order multivariate attacks have been able to break masked devices. In this paper, we formulate a statistical model for higher-order DPA attacks. We derive an analytic success-rate formula that distinctly shows the effects of the algorithmic confusion property, the signal-to-noise ratio (SNR), and masking on the leakage of masked devices. It further provides a formal proof that the centered product combination function is optimal for higher-order attacks in very noisy scenarios. We believe that the statistical model fully reveals how the higher-order attack works around masking, and offers good insights for embedded system designers implementing masking techniques.
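The centered product combination function discussed in this abstract can be illustrated with a toy second-order simulation: Hamming-weight leakage of a Boolean-masked byte and of its mask are centered and multiplied, and the product correlates with the unmasked sensitive value even though each trace alone does not. All parameters (noise level, trace count) are illustrative.

```python
# Toy second-order DPA: centered product combining on simulated HW leakage.
import random

def hw(x):
    """Hamming weight of an integer."""
    return bin(x).count("1")

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
N = 20000
secrets = [random.randrange(256) for _ in range(N)]
masks = [random.randrange(256) for _ in range(N)]
l1 = [hw(s ^ m) + random.gauss(0, 0.5) for s, m in zip(secrets, masks)]  # masked value
l2 = [hw(m) + random.gauss(0, 0.5) for m in masks]                       # mask

m1, m2 = sum(l1) / N, sum(l2) / N
combined = [(a - m1) * (b - m2) for a, b in zip(l1, l2)]  # centered product
model = [hw(s) for s in secrets]                          # attacker's hypothesis
print(corr(combined, model))  # noticeably nonzero despite the masking
```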
- Published
- 2014
- Full Text
- View/download PDF
13. Brainatic: A System for Real-Time Epileptic Seizure Prediction
- Author
-
Gianpietro Favaro, Bruno Direito, Cesar Teixeira, Vincent Navarro, Andreas Schulze-Bonhage, Francisco Sales, António Dourado, Catalina Alvarado, Michel Le Van Quyen, Matthias Ihle, Björn Schelter, Hinnerk Feldwisch-Drentrup, and Mojtaba Bandarabadi
- Subjects
Artificial neural network, Computer science, Univariate, Pattern recognition, Support vector machine, Software, Multilayer perceptron, Radial basis function, False alarm, Epileptic seizure, Artificial intelligence - Abstract
A new system for real-time scalp-EEG-based epileptic seizure prediction, named Brainatic, is presented, based on real-time classification by machine learning methods. The system enables previously trained classifiers to be used for real-time seizure prediction. The software computes 22 univariate measures (features) per electrode, and classifies using support vector machines (SVM), multilayer perceptron (MLP) neural networks and radial basis function (RBF) neural networks. Brainatic was able to operate in real time on a dual Intel® Atom™ netbook with 2 GB of RAM, and was used to perform the clinical and ambulatory tests of the EU project EPILEPSIAE.
- Published
- 2014
- Full Text
- View/download PDF
14. Multivariate Dimension Polynomials of Inversive Difference Field Extensions
- Author
-
Alexander Levin
- Subjects
Pure mathematics, Dimension, Difference polynomials, Field extension, Univariate, Inversive, Term, Wu's method of characteristic set, Mathematics - Abstract
In this paper we introduce a method of characteristic sets with respect to several term orderings for inversive difference polynomials. Using this technique, we prove the existence and obtain a method of computation of multivariate dimension polynomials of finitely generated inversive difference field extensions. We also find new invariants of such extensions that are not carried by univariate dimension polynomials.
- Published
- 2014
- Full Text
- View/download PDF
15. 3D Face Recognition by Functional Data Analysis
- Author
-
Dania Porro-Muñoz, Anier Revilla-Eng, Isneri Talavera-Bustamante, Stefano Berretti, and Francisco José Silva-Mata
- Subjects
Computer science, Univariate, Functional data analysis, Basis function, Pattern recognition, Feature selection, Facial recognition system, Least squares, Face, Three-dimensional face recognition, Representation, Curse of dimensionality - Abstract
This work proposes the use of functional data analysis to represent 3D faces for recognition tasks. This approach allows exploiting and studying the continuous nature of this type of data. The basic idea of the proposal is to approximate the 3D face surface through an expansion over a set of basis functions. These functions are used both for a global representation of the entire face and for local representations, where pre-selected face regions are used to construct multiple local representations. In both cases, the functions are fitted to the 3D data by the least squares method. Univariate attribute selection is finally applied to reduce the dimensionality of the new representation. The experiments demonstrate the validity of the proposed approach, showing competitive results with respect to state-of-the-art solutions. Moreover, the dimensionality of the data is considerably reduced with respect to the original size, which is one of the goals of using this approach.
- Published
- 2014
- Full Text
- View/download PDF
16. The Basic Polynomial Algebra Subprograms
- Author
-
Marc Moreno Maza, Svyatoslav Covanov, Farnam Mansouri, Ning Xie, Yuzhen Xie, and Changbo Chen
- Subjects
Algebra, Polynomial, Computer science, Factorization of polynomials, Algebra representation, Univariate, Multiplication, Basic Linear Algebra Subprograms, Prime, Integer - Abstract
The Basic Polynomial Algebra Subprograms (BPAS) library provides arithmetic operations (multiplication, division, root isolation, etc.) for univariate and multivariate polynomials over prime fields or with integer coefficients. The code is mainly written in CilkPlus [10], targeting multicore processors. The current distribution focuses on dense polynomials; the sparse case is work in progress. A strong emphasis is put on adaptive algorithms, as the library aims to support a wide variety of situations in terms of problem sizes and available computing resources. One of the purposes of the BPAS project is to take advantage of hardware accelerators in the development of polynomial system solvers. The BPAS library is publicly available in source form at www.bpaslib.org.
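The kind of basic routine a library like BPAS provides can be illustrated by schoolbook multiplication of dense univariate polynomials over a prime field Z/pZ, with coefficient lists stored lowest degree first. This sketch is not BPAS code; BPAS itself uses adaptive, parallel algorithms.

```python
# Schoolbook multiplication of dense univariate polynomials mod a prime p.
# Coefficient lists are lowest degree first: [c0, c1, ...] = c0 + c1*x + ...

def poly_mul_mod(f, g, p):
    """Return the coefficient list of f * g with coefficients reduced mod p."""
    result = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            result[i + j] = (result[i + j] + a * b) % p
    return result

# (1 + x) * (2 + x) = 2 + 3x + x^2, over Z/5Z
print(poly_mul_mod([1, 1], [2, 1], 5))
```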
- Published
- 2014
- Full Text
- View/download PDF
17. Adaptive Multiscale Time-Frequency Analysis
- Author
-
Cheolsoo Park, David Looney, Naveed ur Rehman, and Danilo P. Mandic
- Subjects
Multivariate statistics, Signal processing, Motor imagery, Computer science, Univariate, Common spatial pattern, Basis function, Pattern recognition, Artificial intelligence, Hilbert–Huang transform, Time–frequency analysis - Abstract
Time-frequency analysis techniques are now adopted as standard in many applied fields, such as bio-informatics and bioengineering, to reveal frequency-specific and time-locked event-related information in input data. Most standard time-frequency techniques, however, adopt fixed basis functions to represent the input data and are thus suboptimal. To address this, the empirical mode decomposition (EMD) algorithm has shown considerable prowess in the analysis of nonstationary data, as it offers a fully data-driven approach to signal processing. Recent multivariate extensions of the EMD algorithm, aimed at extending the framework to signals containing multiple channels, are even more pertinent in the many real-world scenarios where multichannel signals are commonly obtained, e.g., electroencephalogram (EEG) recordings. In this chapter, the multivariate extensions of EMD are reviewed, and it is shown how these extensions can be used to alleviate long-standing problems associated with the standard (univariate) EMD algorithm. The ability of the multivariate extensions of EMD to serve as a powerful real-world data analysis tool is demonstrated via simulations on biomedical signals.
- Published
- 2014
- Full Text
- View/download PDF
18. Bivariate Probit Analysis of the Differences Between Male and Female Formal Employment in Urban China
- Author
-
Guifu Chen and Shigeyuki Hamori
- Subjects
Multivariate probit model, Informal sector, Probit model, Workforce, Econometrics, Economics, Univariate, Conditional probability, Sample, Probit, Health care economics and organizations - Abstract
Using the pooled 2004 and 2006 data of the China Health and Nutrition Survey (CHNS) questionnaire, this chapter studies the differences between male and female employment in urban China, taking into account the interdependence between women's decisions to participate in the workforce and organizations' formal hiring choices. We probe this interdependence with a bivariate probit model. When certain unobserved factors that may influence both of these decisions are ignored, the estimated coefficients of the equation corresponding to the formal hiring of female employees are inconsistent. However, when results are obtained through a censored bivariate probit on an all-female sample, the conditional formal employment probability of women is about 3% lower than the unconditional probability obtained through a univariate probit on a sample of labor market participants. Moreover, the findings show that the male-female formal employment probability differential attributable to discrimination will be overestimated under a univariate probit model.
- Published
- 2013
- Full Text
- View/download PDF
19. Global Optimization of Linear Multiplicative Programming Using Univariate Search
- Author
-
Xue-gang Zhou
- Subjects
Auxiliary variables, Linear multiplicative programming, Mathematical optimization, Convex optimization, Multiplicative function, Univariate, Global optimization, Parametric statistics, Linear-fractional programming, Mathematics - Abstract
We show that, by using suitable transformations and introducing auxiliary variables, a linear multiplicative program can be converted into an equivalent parametric convex programming problem, parametric concave minimization problem, or parametric D.C. programming problem. Known methods for these problem classes then become available for globally solving the linear multiplicative program.
- Published
- 2013
- Full Text
- View/download PDF
20. Recent Univariate and Multivariate Statistical Techniques Applied to Gold Exploration in the Amapari Area, Amazon Region, Brazil
- Author
-
Luis Paulo Vieira Braga, Francisco José da Silva, and Claudio Gerheim Porto
- Subjects
Data set, Multivariate statistics, Geography, Biplot, Soil test, Amazon rainforest, Statistics, Linear model, Univariate, Econometrics - Abstract
The Amapari area has been extensively explored for gold using multi-element geochemical analysis of soil samples. Such data sets are usually treated with statistical techniques to identify mineralized geological units. One of the main concerns with the assay results was the large amount of censored data. To address this, the methodology suggested by Helsel [2] was applied to evaluate the impact of this limitation in the exploratory data analysis (EDA) phase. The HE plots for multivariate linear models introduced by Friendly [1] show data ellipsoids and biplots for the multivariate data, together with the corresponding hypothesis (H) and error (E) ellipsoids.
- Published
- 2013
- Full Text
- View/download PDF
21. Compositional Analysis in the Study of Mineralization Based on Stream Sediment Data
- Author
-
Renguang Zuo
- Subjects
Multivariate analysis, Geography, Principal component analysis, Statistics, Univariate, Estimator, Mineralogy, Bivariate analysis, Median absolute deviation, Compositional data, Robust principal component analysis - Abstract
Stream sediment data, widely used in geochemical exploration and environmental studies, are typical compositional data that should be opened prior to data analysis. In this study, the closure problem with geochemical exploration data is addressed from the three aspects of univariate, bivariate and multivariate data analysis in a case study from the southwestern Fujian depression belt in China. The results show that: (1) robust estimators, such as the median and the median absolute deviation, are useful for measuring the center and spread of the data in univariate analysis; (2) the isometric log-ratio (ilr) transformation should be applied to measure the variability and stability between two variables in bivariate analysis; and (3) robust principal component analysis should be applied to the ilr-transformed data to reduce the dimensionality of the multiple variables and to obtain a mineralization-linked principal component in multivariate analysis.
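The closure operation and the isometric log-ratio (ilr) transform mentioned above can be sketched as follows, using the pivot-coordinate form of ilr; the three-part element concentrations are illustrative.

```python
# Closure and pivot-coordinate ilr transform for compositional data.
import math

def close(parts):
    """Rescale positive parts so they sum to 1 (the closure operation)."""
    total = sum(parts)
    return [p / total for p in parts]

def ilr(parts):
    """Pivot-coordinate ilr: z_i = sqrt(i/(i+1)) * ln(gm(x_1..x_i) / x_{i+1})."""
    x = close(parts)
    z = []
    for i in range(1, len(x)):
        gm = math.exp(sum(math.log(v) for v in x[:i]) / i)  # geometric mean
        z.append(math.sqrt(i / (i + 1)) * math.log(gm / x[i]))
    return z

sample = [52.3, 14.1, 33.6]  # e.g. ppm of three elements in a stream sediment
print(ilr(sample))
```

A D-part composition maps to D - 1 unconstrained coordinates, which is why standard multivariate tools such as robust PCA can then be applied.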
- Published
- 2013
- Full Text
- View/download PDF
22. Parameter Estimation for an i.i.d. Model
- Author
-
Weining Wang, Vladimir Spokoiny, Wolfgang Karl Härdle, and Vladimir Panov
- Subjects
Exponential family, Estimation theory, Univariate, Applied mathematics, Least absolute deviations, Method of moments, Parametric family, Fisher information, Shape parameter, Mathematics - Abstract
This chapter is very important for understanding the whole book. It starts with very classical material: Glivenko–Cantelli results for the empirical measure that motivate the famous substitution principle. Then the method of moments is studied in more detail, including risk analysis and asymptotic properties. Some other classical estimation procedures are briefly discussed, including the methods of minimum distance, M-estimates, and their special cases: least squares, least absolute deviations, and maximum likelihood estimates. The concept of efficiency is discussed in the context of the Cramer–Rao risk bound, which is given in the univariate and multivariate cases. The last sections of Chap. 2 begin a smooth transition from classical to “modern” parametric statistics and reveal the approach of the book. The presentation focuses on (quasi) likelihood-based concentration and confidence sets. The basic concentration result is first introduced for the simplest Gaussian shift model and then extended to the case of a univariate exponential family in Sect. 2.11.
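The substitution principle behind the method of moments can be illustrated for a univariate normal sample: the unknown parameters are recovered by matching the first two empirical moments (the data values below are illustrative):

```python
# Method-of-moments estimator for the parameters of a univariate normal.
import math

def method_of_moments_normal(sample):
    """Return (mu_hat, sigma_hat) by matching the first two empirical moments."""
    n = len(sample)
    m1 = sum(v for v in sample) / n           # first moment  -> mu
    m2 = sum(v * v for v in sample) / n       # second moment
    return m1, math.sqrt(m2 - m1 ** 2)        # sigma^2 = m2 - m1^2

data = [4.8, 5.2, 5.0, 4.9, 5.1]
print(method_of_moments_normal(data))
```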
- Published
- 2013
- Full Text
- View/download PDF
23. A New Model-Free Control Chart for Monitoring Univariate Autocorrelated Processes
- Author
-
Zhen He, Yun Wang, Chi Zhang, and Xu-tao Zhang
- Subjects
Autoregressive model, Computer science, Autocorrelation, Statistics, Stability, Univariate, Control chart, False alarm, Data mining, Shewhart individuals control chart - Abstract
A new model-free control chart, called the chi-square control chart, is proposed. First, the autocorrelation function is used to measure the stability of the process. Second, an appropriate delay time is selected to reduce the data correlation and make the data approximately satisfy the IID assumption. Then, the chi-square statistics for monitoring are designed through data matching based on phase-space reconstruction. Guidelines for designing the control chart are presented with an example of an AR(1) process. Results show that the control chart can not only avoid false alarms when the process is in control, but can also promptly detect variation when the process goes out of control. The method needs no model fitting, and its autoregressive character makes data usage efficient, which makes it suitable for online quality monitoring.
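The first step described in this abstract, measuring process stability with the autocorrelation function, can be sketched as the sample ACF at a given lag (the data values are illustrative):

```python
# Sample autocorrelation function (ACF) of a univariate process at lag k.

def acf(series, lag):
    """Sample autocorrelation of `series` at the given lag (0 gives 1.0)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

# AR(1)-like data: each value drags along part of the previous one
data = [0.0, 0.8, 1.2, 1.5, 1.1, 0.9, 0.4, 0.1, -0.3, -0.2]
print([round(acf(data, k), 3) for k in (1, 2, 3)])
```

A slowly decaying ACF signals autocorrelation that a conventional Shewhart chart would misread as out-of-control variation, which is what motivates the delay-time selection step.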
- Published
- 2013
- Full Text
- View/download PDF
24. Multivariate Measurement System Analysis Based on Projection Pursuit Method
- Author
-
Xiaofang Wu, Liangxing Shi, and Zhen He
- Subjects
Measurement systems analysis, Multivariate statistics, Computer science, System of measurement, Projection pursuit, Genetic algorithm, Process capability index, Univariate, Data mining, Projection - Abstract
With the increasing automation of measurement processes and the complexity of products, measurement system analysis is becoming increasingly important. However, univariate measurement system analysis is difficult to apply directly to multiple correlated measured quality characteristics, and the univariate measurement system capability indices cannot be used for a multivariate measurement system. Therefore, in this paper projection pursuit is used to analyze the multivariate measurement system. The best projection direction is obtained by optimizing the projection direction with a genetic algorithm, and the relationship between the multivariate data and their projections is analyzed. Three common measurement system capability indices are then extended to the multivariate measurement system using the projection of the raw data, in order to evaluate multivariate measurement system capability. Finally, the proposed method is validated with an example. (Supported by the National Natural Science Foundation of China, Nos. 71102140 and 70931004.)
- Published
- 2013
- Full Text
- View/download PDF
25. The Median Hypothesis
- Author
-
Christopher J. C. Burges and Ran Gilad-Bachrach
- Subjects
Multivariate statistics, Statistical significance, Statistics, Econometrics, Test statistic, Univariate, p-value, Null hypothesis, Statistical hypothesis testing, Mathematics - Abstract
The classification task uses observations and prior knowledge to select a hypothesis that will predict class assignments well. In this work we ask: what is the best hypothesis to select from a given hypothesis class? To address this question we adopt a PAC-Bayesian approach. According to this viewpoint, the observations and prior knowledge are combined to form a belief probability over the hypothesis class. We therefore focus on the next part of the learning process, in which one has to choose the hypothesis to be used given the belief; we call this the hypothesis selection problem. Based on recent findings in PAC-Bayesian analysis, we suggest that a good hypothesis has to be close to the Bayesian optimal hypothesis. We define a measure of “depth” for hypotheses to quantify their proximity to the Bayesian optimal hypothesis, and we show that deeper hypotheses have stronger generalization bounds. We therefore propose algorithms to find the deepest hypothesis. Following the definitions of depth in multivariate statistics, we refer to the deepest hypothesis as the median hypothesis. We show that, similarly to the univariate and multivariate medians, the median hypothesis has good stability properties in terms of the breakdown point. Moreover, we show that the Tukey median is a special case of the median hypothesis; the algorithms proposed here therefore also provide a polynomial-time approximation for the Tukey median. This algorithm makes the mildest assumptions compared to other efficient approximation algorithms for the Tukey median.
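In one dimension, the depth notion being generalized here reduces to Tukey's halfspace depth, under which the median is the deepest point; a minimal sketch with illustrative data:

```python
# Tukey halfspace depth in 1D: the depth of a point is the smaller of the
# number of observations on either side, and a median maximizes it.

def halfspace_depth_1d(x, sample):
    """Tukey depth of x: min(#points <= x, #points >= x)."""
    left = sum(1 for s in sample if s <= x)
    right = sum(1 for s in sample if s >= x)
    return min(left, right)

def deepest_point(sample):
    """A sample point of maximal depth, i.e. a (sample) median."""
    return max(sample, key=lambda x: halfspace_depth_1d(x, sample))

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(deepest_point(data), halfspace_depth_1d(deepest_point(data), data))
```

The breakdown-point stability mentioned in the abstract is visible here: roughly half the sample must be corrupted before the deepest point can be pushed arbitrarily far away.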
- Published
- 2013
- Full Text
- View/download PDF
26. Predicting the Size of Test Suites from Use Cases: An Empirical Exploration
- Author
-
William Flageol, Mourad Badri, and Linda Badri
- Subjects
Computer science ,business.industry ,Univariate ,Logistic regression ,Machine learning ,computer.software_genre ,Reliability engineering ,Software ,Software quality assurance ,Use case ,Metric (unit) ,Test Management Approach ,Artificial intelligence ,business ,computer ,Predictive modelling - Abstract
Software testing plays a crucial role in software quality assurance. It is, however, a time- and resource-consuming process, so it is important to estimate as early as possible the effort required to test software. Unfortunately, little is known about predicting the testing effort. The study presented in this paper explores empirically the prediction of the testing effort from use cases. We address the testing effort from the perspective of test suite size. We used four metrics to characterize the size and complexity of use cases, and three metrics to quantify different perspectives of the size of the corresponding test suites. We used univariate logistic regression analysis to evaluate the individual effect of each use case metric on the size of test suites, and multivariate logistic regression analysis to explore the combined effect of the use case metrics. The performance of the prediction models was evaluated using receiver operating characteristic analysis. An experimental study, using data collected from five Java case studies, is reported, providing evidence that some of the use case metrics are significant predictors of the size of test suites.
- Published
- 2013
- Full Text
- View/download PDF
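The univariate logistic regression step described in this abstract can be sketched with the standard library alone; the metric values and labels below are hypothetical, standing in for a single use-case size metric and a binary "large test suite" indicator:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_univariate_logistic(x, y, lr=0.1, epochs=2000):
    # Gradient descent on the logistic log-likelihood for
    # P(y = 1 | x) = sigmoid(b0 + b1 * x).
    b0, b1, n = 0.0, 0.0, len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            err = sigmoid(b0 + b1 * xi) - yi
            g0 += err
            g1 += err * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical data: a use-case metric vs. whether its test suite was large.
x = [1, 2, 2, 3, 5, 6, 7, 8]
y = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1 = fit_univariate_logistic(x, y)
preds = [1 if sigmoid(b0 + b1 * xi) >= 0.5 else 0 for xi in x]
```

A positive slope b1 would indicate that larger use cases tend to come with larger test suites; the study additionally combines several such metrics in a multivariate model and evaluates the fits with ROC analysis.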
27. Multivariate ARCH Processes
- Author
-
Gilles Zumbach
- Subjects
Multivariate statistics ,Covariance matrix ,Univariate ,Applied mathematics ,Affine transformation ,Limit (mathematics) ,Arch ,Regularization (mathematics) ,Eigenvalues and eigenvectors ,Mathematics - Abstract
With a deep knowledge of the univariate processes and of the covariance matrix, multivariate ARCH processes can be studied. The general multivariate setup is presented. In order to limit the exploding number of parameters, the simplest linear ARCH process is studied first. Due to the very small eigenvalues, the covariance matrix is singular, and a regularization should be introduced in order to compute the empirical innovations. The relationship between the possible regularizations and the statistical properties of the innovations is presented. In a second step, the affine ARCH process is introduced, discussing the specific issues related to the introduction of the mean terms. In a third step, more general multivariate ARCH processes are summarized.
- Published
- 2013
- Full Text
- View/download PDF
28. Application of GIS Techniques for Landslide Susceptibility Assessment at Regional Scale
- Author
-
Samuele Segoni, Alessandro Battistini, Veronica Tofani, Goffredo Manzo, and Filippo Catani
- Subjects
Geography ,Univariate ,Landslide ,Land cover ,Runoff curve number ,Logistic regression ,Scale (map) ,Cartography ,Spatial analysis ,Regression - Abstract
We evaluated the landslide susceptibility of the Sicily region, Italy (25,000 km2), using a multivariate logistic regression model. The susceptibility model was implemented in a GIS environment using ArcSDM (Arc Spatial Data Modeller) to develop spatial prediction models from regional datasets. A newly developed algorithm was used to automatically extract the scarp area from the whole landslide polygon. From the many susceptibility factors which influence landslide occurrence, the following were chosen on the basis of a detailed analysis of the study area and univariate statistical analysis: slope gradient, lithology, land cover, a curve-number-derived index and a pluviometric anomaly index. All the logistic regression coefficients and parameters were calculated on a selected landslide training dataset, and the final susceptibility map for the whole area was derived by applying the fitted model. The results of the analysis were validated using an independent landslide dataset. On average, 81 % of the area affected by instability and 80 % of the area not affected by instability were correctly classified by the model.
- Published
- 2013
- Full Text
- View/download PDF
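As a rough sketch of how a fitted logistic susceptibility model is applied cell by cell, with entirely hypothetical coefficients (not the paper's values) for three of the factors named above:

```python
import math

# Hypothetical logistic regression coefficients for illustration only.
coef = {"intercept": -4.0, "slope_deg": 0.08, "pluv_anomaly": 1.2, "cn_index": 0.02}

def susceptibility(cell):
    # Logistic model: probability of landslide occurrence in a raster cell.
    z = coef["intercept"] + sum(coef[k] * v for k, v in cell.items())
    return 1.0 / (1.0 + math.exp(-z))

steep = susceptibility({"slope_deg": 35, "pluv_anomaly": 1.5, "cn_index": 80})
flat = susceptibility({"slope_deg": 5, "pluv_anomaly": 0.2, "cn_index": 40})
```

Thresholding such probabilities over the raster, and comparing against an independent landslide inventory, yields the kind of classification rates reported in the abstract.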
29. Better Lattice Constructions for Solving Multivariate Linear Equations Modulo Unknown Divisors
- Author
-
Noboru Kunihiro and Atsushi Takayasu
- Subjects
Combinatorics ,Discrete mathematics ,Multivariate statistics ,Divisor ,Modulo ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Greatest common divisor ,Univariate ,Cryptosystem ,Time complexity ,Linear equation ,Mathematics - Abstract
At CaLC 2001, Howgrave-Graham proposed a polynomial-time algorithm for solving univariate linear equations modulo an unknown divisor of a known composite integer, the so-called partially approximate common divisor problem. So far, two forms of multivariate generalizations of the problem have been considered in the context of cryptanalysis. The first is simultaneous modular univariate linear equations, for which a polynomial-time algorithm was proposed at ANTS 2012 by Cohn and Heninger. The second is modular multivariate linear equations, for which a polynomial-time algorithm was proposed at Asiacrypt 2008 by Herrmann and May. Both algorithms cover Howgrave-Graham’s algorithm for univariate cases. On the other hand, both multivariate problems also become identical to Howgrave-Graham’s problem in the asymptotic cases of the root bounds; however, the former algorithms do not cover Howgrave-Graham’s algorithm in such cases. In this paper, we introduce a strategy for natural algorithm constructions that takes into account the sizes of the root bounds, working out the selection of polynomials in constructing lattices. Our algorithms are superior to all known attacks that solve the multivariate equations and generalize to an arbitrary number of variables. They achieve better cryptanalytic bounds for some applications related to RSA cryptosystems.
- Published
- 2013
- Full Text
- View/download PDF
30. Detecting Statistically Significant Temporal Associations from Multiple Event Sequences
- Author
-
Jörg Sander and Han Liang
- Subjects
Multivariate statistics ,Computer science ,Spike train ,Variable time ,Time constraint ,Multiple time ,Univariate ,Series data ,Data mining ,computer.software_genre ,computer ,Occurrence time - Abstract
In this paper, we aim to mine temporal associations in multiple event sequences. It is assumed that a set of event sequences has been collected from an application, where each event has an id and an occurrence time. Our work is motivated by the observation that in practice many associated events in multiple temporal sequences do not occur concurrently but sequentially. We propose a two-phase method, called Multivariate Association Miner (MAM). In an empirical study, we apply MAM to two different application domains. Firstly, we use our method to detect multivariate motifs from multiple time series data. Existing approaches are all limited by assuming that the univariate elements of a multivariate motif occur completely or approximately synchronously. The experimental results on both synthetic and real data sets show that our method not only discovers synchronous motifs, but also finds non-synchronous multivariate motifs. Secondly, we apply MAM to mine frequent episodes from event streams. Current methods are all limited by requiring users to either provide possible lengths of frequent episodes or specify an inter-event time constraint for every pair of successive event types in an episode. The results on neuronal spike simulation data show that MAM automatically detects episodes with variable time delays.
- Published
- 2013
- Full Text
- View/download PDF
31. The Parametric Piecewise Representation
- Author
-
Brian A. Barsky
- Subjects
Surface (mathematics) ,Piecewise linear manifold ,Representation (systemics) ,Piecewise ,Univariate ,Applied mathematics ,Computer Science::Symbolic Computation ,Bivariate analysis ,Function (mathematics) ,Mathematics ,Parametric statistics - Abstract
The parametric representation of a curve has each component expressed as a separate univariate (single parameter) function while that of a surface has each component defined by a separate bivariate (two parameter) function.
- Published
- 2013
- Full Text
- View/download PDF
32. The Empirical Properties of Large Covariance Matrices
- Author
-
Gilles Zumbach
- Subjects
Kernel (image processing) ,Covariance matrix ,Univariate ,Applied mathematics ,Covariance and correlation ,Covariance ,Random matrix ,Eigenvalues and eigenvectors ,Eigendecomposition of a matrix ,Mathematics - Abstract
The knowledge acquired on single financial time series can now be applied to study the multivariate case. The first objective is to understand the generic properties of the covariance and correlation matrices. The definition of the covariance matrix uses the univariate long-memory kernel, hence providing the best short-term forecast suitable to build processes. The eigenvalue decomposition of the covariance matrix is presented, in order to study the properties of the spectrum and eigenvectors. For financial data, the dynamics of the eigenvalues is studied and compared to analytical results obtained from random matrix theory, while the eigenvectors dynamics point to the absence of clear invariant subspaces that would correspond to the established market modes.
- Published
- 2013
- Full Text
- View/download PDF
33. An Online Anomalous Time Series Detection Algorithm for Univariate Data Streams
- Author
-
Kishan G. Mehrotra, HuaMing Huang, and Chilukuri K. Mohan
- Subjects
Set (abstract data type) ,Series (mathematics) ,Computer science ,Univariate ,Control chart ,Anomaly detection ,Data mining ,Online algorithm ,computer.software_genre ,Algorithm ,Time complexity ,computer ,Distance measures - Abstract
We address the online anomalous time series detection problem among a set of series, combining three simple distance measures. This approach, akin to control charts, makes it easy to determine when a series begins to differ from other series. Empirical evidence shows that this novel online anomalous time series detection algorithm performs very well, while being efficient in terms of time complexity, when compared to approaches previously discussed in the literature.
- Published
- 2013
- Full Text
- View/download PDF
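A minimal stdlib sketch of the idea (not the authors' exact distance measures): score each series by its mean distance to the others and flag scores beyond a control-chart-style threshold:

```python
import statistics

def anomalous_series(series, k=2.0):
    # Score each series by its mean Euclidean distance to all others,
    # then flag scores exceeding mean + k standard deviations
    # (analogous to a control-chart limit).
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    scores = [statistics.mean(dist(s, t) for t in series if t is not s)
              for s in series]
    mu, sd = statistics.mean(scores), statistics.pstdev(scores)
    return [i for i, sc in enumerate(scores) if sc > mu + k * sd]

normal = [[i + 0.1 * j for i in range(10)] for j in range(5)]
outlier = [100.0] * 10
flagged = anomalous_series(normal + [outlier])
```

An online version would update the scores incrementally as new observations arrive, which is what makes the control-chart framing natural here.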
34. MEE with Discrete Errors
- Author
-
Luís A. Alexandre, Luís M. A. Silva, Joaquim P. Marques de Sá, and Jorge M. Santos
- Subjects
Variable (computer science) ,Computer science ,Probability mass function ,Univariate ,Decision tree ,Split point ,Information gain ,Algorithm ,Thresholding - Abstract
In this chapter we turn our attention to classifiers with a discrete error variable, E = T - Z. The need to operate with discrete errors arises when classifiers only produce a discrete output, as for instance the univariate data splitters used by decision trees. For regression-like classifiers, producing Z as a thresholding of a continuous output, Z = θ(Y), such a need does not arise. The present analysis of MEE with discrete errors, besides complementing our understanding of EE-based classifiers will also serve to lay the foundations of EE-based decision trees later in the chapter.
- Published
- 2013
- Full Text
- View/download PDF
35. Finding Outliers in Linear and Nonlinear Time Series
- Author
-
Pedro Galeano and Daniel Peña
- Subjects
Multivariate statistics ,Series (mathematics) ,Computer science ,Autoregressive conditional heteroskedasticity ,Univariate ,Nonlinear system ,Identification (information) ,ComputingMethodologies_PATTERNRECOGNITION ,parasitic diseases ,Outlier ,Econometrics ,population characteristics ,Autoregressive integrated moving average ,geographic locations ,health care economics and organizations - Abstract
The purpose of this contribution is to review outliers in both univariate and multivariate time series. The usual outlier types are presented in several frameworks including linear and nonlinear time series models. The key issues regarding identification of outliers and estimation of their effects in different settings are summarized.
- Published
- 2013
- Full Text
- View/download PDF
36. MAF: A Method for Detecting Temporal Associations from Multiple Event Sequences
- Author
-
Han Liang
- Subjects
Multivariate statistics ,Computer science ,Ask price ,Variable time ,Multiple time ,Univariate ,Series data ,Data mining ,computer.software_genre ,computer ,Occurrence time - Abstract
In this paper, we propose a two-phase method, called Multivariate Association Finder (MAF), to mine temporal associations in multiple event sequences. It is assumed that a set of event sequences, where each event has an id and an occurrence time, is collected from an application. Our work is motivated by the observation that many associated events in multiple temporal sequences do not occur concurrently but sequentially. In an empirical study, we apply our method to two different application domains. Firstly, we use MAF to detect multivariate motifs from multiple time series data. Existing approaches all assume that the univariate elements of a multivariate motif occur synchronously. The experimental results on both synthetic and real data sets show that our method finds both synchronous and non-synchronous multivariate motifs. Secondly, we apply our method to mine frequent episodes from event streams. Current methods often ask users to provide possible lengths of frequent episodes. The results on neuronal spike simulation data show that MAF automatically detects episodes with variable time delays.
- Published
- 2013
- Full Text
- View/download PDF
37. The Limiting Properties of Copulas Under Univariate Conditioning
- Author
-
Piotr Jaworski
- Subjects
Chaotic ,Econometrics ,Univariate ,Conditioning ,Applied mathematics ,Limit (mathematics) ,Element (category theory) ,Invariant (mathematics) ,Extreme value theory ,Mathematics ,Variable (mathematics) - Abstract
The dynamics of univariate conditioning of copulas with respect to the first variable is studied. Special attention is paid to the limiting properties when the first variable is attaining extreme values. We describe the copulas which are invariant with respect to the conditioning and study their sets of attraction. Furthermore we provide examples of the limit sets consisting of more than one element and discuss the chaotic nature of univariate conditioning.
- Published
- 2013
- Full Text
- View/download PDF
38. Multivariate Extremes: A Conditional Quantile Approach
- Author
-
Marie-Françoise Barme-Delcroix
- Subjects
Multivariate statistics ,Order statistic ,Univariate ,Applied mathematics ,Multivariate normal distribution ,Conditional probability distribution ,Extreme value theory ,Conditional variance ,Mathematics ,Quantile - Abstract
There is no natural ordering of a multidimensional space, so the extension of the definition of univariate quantiles to multivariate distributions is not straightforward. We use a definition based on the ordering of a multivariate sample according to an increasing family of curves, which we call isobars. For a given level u, a u-level isobar is defined as a level curve of the conditional distribution function of the radius given the angle (that is, as a conditional quantile), the sample points being defined by their polar coordinates. In this way, the maximum value of the sample is defined as the point belonging to the upper-level isobar, and it is genuinely a sample point. Moreover, the order statistics so defined may be characterized by a one-dimensional approach. We first recall some results concerning the weak stability of these extreme values. We then propose further applications: the definition of the corresponding record values for a multivariate distribution and the stability properties of this kind of record values.
- Published
- 2013
- Full Text
- View/download PDF
39. Referential kNN Regression for Financial Time Series Forecasting
- Author
-
Ruibin Zhang, Abdolhossein Sarrafzadeh, Shaoning Pang, Daisuke Inoue, and Tao Ban
- Subjects
Multivariate statistics ,business.industry ,Univariate ,Regression analysis ,Machine learning ,computer.software_genre ,Regression ,Financial correlation ,Financial time series forecasting ,ComputingMethodologies_PATTERNRECOGNITION ,Correlation analysis ,Artificial intelligence ,Data mining ,Time series ,business ,computer ,Mathematics - Abstract
In this paper we propose a new multivariate regression approach for financial time series forecasting based on knowledge shared by referential nearest neighbours. Our approach defines a two-tier architecture. In the top tier, the nearest neighbours that bear referential information for a target time series are identified by exploiting financial correlations in the historical data. Next, the future status of the target financial time series is inferred from its history using a multivariate k-Nearest-Neighbour (kNN) regression model that exploits the aggregated knowledge of all relevant referential nearest neighbours. The performance of the proposed multivariate kNN approach is assessed by empirical evaluation on nine years of S&P 500 stock data. The experimental results show that the proposed approach provides higher forecasting accuracy than univariate kNN regression.
- Published
- 2013
- Full Text
- View/download PDF
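The univariate kNN regression baseline that the paper compares against can be sketched as follows; the referential multivariate variant would additionally pool windows from correlated neighbour series:

```python
def knn_forecast(series, window=3, k=2):
    # Find the k historical windows closest to the most recent window
    # and average the values that followed them (univariate kNN regression).
    target = series[-window:]
    candidates = []
    for i in range(len(series) - window):
        w = series[i:i + window]
        d = sum((a - b) ** 2 for a, b in zip(w, target))
        candidates.append((d, series[i + window]))
    candidates.sort(key=lambda t: t[0])
    return sum(v for _, v in candidates[:k]) / k

trend = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3]
pred = knn_forecast(trend)
```

Here the last observed window [1, 2, 3] matches two historical windows exactly, both followed by 4, so the forecast is 4.0.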
40. On the Simplicity of Converting Leakages from Multivariate to Univariate
- Author
-
Oliver Mischke and Amir Moradi
- Subjects
Scheme (programming language) ,Power analysis ,Theoretical computer science ,Computer engineering ,Exploit ,Computer science ,Univariate ,Field-programmable gate array ,Secret sharing ,computer ,Masking (Electronic Health Record) ,Glitch ,computer.programming_language - Abstract
Several masking schemes to protect cryptographic implementations against side-channel attacks have been proposed. A few consider glitches and provide security proofs in the presence of such inherent phenomena in logic circuits. One scheme, based on multi-party computation protocols and utilizing Shamir's secret sharing, was presented at CHES 2011. It aims at providing security for hardware implementations --- mainly of AES --- against those sophisticated side-channel attacks that also take glitches into account. One part of this article deals with the practical issues and relevance of the aforementioned masking scheme. Following the recommendations given in the extended version of the mentioned article, we first provide a guideline on how to implement the scheme for the simplest settings. Constructing an exemplary design of the scheme, we provide practical side-channel evaluations based on a Virtex-5 FPGA. Our results demonstrate that the implemented scheme is indeed secure against univariate power analysis attacks given a basic measurement setup. In the second part of this paper we show how very simple changes to the measurement setup make it possible to exploit multivariate leakages while still performing a univariate attack. Using these techniques the scheme under evaluation can be defeated with only a moderate number of measurements. This applies not only to the scheme showcased here, but also to most other known masking schemes in which the shares of sensitive values are processed in adjacent clock cycles.
- Published
- 2013
- Full Text
- View/download PDF
41. Vector Generalized Linear Models: A Gaussian Copula Approach
- Author
-
Peter X.-K. Song, Mingyao Li, and Peng Zhang
- Subjects
Generalized linear model ,Multivariate statistics ,Linear regression ,Univariate ,Statistics::Methodology ,Generalized linear array model ,Applied mathematics ,Regression analysis ,Generalized estimating equation ,Generalized linear mixed model ,Mathematics - Abstract
In this chapter we introduce a class of multi-dimensional regression models for vector outcomes, termed vector generalized linear models (VGLMs), a multivariate analogue of the univariate generalized linear models (GLMs). A unified framework for such regression models is established using the Gaussian copula, accommodating discrete, continuous and mixed vector outcomes. Both full likelihood and composite likelihood estimation and inference are discussed. A Gauss–Newton type algorithm is suggested to carry out the simultaneous estimation of all model parameters. Numerical illustrations focus on VGLMs for correlated binary outcomes, correlated count outcomes, and mixed normal and binary outcomes. In simulation studies, we compare the VGLM to the popular generalized estimating equations (GEE) approach. The simulation results indicate that the VGLMs provide more efficient inference for the regression coefficients than the GEEs. The VGLM is also illustrated via real-data examples.
- Published
- 2013
- Full Text
- View/download PDF
42. Sparse Brain anatomical Network Based Classification of Schizophrenia Patients and Healthy Controls
- Author
-
Junjie Zheng, Huafu Chen, Heng Chen, and Yilun Wang
- Subjects
Training set ,Categorization ,Computer science ,business.industry ,Test set ,Univariate ,Brain cortex ,Pattern recognition ,Feature selection ,Artificial intelligence ,business ,Classifier (UML) ,Sparse regression - Abstract
In this study, we tested whether disturbed structural connectivity across the whole brain cortex can serve as a discriminating biomarker for schizophrenia. Anatomical fibre streamlines constructed on the AAL template from diffusion tensor imaging were used as potential features, and a linear SVM pattern classifier was used to categorize schizophrenia patients and healthy controls. We randomly divided the data into a training set containing 32 patients and 25 controls and a test set containing 31 patients and 24 controls, and compared two feature selection methods: 1) univariate t-test based filtering; 2) sparse regression based filtering. The sparse regression features correctly identified 97% of cases in the test dataset (96% sensitivity and 98% specificity), while the t-test-selected impaired connectivities achieved 94% accuracy (92% sensitivity and 96% specificity). The structural connectivities selected by sparse regression were consistent across 90% of individuals, 10 percentage points more than the t-test filtered features.
- Published
- 2013
- Full Text
- View/download PDF
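The univariate t-test filtering baseline can be sketched with the standard library; the toy matrix below is hypothetical, with feature 0 discriminative between the two classes and feature 1 pure noise:

```python
import statistics

def t_filter(X, y, top=2):
    # Rank features by absolute two-sample t statistic between classes
    # (the univariate filtering baseline compared in the study).
    def tstat(col):
        a = [X[i][col] for i in range(len(y)) if y[i] == 0]
        b = [X[i][col] for i in range(len(y)) if y[i] == 1]
        se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
        return abs(statistics.mean(a) - statistics.mean(b)) / se if se else 0.0
    scores = [(tstat(j), j) for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:top]]

X = [[0.10, 5.0], [0.20, 1.0], [0.15, 3.0],
     [0.90, 4.0], [1.00, 2.0], [0.95, 3.5]]
y = [0, 0, 0, 1, 1, 1]
ranked = t_filter(X, y, top=1)
```

Sparse regression, by contrast, selects features jointly, which is what allowed it to find a more stable connectivity signature in the study.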
43. Descriptive Analysis of Compositional Data
- Author
-
Raimon Tolosana-Delgado and K. Gerald van den Boogaart
- Subjects
Set (abstract data type) ,Multivariate statistics ,Biplot ,Descriptive statistics ,Principal component analysis ,Statistics ,Univariate ,Bivariate analysis ,Compositional data ,Mathematics - Abstract
The descriptive analysis of multivariate data is, in classical applications, mostly a univariate and bivariate description of the marginals. This is an inappropriate approach for compositional data, because the parts of a composition are intrinsically linked to each other: the dataset is multivariate in nature, not by decision of the analyst. Following the principle of working in coordinates and the golden rule of working with log ratios, we can structure a meaningful descriptive analysis of a compositional dataset in the following steps. In the first step, descriptive statistics for central tendency, global spread, and codependence are computed. The second step is a preliminary look at a set of predefined “marginals”, i.e., some ternary diagrams, to uncover relations between more than two parts. The third step is the exploration of the codependence structure through the biplot, a joint optimal representation of the variables and of the observations of a composition. Biplots are nevertheless explained in Chap. 6, because of their connection with principal component analysis. One of the aims of all the preceding phases should be to select a reasonable set of projections, or the selection of a basis: if this is achieved, the fourth step should be a classical univariate analysis of each coordinate.
- Published
- 2013
- Full Text
- View/download PDF
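The "work with log ratios" principle can be illustrated with the centred log-ratio (clr) transform, one standard choice of coordinates for a composition:

```python
import math

def clr(composition):
    # Centred log-ratio: log of each part relative to the geometric mean.
    g = math.exp(sum(math.log(p) for p in composition) / len(composition))
    return [math.log(p / g) for p in composition]

coords = clr([0.2, 0.3, 0.5])
```

The clr coordinates always sum to zero, one reason a basis (e.g. an isometric log-ratio choice) is usually selected before the classical univariate analysis of each coordinate described above.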
44. Multidimensional Diffusion Models
- Author
-
Oleg Reichmann, Norbert Hilber, Christoph Winter, and Christoph Schwab
- Subjects
Computer Science::Computer Science and Game Theory ,Mathematical optimization ,Tensor product ,Wavelet ,Stochastic volatility ,Basis (linear algebra) ,Computer science ,MathematicsofComputing_NUMERICALANALYSIS ,Univariate ,Sparse grid ,Feature (machine learning) ,Linear subspace - Abstract
In the present chapter, we develop efficient pricing algorithms for multivariate problems, such as the pricing of multi-asset options and the pricing of options in stochastic volatility models, which exploit a third feature of the wavelet basis, namely that wavelets constitute a hierarchic basis of the univariate finite element space. This allows constructing the so-called sparse tensor product subspaces for the numerical solution of d-dimensional pricing problems with complexity essentially equal to that of one-dimensional problems.
- Published
- 2013
- Full Text
- View/download PDF
45. Misleading Signals in Simultaneous Schemes for the Mean Vector and Covariance Matrix of a Bivariate Process
- Author
-
António Pacheco, Wolfgang Schmid, Manuel Cabral Morais, and Patrícia Ferreira Ramos
- Subjects
Estimation of covariance matrices ,Covariance function ,Covariance matrix ,Statistics ,Univariate ,Control chart ,Multivariate normal distribution ,Bivariate analysis ,Covariance ,Mathematics - Abstract
In a bivariate setting, misleading signals (MS) correspond to valid alarms which lead to the misinterpretation of a shift in the mean vector (resp. covariance matrix) as a shift in the covariance matrix (resp. mean vector). While dealing with bivariate output and two univariate control statistics (one for each parameter), MS occur when: The individual chart for the mean vector triggers a signal before the one for the covariance matrix, although the mean vector is on-target and the covariance matrix is off-target. The individual chart for the variance triggers a signal before the one for the mean, despite the fact that the covariance matrix is in-control and the mean vector is out-of-control.
- Published
- 2012
- Full Text
- View/download PDF
46. Univariate Stationary Processes
- Author
-
Gebhard Kirchgässner, Jürgen Wolters, and Uwe Hassler
- Subjects
Series (mathematics) ,Autoregressive model ,Simultaneous equations ,Section (archaeology) ,Computer science ,Moving average ,Autocorrelation ,Univariate ,Mathematical economics ,Partial autocorrelation function - Abstract
As mentioned in the introduction, the publication of the textbook by GEORGE E.P. BOX and GWILYM M. JENKINS in 1970 opened a new road to the analysis of economic time series. This chapter presents the Box-Jenkins Approach, its different models and their basic properties in a rather elementary and heuristic way. These models have become an indispensable tool for short-run forecasts. We first present the most important approaches for statistical modelling of time series. These are autoregressive (AR) processes (Section 2.1) and moving average (MA) processes (Section 2.2), as well as a combination of both types, the so-called ARMA processes (Section 2.3). In Section 2.4 we show how this class of models can be used for predicting the future development of a time series in an optimal way. Finally, we conclude this chapter with some remarks on the relation between the univariate time series models described in this chapter and the simultaneous equations systems of traditional econometrics (Section 2.5).
- Published
- 2012
- Full Text
- View/download PDF
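As a small illustration of the AR class discussed in Section 2.1, the sketch below simulates an AR(1) process and checks that its sample lag-1 autocorrelation is close to the autoregressive coefficient:

```python
import random

def simulate_ar1(phi, n, seed=0):
    # AR(1): x_t = phi * x_{t-1} + e_t, the simplest Box-Jenkins model.
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        xs.append(x)
    return xs

def acf1(xs):
    # Sample lag-1 autocorrelation.
    m = sum(xs) / len(xs)
    num = sum((xs[t] - m) * (xs[t - 1] - m) for t in range(1, len(xs)))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

xs = simulate_ar1(0.8, 5000)
```

For an AR(1) process the theoretical autocorrelation function decays as phi**k, which is the pattern the Box-Jenkins identification stage reads off the sample autocorrelations.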
47. DEUM - Distribution Estimation Using Markov Networks
- Author
-
Siddhartha Shakya, John McCall, Alexander E.I. Brownlee, and Gilbert Owusu
- Subjects
symbols.namesake ,Fitness function ,Theoretical computer science ,Markov chain ,Computer science ,Univariate ,symbols ,Bayesian network ,Probability distribution ,Markov chain Monte Carlo ,Mutual information ,Evolutionary computation - Abstract
DEUM is one of the early EDAs to use Markov networks as its model of the probability distribution. It uses an undirected graph to represent variable interactions in the solution and builds a model of the fitness function from it. The model is then fitted to the set of solutions to estimate the Markov network parameters, which are sampled to generate new solutions. Over the years, many different DEUM algorithms have been proposed, ranging from a univariate version that assumes no interaction between variables to a fully multivariate version that can automatically find structure and build fitness models. This chapter serves as an introductory text on the DEUM algorithm. It describes the motivation and the key concepts behind these algorithms, and provides the workflow of some of the key DEUM algorithms.
- Published
- 2012
- Full Text
- View/download PDF
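The univariate end of the DEUM family behaves like other univariate EDAs such as UMDA: per-variable probabilities are estimated from the fittest solutions and resampled, with no interaction model. A generic univariate-EDA sketch on the OneMax problem (not DEUM's Markov-network fitness modelling itself):

```python
import random

def umda_onemax(n_bits=20, pop=50, keep=10, gens=40, seed=1):
    # Univariate EDA sketch: estimate independent per-bit probabilities
    # from the elite solutions, clamp them away from 0/1, and resample.
    rng = random.Random(seed)
    p = [0.5] * n_bits
    for _ in range(gens):
        popn = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop)]
        popn.sort(key=sum, reverse=True)
        elite = popn[:keep]
        p = [min(0.95, max(0.05, sum(ind[j] for ind in elite) / keep))
             for j in range(n_bits)]
    return sum(max(popn, key=sum))
```

DEUM replaces the independent-probability model with a fitted model of the fitness function; the clamping above plays the role of keeping the sampling distribution from collapsing prematurely.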
48. Computing on Authenticated Data
- Author
-
Abhi Shelat, Jan Camenisch, Dan Boneh, Jae Hyun Ahn, Susan Hohenberger, and Brent Waters
- Subjects
Transitive relation ,Theoretical computer science ,Third party ,Ring signature ,business.industry ,Univariate ,Homomorphic encryption ,Encryption ,business ,Predicate (grammar) ,Mathematics - Abstract
In tandem with recent progress on computing on encrypted data via fully homomorphic encryption, we present a framework for computing on authenticated data via the notion of slightly homomorphic signatures, or P-homomorphic signatures. With such signatures, it is possible for a third party to derive a signature on the object m′ from a signature of m as long as P(m,m′)=1 for some predicate P which captures the "authenticatable relationship" between m′ and m. Moreover, a derived signature on m′ reveals no extra information about the parent m. Our definition is carefully formulated to provide one unified framework for a variety of distinct concepts in this area, including arithmetic, homomorphic, quotable, redactable, and transitive signatures, among others. In particular, a derived signature is indistinguishable from a fresh one even when the original signature is given. The inability to link derived signatures to their original sources prevents some practical privacy and linking attacks, a challenge not addressed by most prior work. Under this strong definition, we then provide generic constructions for all univariate and closed predicates, and specific efficient constructions for a broad class of natural predicates such as quoting, subsets, weighted sums, averages, and Fourier transforms. To our knowledge, these are the first efficient constructions for these predicates (excluding subsets) that provably satisfy this strong security notion.
- Published
- 2012
- Full Text
- View/download PDF
49. On the Efficient Evaluation of Higher-Order Derivatives of Real-Valued Functions Composed of Matrix Operations
- Author
-
Sebastian F. Walter
- Subjects
symbols.namesake ,Matrix (mathematics) ,Automatic differentiation ,Computer science ,Taylor series ,symbols ,Univariate ,Reverse mode ,Inversion (discrete mathematics) ,Algorithm ,Matrix multiplication ,Higher order derivatives - Abstract
Two different hierarchical levels of algorithmic differentiation are compared: the traditional approach and a higher-level approach where matrix operations are considered to be atomic. More explicitly: It is discussed how computer programs that consist of matrix operations (e.g. matrix inversion) can be evaluated in univariate Taylor polynomial arithmetic. Formulas suitable for the reverse mode are also shown. The advantages of the higher-level approach are discussed, followed by an experimental runtime comparison.
- Published
- 2012
- Full Text
- View/download PDF
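The core of univariate Taylor polynomial arithmetic is truncated power-series operations; multiplication, for instance, is a truncated Cauchy convolution, from which higher-order derivatives of a product can be read off (after scaling by factorials):

```python
def taylor_mul(a, b):
    # Truncated univariate Taylor polynomial product (Cauchy convolution):
    # coefficient k of a*b is sum_j a[j] * b[k-j], truncated at len(a).
    d = len(a)
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(d)]

# f(t) = 2 + 3t and g(t) = 1 + t + t^2, truncated at degree 2:
fg = taylor_mul([2, 3, 0], [1, 1, 1])
```

Propagating such truncated polynomials through every matrix operation (products, inverses, etc.) is what lets whole matrix programs be differentiated to higher order without unrolling them into scalar operations.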
50. Multi-Test Decision Trees for Gene Expression Data Analysis
- Author
-
Marek Grze, Marek Kretowski, and Marcin Czajkowski
- Subjects
Incremental decision tree ,Computer science ,business.industry ,Decision tree learning ,Univariate ,Decision tree ,Experimental validation ,computer.software_genre ,Machine learning ,Test (assessment) ,Tree (data structure) ,Node (computer science) ,Data mining ,Artificial intelligence ,business ,computer - Abstract
This paper introduces a new type of decision tree that is more suitable for gene expression data. The main motivation for this work was to improve the performance of decision trees at the cost of a possibly small increase in their complexity. Our approach is thus based on univariate tests, and the main contribution of this paper is the application of several univariate tests in each non-terminal node of the tree. In this way, the obtained trees remain relatively easy to analyze and understand, but become more powerful in modelling high-dimensional microarray data. Experimental validation was performed on publicly available gene expression datasets. The proposed method displayed competitive accuracy compared to commonly applied decision tree methods.
- Published
- 2012
- Full Text
- View/download PDF