9 results for "Jermaine, Chris"
Search Results
2. Guessing the extreme values in a data set: a Bayesian method and its applications
- Author: Wu, Mingxi and Jermaine, Chris
- Published: 2009
- Full Text: View/download PDF
3. Scalable approximate query processing with the DBO engine
- Author: Jermaine, Chris, Arumugam, Subramanian, Pol, Abhijit, and Dobra, Alin
- Subjects: Object-oriented database, Ad hoc networks (Computer networks) -- Analysis, Object-oriented databases -- Analysis, Query processing -- Analysis
- Abstract: This article describes query processing in the DBO database system. Like other database systems designed for ad hoc analytic processing, DBO is able to compute the exact answers to queries over a large relational database in a scalable fashion. Unlike any other system designed for analytic processing, DBO can constantly maintain a guess as to the final answer to an aggregate query throughout execution, along with statistically meaningful bounds for the guess's accuracy. As DBO gathers more and more information, the guess becomes more and more accurate, until it is 100% accurate when the query completes. This allows users to stop execution as soon as they are happy with the accuracy, and thus encourages exploratory data analysis. Categories and Subject Descriptors: G.3 [Probability and Statistics]: Probabilistic Algorithms; H.2.4 [Database Management]: Systems--Query processing. General Terms: Algorithms; Performance. Additional Key Words and Phrases: Online aggregation; sampling; randomized algorithms.
- Published: 2008
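The online-aggregation behavior this abstract describes can be illustrated with a minimal single-table sketch (not DBO's actual engine): scan rows in random order and maintain a running SUM estimate with a CLT-based confidence interval that shrinks to zero width once the scan completes. The function name and parameters below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def online_sum(values, z=1.96, report_every=200):
    """Scan `values` in random order, maintaining a running estimate of
    their total with a CLT-based confidence interval that collapses to
    zero width once the scan is complete."""
    n = len(values)
    order = random.sample(range(n), n)       # random scan order
    seen, total, total_sq = 0, 0.0, 0.0
    estimates = []
    for idx in order:
        v = values[idx]
        seen += 1
        total += v
        total_sq += v * v
        if seen % report_every == 0 or seen == n:
            mean = total / seen
            var = max(total_sq / seen - mean * mean, 0.0)
            # scale-up estimate of the full SUM, with finite-population correction
            est = n * mean
            half = z * n * math.sqrt(var / seen) * math.sqrt((n - seen) / max(n - 1, 1))
            estimates.append((seen, est, half))
    return estimates

random.seed(0)
data = [random.gauss(10, 3) for _ in range(10_000)]
for seen, est, half in online_sum(data)[::10]:
    print(f"{seen:6d} rows: SUM ~ {est:,.0f} +/- {half:,.0f}")
```

The interval lets a user stop early once the bound is tight enough, which is the exploratory-analysis workflow the abstract highlights.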
4. Introduction of New Associate Editors.
- Author: Beng Chin Ooi
- Subjects: Editors
- Abstract: The article profiles several new associate editors to the editorial board of the periodical "IEEE Transactions on Knowledge and Data Engineering" (TKDE), including Professor Chris Jermaine from Rice University, Ravi Kumar from Yahoo! Research, and Professor Justin Zobel from the University of Melbourne.
- Published: 2009
- Full Text: View/download PDF
5. Distributed Algorithms for Computing Very Large Thresholded Covariance Matrices.
- Author: Gao, Zekai J. and Jermaine, Chris
- Subjects: Distributed algorithms, Covariance matrices, Principal components analysis, Graphical modeling (Statistics), Gaussian processes
- Abstract: Computation of covariance matrices from observed data is an important problem, as such matrices are used in applications such as principal component analysis (PCA), linear discriminant analysis (LDA), and increasingly in the learning and application of probabilistic graphical models. However, computing an empirical covariance matrix is not always an easy problem. There are two key difficulties associated with computing such a matrix from a very high-dimensional dataset. The first problem is over-fitting. For a p-dimensional covariance matrix, there are p(p - 1)/2 unique, off-diagonal entries in the empirical covariance matrix Ŝ; for large p (say, p > 10⁵), the size n of the dataset is often much smaller than the number of covariances to compute. Over-fitting is a concern in any situation in which the number of parameters learned can greatly exceed the size of the dataset. Thus, there are strong theoretical reasons to expect that for high-dimensional data, even Gaussian data, the empirical covariance matrix is not a good estimate for the true covariance matrix underlying the generative process. The second problem is computational. Computing a covariance matrix takes O(np²) time. For large p (greater than 10,000) and n much greater than p, this is debilitating. In this article, we consider how both of these difficulties can be handled simultaneously. Specifically, a key regularization technique for high-dimensional covariance estimation is thresholding, in which the smallest or least significant entries in the covariance matrix are simply dropped and replaced with the value 0. This suggests an obvious way to address the computational difficulty as well: first, compute the identities of the K entries in the covariance matrix that are actually important, in the sense that they will not be removed during thresholding, and then, in a second step, compute the values of those entries. This can be done in O(Kn) time. If K ≪ p² and the identities of the important entries can be computed in reasonable time, then this is a big win. The key technical contribution of this article is the design and implementation of two different distributed algorithms for approximating the identities of the important entries quickly, using sampling. We have implemented these methods and tested them using an 800-core compute cluster. Experiments have been run using real datasets having millions of data points and up to 40,000 dimensions. These experiments show that the proposed methods are both accurate and efficient. [ABSTRACT FROM AUTHOR]
- Published: 2016
- Full Text: View/download PDF
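The two-step idea in the abstract (cheaply find the K entries that survive thresholding, then compute only those exactly in O(Kn)) can be sketched on a single machine with numpy. This is not the paper's distributed algorithm; the sample size, seed, and function name are assumptions made for illustration.

```python
import numpy as np

def thresholded_cov(X, K, sample_rows=200, seed=0):
    """Two-step sketch: (1) estimate covariances from a row sample and keep
    the K largest-magnitude off-diagonal entries, (2) compute exact values
    for just those K entries over the full data -- O(Kn) instead of O(np^2)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Step 1: approximate covariance on a small row sample, pick candidates.
    idx = rng.choice(n, size=min(sample_rows, n), replace=False)
    approx = np.cov(X[idx], rowvar=False)
    iu = np.triu_indices(p, k=1)
    order = np.argsort(-np.abs(approx[iu]))[:K]
    rows, cols = iu[0][order], iu[1][order]
    # Step 2: exact covariance for the K surviving entries only.
    Xc = X - X.mean(axis=0)
    exact = {(int(i), int(j)): float(Xc[:, i] @ Xc[:, j] / (n - 1))
             for i, j in zip(rows, cols)}
    return exact

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 50))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=5000)   # one strongly correlated pair
top = thresholded_cov(X, K=5)
print(sorted(top, key=lambda ij: -abs(top[ij]))[0])
```

The paper's contribution is doing step 1 accurately and quickly in a distributed setting; this sketch only shows why the two-step structure pays off.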
6. Workload-Driven Antijoin Cardinality Estimation.
- Author: Rusu, Florin, Zhuang, Zixuan, Wu, Mingxi, and Jermaine, Chris
- Subjects: Estimation theory, Mathematical optimization, Statistical sampling, Monte Carlo method, Bayesian analysis
- Abstract: Antijoin cardinality estimation is among a handful of problems that have eluded accurate, efficient solutions amenable to implementation in relational query optimizers. Given the widespread use of antijoin and subset-based queries in analytical workloads, and the extensive research targeted at join cardinality estimation (a seemingly related problem), the lack of adequate solutions for antijoin cardinality estimation is intriguing. In this article, we introduce a novel sampling-based estimator for antijoin cardinality that, unlike existing estimators, provides sufficient accuracy and efficiency to be implemented in a query optimizer. The proposed estimator incorporates three novel ideas. First, we use prior workload information when learning a mixture superpopulation model of the data offline. Second, we design a Bayesian statistics framework that updates the superpopulation model according to the live queries, thus allowing the estimator to adapt dynamically to the online workload. Third, we develop an efficient algorithm for sampling from a hypergeometric distribution in order to generate Monte Carlo trials, without explicitly instantiating either the population or the sample. When put together, these ideas form the basis of an efficient antijoin cardinality estimator satisfying the strict requirements of a query optimizer, as shown by extensive experimental results over synthetically generated as well as massive TPC-H data. [ABSTRACT FROM AUTHOR]
- Published: 2015
- Full Text: View/download PDF
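The third idea, generating Monte Carlo trials from a hypergeometric distribution without materializing the population, can be illustrated directly with numpy's hypergeometric sampler; the population sizes below are invented for the example, and this is a one-line illustration of the distributional trick, not the paper's sampling algorithm.

```python
import numpy as np

# To simulate drawing a without-replacement sample of n_sample tuples from a
# population of N tuples, M of which would find a join partner, we never
# build the population: the hypergeometric sampler yields the number of
# "matching" tuples in each trial directly.
rng = np.random.default_rng(42)
N, M, n_sample, trials = 1_000_000, 40_000, 5_000, 10_000

matches = rng.hypergeometric(ngood=M, nbad=N - M, nsample=n_sample, size=trials)
# Antijoin tuples in a trial = sampled tuples with no join partner.
antijoin = n_sample - matches
print(antijoin.mean(), antijoin.std())
```

Each trial costs O(1) memory regardless of N, which is what makes the Monte Carlo loop cheap enough for a query optimizer.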
7. The Monte Carlo Database System: Stochastic Analysis Close to the Data.
- Author: Jampani, Ravi, Xu, Fei, Wu, Mingxi, Perez, Luis, Jermaine, Chris, and Haas, Peter J.
- Subjects: Monte Carlo method, Stochastic analysis, Stochastic models, Sensitivity analysis, Databases, Risk assessment, Probability theory, Database management
- Abstract: The application of stochastic models and analysis techniques to large datasets is now commonplace. Unfortunately, in practice this usually means extracting data from a database system into an external tool (such as SAS, R, Arena, or Matlab), and then running the analysis there. This extract-and-model paradigm is typically error-prone and slow, does not support fine-grained modeling, and discourages what-if and sensitivity analyses. In this article we describe MCDB, a database system that permits a wide spectrum of stochastic models to be used in conjunction with the data stored in a large database, without ever extracting the data. MCDB facilitates in-database execution of tasks such as risk assessment, prediction, and imputation of missing data, as well as management of errors due to data integration, information extraction, and privacy-preserving data anonymization. MCDB allows a user to define "random" relations whose contents are determined by stochastic models. The models can then be queried using standard SQL. Monte Carlo techniques are used to analyze the probability distribution of the result of an SQL query over random relations. Novel "tuple-bundle" processing techniques can effectively control the Monte Carlo overhead, as shown in our experiments. [ABSTRACT FROM AUTHOR]
- Published: 2011
- Full Text: View/download PDF
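A toy sketch of MCDB's core idea (not its tuple-bundle engine): a relation with a missing attribute is completed by a stochastic model, and the same aggregate query is re-run per Monte Carlo instance to yield a distribution over the answer. The table contents, the N(100, 20) imputation model, and the helper names are all invented for illustration.

```python
import random
import statistics

random.seed(7)
orders = [{"region": r, "revenue": rev}                     # observed data
          for r, rev in [("east", 120.0), ("east", None),   # None = missing
                         ("west", 80.0), ("west", None)]]

def instantiate(rel):
    """One Monte Carlo instance of the 'random' relation: impute each
    missing revenue by drawing from an assumed N(100, 20) model."""
    return [{**t, "revenue": t["revenue"] if t["revenue"] is not None
             else random.gauss(100, 20)} for t in rel]

# Query: SELECT SUM(revenue) FROM orders -- executed once per instance.
results = [sum(t["revenue"] for t in instantiate(orders)) for _ in range(5000)]
print(round(statistics.mean(results), 1), round(statistics.stdev(results), 1))
```

Instead of one scalar answer, the user gets an empirical distribution (mean, spread, quantiles) for the aggregate, which is what enables in-database risk assessment and what-if analysis.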
8. Synopses for Massive Data: Samples, Histograms, Wavelets, Sketches.
- Author: Cormode, Graham, Garofalakis, Minos, Haas, Peter J., and Jermaine, Chris
- Subjects: Wavelets (Mathematics), Search algorithms, Acquisition of data, Error analysis in mathematics, Information technology, Spacetime
- Abstract: Methods for Approximate Query Processing (AQP) are essential for dealing with massive data. They are often the only means of providing interactive response times when exploring massive datasets, and are also needed to handle high speed data streams. These methods proceed by computing a lossy, compact synopsis of the data, and then executing the query of interest against the synopsis rather than the entire dataset. We describe basic principles and recent developments in AQP. We focus on four key synopses: random samples, histograms, wavelets, and sketches. We consider issues such as accuracy, space and time efficiency, optimality, practicality, range of applicability, error bounds on query answers, and incremental maintenance. We also discuss the tradeoffs between the different synopsis types. [ABSTRACT FROM AUTHOR]
- Published: 2011
- Full Text: View/download PDF
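Of the four synopsis types surveyed, the random sample is the simplest to sketch. Algorithm R below maintains a uniform size-k sample over a stream of unknown length in one pass and O(k) space; it is a standard textbook method, not code from the survey.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Algorithm R: keep a uniform random sample of size k over a stream
    of unknown length, in one pass and O(k) space."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)            # fill the reservoir
        else:
            j = rng.randrange(i + 1)       # item i survives with prob. k/(i+1)
            if j < k:
                sample[j] = item
    return sample

# Approximate AVG over a million-element stream from a 100-element synopsis.
stream = (x * x % 1000 for x in range(1_000_000))
print(sum(reservoir_sample(stream, 100)) / 100)
```

Queries run against the 100-element synopsis instead of the full stream, trading a bounded sampling error for interactive response time, which is the AQP tradeoff the abstract describes.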
9. TRIAL: A Tool for Finding Distant Structural Similarities.
- Author: Venkateswaran, Jayendra, Song, Bin, Kahveci, Tamer, and Jermaine, Chris
- Abstract: Finding structural similarities in distantly related proteins can reveal functional relationships that cannot be identified using sequence comparison. Given two proteins A and B and a threshold ε Å, we develop an algorithm, TRiplet-based Iterative ALignment (TRIAL), for computing the transformation of B that maximizes the number of aligned residues such that the root mean square deviation (RMSD) of the alignment is at most ε Å. Our algorithm is designed with the specific goal of effectively handling proteins with low similarity in primary structure, where existing algorithms perform particularly poorly. Experiments show that our method outperforms existing methods. TRIAL alignment brings the secondary structures of distantly related proteins to similar orientations. It also finds a larger number of secondary structure matches at lower RMSD values and increased overall alignment lengths. Its classification accuracy is up to 63 percent better than other methods, including CE and DALI. TRIAL successfully aligns 83 percent of the residues from the smaller protein in reasonable time while other methods align only 29 to 65 percent of the residues for the same set of proteins. [ABSTRACT FROM PUBLISHER]
- Published: 2011
- Full Text: View/download PDF
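The abstract's constraint, RMSD at most ε Å after a rigid transformation, rests on computing the RMSD of an alignment under optimal superposition. The standard Kabsch algorithm does this via an SVD; the sketch below is that textbook building block, not the TRIAL method itself, and the test coordinates are synthetic.

```python
import numpy as np

def kabsch_rmsd(A, B):
    """RMSD between two equal-length 3D coordinate sets after the optimal
    rigid-body superposition (Kabsch algorithm via SVD)."""
    A = A - A.mean(axis=0)                      # center both point sets
    B = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against a reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt         # optimal rotation
    diff = A @ R - B
    return float(np.sqrt((diff * diff).sum() / len(A)))

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 3))                    # 60 synthetic "residues"
theta = 0.8
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
B = A @ rot + np.array([5.0, -2.0, 1.0])        # rotated + translated copy
print(kabsch_rmsd(A, B))                        # near zero
```

An aligner like TRIAL searches over residue correspondences and transformations; this function only scores one fixed correspondence.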
Discovery Service for Jio Institute Digital Library