
Leveraging for big data regression.

Authors :
Ma, Ping
Sun, Xiaoxiao
Source :
WIREs: Computational Statistics. Jan 2015, Vol. 7, Issue 1, p70-76. 7p.
Publication Year :
2015

Abstract

Rapid advances in science and technology over the past decade have produced an extraordinary amount of data, offering researchers an unprecedented opportunity to tackle complex research challenges. The opportunity, however, has not yet been fully realized, because effective and efficient statistical tools for analyzing super-large datasets are still lacking. One major challenge is that advances in computing resources still lag far behind the exponential growth of databases. To facilitate scientific discoveries using current computing resources, one may use an emerging family of statistical methods called leveraging. Leveraging methods are designed under a subsampling framework, in which one samples a small proportion of the data (a subsample) from the full sample and then performs the intended computations for the full sample using the small subsample as a surrogate. The key to the success of leveraging methods is constructing nonuniform sampling probabilities so that influential data points are sampled with high probability. These methods stand as a unique development of their kind in big data analytics and allow pervasive access to massive amounts of information without resorting to high-performance computing or cloud computing. WIREs Comput Stat 2015, 7:70-76. doi: 10.1002/wics.1324. Conflict of interest: The authors have declared no conflicts of interest for this article. [ABSTRACT FROM AUTHOR]
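The abstract describes the general leveraging recipe (draw a small nonuniform subsample, then use it as a surrogate for the full-sample computation) without spelling out a concrete algorithm. Below is a minimal Python sketch of one common instantiation for least-squares regression, assuming sampling probabilities proportional to the statistical leverage scores of the design matrix; the function name, the subsample size r, and the reweighting scheme are illustrative choices, not details taken from the article.

```python
import numpy as np

def leverage_subsample_ols(X, y, r, rng=None):
    """Sketch of leverage-based subsampling for ordinary least squares.

    Rows are sampled with probability proportional to their leverage
    scores (diagonal of the hat matrix), then reweighted by 1/(r * pi_i)
    so the subsample fit targets the full-sample OLS estimate.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape

    # Leverage scores via a thin QR decomposition: h_i = ||row i of Q||^2.
    Q, _ = np.linalg.qr(X)
    h = np.sum(Q**2, axis=1)
    pi = h / h.sum()                      # nonuniform sampling probabilities

    # Draw a small subsample, favoring influential (high-leverage) rows.
    idx = rng.choice(n, size=r, replace=True, p=pi)
    w = 1.0 / (r * pi[idx])               # reweight to approximate the full-sample fit

    # Weighted least squares on the subsample as a surrogate for the full fit.
    Xw = X[idx] * np.sqrt(w)[:, None]
    yw = y[idx] * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta

# Usage on synthetic data: fit with a 1,000-row subsample of 100,000 rows.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 100_000, 5
    X = rng.standard_normal((n, p))
    beta_true = np.arange(1, p + 1, dtype=float)
    y = X @ beta_true + rng.standard_normal(n)
    print("leveraging estimate:", leverage_subsample_ols(X, y, r=1_000, rng=1))
```

The only full-data pass in this sketch is the QR factorization used to obtain leverage scores; approximate leverage computations and alternative weighting schemes are possible refinements, but they are beyond what the abstract itself specifies.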

Details

Language :
English
ISSN :
1939-5108
Volume :
7
Issue :
1
Database :
Academic Search Index
Journal :
WIREs: Computational Statistics
Publication Type :
Academic Journal
Accession number :
100031973
Full Text :
https://doi.org/10.1002/wics.1324