Nearest neighbor classification using bottom-k sketches
- Source: BigData Conference
- Publication Year: 2013
- Publisher: IEEE, 2013
Abstract
- Bottom-k sketches are an alternative to k×minwise sketches when using hashing to estimate the similarity of documents represented by shingles (or set similarity in general) in large-scale machine learning. They are faster to compute and have better theoretical properties. In the case of k×minwise hashing, the bias introduced by hash functions that are not truly random is independent of the number k of hashes, while this bias decreases with increasing k when employing bottom-k sketches. In practice, bottom-k sketches can expedite classification systems if the trained classifiers are applied to many data points with many features (i.e., to many documents encoded by a large number of shingles on average). An advantage of b-bit k×minwise hashing is that it can be efficiently incorporated into machine learning methods relying on scalar products, such as support vector machines (SVMs). Still, experimental results indicate that a nearest-neighbor classifier with bottom-k sketches can be preferable to a linear SVM with b-bit k×minwise hashing when the amount of training data is small or the number of features is high.
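
To make the abstract concrete, below is a minimal Python sketch of bottom-k similarity estimation and its use in nearest-neighbor classification. It is an illustration, not code from the paper: the function names (`bottom_k_sketch`, `jaccard_estimate`, `nn_classify`), the choice of BLAKE2 as a stand-in hash function, and the toy shingle sets are all assumptions; the estimator |S_k(A∪B) ∩ S_k(A) ∩ S_k(B)| / k is the standard bottom-k Jaccard estimator.

```python
import hashlib

def h64(shingle):
    # Stand-in hash: maps a shingle to a 64-bit integer
    # (the paper's analysis concerns how non-random such functions are).
    return int.from_bytes(
        hashlib.blake2b(shingle.encode(), digest_size=8).digest(), "big")

def bottom_k_sketch(shingles, k):
    # Bottom-k sketch: the k smallest hash values over the set's elements.
    # One pass over the set, no k independent hash functions needed.
    return sorted({h64(s) for s in shingles})[:k]

def jaccard_estimate(sk_a, sk_b, k):
    # The bottom-k sketch of A ∪ B is recoverable from the two sketches:
    # it is the k smallest values in their merge. The estimator counts how
    # many of those k values occur in both input sketches.
    union_sketch = sorted(set(sk_a) | set(sk_b))[:k]
    common = set(sk_a) & set(sk_b)
    return sum(1 for v in union_sketch if v in common) / k

def nn_classify(query_sketch, labeled_sketches, k):
    # 1-NN: return the label whose sketch has the highest estimated similarity.
    return max(labeled_sketches,
               key=lambda lbl_sk: jaccard_estimate(query_sketch, lbl_sk[1], k))[0]

# Toy usage with hypothetical shingle sets; a real system would use
# much larger shingle sets and a larger k to reduce estimator variance.
k = 3
train = [("sports", bottom_k_sketch({"ball game", "final score", "home team"}, k)),
         ("tech",   bottom_k_sketch({"machine learning", "hash function", "big data"}, k))]
query = bottom_k_sketch({"hash function", "big data", "sketching"}, k)
print(nn_classify(query, train, k))  # -> "tech"
```

Note the contrast the abstract draws: a single bottom-k sketch replaces k separate minwise hashes per document, which is what makes sketching many large documents cheaper, while b-bit k×minwise hashing instead yields fixed-length vectors whose scalar products plug directly into a linear SVM.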
Details
- Database: OpenAIRE
- Journal: 2013 IEEE International Conference on Big Data
- Accession number: edsair.doi...........f3d5ab2f6e21e495564f80311a828b75