
Point Location and Active Learning: Learning Halfspaces Almost Optimally

Authors:
Max Hopkins
Daniel M. Kane
Gaurav Mahajan
Shachar Lovett
Source:
FOCS
Publication Year:
2020
Publisher:
arXiv, 2020.

Abstract

Given a finite set $X\subset \mathbb{R}^{d}$ and a binary linear classifier $c:\mathbb{R}^{d}\rightarrow\{0,1\}$, how many queries of the form $c(x)$ are required to learn the label of every point in $X$? Known as point location, this problem has inspired over 35 years of research in the pursuit of an optimal algorithm. Building on the prior work of Kane, Lovett, and Moran (ICALP 2018), we provide the first nearly optimal solution, a randomized linear decision tree of depth $\tilde{O}(d\log(\vert X\vert))$, improving on the previous best of $\tilde{O}(d^{2}\log(\vert X\vert))$ from Ezra and Sharir (Discrete and Computational Geometry, 2019). As a corollary, we also provide the first nearly optimal algorithm for actively learning halfspaces in the membership query model. En route to these results, building on the work of Carlen, Lieb, and Loss (J. Geometric Analysis 2004), as well as Dvir, Saraf, and Wigderson (STOC 2014), we prove a novel characterization of Barthe's Theorem (Inventiones Mathematicae, 1998) of independent interest. In particular, we show that $X$ may be transformed into approximate isotropic position if and only if there exists no $k$-dimensional subspace containing more than a $k/d$-fraction of $X$, and we provide a similar characterization for exact isotropic position. This is an extended abstract; the full work can be found at https://arxiv.org/abs/2004.11380
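For orientation, the isotropic-position characterization above can be written in display form. This is a paraphrase of the abstract, assuming the radial notion of isotropic position standard in this line of work (each transformed point is rescaled to unit length); consult the full paper for the exact definitions.

\[
X \text{ can be mapped to approximate isotropic position}
\iff
\frac{\lvert X\cap V\rvert}{\lvert X\rvert}\le\frac{k}{d}
\quad\text{for every } k\text{-dimensional subspace } V\subset\mathbb{R}^{d},\ 1\le k<d,
\]

where, in the radial formulation, an invertible map $A$ puts $X$ in exact isotropic position if

\[
\frac{d}{\lvert X\rvert}\sum_{x\in X}\frac{(Ax)(Ax)^{\top}}{\lVert Ax\rVert^{2}} = I_{d}.
\]

Per the abstract, a similar (but distinct) condition characterizes when exact isotropic position is attainable.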

Details

Database:
OpenAIRE
Journal:
FOCS
Accession number:
edsair.doi.dedup.....cb2bdec96588436ab72f07314ffe7111
Full Text:
https://doi.org/10.48550/arxiv.2004.11380