\( \newcommand{\opt}{\mathrm{opt}} \newcommand{\eps}{\epsilon} \newcommand{\R}{\mathbb{R}} \newcommand{\vec}{\mathbf} \newcommand{\wstar}{\w^{\ast}} \newcommand{\x}{\vec x} \newcommand{\w}{\vec w} \newcommand{\wt}{\widetilde} \newcommand{\wh}{\widehat} \newcommand{\poly}{\mathrm{poly}} \newcommand{\polylog}{\mathrm{polylog}} \newcommand{\var}{\mathbf{Var}} \newcommand{\cov}{\mathbf{Cov}} \)

Nikos Zarifis

zarifis [at] wisc.edu

Computer Sciences Department, University of Wisconsin–Madison

I am a PhD student in the Computer Sciences Department of the University of Wisconsin–Madison and part of the Theory of Computing Group. I am very lucky to be advised by Professor Ilias Diakonikolas. I completed my undergraduate studies at the School of Electrical and Computer Engineering of the National Technical University of Athens in Greece, where I was advised by Professor Dimitris Fotakis.

I am interested in algorithms and theoretical machine learning. For more information, you can take a look at my CV.

Publications

  1. Robustly Learning Single-Index Models via Alignment Sharpness [abstract] [arxiv] Nikos Zarifis*, Puqian Wang*, Ilias Diakonikolas, and Jelena Diakonikolas Manuscript * Equal contribution

    We study the problem of learning Single-Index Models under the $L_2^2$ loss in the agnostic model. We give an efficient learning algorithm, achieving a constant factor approximation to the optimal loss, that succeeds under a range of distributions (including log-concave distributions) and a broad class of monotone and Lipschitz link functions. This is the first efficient constant factor approximate agnostic learner, even for Gaussian data and for any nontrivial class of link functions. Prior work for the case of unknown link function either works in the realizable setting or does not attain constant factor approximation. The main technical ingredient enabling our algorithm and analysis is a novel notion of a local error bound in optimization that we term alignment sharpness and that may be of broader interest.

  2. Statistical Query Lower Bounds for Learning Truncated Gaussians [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, and Nikos Zarifis Manuscript

    We study the problem of estimating the mean of an identity covariance Gaussian in the truncated setting, in the regime when the truncation set comes from a low-complexity family $\mathcal{C}$ of sets. Specifically, for a fixed but unknown truncation set $S \subseteq \mathbb{R}^d$, we are given access to samples from the distribution $\mathcal{N}(\boldsymbol{ \mu}, \mathbf{ I})$ truncated to the set $S$. The goal is to estimate $\boldsymbol\mu$ within accuracy $\epsilon>0$ in $\ell_2$-norm. Our main result is a Statistical Query (SQ) lower bound suggesting a super-polynomial information-computation gap for this task. In more detail, we show that the complexity of any SQ algorithm for this problem is $d^{\mathrm{poly}(1/\epsilon)}$, even when the class $\mathcal{C}$ is simple so that $\mathrm{poly}(d/\epsilon)$ samples information-theoretically suffice. Concretely, our SQ lower bound applies when $\mathcal{C}$ is a union of a bounded number of rectangles whose VC dimension and Gaussian surface area are small. As a corollary of our construction, it also follows that the complexity of the previously known algorithm for this task is qualitatively best possible.

  3. Agnostically Learning Multi-index Models with Queries [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis Manuscript

    We study the power of query access for the task of agnostic learning under the Gaussian distribution. In the agnostic model, no assumptions are made on the labels and the goal is to compute a hypothesis that is competitive with the best-fit function in a known class, i.e., it achieves error $\mathrm{opt}+\epsilon$, where $\mathrm{opt}$ is the error of the best function in the class. We focus on a general family of Multi-Index Models (MIMs), which are $d$-variate functions that depend only on a few relevant directions, i.e., have the form $g(\mathbf{W} \mathbf{x})$ for an unknown link function $g$ and a $k \times d$ matrix $\mathbf{W}$. Multi-index models cover a wide range of commonly studied function classes, including constant-depth neural networks with ReLU activations, and intersections of halfspaces. Our main result shows that query access gives significant runtime improvements over random examples for agnostically learning MIMs. Under standard regularity assumptions for the link function (namely, bounded variation or surface area), we give an agnostic query learner for MIMs with complexity $O(k)^{\mathrm{poly}(1/\epsilon)} \; \mathrm{poly}(d) $. In contrast, algorithms that rely only on random examples inherently require $d^{\mathrm{poly}(1/\epsilon)}$ samples and runtime, even for the basic problem of agnostically learning a single ReLU or a halfspace. Our algorithmic result establishes a strong computational separation between the agnostic PAC and the agnostic PAC+Query models under the Gaussian distribution. Prior to our work, no such separation was known -- even for the special case of agnostically learning a single halfspace, for which it was an open problem first posed by Feldman. Our results are enabled by a general dimension-reduction technique that leverages query access to estimate gradients of (a smoothed version of) the underlying label function.
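    A minimal sketch of the query-based smoothed-gradient idea mentioned above, not the paper's actual procedure: by Stein's identity, the gradient of the Gaussian-smoothed label function $f_\sigma(\mathbf{x}) = \mathbf{E}_{\mathbf{z} \sim \mathcal{N}(0,\mathbf{I})}[f(\mathbf{x} + \sigma \mathbf{z})]$ equals $\frac{1}{\sigma}\, \mathbf{E}_{\mathbf{z}}[\mathbf{z}\, f(\mathbf{x} + \sigma \mathbf{z})]$, which can be estimated from queries alone. The label function, smoothing width, and sample size below are hypothetical placeholders.

      # Illustrative sketch (not the paper's algorithm): estimate the gradient of a
      # Gaussian-smoothed label function using only query access, via Stein's identity.
      # The label function f, smoothing width sigma, and sample size m are placeholders.
      import numpy as np

      def smoothed_gradient(f, x, sigma=0.5, m=50000, seed=None):
          """Estimate grad f_sigma(x), where f_sigma(x) = E_z[f(x + sigma * z)],
          z ~ N(0, I), using grad f_sigma(x) = E_z[z * f(x + sigma * z)] / sigma."""
          rng = np.random.default_rng(seed)
          z = rng.standard_normal((m, x.shape[0]))
          labels = np.array([f(x + sigma * zi) for zi in z])   # m queries to f
          return (z * labels[:, None]).mean(axis=0) / sigma

      # Toy check: a single ReLU along a hidden direction w*; the estimated smoothed
      # gradient at the origin concentrates along w* (the first coordinate here).
      d = 10
      w_star = np.zeros(d); w_star[0] = 1.0
      f = lambda x: max(0.0, float(w_star @ x))
      print(smoothed_gradient(f, np.zeros(d), seed=0))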

  4. Super Non-singular Decompositions of Polynomials and their Application to Robustly Learning Low-degree PTFs [abstract] Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, and Nikos Zarifis In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (STOC 2024)

    We study the efficient learnability of low-degree polynomial threshold functions (PTFs) in the presence of a constant fraction of adversarial corruptions. Our main algorithmic result is a polynomial-time PAC learning algorithm for this concept class in the strong contamination model under the Gaussian distribution with error guarantee $O_{d, c}(\opt^{1-c})$, for any desired constant $c>0$, where $\opt$ is the fraction of corruptions. In the strong contamination model, an omniscient adversary can arbitrarily corrupt an $\opt$-fraction of the data points and their labels. This model generalizes the malicious noise model and the adversarial label noise model. Prior to our work, known polynomial-time algorithms in this corruption model (or even in the weaker adversarial label noise model) achieved error $\tilde{O}_d(\opt^{1/(d+1)})$, which deteriorates significantly as a function of the degree $d$. Our algorithm employs an iterative approach inspired by localization techniques previously used in the context of learning linear threshold functions. Specifically, we use a robust perceptron algorithm to compute a good partial classifier and then iterate on the unclassified points. In order to achieve this, we need to take a set defined by a number of polynomial inequalities and partition it into several well-behaved subsets. To this end, we develop new polynomial decomposition techniques that may be of independent interest.

  5. Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [abstract] [arxiv] Ilias Diakonikolas, Jelena Diakonikolas, Daniel M. Kane, Puqian Wang, and Nikos Zarifis In Advances in Neural Information Processing Systems (NeurIPS 2023)

    We study the problem of learning general (i.e., not necessarily homogeneous) halfspaces with Random Classification Noise under the Gaussian distribution. We establish nearly-matching algorithmic and Statistical Query (SQ) lower bound results revealing a surprising information-computation gap for this basic problem. Specifically, the sample complexity of this learning problem is $\widetilde{\Theta}(d/\epsilon)$, where $d$ is the dimension and $\epsilon$ is the excess error. Our positive result is a computationally efficient learning algorithm with sample complexity $\tilde{O}(d/\epsilon + d/(\max\{p, \epsilon\})^2)$, where $p$ quantifies the bias of the target halfspace. On the lower bound side, we show that any efficient SQ algorithm (or low-degree test) for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$. Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.

  6. Efficient Testable Learning of Halfspaces with Adversarial Label Noise [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, and Nikos Zarifis In Advances in Neural Information Processing Systems (NeurIPS 2023)

    We give the first polynomial-time algorithm for the testable learning of halfspaces in the presence of adversarial label noise under the Gaussian distribution. In the recently introduced testable learning model, one is required to produce a tester-learner such that if the data passes the tester, then one can trust the output of the robust learner on the data. Our tester-learner runs in time $\mathrm{poly}(d/\epsilon)$ and outputs a halfspace with misclassification error $O(\mathrm{opt})+\epsilon$, where $\opt$ is the 0-1 error of the best fitting halfspace. At a technical level, our algorithm employs an iterative soft localization technique enhanced with appropriate testers to ensure that the data distribution is sufficiently similar to a Gaussian.
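    As a caricature of the tester-learner template described above (the paper's actual testers are more refined), the sketch below accepts the unlabeled points only if their empirical mean and covariance look like those of a standard Gaussian, before any robust learning is attempted. The statistics and the tolerance tau are illustrative placeholders.

      # Caricature of a tester in the testable-learning model: trust the downstream
      # robust learner only if simple moment statistics look Gaussian. The statistics
      # and tolerance below are placeholders, not the testers constructed in the paper.
      import numpy as np

      def looks_gaussian(X, tau=0.1):
          """Accept iff the empirical mean is near 0 and the empirical second-moment
          matrix is near the identity in spectral norm."""
          n, d = X.shape
          mean_ok = np.linalg.norm(X.mean(axis=0)) <= tau
          cov_ok = np.linalg.norm((X.T @ X) / n - np.eye(d), ord=2) <= tau
          return mean_ok and cov_ok

      rng = np.random.default_rng(0)
      print(looks_gaussian(rng.standard_normal((20000, 5))))        # True
      print(looks_gaussian(2.0 * rng.standard_normal((20000, 5))))  # False: wrong covariance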

  7. Self-Directed Linear Classification [abstract] [arxiv] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Proceedings of the 36th Annual Conference on Learning Theory (COLT 2023)

    In online classification, a learner is presented with a sequence of examples and aims to predict their labels in an online fashion so as to minimize the total number of mistakes. In the self-directed variant, the learner knows in advance the pool of examples and can adaptively choose the order in which predictions are made. Here we study the power of choosing the prediction order and establish the first strong separation between worst-order and random-order learning for the fundamental task of linear classification. Prior to our work, such a separation was known only for very restricted concept classes, e.g., one-dimensional thresholds or axis-aligned rectangles. We present two main results. If $X$ is a dataset of $n$ points drawn uniformly at random from the $d$-dimensional unit sphere, we design an efficient self-directed learner that makes $O(d \log \log(n))$ mistakes and classifies the entire dataset. If $X$ is an arbitrary $d$-dimensional dataset of size $n$, we design an efficient self-directed learner that predicts the labels of $99\%$ of the points in $X$ with mistake bound independent of $n$. In contrast, under a worst- or random-ordering, the number of mistakes must be at least $\Omega(d \log n)$, even when the points are drawn uniformly from the unit sphere and the learner only needs to predict the labels for $1\%$ of them.

  8. Information-Computation Tradeoffs for Learning Margin Halfspaces with Random Classification Noise [abstract] [arxiv] Ilias Diakonikolas, Jelena Diakonikolas, Daniel M. Kane, Puqian Wang, and Nikos Zarifis In Proceedings of the 36th Annual Conference on Learning Theory (COLT 2023)

    We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise. We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms. Concretely, the sample complexity of the problem is $\widetilde{\Theta}(1/(\gamma^2 \epsilon))$. We start by giving a simple efficient algorithm with sample complexity $\widetilde{O}(1/(\gamma^2 \epsilon^2))$. Our main result is a lower bound for Statistical Query (SQ) algorithms and low-degree polynomial tests suggesting that the quadratic dependence on $1/\epsilon$ in the sample complexity is inherent for computationally efficient algorithms. Specifically, our results imply a lower bound of $\widetilde{\Omega}(1/(\gamma^{1/2} \epsilon^2))$ on the sample complexity of any efficient SQ learner or low-degree test.

  9. SQ Lower Bounds for Learning Mixtures of Separated and Bounded Covariance Gaussians [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, and Nikos Zarifis In Proceedings of the 36th Annual Conference on Learning Theory (COLT 2023)

    We study the complexity of learning mixtures of separated Gaussians with common unknown bounded covariance matrix. Specifically, we focus on learning Gaussian mixture models (GMMs) on $\mathbb{R}^d$ of the form $P= \sum_{i=1}^k w_i \mathcal{N}(\vec \mu_i,\vec \Sigma_i)$, where $\vec \Sigma_i = \vec \Sigma \preceq \vec I$ and $\min_{i \neq j} \|\vec \mu_i - \vec \mu_j\|_2 \geq k^\epsilon$ for some $\epsilon>0$. Known learning algorithms for this family of GMMs have complexity $(dk)^{O(1/\epsilon)}$. In this work, we prove that any Statistical Query (SQ) algorithm for this problem requires complexity at least $d^{\Omega(1/\epsilon)}$. Our SQ lower bound implies a similar lower bound for low-degree polynomial tests. Our result provides evidence that known algorithms for this problem are nearly best possible.

  10. Robustly Learning a Single Neuron via Sharpness [abstract] [arxiv] Puqian Wang*, Nikos Zarifis*, Ilias Diakonikolas, and Jelena Diakonikolas In Proceedings of the 40th International Conference on Machine Learning (ICML 2023) [Selected for Oral Presentation] * Equal contribution

    We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial label noise. We give an efficient algorithm that, for a broad family of activations including ReLUs, approximates the optimal $L_2^2$-error within a constant factor. Notably, our algorithm applies under much milder distributional assumptions compared to prior work. The key ingredient enabling our results is a novel connection to local error bounds from optimization theory.

  11. Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [abstract] [arxiv] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Proceedings of the 35th Annual Conference on Learning Theory (COLT 2022)

    We study the fundamental problem of learning a single neuron, i.e., a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{ w} \cdot \mathbf{x})$ for monotone activations $\sigma:\mathbb{R} \mapsto \mathbb{R}$, with respect to the $L_2^2$-loss in the presence of adversarial label noise. Specifically, we are given labeled examples from a distribution $D$ on $(\mathbf{x}, y) \in \mathbb{R}^d \times \mathbb{R}$ such that there exists $\mathbf{w}^\ast \in \R^d$ achieving $F(\mathbf{ w}^\ast) \leq \eps$, where $F(\mathbf{ w}) = \mathbb{E}_{(\mathbf{x},y) \sim D}[(\sigma(\mathbf{ w}\cdot \mathbf{x}) - y)^2]$. The goal of the learner is to output a hypothesis vector $\wt{\vec w}$ such that $F(\wt{\vec w}) = C \,\eps$ with high probability, where $C$ is a universal constant. As our main contribution, we give efficient constant-factor approximate learners for a broad class of distributions (including log-concave distributions) and activation functions (including ReLUs and sigmoids). Concretely, for the class of isotropic log-concave distributions, we obtain the following important corollaries: 1) For the logistic activation, i.e., $\sigma(t) = 1/(1+e^{-t})$, we obtain the first polynomial-time constant factor approximation, even under the Gaussian distribution. Moreover, our algorithm has sample complexity $\wt{O}(d/\eps)$, which is tight within polylogarithmic factors. 2) For the ReLU activation, i.e., $\sigma(t) = \max(0,t)$, we give an efficient algorithm with sample complexity $\wt{O}(d \, \polylog(1/\eps))$. Prior to our work, the best known constant-factor approximate learner had sample complexity $\Omega(d/\eps)$. In both settings, our algorithms are simple, performing gradient-descent on the (regularized) $L_2^2$-loss. The correctness of our algorithms relies on novel structural results that we establish, showing that (essentially all) stationary points of the underlying non-convex loss are approximately optimal.
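    A minimal sketch of the style of algorithm described above: plain gradient descent on the empirical $L_2^2$-loss of a single ReLU neuron. The step size, iteration count, and the omission of the regularization mentioned in the abstract are illustrative simplifications, not the paper's exact procedure.

      # Minimal sketch: gradient descent on the empirical L2^2 loss of a single ReLU
      # neuron. Step size, iterations, and lack of regularization are illustrative
      # simplifications of the procedure analyzed in the paper.
      import numpy as np

      def relu(t):
          return np.maximum(t, 0.0)

      def gd_single_neuron(X, y, steps=500, lr=0.1, seed=None):
          rng = np.random.default_rng(seed)
          n, d = X.shape
          w = rng.standard_normal(d) / np.sqrt(d)              # random initialization
          for _ in range(steps):
              residual = relu(X @ w) - y
              # (sub)gradient of (1/n) * sum_i (relu(w . x_i) - y_i)^2
              grad = (2.0 / n) * (X.T @ (residual * (X @ w > 0)))
              w -= lr * grad
          return w

      # Toy run: labels from a ReLU, with a small fraction of labels corrupted.
      rng = np.random.default_rng(1)
      n, d = 5000, 20
      X = rng.standard_normal((n, d))
      w_star = np.ones(d) / np.sqrt(d)
      y = relu(X @ w_star)
      y[: n // 50] = 10.0                                      # corrupt 2% of the labels
      w_hat = gd_single_neuron(X, y, seed=2)
      print(np.mean((relu(X @ w_hat) - relu(X @ w_star)) ** 2))  # clean L2^2 error stays small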

  12. Learning General Halfspaces with Adversarial Label Noise via Online Gradient Descent [abstract] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Proceedings of the 39th International Conference on Machine Learning (ICML 2022)

    We study the problem of learning general — i.e., not necessarily homogeneous — halfspaces with adversarial label noise under the Gaussian distribution. Prior work has provided a sophisticated polynomial-time algorithm for this problem. In this work, we show that the problem can be solved directly via online gradient descent applied to a sequence of natural non-convex surrogates. This approach yields a simple iterative learning algorithm for general halfspaces with near-optimal sample complexity, runtime, and error guarantee. At the conceptual level, our work establishes an intriguing connection between learning halfspaces with adversarial noise and online optimization that may find other applications.

  13. Learning General Halfspaces with General Massart Noise under the Gaussian Distribution [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Proceedings of the 54th Annual ACM Symposium on Theory of Computing (STOC 2022)

    We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under the Gaussian distribution. In the Massart model, an adversary is allowed to flip the label of each point $\mathbf{x}$ with unknown probability $\eta(\mathbf{x}) \leq \eta$, for some parameter $\eta \in [0,1/2]$. The goal is to find a hypothesis with misclassification error of $\mathrm{OPT} + \epsilon$, where $\mathrm{OPT}$ is the error of the target halfspace. This problem had been previously studied under two assumptions: (i) the target halfspace is homogeneous (i.e., the separating hyperplane goes through the origin), and (ii) the parameter $\eta$ is strictly smaller than $1/2$. Prior to this work, no nontrivial bounds were known when either of these assumptions is removed. We study the general problem and establish the following: For $\eta <1/2$, we give a learning algorithm for general halfspaces with sample and computational complexity $d^{O_{\eta}(\log(1/\gamma))}\mathrm{poly}(1/\epsilon)$, where $\gamma =\max\{\epsilon, \min\{\mathbf{Pr}[f(\mathbf{x}) = 1], \mathbf{Pr}[f(\mathbf{x}) = -1]\} \}$ is the bias of the target halfspace $f$. Prior efficient algorithms could only handle the special case of $\gamma = 1/2$. Interestingly, we establish a qualitatively matching lower bound of $d^{\Omega(\log(1/\gamma))}$ on the complexity of any Statistical Query (SQ) algorithm. For $\eta = 1/2$, we give a learning algorithm for general halfspaces with sample and computational complexity $O_\epsilon(1) d^{O(\log(1/\epsilon))}$. This result is new even for the subclass of homogeneous halfspaces; prior algorithms for homogeneous Massart halfspaces provide vacuous guarantees for $\eta=1/2$. We complement our upper bound with a nearly-matching SQ lower bound of $d^{\Omega(\log(1/\epsilon))}$, which holds even for the special case of homogeneous halfspaces.

  14. Agnostic Proper Learning of Halfspaces under Gaussian Marginals [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Proceedings of the 34th Annual Conference on Learning Theory (COLT 2021)

    We study the problem of agnostically learning halfspaces under the Gaussian distribution. Our main result is the first proper learning algorithm for this problem whose sample complexity and computational complexity qualitatively match those of the best known improper agnostic learner. Building on this result, we also obtain the first proper polynomial-time approximation scheme (PTAS) for agnostically learning homogeneous halfspaces. Our techniques naturally extend to agnostically learning linear models with respect to other non-linear activations, yielding in particular the first proper agnostic algorithm for ReLU regression.

  15. The Optimality of Polynomial Regression for Agnostic Learning under Gaussian Marginals [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, and Nikos Zarifis In Proceedings of the 34th Annual Conference on Learning Theory (COLT 2021)

    We study the problem of agnostic learning under the Gaussian distribution. We develop a method for finding hard families of examples for a wide class of problems by using LP duality. For Boolean-valued concept classes, we show that the $L^1$-regression algorithm is essentially best possible, and therefore that the computational difficulty of agnostically learning a concept class is closely related to the polynomial degree required to approximate any function from the class in $L^1$-norm. Using this characterization along with additional analytic tools, we obtain optimal SQ lower bounds for agnostically learning linear threshold functions and the first non-trivial SQ lower bounds for polynomial threshold functions and intersections of halfspaces. We also develop an analogous theory for agnostically learning real-valued functions, and as an application prove near-optimal SQ lower bounds for agnostically learning ReLUs and sigmoids.
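    For concreteness, a hedged sketch of the $L^1$-regression approach referenced above (degree, sample sizes, and the LP solver are placeholder choices): fit a low-degree polynomial minimizing the empirical $\ell_1$ error via a linear program, then threshold it to obtain a Boolean hypothesis.

      # Illustrative sketch of L^1 polynomial regression for agnostic learning under
      # Gaussian marginals: minimize the empirical l_1 error over degree-2 polynomials
      # (written as an LP), then output sign(p(x)). Degree, sample sizes, and solver
      # choice are placeholders.
      import numpy as np
      from scipy.optimize import linprog

      def degree2_features(X):
          """Monomials of degree <= 2: [1, x_i, x_i * x_j for i <= j]."""
          n, d = X.shape
          quads = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
          return np.column_stack([np.ones(n), X] + quads)

      def l1_poly_regression(X, y):
          """Minimize (1/n) * sum_i |p(x_i) - y_i| via an LP with one slack per sample."""
          Phi = degree2_features(X)
          n, k = Phi.shape
          cost = np.concatenate([np.zeros(k), np.ones(n) / n])   # [coefficients, slacks]
          A_ub = np.block([[Phi, -np.eye(n)], [-Phi, -np.eye(n)]])
          b_ub = np.concatenate([y, -y])
          bounds = [(None, None)] * k + [(0, None)] * n
          res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
          coef = res.x[:k]
          return lambda Z: degree2_features(Z) @ coef

      # Toy run: a non-homogeneous halfspace with 5% of the labels flipped.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((1000, 4))
      y = np.sign(X[:, 0] + 0.5)
      y[: len(y) // 20] *= -1
      p = l1_poly_regression(X, y)
      X_test = rng.standard_normal((2000, 4))
      print(np.mean(np.sign(p(X_test)) != np.sign(X_test[:, 0] + 0.5)))  # test 0-1 error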

  16. Learning Online Algorithms with Distributional Advice Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Ali Vakilian, and Nikos Zarifis In Proceedings of the 38th International Conference on Machine Learning (ICML 2021)

  17. A Polynomial Time Algorithm for Learning Halfspaces with Tsybakov Noise [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Proceedings of the 53rd Annual ACM Symposium on Theory of Computing (STOC 2021)

    We study the problem of PAC learning homogeneous halfspaces in the presence of Tsybakov noise. In the Tsybakov noise model, the label of every sample is independently flipped with an adversarially controlled probability that can be arbitrarily close to $1/2$ for a fraction of the samples. We give the first polynomial-time algorithm for this fundamental learning problem. Our algorithm learns the true halfspace within any desired accuracy $\epsilon$ and succeeds under a broad family of well-behaved distributions including log-concave distributions. Prior to our work, the only previous algorithm for this problem required quasi-polynomial runtime in $1/\epsilon$. Our algorithm employs a recently developed reduction [DKTZ20b] from learning to certifying the non-optimality of a candidate halfspace. This prior work developed a quasi-polynomial time certificate algorithm based on polynomial regression. The main technical contribution of the current paper is the first polynomial-time certificate algorithm. Starting from a non-trivial warm-start, our algorithm performs a novel "win-win" iterative process which, at each step, either finds a valid certificate or improves the angle between the current halfspace and the true one. Our warm-start algorithm for isotropic log-concave distributions involves a number of analytic tools that may be of broader interest. These include a new efficient method for reweighting the distribution in order to recenter it and a novel characterization of the spectrum of the degree-$2$ Chow parameters.

  18. Learning Halfspaces with Tsybakov Noise [abstract] [arxiv] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Proceedings of the 53rd Annual ACM Symposium on Theory of Computing (STOC 2021) (Conference version to be merged with paper above.)

    We study the efficient PAC learnability of halfspaces in the presence of Tsybakov noise. In the Tsybakov noise model, each label is independently flipped with some probability which is controlled by an adversary. This noise model significantly generalizes the Massart noise model, by allowing the flipping probabilities to be arbitrarily close to $1/2$ for a fraction of the samples. Our main result is the first non-trivial PAC learning algorithm for this problem under a broad family of structured distributions -- satisfying certain concentration and anti-concentration properties -- including log-concave distributions. Specifically, we give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace, with quasi-polynomial runtime dependence in $1/\epsilon$. The only previous upper bound for this problem -- even for the special case of log-concave distributions -- was doubly exponential in $1/\epsilon$ (and follows via the naive reduction to agnostic learning). Our approach relies on a novel computationally efficient procedure to certify whether a candidate solution is near-optimal, based on semi-definite programming. We use this certificate procedure as a black-box and turn it into an efficient learning algorithm by searching over the space of halfspaces via online convex optimization.

  19. Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, and Nikos Zarifis In Advances in Neural Information Processing Systems (NeurIPS 2020)

    We study the fundamental problems of agnostically learning halfspaces and ReLUs under Gaussian marginals. In the former problem, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^d \times \{ \pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. In the latter problem, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^d \times \mathbb{R}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with square loss $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT}$ is the square loss of the best-fitting ReLU. We prove Statistical Query (SQ) lower bounds of $d^{\mathrm{poly}(1/\epsilon)}$ for both of these problems. Our SQ lower bounds provide strong evidence that current upper bounds for these tasks are essentially best possible.

  20. Non-Convex SGD Learns Halfspaces with Adversarial Label Noise [abstract] [arxiv] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Advances in Neural Information Processing Systems (NeurIPS 2020)

    We study the problem of agnostically learning homogeneous halfspaces in the distribution-specific PAC model. For a broad family of structured distributions, including log-concave distributions, we show that non-convex SGD efficiently converges to a solution with misclassification error $O(\opt)+\eps$, where $\opt$ is the misclassification error of the best-fitting halfspace. In sharp contrast, we show that optimizing any convex surrogate inherently leads to misclassification error of $\omega(\opt)$, even under Gaussian marginals.
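    A rough sketch of the style of algorithm the positive result above refers to, with placeholder choices throughout: projected SGD on one particular smooth non-convex surrogate (a sigmoidal loss on the margin), which is not the specific surrogate analyzed in the paper.

      # Rough sketch: projected SGD on a smooth non-convex sigmoidal surrogate for a
      # homogeneous halfspace. The surrogate, temperature, step size, and iteration
      # budget are placeholder choices, not the loss analyzed in the paper.
      import numpy as np

      def sgd_halfspace(X, y, steps=20000, lr=0.05, temp=0.2, seed=None):
          rng = np.random.default_rng(seed)
          n, d = X.shape
          w = np.ones(d) / np.sqrt(d)
          for _ in range(steps):
              i = rng.integers(n)
              margin = y[i] * (w @ X[i])                  # ||w|| = 1 throughout
              s = 1.0 / (1.0 + np.exp(margin / temp))     # surrogate sigma(-margin / temp)
              grad = -(y[i] / temp) * s * (1.0 - s) * X[i]
              w -= lr * grad
              w /= np.linalg.norm(w)                      # project back onto the unit sphere
          return w

      # Toy run: halfspace under Gaussian marginals with 10% of labels flipped at random.
      rng = np.random.default_rng(0)
      n, d = 20000, 10
      X = rng.standard_normal((n, d))
      w_star = np.eye(d)[0]
      y = np.sign(X @ w_star)
      y[rng.random(n) < 0.1] *= -1
      w_hat = sgd_halfspace(X, y, seed=1)
      print(w_hat @ w_star)                               # alignment with the target direction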

  21. Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks [abstract] [arxiv] Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, and Nikos Zarifis In Proceedings of the 33rd Annual Conference on Learning Theory (COLT 2020)

    We study the problem of PAC learning one-hidden-layer ReLU networks with $k$ hidden units on $\R^d$ under Gaussian marginals in the presence of additive label noise. For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{\Omega}(\sqrt{\log d})$. Previously, no polynomial time algorithm was known, even for $k=3$. This answers an open question posed by [Kliv17]. Importantly, our algorithm does not require any assumptions on the rank of the weight matrix, and its complexity is independent of the condition number. On the negative side, for the more general task of PAC learning one-hidden-layer ReLU networks with positive or negative coefficients, we prove a Statistical Query lower bound of $d^{\Omega(k)}$. Thus, we provide a separation between the two classes in terms of efficient learnability. Our upper and lower bounds are general, extending to broader families of activation functions.

  22. Learning Halfspaces with Massart Noise Under Structured Distributions [abstract] [arxiv] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis In Proceedings of the 33rd Annual Conference on Learning Theory (COLT 2020)

    We study the problem of learning halfspaces with Massart noise in the distribution-specific PAC model. We give the first computationally efficient algorithm for this problem with respect to a broad family of distributions, including log-concave distributions. This resolves an open question posed in a number of prior works. Our approach is extremely simple: We identify a smooth non-convex surrogate loss with the property that any approximate stationary point of this loss defines a halfspace that is close to the target halfspace. Given this structural result, we can use SGD to solve the underlying learning problem.

  23. Reallocating multiple facilities on the line [abstract] [arxiv] Dimitris Fotakis, Loukas Kavouras, Panagiotis Kostopanagiotis, Philip Lazos, Stratis Skoulakis, and Nikos Zarifis In Theoretical Computer Science 2021

    We study the multistage $K$-facility reallocation problem on the real line, where we maintain $K$ facility locations over $T$ stages, based on the stage-dependent locations of $n$ agents. Each agent is connected to the nearest facility at each stage, and the facilities may move from one stage to another, to accommodate different agent locations. The objective is to minimize the connection cost of the agents plus the total moving cost of the facilities, over all stages. $K$-facility reallocation was introduced by de Keijzer and Wojtczak, where they mostly focused on the special case of a single facility. Using an LP-based approach, we present a polynomial time algorithm that computes the optimal solution for any number of facilities. We also consider online $K$-facility reallocation, where the algorithm becomes aware of agent locations in a stage-by-stage fashion. By exploiting an interesting connection to the classical $K$-server problem, we present a constant-competitive algorithm for $K = 2$ facilities.