
Nikos Zarifis

zarifis [at] wisc.edu

Computer Sciences Department, University of Wisconsin–Madison

I am a first-year PhD student in the Computer Sciences Department at the University of Wisconsin–Madison and a member of the Theory of Computing Group. I am very lucky to be advised by Professor Ilias Diakonikolas. I completed my undergraduate studies at the School of Electrical and Computer Engineering of the National Technical University of Athens in Greece, where I was advised by Professor Dimitris Fotakis.

I am interested in algorithms and theoretical machine learning. For more information, you can take a look at my CV.

Publications

  1. A Polynomial Time Algorithm for Learning Halfspaces with Tsybakov Noise [abstract] [arxiv]. Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis. Manuscript.

    We study the problem of PAC learning homogeneous halfspaces in the presence of Tsybakov noise. In the Tsybakov noise model, the label of every sample is independently flipped with an adversarially controlled probability that can be arbitrarily close to 1/2 for a fraction of the samples. We give the first polynomial-time algorithm for this fundamental learning problem. Our algorithm learns the true halfspace within any desired accuracy ε and succeeds under a broad family of well-behaved distributions including log-concave distributions. Prior to our work, the only previous algorithm for this problem required quasi-polynomial runtime in 1/ε. Our algorithm employs a recently developed reduction [DKTZ20b] from learning to certifying the non-optimality of a candidate halfspace. This prior work developed a quasi-polynomial time certificate algorithm based on polynomial regression. The main technical contribution of the current paper is the first polynomial-time certificate algorithm. Starting from a non-trivial warm-start, our algorithm performs a novel "win-win" iterative process which, at each step, either finds a valid certificate or improves the angle between the current halfspace and the true one. Our warm-start algorithm for isotropic log-concave distributions involves a number of analytic tools that may be of broader interest. These include a new efficient method for reweighting the distribution in order to recenter it and a novel characterization of the spectrum of the degree-2 Chow parameters.
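
    For reference, one common parameterization of this noise model (conventions vary slightly across papers, so the form below is indicative rather than the paper's precise definition) requires the flip probability $\eta(\mathbf{x}) \le 1/2$ of each example to come close to $1/2$ only on a small fraction of the distribution: for parameters $A \ge 1$ and $\alpha \in (0,1)$,

    $$\Pr_{\mathbf{x}}\big[\eta(\mathbf{x}) \ge 1/2 - t\big] \le A\, t^{\frac{\alpha}{1-\alpha}} \qquad \text{for all } t > 0.$$

    The Massart model corresponds to the case where $\eta(\mathbf{x})$ is uniformly bounded away from $1/2$.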

  2. Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals [abstract] [arxiv]. Ilias Diakonikolas, Daniel M. Kane, and Nikos Zarifis. In Advances in Neural Information Processing Systems (NeurIPS 2020).

    We study the fundamental problems of agnostically learning halfspaces and ReLUs under Gaussian marginals. In the former problem, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^d \times \{\pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT} + \epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. In the latter problem, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^d \times \mathbb{R}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with square loss $\mathrm{OPT} + \epsilon$, where $\mathrm{OPT}$ is the square loss of the best-fitting ReLU. We prove Statistical Query (SQ) lower bounds of $d^{\mathrm{poly}(1/\epsilon)}$ for both of these problems. Our SQ lower bounds provide strong evidence that current upper bounds for these tasks are essentially best possible.
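
    In the notation above, and taking halfspaces to be of the form $\mathbf{x} \mapsto \mathrm{sign}(\langle \mathbf{w}, \mathbf{x}\rangle)$ and ReLUs of the form $\mathbf{x} \mapsto \max(0, \langle \mathbf{w}, \mathbf{x}\rangle)$ (the usual conventions in this setting, stated here as an assumption), the two benchmarks are

    $$\mathrm{OPT}_{\mathrm{halfspace}} = \min_{\mathbf{w}} \ \Pr_{(\mathbf{x}, y)}\big[\mathrm{sign}(\langle \mathbf{w}, \mathbf{x}\rangle) \ne y\big], \qquad \mathrm{OPT}_{\mathrm{ReLU}} = \min_{\mathbf{w}} \ \mathbb{E}_{(\mathbf{x}, y)}\big[\big(\max(0, \langle \mathbf{w}, \mathbf{x}\rangle) - y\big)^2\big].$$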

  3. Non-Convex SGD Learns Halfspaces with Adversarial Label Noise [abstract] [arxiv]. Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis. In Advances in Neural Information Processing Systems (NeurIPS 2020).

    We study the problem of agnostically learning homogeneous halfspaces in the distribution-specific PAC model. For a broad family of structured distributions, including log-concave distributions, we show that non-convex SGD efficiently converges to a solution with misclassification error $O(\mathrm{OPT}) + \epsilon$, where $\mathrm{OPT}$ is the misclassification error of the best-fitting halfspace. In sharp contrast, we show that optimizing any convex surrogate inherently leads to misclassification error of $\omega(\mathrm{OPT})$, even under Gaussian marginals.
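
    The following is a minimal, self-contained sketch of the kind of procedure the result refers to: projected SGD on a smooth non-convex surrogate of the 0-1 loss for a homogeneous halfspace. The specific surrogate (a sigmoid of the margin), step size, and noise in the demo are illustrative assumptions, not the paper's exact construction or guarantees.

    import numpy as np

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def psgd_halfspace(X, y, steps=20000, sigma=0.2, lr=0.05, seed=0):
        """Projected SGD on the surrogate E[sigmoid(-y <w, x> / sigma)], with w on the unit sphere."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        for _ in range(steps):
            i = rng.integers(n)
            x, yi = X[i], y[i]
            s = sigmoid(-yi * np.dot(w, x) / sigma)
            grad = -(yi / sigma) * s * (1.0 - s) * x   # gradient of the surrogate at this sample
            w -= lr * grad
            w /= np.linalg.norm(w)                     # project back onto the unit sphere
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        d, n = 10, 5000
        w_star = np.zeros(d); w_star[0] = 1.0          # target halfspace
        X = rng.standard_normal((n, d))                # Gaussian marginals
        y = np.sign(X @ w_star)
        y[rng.random(n) < 0.1] *= -1                   # simple random label noise for the demo
        w_hat = psgd_halfspace(X, y)
        print("angle to target:", np.arccos(np.clip(w_hat @ w_star, -1.0, 1.0)))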

  4. Learning Halfspaces with Tsybakov Noise [abstract] [arxiv]. Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis. Manuscript.

    We study the efficient PAC learnability of halfspaces in the presence of Tsybakov noise. In the Tsybakov noise model, each label is independently flipped with some probability which is controlled by an adversary. This noise model significantly generalizes the Massart noise model by allowing the flipping probabilities to be arbitrarily close to 1/2 for a fraction of the samples. Our main result is the first non-trivial PAC learning algorithm for this problem under a broad family of structured distributions – satisfying certain concentration and (anti-)anti-concentration properties – including log-concave distributions. Specifically, we give an algorithm that achieves misclassification error ε with respect to the true halfspace, with quasi-polynomial runtime dependence on 1/ε. The only previous upper bound for this problem – even for the special case of log-concave distributions – was doubly exponential in 1/ε (and follows via the naive reduction to agnostic learning). Our approach relies on a novel computationally efficient procedure to certify whether a candidate solution is near-optimal, based on semi-definite programming. We use this certificate procedure as a black-box and turn it into an efficient learning algorithm by searching over the space of halfspaces via online convex optimization.

  5. Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks [abstract] [arxiv]. Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, and Nikos Zarifis. In Proceedings of the 33rd Annual Conference on Learning Theory (COLT 2020).

    We study the problem of PAC learning one-hidden-layer ReLU networks with $k$ hidden units on $\mathbb{R}^d$ under Gaussian marginals in the presence of additive label noise. For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{\Omega}(\sqrt{\log d})$. Previously, no polynomial time algorithm was known, even for $k=3$. This answers an open question posed by [Kliv17]. Importantly, our algorithm does not require any assumptions about the rank of the weight matrix and its complexity is independent of the condition number. On the negative side, for the more general task of PAC learning one-hidden-layer ReLU networks with positive or negative coefficients, we prove a Statistical Query lower bound of $d^{\Omega(k)}$. Thus, we provide a separation between the two classes in terms of efficient learnability. Our upper and lower bounds are general, extending to broader families of activation functions.
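
    Concretely, the functions being learned here are of the form

    $$f(\mathbf{x}) = \sum_{i=1}^{k} a_i \, \max\big(0, \langle \mathbf{w}_i, \mathbf{x}\rangle\big), \qquad \mathbf{x} \sim \mathcal{N}(0, I_d), \qquad y = f(\mathbf{x}) + \text{noise},$$

    with the positive-coefficient case corresponding to $a_i > 0$ for all $i$ (this is the standard formulation of the model; see the paper for the precise assumptions on the additive noise).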

  6. Learning Halfspaces with Massart Noise Under Structured Distributions [abstract] [arxiv]. Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis. In Proceedings of the 33rd Annual Conference on Learning Theory (COLT 2020).

    We study the problem of learning halfspaces with Massart noise in the distribution-specific PAC model. We give the first computationally efficient algorithm for this problem with respect to a broad family of distributions, including log-concave distributions. This resolves an open question posed in a number of prior works. Our approach is extremely simple: We identify a smooth non-convex surrogate loss with the property that any approximate stationary point of this loss defines a halfspace that is close to the target halfspace. Given this structural result, we can use SGD to solve the underlying learning problem.
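
    For context, in the Massart noise model the label of each example $\mathbf{x}$ agrees with the target halfspace $\mathrm{sign}(\langle \mathbf{w}^*, \mathbf{x}\rangle)$ except that it is flipped independently with a probability $\eta(\mathbf{x})$ that may depend on the example but is uniformly bounded away from $1/2$:

    $$\Pr\big[y \ne \mathrm{sign}(\langle \mathbf{w}^*, \mathbf{x}\rangle) \mid \mathbf{x}\big] = \eta(\mathbf{x}) \le \eta < \tfrac{1}{2}.$$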

  7. Reallocating multiple facilities on the line [abstract] [arxiv]. Dimitris Fotakis, Loukas Kavouras, Panagiotis Kostopanagiotis, Philip Lazos, Stratis Skoulakis, and Nikos Zarifis. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19).

    We study the multistage K-facility reallocation problem on the real line, where we maintain K facility locations over T stages, based on the stage-dependent locations of n agents. Each agent is connected to the nearest facility at each stage, and the facilities may move from one stage to another to accommodate different agent locations. The objective is to minimize the connection cost of the agents plus the total moving cost of the facilities, over all stages. K-facility reallocation was introduced by de Keijzer and Wojtczak, who mostly focused on the special case of a single facility. Using an LP-based approach, we present a polynomial time algorithm that computes the optimal solution for any number of facilities. We also consider online K-facility reallocation, where the algorithm becomes aware of agent locations in a stage-by-stage fashion. By exploiting an interesting connection to the classical K-server problem, we present a constant-competitive algorithm for K = 2 facilities.
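
    As a small illustration of the problem (not the paper's LP-based algorithm), the sketch below solves the single-facility special case K = 1 by dynamic programming over the stages, restricting candidate facility positions to agent coordinates; that restriction is a simplifying assumption made here for brevity.

    def reallocate_single_facility(agents_per_stage):
        """agents_per_stage[t] lists the agent positions at stage t (K = 1 facility)."""
        # Candidate facility positions: every agent coordinate seen at any stage.
        candidates = sorted({a for stage in agents_per_stage for a in stage})
        T = len(agents_per_stage)

        def connection_cost(t, x):
            return sum(abs(a - x) for a in agents_per_stage[t])

        # dp[j] = best cost of stages 0..t with the facility currently at candidates[j].
        dp = [connection_cost(0, x) for x in candidates]
        back_pointers = []
        for t in range(1, T):
            new_dp, back = [], []
            for x in candidates:
                best, arg = min(
                    (dp[j] + abs(x - candidates[j]), j) for j in range(len(candidates))
                )
                new_dp.append(best + connection_cost(t, x))
                back.append(arg)
            dp = new_dp
            back_pointers.append(back)

        # Recover the facility trajectory from the back pointers.
        j = min(range(len(candidates)), key=lambda i: dp[i])
        path = [candidates[j]]
        for back in reversed(back_pointers):
            j = back[j]
            path.append(candidates[j])
        return min(dp), path[::-1]

    if __name__ == "__main__":
        stages = [[0.0, 4.0], [1.0, 5.0], [2.0, 6.0]]
        cost, positions = reallocate_single_facility(stages)
        print(cost, positions)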