Exploiting regularity in sparse Generalized Linear Models
Abstract
Generalized Linear Models (GLM) are a wide class of regression and classification models, where the predicted variable is obtained from a linear combination of the input variables. For statistical inference in high dimensions, sparsity-inducing regularization has proven useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as *screening rules* and *working sets* diminish the size of the optimization problem at hand, either by progressively removing variables, or by solving a growing sequence of smaller problems. For both of these techniques, significant variables are identified by convex duality. In this paper, we show that the dual iterates of a GLM exhibit a Vector AutoRegressive (VAR) behavior after sign identification, when the primal problem is solved with proximal gradient descent or cyclic coordinate descent. Exploiting this regularity, one can construct dual points that offer tighter control of optimality, enhancing the performance of screening rules and helping to design a competitive working set algorithm.
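To make the idea of constructing better dual points concrete, the following is a minimal, hypothetical sketch (not the authors' implementation, nor the paper's exact algorithm) for the Lasso, a special case of sparse GLM: the last few residuals, which play the role of dual iterates, are combined by an Anderson-style extrapolation motivated by their approximately autoregressive behavior, then rescaled to be dual feasible so that the resulting duality gap can be compared with the one obtained from the plain rescaled residual. All names (`extrapolate_dual`, `make_feasible`, the ISTA loop, the random data) are illustrative assumptions.

```python
import numpy as np

def dual_objective(y, theta, lam):
    # Lasso dual objective at a feasible point theta:
    # D(theta) = 0.5 ||y||^2 - 0.5 lam^2 ||theta - y/lam||^2
    return 0.5 * np.linalg.norm(y) ** 2 - 0.5 * lam ** 2 * np.linalg.norm(theta - y / lam) ** 2

def make_feasible(X, r, lam):
    # Rescale a residual r so that the dual constraint ||X^T theta||_inf <= 1 holds.
    return r / max(lam, np.max(np.abs(X.T @ r)))

def extrapolate_dual(X, residuals, lam):
    # Combine the K most recent residuals with coefficients summing to one,
    # chosen by a small least-squares problem on successive differences
    # (Anderson-style extrapolation, motivated by the VAR behavior of dual iterates).
    R = np.column_stack(residuals)          # shape (n_samples, K)
    U = np.diff(R, axis=1)                  # successive differences
    try:
        # (regularization of U.T @ U against near-singularity omitted for brevity)
        z = np.linalg.solve(U.T @ U, np.ones(U.shape[1]))
        c = z / z.sum()
        r_acc = R[:, 1:] @ c                # extrapolated residual
    except np.linalg.LinAlgError:
        r_acc = R[:, -1]                    # fall back to the last residual
    return make_feasible(X, r_acc, lam)

# Tiny demonstration on random data, with residuals produced by plain ISTA
# (proximal gradient descent for the Lasso).
rng = np.random.default_rng(0)
n, p, K = 50, 100, 5
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 0.5 * np.max(np.abs(X.T @ y))

w = np.zeros(p)
L = np.linalg.norm(X, ord=2) ** 2           # Lipschitz constant of the gradient
residuals = []
for _ in range(200):
    w = w - X.T @ (X @ w - y) / L           # gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0)  # soft-thresholding
    residuals.append(y - X @ w)
    residuals = residuals[-K:]

theta_res = make_feasible(X, residuals[-1], lam)
theta_acc = extrapolate_dual(X, residuals, lam)
print("dual objective, rescaled residual:", dual_objective(y, theta_res, lam))
print("dual objective, extrapolated point:", dual_objective(y, theta_acc, lam))
```

A higher dual objective means a smaller duality gap, i.e. a tighter certificate of optimality, which is what screening rules and working set strategies rely on.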
Domains
Mathematics [math]

Origin: Files produced by the author(s)