Convex relaxation for Combinatorial Penalties
Recent years have seen the emergence of the field of structured sparsity, which aims at identifying a model of small complexity given some a priori knowledge of its possible structure. Specifically, models with structured sparsity are models in which the set of non-zero parameters --- often corresponding to a set of selected variables --- is not only assumed to be small, but also to display structured patterns. Two important examples are group sparsity, where groups of parameters are simultaneously zero or non-zero, and hierarchical sparsity, where variables can only be selected following a prescribed partial order encoded by a directed acyclic graph.
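To make the group-sparsity case concrete, here is a minimal sketch (not from the talk) of the classical group-lasso penalty, the sum of Euclidean norms over a hypothetical partition of the variables into groups; regularizing with this penalty tends to zero out entire groups at once:

```python
import numpy as np

# Hypothetical structure: six variables partitioned into two groups.
groups = [[0, 1, 2], [3, 4, 5]]

def group_lasso_penalty(w, groups):
    """Sum of Euclidean norms of the sub-vectors w[g], one per group."""
    return sum(np.linalg.norm(w[g]) for g in groups)

# A vector whose first group is entirely zero: the penalty only
# charges for the second group, reflecting the structured pattern.
w = np.array([0.0, 0.0, 0.0, 1.5, -0.2, 0.7])
print(group_lasso_penalty(w, groups))  # ~1.667
```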
A common approach to the problem is to add to the empirical risk an implicit or explicit penalty on the structure of the non-zero pattern.
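In symbols, one standard way to write such a penalized formulation (the notation here is assumed, not taken from the talk) is

```latex
\min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n}
  \ell\bigl(y_i, \langle x_i, w \rangle\bigr)
  \;+\; \lambda \, F\bigl(\mathrm{supp}(w)\bigr),
```

where F is a set function from subsets of {1, ..., d} to the non-negative reals that scores the support supp(w) = {j : w_j ≠ 0}, and λ > 0 trades off data fit against structure.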
In this talk, I will consider a generic formulation in which allowed structures are encoded by a combinatorial penalty, and show that when it is combined with a continuous regularizer such as an Lp norm, a tightest convex relaxation can be constructed and used as a regularizer. The formulation considered makes it possible to treat in a unified framework several a priori disconnected approaches, such as norms based on overlapping groups, norms based on latent representations such as block-coding, and submodular functions, and to obtain generic consistency and support recovery results for the corresponding estimators obtained as minimizers of the regularized empirical risk.
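For intuition on the relaxation, consider the special case where the combinatorial penalty F is submodular and the continuous regularizer is the L-infinity norm: there the tightest convex relaxation is known to be the Lovász extension of F evaluated at the vector of absolute values (a result of Bach, 2010). A minimal sketch of that extension, with F a hypothetical Python callable on index sets:

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovász extension of the set function F at |w|.

    F maps a frozenset of indices to a real value, with F(frozenset()) == 0.
    For submodular F this gives the convex envelope of w -> F(supp(w))
    on the unit L-infinity ball.
    """
    z = np.abs(w)
    order = np.argsort(-z)            # indices sorted by decreasing |w_j|
    selected, prev, value = set(), 0.0, 0.0
    for j in order:
        selected.add(j)
        cur = F(frozenset(selected))
        value += z[j] * (cur - prev)  # marginal gain of adding index j
        prev = cur
    return value

# Sanity check: F(S) = |S| (cardinality) recovers the L1 norm.
w = np.array([0.5, -2.0, 0.0, 1.0])
print(lovasz_extension(lambda S: float(len(S)), w))  # 3.5 == ||w||_1
```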
