
Feature request: the LLA algorithm for general folded concave penalties #185

@idc9

Description

I am very happy to see that someone is implementing adaptive Lasso in Python (#169)! It would be great if celer also implemented the more general LLA algorithm for any folded concave penalty; see One-step sparse estimates in nonconcave penalized likelihood models (Zou and Li, 2008) and Strong oracle optimality of folded concave penalized estimation (Fan et al., 2014). The LLA algorithm is a mild but statistically very nice generalization of AdaptiveLasso.

The main differences between the general LLA algorithm and AdaptiveLasso are:

  1. LLA typically uses a better initializer, e.g. a Lasso solution or simply 0, instead of the least squares solution
  2. LLA allows for different penalties (e.g. with SCAD, the LLA algorithm satisfies the desirable strong oracle property; see the weight-function sketch below)
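
For concreteness, here is a minimal sketch of the SCAD penalty derivative, which is what supplies the LLA weights. The function name `scad_weight` and the default `a=3.7` (Fan and Li's suggested value) are my own choices for illustration, not celer API:

```python
import numpy as np


def scad_weight(beta, lam, a=3.7):
    """Derivative g'_lam(|beta|) of the SCAD penalty, used as LLA weights.

    a = 3.7 is the default suggested by Fan and Li (2001).
    """
    t = np.abs(beta)
    # g'(t) = lam                            for t <= lam
    #       = max(a * lam - t, 0) / (a - 1)  for t >  lam
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))
```

Note that coordinates with |\beta_j| >= a \lambda receive weight 0 (no penalty), while small coordinates are penalized at the full Lasso rate \lambda; this unequal penalization is what drives the oracle property.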

The LLA algorithm should be fairly straightforward to implement, though I'm not yet very familiar with celer's backend.

LLA algorithm sketch

User input:

  1. tuning parameter \lambda
  2. concave penalty function g_{\lambda} (e.g. SCAD, MCP)
  3. initial value, \beta^0
  4. stopping criterion: either (A) stop after s = 1 step (the so-called "one-step estimator") or (B) stop at convergence

for s = 1, 2, ...

w^s = compute the Lasso weights at the current guess, i.e. w^s_j = g'_{\lambda}(|\beta^{s-1}_j|)

\beta^{s} = solve the weighted Lasso problem using weights w^s

check the stopping criterion
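
Below is a minimal Python sketch of this loop, assuming the `scad_weight` helper from above. I'm not sure whether celer's weighted Lasso from #169 exposes a `weights` argument, so here I emulate the weighted subproblem by rescaling the columns of X (solve a standard Lasso on X_j / w_j, then map back \beta_j = \tilde{\beta}_j / w_j); the `lla` helper, its signature, and the clipping of zero weights are illustrative assumptions, not celer API:

```python
import numpy as np
from celer import Lasso


def lla(X, y, lam, weight_fn, beta0=None, max_steps=1, tol=1e-6):
    """LLA sketch: repeatedly solve weighted Lasso subproblems.

    weight_fn(beta, lam) returns the penalty derivative g'_lam(|beta|)
    coordinate-wise (e.g. scad_weight above). max_steps=1 gives the
    one-step estimator; a larger value plus tol gives stop-at-convergence.
    """
    n_samples, n_features = X.shape
    beta = np.zeros(n_features) if beta0 is None else np.asarray(beta0, float).copy()
    for s in range(max_steps):
        # Step 1: weights at the current guess.
        w = weight_fn(beta, lam)
        # Step 2: weighted Lasso via column rescaling. A zero weight means
        # "unpenalized", which the rescaling cannot represent, so clip it;
        # a real implementation would solve the weighted problem directly.
        w = np.maximum(w, 1e-10)
        clf = Lasso(alpha=1.0, fit_intercept=False).fit(X / w, y)
        beta_new = clf.coef_ / w
        # Step 3: stopping criterion.
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta
```

With beta0 = None the first pass uses w_j = g'_{\lambda}(0) = \lambda for every coordinate, so the one-step estimator amounts to "Lasso, then one reweighted Lasso", matching the Zou and Li (2008) recipe.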
