I am very happy to see that someone is implementing adaptive Lasso in Python (#169)! It would be great if celer also implemented the more general LLA algorithm for any folded concave penalty; see One-step sparse estimates in nonconcave penalized likelihood models (Zou and Li, 2008) and Strong oracle optimality of folded concave penalized estimation (Fan et al., 2014). The LLA algorithm is a mild but statistically very nice generalization of AdaptiveLasso.
The main differences between the general LLA algorithm and AdaptiveLasso are:
- LLA typically uses a better initialization, e.g. a Lasso solution or simply 0, instead of the least squares solution
- LLA allows for different penalties (e.g. with SCAD, the LLA algorithm satisfies the desirable strong oracle property); a sketch of the resulting weights follows this list
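For intuition, here is a minimal sketch of the SCAD derivative that would supply those Lasso weights. The function name and the default a = 3.7 (the value suggested by Fan and Li, 2001) are my own choices for illustration, not celer code:

```python
import numpy as np

def scad_derivative(beta, lam, a=3.7):
    """g'_lambda(|beta_j|): SCAD derivative, used as Lasso weights.

    Equals lam on [0, lam], decays linearly to 0 on [lam, a*lam],
    and is 0 beyond a*lam (so large coefficients are unpenalized).
    """
    t = np.abs(beta)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))
```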
The LLA algorithm should be fairly straightforward to implement, though I'm not yet very familiar with celer's backend.
LLA algorithm sketch

User input:
- tuning parameter \lambda
- concave penalty function g_{\lambda} (e.g. SCAD, MCP)
- initial value \beta^0
- stopping criterion: either A) stop after s = 1 step (the so-called "one-step estimator") or B) stop at convergence

for s = 1, 2, ...
    w^s = Lasso weights at the current guess, i.e. w^s_j = g'_{\lambda}(|\beta^{s-1}_j|)
    \beta^s = solution of the weighted Lasso problem with weights w^s
    check the stopping criterion
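For concreteness, a minimal runnable sketch of that loop, under my own assumptions rather than celer's API: the weighted Lasso step is emulated by rescaling the columns of X and calling scikit-learn's Lasso, and the clipping of zero weights is a simplification (a proper implementation would leave those features unpenalized). `penalty_derivative` is e.g. the `scad_derivative` sketched above.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lla(X, y, lam, penalty_derivative, beta0=None, max_steps=100, tol=1e-6):
    """LLA: iteratively reweighted Lasso.

    max_steps=1 gives the one-step estimator of Zou & Li (2008).
    """
    n_samples, n_features = X.shape
    beta = np.zeros(n_features) if beta0 is None else np.asarray(beta0, float).copy()
    for _ in range(max_steps):
        # w^s_j = g'_lambda(|beta^{s-1}_j|)
        w = penalty_derivative(beta, lam)
        # Weighted Lasso via column rescaling: with columns X_j / w_j and
        # coefficients c_j = w_j * beta_j, sklearn's alpha * ||c||_1 penalty
        # becomes sum_j w_j |beta_j| for alpha = 1.
        w_safe = np.maximum(w, 1e-8)  # simplification: clip zero weights
        lasso = Lasso(alpha=1.0, fit_intercept=False).fit(X / w_safe, y)
        beta_new = lasso.coef_ / w_safe
        if np.max(np.abs(beta_new - beta)) < tol:  # stopping criterion B
            return beta_new
        beta = beta_new
    return beta
```

For the one-step estimator (criterion A) one would pass max_steps=1 and, e.g., a plain Lasso solution as beta0.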