dRF

Code for the paper: Deterministic equivalent and error universality of deep random features learning (link to paper)


Ridge regression

(Fig. 1)

  • Ridge.ipynb provides a Jupyter notebook implementation of the theoretical characterization of Appendix D.2 for the test error $\epsilon_g$ achieved by a depth $L=2$ dRF, with $\sigma=\tanh$ activation and a single-layer target ($L_\star=1$, $\sigma_\star=\mathrm{sign}$). To vary the depth, the coefficients $r_\ell, \kappa_1^\ell, \kappa_\star^\ell$ (Eqs. (10)-(11) of the paper) must first be evaluated; a sketch of this computation is given below.
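As a rough guide, these coefficients can be evaluated by one-dimensional Gaussian quadrature. The sketch below assumes the standard Gaussian-equivalence recursion ($\kappa_1^\ell = \mathbb{E}[z\sigma(z)]/r_{\ell-1}$, $(\kappa_\star^\ell)^2 = \mathbb{E}[\sigma(z)^2] - (\kappa_1^\ell)^2 r_{\ell-1}$, with $r_\ell = \mathbb{E}[\sigma(z)^2]$ propagated forward); this form and the helper gaussian_moments are illustrative, not part of the repository, and should be checked against Eqs. (10)-(11) of the paper.

import numpy as np
from scipy.integrate import quad

def gaussian_moments(sigma, var):
    # E[z*sigma(z)] and E[sigma(z)^2] for z ~ N(0, var), by numerical quadrature
    gauss = lambda z: np.exp(-z**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    m1 = quad(lambda z: z * sigma(z) * gauss(z), -np.inf, np.inf)[0]
    m2 = quad(lambda z: sigma(z) ** 2 * gauss(z), -np.inf, np.inf)[0]
    return m1, m2

# Layer-wise recursion (assumed form; cross-check with Eqs. (10)-(11))
r = 1.0                                   # variance entering the first layer
coefficients = []
for ell in range(2):                      # depth L = 2
    m1, m2 = gaussian_moments(np.tanh, r)
    kappa1 = m1 / r                       # linear (Hermite-1) component
    kappa_star = np.sqrt(m2 - kappa1 ** 2 * r)  # nonlinear residue
    coefficients.append((r, kappa1, kappa_star))
    r = m2                                # post-activation variance fed forward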

Logistic regression

(Fig. 2)

  • Logistic.ipynb provides a Jupyter notebook implementation of the theoretical characterization of Appendix D.3 for the classification error $\epsilon_g$ achieved by a depth $L=2$ dRF, with $\sigma=\tanh$ activation and a single-layer target ($L_\star=1$, $\sigma_\star=\mathrm{sign}$). To vary the depth, the coefficients $r_\ell, \kappa_1^\ell, \kappa_\star^\ell$ (Eqs. (10)-(11) of the paper) must first be evaluated, as in the ridge case above; a sketch of how the resulting overlaps yield $\epsilon_g$ is given below.
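For reference, a minimal sketch of the final step, assuming the standard relation between overlaps and classification error for a sign target in the Gaussian-equivalence literature (the formula and the names m, q, rho are illustrative assumptions; cross-check with Appendix D.3):

import numpy as np

def classification_error(m, q, rho=1.0):
    # Misclassification rate for a sign teacher, given the teacher-student
    # overlap m, the student self-overlap q, and the teacher norm rho
    # (assumed relation: eps_g = arccos(m / sqrt(rho * q)) / pi).
    return np.arccos(m / np.sqrt(rho * q)) / np.pi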

Varying the depth

(Fig. 3)

  • Layerwise_ridge.ipynb reproduces Fig. 3, namely the test error of dRFs of increasing depth $L$. For instance, to plot the test errors of models up to depth $10$, run

import matplotlib.pyplot as plt

max_length = 10
for L in sorted(errors.keys()):
    if L <= max_length:  # only plot models up to depth max_length
        plt.plot(alphas, errors[L], label="layer " + str(L), linewidth=1.3)
plt.legend()
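Here errors (a dictionary mapping depth to the test-error curve) and alphas (the grid of sample-complexity values) are assumed to have been computed earlier in the notebook.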

Versions: These notebooks employ Python 3.12.
