Code for the paper: *Deterministic equivalent and error universality of deep random features learning* (link to paper)
(Fig. 1)
- `Ridge.ipynb` provides a Jupyter notebook implementation of the theoretical characterization of Appendix D.2 for the test error $\epsilon_g$ achieved by a depth $L=2$ dRF with $\sigma=\tanh$ activation and a single-layer target ($L_\star=1$, $\sigma_\star=\mathrm{sign}$). To vary the network depth, the coefficients $r_\ell, \kappa^\ell_1, \kappa^\ell_*$ (Eqs. (10)-(11)) must first be evaluated.
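These layer-wise coefficients can be evaluated numerically with Gauss-Hermite quadrature. The sketch below is a minimal illustration, assuming the usual Gaussian-equivalence definitions $\kappa^\ell_1 = \mathbb{E}[u\,\sigma(u)]/r_\ell$ and $(\kappa^\ell_*)^2 = \mathbb{E}[\sigma(u)^2] - (\kappa^\ell_1)^2 r_\ell$ for $u \sim \mathcal{N}(0, r_\ell)$, with the variance recursion $r_{\ell+1} = \mathbb{E}[\sigma(u)^2]$; the exact normalizations are those of Eqs. (10)-(11) in the paper, which may differ.

```python
# Minimal sketch (not necessarily the paper's exact normalization):
# Gaussian-equivalence coefficients for sigma = tanh via Gauss-Hermite quadrature.
import numpy as np

def ge_coefficients(sigma, L, r1=1.0, n_nodes=100):
    """Return lists (r, kappa1, kappa_star) for layers 1..L.

    Assumes kappa1^l = E[u sigma(u)] / r_l and
    (kappa_star^l)^2 = E[sigma(u)^2] - (kappa1^l)^2 r_l, with u ~ N(0, r_l),
    and the variance recursion r_{l+1} = E[sigma(u)^2].
    """
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2 * np.pi)  # normalize to an expectation over N(0, 1)
    r, k1, ks = [], [], []
    rl = r1
    for _ in range(L):
        u = np.sqrt(rl) * nodes
        m2 = np.sum(weights * sigma(u) ** 2)          # E[sigma(u)^2]
        kappa1 = np.sum(weights * u * sigma(u)) / rl  # E[u sigma(u)] / r_l
        r.append(rl)
        k1.append(kappa1)
        ks.append(np.sqrt(max(m2 - kappa1 ** 2 * rl, 0.0)))
        rl = m2  # post-activation variance feeds the next layer
    return r, k1, ks

r, kappa1, kappa_star = ge_coefficients(np.tanh, L=2)
```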
(Fig. 2)
- `Logistic.ipynb` provides a Jupyter notebook implementation of the theoretical characterization of Appendix D.3 for the classification error $\epsilon_g$ achieved by a depth $L=2$ dRF with $\sigma=\tanh$ activation and a single-layer target ($L_\star=1$, $\sigma_\star=\mathrm{sign}$). As for `Ridge.ipynb`, to vary the network depth the coefficients $r_\ell, \kappa^\ell_1, \kappa^\ell_*$ (Eqs. (10)-(11)) must first be evaluated.
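Once the fixed-point overlaps are known, the classification error of a sign target has a closed form. The snippet below is a hedged sketch assuming a standard parametrization (`m`: target-predictor overlap, `q`: predictor self-overlap, `rho`: target second moment); the notebook's exact conventions are those of Appendix D.3.

```python
# Sketch under assumed conventions: for jointly Gaussian zero-mean (target,
# predictor) fields, the sign-mismatch probability depends only on their correlation.
import numpy as np

def classification_error(m, q, rho=1.0):
    """epsilon_g = arccos(corr) / pi, with corr = m / sqrt(rho * q)."""
    corr = np.clip(m / np.sqrt(rho * q), -1.0, 1.0)
    return np.arccos(corr) / np.pi
```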
(Fig. 3)
- `Layerwise_ridge.ipynb` reproduces Fig. 3, namely the test error of dRFs of increasing depth, plotted layer by layer:

```python
import matplotlib.pyplot as plt

max_length = 10  # maximum network depth swept in the figure
# errors maps each depth L to its test-error curve over the sample ratios alphas
for L in errors.keys():
    plt.plot(alphas, errors[L], label="layer " + str(L), linewidth=1.3)
```
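The `errors` dictionary consumed above can be built by sweeping the depth. A minimal sketch follows, where `theoretical_test_error(alpha, L)` is a hypothetical helper standing in for the notebook's Appendix D.2 evaluation:

```python
import numpy as np

alphas = np.linspace(0.1, 10, 50)  # sample-to-dimension ratios
errors = {
    L: [theoretical_test_error(alpha, L) for alpha in alphas]  # hypothetical helper
    for L in range(1, max_length + 1)
}
```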
Versions: These notebooks employ Python 3.12.
