Figure: On MNIST, (a) accuracy (96.80%) and loss vs. epochs for the MLP; (b) accuracy (98.24%) and loss vs. epochs for LeNet.
- Linear Regression
  - Self-made dataset: samples of a function with added Gaussian noise.
- Logistic Regression
  - Iris.
- KNN
  - CIFAR-10.
- MLP
  - MNIST.
  - CIFAR-10.
- LeNet[1]
  - MNIST.
  - CIFAR-10.
- LSTM
  - UCI HAR.
- GRU[2]
  - UCI HAR.
- Transformer[3]
  - WMT15. TODO
- Neural ODE[4]
  - MNIST. TODO
- VAE[5]
  - MNIST.
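To give a flavor of the from-scratch style used throughout, here is a minimal sketch of the first item in the list: linear regression fit to self-made data, a linear function with added Gaussian noise. The coefficients (slope 3, intercept 2) and noise scale are illustrative choices, not taken from the repo; only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Self-made data: y = 3x + 2 plus Gaussian noise (illustrative parameters)
x = rng.uniform(-1.0, 1.0, size=(100, 1))
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=(100, 1))

# Closed-form least squares: w = (X^T X)^{-1} X^T y
X = np.hstack([x, np.ones_like(x)])    # append a bias column of ones
w = np.linalg.solve(X.T @ X, X.T @ y)  # w = [slope, intercept]

print(w.ravel())  # approximately [3.0, 2.0]
```

The same fit could instead be done by gradient descent on the squared error, which is the approach that generalizes to the neural models later in the list.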
Last update: 2025.03.14.
236 text files.
135 unique files.
138 files ignored.
github.com/AlDanial/cloc v 1.98 T=0.05 s (2810.5 files/s, 307803.1 lines/s)
-------------------------------------------------------------------------------
Language             files        blank      comment         code
-------------------------------------------------------------------------------
Python                  33         1689         3297         3177
Jupyter Notebook        21            0         3947         1913
Text                     6            1            0          301
CSV                     68            0            0          203
Markdown                 5           40            0          198
TOML                     2            3            0           16
-------------------------------------------------------------------------------
SUM:                   135         1733         7244         5808
-------------------------------------------------------------------------------
[1] LeCun, Y., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W., & Jackel, L. (1989). Backpropagation Applied to Handwritten Zip Code Recognition. Neural Computation, 1(4), 541–551.
[2] Pascanu, R., Mikolov, T., & Bengio, Y. (2013). On the Difficulty of Training Recurrent Neural Networks. In Proceedings of the 30th International Conference on Machine Learning (ICML) (pp. 1310–1318).
[3] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, Ł., & Polosukhin, I. (2017). Attention is All You Need. In Advances in Neural Information Processing Systems (NeurIPS).
[4] Chen, T., Rubanova, Y., Bettencourt, J., & Duvenaud, D. (2018). Neural Ordinary Differential Equations. In Advances in Neural Information Processing Systems (NeurIPS).
[5] Kingma, D., & Welling, M. (2014). Auto-Encoding Variational Bayes. In International Conference on Learning Representations (ICLR).