# LensFlare

LensFlare is an educational deep learning library for understanding neural networks. The code is based on work from the Coursera deeplearning.ai course.

## Features
- TensorFlow 2 with GradientTape - Low-level API for explicit gradient computation
- Metal GPU acceleration on Apple Silicon Macs
- sklearn-style interface with `fit()`, `predict()`, `transform()`
- Pure NumPy implementation for maximum educational transparency
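The pure NumPy path is what makes the library readable end to end: every matrix multiply is visible. As a rough illustration of that style (a minimal sketch of a two-layer forward pass, not LensFlare's actual internals), the core of such a network fits in a few lines:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, params):
    """Minimal two-layer forward pass: linear -> ReLU -> linear -> sigmoid."""
    Z1 = params["W1"] @ X + params["b1"]
    A1 = np.maximum(0, Z1)                 # ReLU activation
    Z2 = params["W2"] @ A1 + params["b2"]
    return sigmoid(Z2)                     # output probabilities in (0, 1)

rng = np.random.default_rng(0)
params = {
    "W1": rng.standard_normal((4, 2)) * 0.01, "b1": np.zeros((4, 1)),
    "W2": rng.standard_normal((1, 4)) * 0.01, "b2": np.zeros((1, 1)),
}
X = rng.standard_normal((2, 5))  # 2 features, 5 examples (features-first layout)
A2 = forward(X, params)
print(A2.shape)  # (1, 5)
```

Note the features-first data layout (`(n_features, n_examples)`), which is also why the Quick Start below passes `X_train.shape[0]` as the input dimension.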
## Installation

```bash
pip install -e .

# For Apple Silicon GPU acceleration
pip install -e ".[metal]"
```

## Quick Start

```python
from lensflare import TfNNClassifier, load_moons_dataset, check_gpu_available, plot_decision_boundary

# Check for Metal GPU (Apple Silicon)
check_gpu_available()

# Load the moons dataset
X_train, y_train = load_moons_dataset(n_samples=300, noise=0.2, seed=42, plot=True)

# Define network architecture: 2 inputs -> 64 -> 32 -> 16 -> 1 output
layers_dims = [X_train.shape[0], 64, 32, 16, 1]

# Create classifier with Adam optimizer
clf = TfNNClassifier(
    layers_dims=layers_dims,
    optimizer="adam",
    alpha=0.01,
    lambd=0.01,
    keep_prob=0.9,
    num_epochs=2000,
    print_cost=True,
)

# Train the model
clf.fit(X_train, y_train, seed=1)

# Get predictions
y_pred_train = clf.transform(X_train, y_train)
```

Example output:

```
TensorFlow GPU devices available: 1
 - /physical_device:GPU:0
Cost after epoch 0: 1.199636
Cost after epoch 1000: 0.178848
Training Accuracy: 0.96
```
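Decision-boundary plots of this kind are typically built by scoring a dense mesh grid over the feature space and contouring the predicted labels. A hedged sketch of that idea (using a hypothetical `predict_stub` stand-in for the trained classifier, not `plot_decision_boundary`'s actual code):

```python
import numpy as np

def predict_stub(X):
    # Hypothetical stand-in for a trained classifier's predict:
    # labels points inside the unit circle as class 1
    return (X[0] ** 2 + X[1] ** 2 < 1.0).astype(int)

# Build a grid over feature space and score every grid point
xs = np.linspace(-2, 2, 100)
ys = np.linspace(-2, 2, 100)
xx, yy = np.meshgrid(xs, ys)
grid = np.stack([xx.ravel(), yy.ravel()])  # (2, 10000), features-first like X_train
zz = predict_stub(grid).reshape(xx.shape)
print(zz.shape)  # (100, 100)
# plt.contourf(xx, yy, zz) would then shade the two regions
```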
```python
# Plot decision boundary
plot_decision_boundary(clf, X_train, y_train)
```

For even more educational transparency, use the pure NumPy classifier:
```python
from lensflare import NpNNClassifier

np_clf = NpNNClassifier(
    layers_dims=[X_train.shape[0], 32, 16, 1],
    optimizer="adam",
    alpha=0.01,
    lambd=0.01,
    num_epochs=2000,
    print_cost=True,
)
np_clf.fit(X_train, y_train, seed=1)
```

## Requirements

- Python >= 3.12
- TensorFlow >= 2.15, <= 2.18.1
- NumPy, Matplotlib, scikit-learn
## License

MIT


