This project demonstrates how to spoof a face recognition model using adversarial perturbations. Using facenet-pytorch, we simulate a real-world biometric attack scenario where one person's face is subtly altered to match the identity of another.
- Model: FaceNet (InceptionResnetV1, pretrained on VGGFace2)
- Framework: PyTorch
- Attack Method: Gradient-based cosine similarity optimization
- Defenses: JPEG Compression, Gaussian Blur
The goal is to craft an adversarial example of Person B that the model misclassifies as Person A, despite the two faces being visually distinct.
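As a minimal sketch of the embedding and similarity logic, assuming the standard facenet-pytorch API (the `embed` helper, preprocessing choices, and file paths here are illustrative; the project's actual logic lives in `face_recognition_model.py`):

```python
import torch
import torch.nn.functional as F
from PIL import Image
from facenet_pytorch import InceptionResnetV1
from torchvision import transforms

# FaceNet (InceptionResnetV1) pretrained on VGGFace2, in inference mode
model = InceptionResnetV1(pretrained='vggface2').eval()

# FaceNet expects 160x160 inputs scaled to roughly [-1, 1]
preprocess = transforms.Compose([
    transforms.Resize((160, 160)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

def embed(path: str) -> torch.Tensor:
    """Return the 512-D FaceNet embedding for an image file."""
    img = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        return model(img)

emb_a = embed('images/person_a.jpg')
emb_b = embed('images/person_b.jpg')
print(f'Initial cosine similarity: {F.cosine_similarity(emb_a, emb_b).item():.4f}')
```

For two distinct identities this prints a low score, matching the 0.18 starting point in the results table below.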
```
adversarial-face-spoofing/
├── face_recognition_model.py   # Embedding + similarity logic
├── adversarial_attack.py       # Spoof generation and evaluation
├── defense.py                  # Input transformations (JPEG, blur)
└── images/                     # Input face images
    ├── person_a.jpg
    └── person_b.jpg
```
Attack progress, measured as the cosine similarity between the evolving adversarial image and Person A's embedding:

| Step | Cosine Similarity | Comment |
|---|---|---|
| 0 | 0.18 | Low initial similarity |
| 100 | 1.00 | Perfect spoof |
Final similarity scores:

```
Adversarial image: 1.0000
JPEG compressed:   0.4812
Gaussian blurred:  0.5377
```

✅ Defenses were successful: similarity dropped below the 0.6 threshold.
- Extract embeddings of Person A and Person B
- Iteratively adjust Person B's pixels to match Person A's embedding
- Use a cosine-similarity loss and optimize with Adam
- Clamp pixel values after each update to stay within valid image bounds (see the sketch below)
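A minimal sketch of this optimization loop, reusing the `model`, `preprocess`, and `embed` helpers from the sketch above (the learning rate is illustrative; the 100-step budget mirrors the results table):

```python
# Fixed target: Person A's embedding (computed under no_grad, so it is constant)
target = embed('images/person_a.jpg')

# Start from Person B and make the pixels trainable
adv = preprocess(Image.open('images/person_b.jpg').convert('RGB')).unsqueeze(0)
adv = adv.clone().requires_grad_(True)

optimizer = torch.optim.Adam([adv], lr=0.01)

for step in range(101):
    optimizer.zero_grad()
    emb = model(adv)  # gradients flow through the frozen network into the pixels
    loss = 1 - F.cosine_similarity(emb, target).mean()  # push cos_sim toward 1
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        adv.clamp_(-1.0, 1.0)  # keep pixels in the valid normalized range
    if step % 20 == 0:
        print(f'step {step:3d}  cosine similarity = {1 - loss.item():.4f}')
```

Only the input tensor is optimized; the network's weights are never touched, which is what makes this an input-space (evasion) attack rather than model tampering.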
- JPEG Compression: Removes minor pixel-level noise by re-encoding the image
- Gaussian Blur: Smooths the input to degrade high-frequency adversarial perturbations (both defenses are sketched below)
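Both transformations can be sketched with PIL alone (the quality and radius values are illustrative defaults, not necessarily those used in `defense.py`):

```python
import io
from PIL import Image, ImageFilter

def jpeg_compress(img: Image.Image, quality: int = 75) -> Image.Image:
    """Round-trip through a lossy JPEG encode, discarding fine perturbations."""
    buf = io.BytesIO()
    img.save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    return Image.open(buf).convert('RGB')

def gaussian_blur(img: Image.Image, radius: float = 2.0) -> Image.Image:
    """Low-pass filter that smooths away high-frequency adversarial noise."""
    return img.filter(ImageFilter.GaussianBlur(radius=radius))
```

Each defended image is then re-embedded and compared against Person A; the similarity drop reported above reflects this re-embedding step.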
Use Python 3.10; newer versions (3.12+) may break compatibility with facenet-pytorch or torch.

```bash
python3.10 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python adversarial_attack.py
```

Built by Gregory Apostle as part of an AI Security portfolio project.