Adversarial perturbation system for images that targets image scrapers harvesting data for unauthorized AI use. Applies imperceptible perturbations via a PGD attack aimed at the CLIP vision model. The attack is constrained by an LPIPS perceptual loss and a saliency mask to preserve visual fidelity.
Hands-on AI security workshop by GDSC Asia Pacific University – explore the fundamentals of attacking machine learning systems through white-box and black-box techniques. Learn to evade image classifiers and manipulate LLM behavior using real-world tools and methods.
Project developed for the 'Optimization for Data Science' course at the University of Padua. It provides a Python implementation of Frank-Wolfe methods for recommender systems.
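For orientation, a minimal sketch of the vanilla Frank-Wolfe method on the probability simplex, with a least-squares objective as a stand-in for a rating-fit problem. This is illustrative only; the project's actual variants, objectives, and constraint sets may differ.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Frank-Wolfe over the probability simplex: the linear minimization
    oracle simply picks the vertex with the smallest gradient entry."""
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0      # LMO solution: best simplex vertex
        gamma = 2.0 / (k + 2.0)    # standard diminishing step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Toy objective: f(x) = 0.5 * ||A x - b||^2 (hypothetical rating fit)
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
grad = lambda x: A.T @ (A @ x - b)

x = frank_wolfe_simplex(grad, np.full(4, 0.25))
print(x.sum())  # iterates remain on the simplex
```

The appeal for recommender systems is that each step only needs the linear minimization oracle, which is cheap for structured sets such as the simplex or a nuclear-norm ball, so no projection step is ever required.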