Focuses on parameter isolation methods for continual learning, where each task is assigned a dedicated binary mask or subnetwork over a shared backbone to prevent catastrophic forgetting. Implements Hard Attention to the Task (HAT), Supermask Superposition (SupSup), and Piggyback, with visualization tools and metrics for task overlap and capacity usage.
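The core idea shared by these methods is a per-task mask that gates a frozen set of shared weights, so training a new task never overwrites parameters used by earlier ones. Below is a minimal Piggyback-style sketch, not taken from this repository: the class names (`MaskedLinear`, `BinarizeSTE`) and all hyperparameters are hypothetical, and PyTorch is assumed. Real-valued mask scores are thresholded to a binary mask in the forward pass, while gradients reach the scores through a straight-through estimator.

```python
# Hypothetical sketch of Piggyback-style per-task masking (not the repo's API).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Threshold real-valued scores to a {0,1} mask; identity gradient."""

    @staticmethod
    def forward(ctx, scores):
        return (scores > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient to the scores unchanged.
        return grad_output


class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, num_tasks):
        super().__init__()
        # Shared backbone weights, frozen so no task can overwrite them.
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        self.weight.requires_grad_(False)
        # One learnable real-valued mask per task; only the active
        # task's scores receive gradients during that task's training.
        self.mask_scores = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(out_features, in_features))
             for _ in range(num_tasks)]
        )

    def forward(self, x, task_id):
        mask = BinarizeSTE.apply(self.mask_scores[task_id])
        return F.linear(x, self.weight * mask)


layer = MaskedLinear(16, 8, num_tasks=3)
out = layer(torch.randn(4, 16), task_id=0)  # select task 0's subnetwork
```

SupSup follows the same pattern but keeps the backbone at its random initialization and learns only the masks, while HAT instead learns soft, unit-level attention masks annealed toward binary during training.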