We have chosen to use Lightning Fabric over PyTorch Lightning for this project.
Why Fabric? Fabric enables a "build-your-own-loop" approach, which is essential for the custom training requirements of neural compression (e.g., a separate optimization step for the entropy model's auxiliary loss, custom rate-distortion loss handling). It provides the necessary control without the "magic" or restrictive structure of a full LightningModule.
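To illustrate the kind of loop Fabric lets you own, here is a minimal sketch of the dual-optimizer pattern common in neural compression (a rate-distortion step plus a separate auxiliary-loss step). It uses plain PyTorch; the model, loss terms, and parameter split are illustrative stand-ins, not this project's actual code.

```python
import torch
import torch.nn as nn

class ToyCompressor(nn.Module):
    """Stand-in compression model with a separate entropy-model parameter."""
    def __init__(self):
        super().__init__()
        self.transform = nn.Linear(8, 8)              # stand-in analysis/synthesis transform
        self.quantile = nn.Parameter(torch.ones(1))   # stand-in entropy-model parameter

    def forward(self, x):
        x_hat = self.transform(x)
        rate = x_hat.abs().mean()                     # stand-in bitrate estimate
        distortion = ((x - x_hat) ** 2).mean()        # MSE distortion
        return rate, distortion

    def aux_loss(self):
        # Auxiliary loss trained separately from the rate-distortion objective,
        # as in CompressAI-style entropy models.
        return (self.quantile ** 2).sum()

model = ToyCompressor()
main_params = [p for n, p in model.named_parameters() if n != "quantile"]
opt = torch.optim.Adam(main_params, lr=1e-3)
aux_opt = torch.optim.Adam([model.quantile], lr=1e-3)

lmbda = 0.01                                          # rate-distortion trade-off weight
x = torch.randn(4, 8)

# One training step: rate-distortion loss on the main parameters...
opt.zero_grad()
rate, distortion = model(x)
rd_loss = rate + lmbda * distortion
rd_loss.backward()
opt.step()

# ...then a separate step for the auxiliary loss on its own optimizer.
aux_opt.zero_grad()
aux = model.aux_loss()
aux.backward()
aux_opt.step()
```

In a full LightningModule this two-optimizer structure requires configuring `automatic_optimization=False` and fitting the steps into framework hooks; with Fabric the loop above stays ordinary Python.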
Trade-offs: Fabric does not have a built-in callback system (like EarlyStopping or ModelCheckpoint). These features must be implemented manually in the training loop, as seen in tinify/cli/train.py.
Decision: Stick with Fabric for the transparency and granular control it offers, which outweigh the convenience of pre-built callbacks for this specific use case.