Change aggregation to aggregate optimizer parameters #296

@ehoelzl

Description

Currently, all aggregation is done at the model level: before the optimization step, we aggregate the model weights or gradients.

However, some optimizers keep internal state, and distributed training with such optimizers can require this additional information to be shared as well. That is why we should aggregate the optimizer's parameters/state dict instead of the model's gradients/weights.
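A minimal sketch of the idea, assuming a PyTorch setup where `torch.distributed` is already initialized; the `aggregate_optimizer_state` helper below is hypothetical (not part of the current codebase) and simply averages every floating-point tensor in the optimizer state (e.g. Adam's `exp_avg` / `exp_avg_sq`) across workers instead of averaging gradients or weights:

```python
import torch
import torch.distributed as dist


def aggregate_optimizer_state(optimizer: torch.optim.Optimizer) -> None:
    """Average every floating-point tensor in the optimizer state across workers.

    Hypothetical helper: rather than all-reducing gradients or model weights,
    we all-reduce the optimizer's own state so every worker continues from
    the same optimizer state after each step.
    """
    world_size = dist.get_world_size()
    for param_state in optimizer.state.values():
        for value in param_state.values():
            if torch.is_tensor(value) and value.is_floating_point():
                # Sum the state tensor (e.g. Adam's exp_avg / exp_avg_sq)
                # over all workers, then divide to get the average.
                dist.all_reduce(value, op=dist.ReduceOp.SUM)
                value.div_(world_size)
```

In this sketch the helper would be called on each worker after its local `optimizer.step()`, so all workers proceed with an identical optimizer state; how this integrates with the existing weight/gradient aggregation is exactly what this issue is about.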

Labels

conceptual-change: Changes related to how models are trained
