[differential_privacy] Learning rates used for Adaptive Clipping experiments #59

@VasundharaAgarwal

Description

Hi,

I am trying to reproduce the experiments in "Differentially Private Learning with Adaptive Clipping" (2021), whose source code is provided under federated/differential_privacy. The paper does not report the final server learning rates used for DP-FedAvgM with clipping enabled. It simply states the following in Section 3.1:

    Therefore, for all approaches with clipping—fixed or adaptive—we search over a small grid of five server learning rates, scaling the values in Table 1 by {1, 10^(1/4), 10^(1/2), 10^(3/4), 10}. For all configurations, we report the best performing model whose server learning rate was chosen from this small grid on the validation set.

It is not computationally feasible for me to search for the optimal server learning rate in every possible configuration, so I was hoping you could specify the learning rates that were used to train the best performing models. Thank you.
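For concreteness, the five-point grid described in the quoted passage can be generated as below. This is only a sketch of the search space, not the paper's code; the base rate here is a placeholder, since the per-task base values live in Table 1 and the final chosen rates are exactly what this issue asks for.

```python
# Sketch of the server-learning-rate grid from Section 3.1.
# base_server_lr is a placeholder; the actual base values are
# task-specific (Table 1 of the paper) and not reproduced here.
base_server_lr = 1.0

# Scale factors {1, 10^(1/4), 10^(1/2), 10^(3/4), 10}.
scale_factors = [10 ** (k / 4) for k in range(5)]
lr_grid = [base_server_lr * s for s in scale_factors]

print([round(lr, 4) for lr in lr_grid])
```

Each configuration would then be trained once per grid point and the best model selected on the validation set, which is the five-fold cost the issue describes as infeasible.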
