This repository was archived by the owner on Jul 22, 2024. It is now read-only.

Running FedMA with large input data shape #5

@jefersonf


Hi @hwang595, a few weeks ago I asked some questions in another issue thread about a problem I had when trying to train a model with an input image shape greater than or equal to 224x224. Since then, I tried reducing my problem's dimensions to the default size, i.e. 32x32, and it worked well! But when I run with 224x224, training still gets stuck at the same point.

So I'm gonna ask my questions here again:

  • Is there a relationship between the training input size and the FedMA communication process? If so, what can we do about it?
  • When adding a different model, which parts of the code do I need to take care of, besides changing, for example, the input dimensions to 1x224x224?

Note: As I'm working with medical images, it is critical to resize them.
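To illustrate why the input size might matter (a sketch with a hypothetical conv stack in PyTorch, not FedMA's actual model): the flattened feature size entering the first fully-connected layer grows quadratically with the input's spatial dimensions, so the weight matrices FedMA has to match and communicate become much larger at 224x224 than at 32x32.

```python
import torch
import torch.nn as nn

# Hypothetical feature extractor: two conv+pool stages, each pool halves H and W.
features = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

for size in (32, 224):
    x = torch.zeros(1, 1, size, size)
    # Flattened width of the first linear layer that would follow.
    flat = features(x).flatten(1).shape[1]
    print(size, flat)  # 32 -> 2048, 224 -> 100352 (49x larger)
```

So any code path that hard-codes the classifier's input width (or the matching cost over those weights) needs to be adjusted when the input shape changes.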

Thanks for the great work!
