Conversation

@kwen2501 (Contributor) commented Aug 5, 2024

What does this PR do?

Non-persistent buffers are not saved in the state dict.

In the case of meta init, loading a state dict from a checkpoint can fill in parameters and persistent buffers, but we still need a way to initialize non-persistent buffers.

This PR does so by registering each such buffer's init function against the buffer's FQN and attaching the resulting callback dict to the model.

For how the init callbacks can be used, please refer to the init_buffers utility in this PR:
pytorch/PiPPy#1135
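For readers unfamiliar with the pattern, here is a minimal sketch of how such a callback dict could be consumed to materialize non-persistent buffers after meta-device init. The attribute name `buf_init_callbacks` follows this PR; the utility name and the callback signature (a function taking a target device and returning the buffer tensor) are assumptions, not the actual PiPPy implementation:

```python
import torch
import torch.nn as nn


def init_buffers(model: nn.Module, device: torch.device) -> None:
    # Sketch only: walk the FQN -> init-fn dict attached to the model and
    # re-register each non-persistent buffer on a real device.
    callbacks = getattr(model, "buf_init_callbacks", {})
    for fqn, init_fn in callbacks.items():
        # e.g. "model.rotary_emb.inv_freq" -> submodule "model.rotary_emb", buffer "inv_freq"
        *module_path, buf_name = fqn.split(".")
        submodule = model.get_submodule(".".join(module_path))
        buf = init_fn(device)  # assumed signature: device -> torch.Tensor
        submodule.register_buffer(buf_name, buf, persistent=False)
```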

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@kwen2501 changed the title from "Add buffer init callbacks to llama" to "Register buffer init callbacks in llama" on Aug 6, 2024
@amyeroberts (Contributor)

Hi @kwen2501, thanks for opening this PR!

cc'ing @muellerzr as this likely overlaps with accelerate's handling of weight loading, and @ArthurZucker re Llama

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@muellerzr (Contributor) left a comment

Thanks, nice job finding this fix. cc @SunMarc

@ArthurZucker (Collaborator) left a comment

Thanks! I remember this is required for some specific usage with torch that we talked about! Let's iterate 🤗

Comment on lines +1137 to +1141
# Create buffer init callbacks by extending the one from `LlamaModel`,
# i.e. appending a prefix to all buffer FQNs.
for key, val in self.model.buf_init_callbacks.items():
    new_key = ".".join(["model", key])
    self.buf_init_callbacks[new_key] = val
Review comment (Collaborator):
IMO this should go in the LlamaPreTrainedModel at best!
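If it helps, one way that suggestion could look is sketched below. This is an illustration under my own assumptions, not code from this PR: the helper name and the walk over submodules are hypothetical.

```python
from transformers import PreTrainedModel


class LlamaPreTrainedModel(PreTrainedModel):
    # ... existing class body ...

    def _collect_buf_init_callbacks(self):
        # Hypothetical helper: gather each submodule's `buf_init_callbacks`
        # under fully qualified names, so heads like LlamaForCausalLM do not
        # have to repeat the prefixing loop shown in the diff above.
        collected = {}
        for name, module in self.named_modules():
            if module is self:
                continue
            for key, fn in getattr(module, "buf_init_callbacks", {}).items():
                collected[f"{name}.{key}"] = fn
        self.buf_init_callbacks = collected
```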

self.original_inv_freq = self.inv_freq

# Save buffer init callback
LlamaModel.buf_init_callbacks.setdefault("rotary_emb.inv_freq", init_inv_freq)
Review comment (Collaborator):

Can this not be called in LlamaModel directly? It's a bit weird for us to register something like this at the class level.
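For comparison, here is a sketch of the instance-level registration being asked about. The `init_inv_freq` body below just recomputes the standard RoPE inverse frequencies and is illustrative; the real callback is the one defined in the diff.

```python
import torch
import torch.nn as nn


class LlamaModel(nn.Module):  # simplified stand-in for the real class
    def __init__(self, head_dim: int = 128, rope_theta: float = 10000.0):
        super().__init__()
        self.buf_init_callbacks = {}  # per-instance, instead of a class attribute

        def init_inv_freq(device: torch.device) -> torch.Tensor:
            # Standard RoPE inverse frequencies, computed on a real device.
            exponents = torch.arange(0, head_dim, 2, dtype=torch.float32, device=device) / head_dim
            return 1.0 / (rope_theta ** exponents)

        # Register against the buffer's FQN relative to this module.
        self.buf_init_callbacks["rotary_emb.inv_freq"] = init_inv_freq
```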
