
@Justjustifyjudge
In an environment without xformers, line 1102 of autoencoder.py forces every `attn_mode` to `"softmax"`. In fact, only `"softmax-xformers"` needs to fall back to `"softmax"`; the other modes should be left untouched.
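The intended fallback can be sketched as below. This is a minimal illustration, not the repository's actual code: `resolve_attn_mode` and `XFORMERS_IS_AVAILABLE` are hypothetical stand-ins for the logic around line 1102.

```python
# Hypothetical sketch of the proposed fix: only the xformers-specific
# mode is downgraded when xformers is missing; all other modes pass
# through unchanged.
XFORMERS_IS_AVAILABLE = False  # assumption: xformers could not be imported


def resolve_attn_mode(attn_mode: str) -> str:
    # Before the fix, every mode was rewritten to "softmax" here.
    if attn_mode == "softmax-xformers" and not XFORMERS_IS_AVAILABLE:
        return "softmax"
    return attn_mode
```

For example, `resolve_attn_mode("softmax-xformers")` yields `"softmax"`, while `resolve_attn_mode("vanilla")` is returned unchanged.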

At line 471 of autoencoder.py, the `CrossAttention` class was not designed to accept a `layer` parameter, so passing one raised an error. This has been fixed.
