I was testing the speed of linear attention, but found that it is slower than the standard attention implemented in PyTorch. Could you help me understand this? Is it because the PyTorch implementation includes FlashAttention?
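For reference, here is a minimal benchmark sketch of the kind of comparison I ran (not my exact test script). It assumes the common `elu(x) + 1` feature map for linear attention and compares it against `torch.nn.functional.scaled_dot_product_attention`, which on recent PyTorch versions can dispatch to a fused FlashAttention kernel on GPU. Shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # Assumed kernel feature map phi(x) = elu(x) + 1 (keeps features positive).
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    # O(N * d^2): contract keys with values first, then apply queries.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

def benchmark(fn, *args, iters=50):
    # Warm up, then time with CUDA events for accurate GPU measurements.
    for _ in range(5):
        fn(*args)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # average milliseconds per call

if __name__ == "__main__":
    b, h, n, d = 8, 8, 2048, 64  # assumed batch, heads, sequence length, head dim
    q, k, v = (torch.randn(b, h, n, d, device="cuda", dtype=torch.float16)
               for _ in range(3))
    print("linear attention:", benchmark(linear_attention, q, k, v), "ms")
    print("sdpa (may use FlashAttention):",
          benchmark(F.scaled_dot_product_attention, q, k, v), "ms")
```

With a setup like this, the fused kernel behind `scaled_dot_product_attention` can easily outpace an einsum-based linear attention at moderate sequence lengths, even though linear attention has better asymptotic complexity.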