## Usage <a id="Usage"></a>
**The core of the channel attention mechanism lies in the shape invariance between its input and output.** Therefore, we can easily embed this module at any point in a neural network to further improve the model's performance.
~~~python
import torch

from channel_attention import SEAttention

# 1D time-series data with shape (batch_size, channels, seq_len)
x = torch.randn(8, 16, 128)
se = SEAttention(16)  # illustrative; the constructor argument is assumed to be the channel count
out = se(x)           # output shape matches the input shape
~~~
When the number of input channels is small, the channel attention mechanism is very lightweight and does not significantly increase computational complexity.
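To make that overhead concrete, here is a minimal sketch of a generic squeeze-and-excitation (SE) style channel attention block, together with its parameter count. This is an illustration of the general technique, not this repository's `SEAttention` implementation; the class name, `reduction` ratio, and layer layout are assumptions.

```python
import torch
import torch.nn as nn


class SimpleSEBlock(nn.Module):
    """Minimal SE-style channel attention for (batch, channels, seq_len) input."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Two small linear layers form the excitation bottleneck.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average pool over the sequence dimension -> (batch, channels)
        weights = self.fc(x.mean(dim=-1))
        # Excite: rescale each channel; output shape equals input shape.
        return x * weights.unsqueeze(-1)


block = SimpleSEBlock(channels=16, reduction=4)
n_params = sum(p.numel() for p in block.parameters())
print(n_params)  # (16*4 + 4) + (4*16 + 16) = 148 parameters
```

With only 16 channels the block adds 148 parameters, which illustrates why the mechanism stays cheap for small channel counts: its cost grows with the square of the channel dimension, not with the sequence length.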
## Modules <a id="Modules"></a>