Why is 'epochf' called before 'learn' in the train/gd.py algorithms?
Imagine you need only one epoch (for example, a single step in reinforcement learning). If you specify 'epochs=1', the algorithm stops before any learning happens. If you specify 'epochs=2', you get an extra, unnecessary gradient calculation, doubling the count of expensive computations.
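A minimal sketch of the control flow being described (the function and event names here are hypothetical, not the actual train/gd.py code): the epoch callback and stop check run right after the gradient is computed but before the weight update, so the last epoch's gradient is always wasted.

```python
def train(epochs):
    """Toy training loop illustrating the ordering problem:
    the stop check ('epochf') fires before the update ('learn')."""
    events = []
    for epoch in range(epochs):
        events.append("grad")    # expensive gradient calculation
        events.append("epochf")  # epoch callback / stop check runs first...
        if epoch + 1 >= epochs:
            break                # ...so the loop exits before 'learn'
        events.append("learn")   # weight update is skipped on the last epoch
    return events

print(train(1))  # no 'learn' at all: gradient computed, nothing learned
print(train(2))  # two 'grad' events for a single 'learn'
```

Running `train(1)` shows that one epoch produces a gradient but no update, while `train(2)` performs one update at the cost of two gradient computations, which is exactly the doubling complained about above.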