Added a how-to-guide for parameter schedulers #80
divyanshugit wants to merge 2 commits into pytorch-ignite:main
Conversation
* Co-author: vfdev
This PR is a modified version of PR #49. Co-authored-by: Sylvain Desroziers <sylvain.desroziers@gmail.com>
Why not improve #49 instead of creating a new PR? Anyway.
I feel doubtful about the added descriptions. They are copy/pasted from the official PyTorch docs and I'm not sure that some details really matter. I would have preferred something more personal to the project.
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "import ast\n", |
It is used in MultiStepLR for parsing the string values of the milestones.
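For context, a minimal sketch of what that parsing could look like. The `milestones_str` variable is hypothetical (e.g. user input from a notebook cell); `ast.literal_eval` safely turns the string into the list `MultiStepLR` expects:

```python
import ast

import torch
from torch.optim.lr_scheduler import MultiStepLR

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Hypothetical user input: milestones supplied as a string.
milestones_str = "[30, 60, 90]"

# ast.literal_eval safely evaluates the string into a list of ints,
# without running arbitrary code the way eval() would.
milestones = ast.literal_eval(milestones_str)

# Multiply the lr by gamma once each milestone epoch is reached.
scheduler = MultiStepLR(optimizer, milestones=milestones, gamma=0.1)
```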
| "source": [ | ||
| "## LambdaLR\n", | ||
| "\n", | ||
| "Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.\n", |
last_epoch? Just copy/paste sounds not enough. IMO useless, and to be removed from the other description cells too.
Should I try to rephrase it?
I thought this was the simplest way to explain these schedulers.
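For what it's worth, a minimal runnable sketch of the `LambdaLR` behavior the quoted description refers to (the lambda and lr values below are just illustrative):

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# lr at step t = initial lr * lambda(t); here the lr decays by 5% per step.
scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: 0.95 ** epoch)

for epoch in range(3):
    # ... one epoch of training would go here ...
    optimizer.step()   # update parameters first
    scheduler.step()   # then advance the schedule
    print(epoch, optimizer.param_groups[0]["lr"])
```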
| "Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.\n", | ||
| "\n", | ||
| "$$\n", | ||
| "l r_{\\text {epoch}} = l r_{\\text {initial}} * Lambda(epoch)\n", |
In the PyTorch docs, optimizer.step() is called at the end of each epoch; it's more flexible in Ignite. The formula should not be written using epoch as the index name.
Oh, my bad. I'm sorry. I didn't think of it that way.
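To illustrate the flexibility point: in Ignite the wrapped PyTorch scheduler can be stepped on any event, e.g. once per iteration rather than once per epoch, so "epoch" is not the right index name in general. A sketch, assuming the `LRScheduler` wrapper lives at `ignite.handlers.param_scheduler` (in older releases it was under `ignite.contrib.handlers`) and using a dummy update function:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

from ignite.engine import Engine, Events
from ignite.handlers.param_scheduler import LRScheduler

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
torch_scheduler = LambdaLR(optimizer, lr_lambda=lambda it: 0.99 ** it)

def update(engine, batch):
    # Training step stub; a real step would do forward/backward/optimizer.step().
    pass

trainer = Engine(update)

# Wrap the PyTorch scheduler and step it once per iteration instead of
# once per epoch, by choosing the event it is attached to.
scheduler = LRScheduler(torch_scheduler)
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)

trainer.run([0] * 5, max_epochs=2)
```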
This PR is a modified version of PR #49.
@Priyansi @sdesrozis Can you please give it a review?