mmengine provides two features, `auto_scale_lr` and gradient accumulation (`accumulative_counts`).
As designed, the actual learning rate should be determined by the dataloader's batch size together with `base_batch_size` in `auto_scale_lr`.
However, when I set the `accumulative_counts` config parameter, I observe no effect on the actual learning rate.
Does this mean `auto_scale_lr` only considers the dataloader batch size and ignores gradient accumulation?
If so, I believe this may lead to a mis-set learning rate and cause confusion.
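To illustrate the discrepancy, here is a small sketch of the linear scaling rule as I understand it. The config values (`base_batch_size=256`, per-GPU batch size 32, `accumulative_counts=4`, base LR 0.1) are hypothetical, chosen only to make the two numbers differ:

```python
# Hypothetical config values, for illustration only
base_lr = 0.1
base_batch_size = 256          # auto_scale_lr.base_batch_size
batch_size_per_gpu = 32        # train_dataloader batch size
num_gpus = 1
accumulative_counts = 4        # gradient accumulation steps

# What auto_scale_lr appears to do: scale by the dataloader batch size only
real_batch_size = num_gpus * batch_size_per_gpu            # 32
scaled_lr = base_lr * real_batch_size / base_batch_size    # 0.0125

# What one might expect: gradient accumulation multiplies the
# effective batch size per optimizer step
effective_batch_size = real_batch_size * accumulative_counts  # 128
expected_lr = base_lr * effective_batch_size / base_batch_size  # 0.05

print(scaled_lr, expected_lr)
```

With these numbers the two rules disagree by a factor of `accumulative_counts` (0.0125 vs. 0.05), which is the mismatch I am asking about.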