
[Enhance] Speed up optimizer. #909

Open

wants to merge 6 commits into main
Conversation

RangiLyu (Member) commented on Feb 6, 2023

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.

Motivation

Speed up the optimization step by grouping parameters and using the foreach optimizer implementation.

Modification

Add reduce_param_groups, adapted from https://github.com/facebookresearch/detectron2/blob/main/detectron2/solver/build.py (see the sketch below).
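
To illustrate the idea (a simplified sketch, not the exact code in this PR), clustering merges parameter groups whose optimizer settings are identical, so the optimizer iterates over a handful of large groups instead of one group per parameter:

```python
from collections import defaultdict
from typing import Dict, List


def reduce_param_groups(groups: List[Dict]) -> List[Dict]:
    """Merge parameter groups that share identical optimizer settings.

    Sketch of the detectron2-style clustering: every key except
    'params' (lr, momentum, weight_decay, ...) forms a signature, and
    groups with the same signature are fused into one.
    """
    merged = defaultdict(list)
    for group in groups:
        signature = tuple(sorted(
            (k, v) for k, v in group.items() if k != 'params'))
        merged[signature].extend(group['params'])
    return [{**dict(signature), 'params': params}
            for signature, params in merged.items()]
```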

Enable the foreach implementation of the optimizer when torch>=1.12 (see the version-gate sketch below).
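
A hedged sketch of the version gate, assuming an SGD-style optimizer (the helper name and the exact version check used in this PR may differ):

```python
import torch
from packaging.version import parse


def build_sgd(params, lr: float = 0.01, **kwargs):
    """Build SGD and opt into the multi-tensor (foreach) path when the
    installed PyTorch supports it (torch>=1.12)."""
    if parse(torch.__version__.split('+')[0]) >= parse('1.12.0'):
        # foreach=True updates many tensors per kernel launch,
        # cutting per-parameter Python overhead in optimizer.step().
        kwargs.setdefault('foreach', True)
    return torch.optim.SGD(params, lr=lr, **kwargs)
```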

BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of downstream repos?
If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Benchmark

| model | time (before) | time (after) | total training time |
| :--- | :--- | :--- | :--- |
| RTMDet-s | 0.352 | 0.338 | -0.5 h |
| RTMDet-x | 1.102 | 1.055 | -1.5 h |

TODO

  • benchmark
  • fix unit tests
  • docstring

RangiLyu added the enhancement (New feature or request) label on Feb 6, 2023
codecov bot commented on Feb 6, 2023

Codecov Report

Attention: Patch coverage is 77.27273% with 5 lines in your changes missing coverage. Please review.

Please upload report for BASE (main@fd84c21). Learn more about missing BASE report.

| Files with missing lines | Patch % | Lines |
| :--- | :--- | :--- |
| mmengine/optim/optimizer/default_constructor.py | 77.27% | 2 Missing and 3 partials ⚠️ |
Additional details and impacted files
@@           Coverage Diff           @@
##             main     #909   +/-   ##
=======================================
  Coverage        ?   77.74%           
=======================================
  Files           ?      139           
  Lines           ?    11491           
  Branches        ?     2330           
=======================================
  Hits            ?     8934           
  Misses          ?     2147           
  Partials        ?      410           
| Flag | Coverage Δ |
| :--- | :--- |
| unittests | 77.74% <77.27%> (?) |

Flags with carried forward coverage won't be shown.


(Five resolved review threads on mmengine/optim/optimizer/default_constructor.py.)
zhouzaida (Collaborator) commented:

It would be better to provide a benchmark of the optimization in the PR message.

zhouzaida (Collaborator) commented:

Please add the usage to docs/en/tutorials/optim_wrapper.md and docs/zh_cn/tutorials/optim_wrapper.md.

zhouzaida previously approved these changes on Apr 7, 2023
- ``reduce_param_groups`` (bool): If True, the constructor clusters
  parameter groups that share the same learning rate, momentum and
  other hyperparameters, which can speed up the optimizer.
  Defaults to True. New in version 0.7.2.
Collaborator review comment on this docstring:

Maybe a newer version number? 0.10.4?
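
For the documentation update requested above, a hedged sketch of how the new option might appear in a user config. Where exactly ``reduce_param_groups`` lives (top level of ``optim_wrapper`` vs. a constructor argument) depends on the final implementation; the surrounding fields just follow the usual mmengine pattern:

```python
# Config sketch (assumed placement of the new flag, not the final API).
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=1e-4),
    constructor='DefaultOptimWrapperConstructor',
    paramwise_cfg=dict(norm_decay_mult=0.0),
    # New option from this PR: cluster groups that share hyperparameters.
    reduce_param_groups=True,
)
```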
