Implementations of the custom layers in C++ and Native Pytorch for CPU support. #212
base: master
Conversation
I do not see a backward method for the Correlation implementations (PyTorch and C++). EDIT: I mean it is not completed.
I did not implement them because I only needed the forward methods for inference.
So they do not work for training?
Yes, they will not work for training. And yes, newer PyTorch versions introduced some API changes, such as making forward and backward static methods and slightly changing the calling syntax for nn.Function classes. These have not been updated yet.
Hi, I tried to export the FlowNet2 model to ONNX (the one with Correlation, Resample2d, and ChannelNorm in the PyTorch version); however, I get:
Hi!
Implementation of the Correlation, Resample2D, and ChannelNorm layers in native PyTorch and C++ to support inference on CPU.
The main bottleneck is the Correlation Layer on which the FlowNetC architecture relies.
This PR provides two implementations of the Correlation layer:
- PyTorch native implementation, which requires no extra setup.
- Optimized C++ implementation for inference on CPU.
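For illustration, a correlation layer of this kind can be sketched in native PyTorch roughly as follows. This is a minimal sketch, not the PR's actual code: the function name, the `max_disp` parameter, and the mean-over-channels convention are assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F

def correlation(a: torch.Tensor, b: torch.Tensor, max_disp: int = 4) -> torch.Tensor:
    """Compare feature map a against shifted copies of b.

    a, b: (N, C, H, W) feature maps.
    Returns (N, (2*max_disp+1)**2, H, W): one channel per displacement.
    """
    N, C, H, W = a.shape
    # Zero-pad b so every displacement up to max_disp stays in bounds.
    pad_b = F.pad(b, [max_disp] * 4)
    out = []
    for i in range(2 * max_disp + 1):
        for j in range(2 * max_disp + 1):
            # Inner product (mean over channels) with b shifted by (i, j).
            shifted = pad_b[:, :, i:i + H, j:j + W]
            out.append((a * shifted).mean(dim=1))
    return torch.stack(out, dim=1)
```

The loop runs over displacements only, so each iteration is a fully vectorized tensor product; the per-pixel work stays inside PyTorch ops.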
Also provided are PyTorch native implementations for Resample2D and ChannelNorm.
Since the PyTorch implementation is quite efficient (completely vectorized) with no
Python for loops, a C++ implementation is not needed. These layers also run on the GPU
by default, depending on whether the input tensors are on the GPU, and are slightly
slower than the provided CUDA implementation.
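As a rough idea of what such vectorized implementations look like, here is a sketch of the two layers. This is illustrative only and not the PR's code; the function names, the L2-norm convention for ChannelNorm, and the use of `grid_sample` for warping are assumptions.

```python
import torch
import torch.nn.functional as F

def channel_norm(x: torch.Tensor) -> torch.Tensor:
    # x: (N, C, H, W) -> (N, 1, H, W) L2 norm across the channel dimension.
    return x.pow(2).sum(dim=1, keepdim=True).sqrt()

def resample2d(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    # img: (N, C, H, W); flow: (N, 2, H, W) pixel offsets (x, y).
    N, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Absolute sampling coordinates = base grid + flow, shape (N, 2, H, W).
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    # Normalize to [-1, 1] as grid_sample expects.
    gx = 2.0 * grid[:, 0] / max(W - 1, 1) - 1.0
    gy = 2.0 * grid[:, 1] / max(H - 1, 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=3), align_corners=True)
```

Both functions run on whichever device the input tensors live on, which matches the "works on GPU depending on the inputs" behavior described above.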
See comments at the top of models.py and networks/FlowNetC.py for more details and
how to switch to CPU mode.
Backward passes are not yet implemented but will be added in the future.
run_a_pair.py is replaced with a generic script called test.py to simply test functionality,
since run_a_pair.py had hardcoded paths. Two frames from Sintel are also added in the
test_images dir so that functionality and setup can be checked quickly.
Resolves: #190