Possibility to sparsify convolutional layers (torch.nn.Conv2d)? #2
Sten can help to sparsify any operator, but you need to provide custom implementations for them, as shown in examples/custom_implementations.ipynb. The choice of the actual implementation depends on the sparse format you want to use and the architecture (GPU or CPU).
I see that the actual implementation could depend on the choice of the sparse format and architecture (GPU or CPU), but I believe providing a default implementation of sparsified …
We currently propose to use sparse convolution implemented through multiplication by a dense matrix. The last time I checked the available libraries, there were no implementations of sparse convolution with a significant performance improvement over dense ones. If you can suggest any libraries, we can include support for them.
Sorry, could you please clarify what you mean by "sparse convolution, implemented through multiplication by a dense matrix"? Is there any existing implementation of what you are describing here?
I mean first do an element-by-element multiplication of the input tensor and/or filter tensor by a mask tensor of zeros and ones. Then use a dense convolution as usual. |
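A minimal PyTorch sketch of this mask-then-dense-convolve approach (the function name `masked_conv2d` and the tensor shapes are illustrative, not part of sten's API):

```python
import torch
import torch.nn.functional as F

def masked_conv2d(x, weight, mask, bias=None, stride=1, padding=0):
    # mask has the same shape as weight and contains only zeros and ones;
    # element-wise multiplication zeroes out filter entries, then a regular
    # dense convolution runs on the masked weights.
    return F.conv2d(x, weight * mask, bias=bias, stride=stride, padding=padding)

x = torch.randn(1, 3, 8, 8)
weight = torch.randn(4, 3, 3, 3)
mask = (torch.rand_like(weight) > 0.5).float()  # roughly half the weights zeroed
y = masked_conv2d(x, weight, mask, padding=1)
print(y.shape)  # torch.Size([1, 4, 8, 8])
```

The same masking can be applied to the input tensor instead of (or in addition to) the filter tensor; the convolution itself stays dense either way.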
On what kinds of hardware would this implementation provide real speedup (in your opinion)? |
This implementation is not supposed to give any speedup; quite the opposite, it will be slower than the non-sparse version. However, it may still be useful, for example to evaluate how much accuracy can be preserved when the model is sparsified.
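For that accuracy-evaluation use case, PyTorch's built-in pruning utilities apply the same zero-mask idea to `Conv2d` weights without any sten-specific code. A hedged sketch (the layer sizes and the 70% pruning amount are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

# Zero out the 70% of weights with the smallest absolute value; forward
# passes now compute weight_orig * weight_mask under the hood, i.e. a
# masked dense convolution as described above.
prune.l1_unstructured(conv, name="weight", amount=0.7)

sparsity = (conv.weight == 0).float().mean().item()
out = conv(torch.randn(1, 3, 8, 8))
print(f"sparsity={sparsity:.2f}", out.shape)
```

Applying this across a network and re-running validation gives a quick estimate of how much accuracy survives sparsification, before investing in an actually fast sparse kernel.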
I would like to turn an existing convolutional neural network (ResNet) into a sparse model. This model contains `torch.nn.Conv2d` but not `torch.nn.Linear`. Does `sten` support this operation? I checked the tutorial notebook examples/modify_existing.ipynb. It seems that `sten` can only sparsify `torch.nn.Linear` and cannot sparsify `torch.nn.Conv2d`. Is that the case?