I'm trying to use Apex's sparsity tools on an Orin Nano Super, running the JetPack 6.1 SD card image that was released recently with the Super devkit. To access torch, apex, etc., I'm running everything inside the `nvcr.io/nvidia/pytorch:24.12-py3-igpu` container.
Here's my minimal repro:
```python
# root@4ddd9f30dc2e:/host# cat repro.py
import torch
import torchvision
from apex.contrib.sparsity import ASP

model = torchvision.models.resnet18(weights=None)
model = model.cuda().train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
ASP.prune_trained_model(model, optimizer)
```
I would expect this to complete successfully, as this test case seems to work on an x86_64 + RTX dGPU desktop using the analogous `nvcr.io/nvidia/pytorch:24.12-py3` container. Instead, it fails with this trace: asp_error.log

I'm not sure what causes this error: `AttributeError: 'ResNet' object has no attribute 'module'. Did you mean: 'modules'?`
But I've seen other projects guard their `torch.distributed` calls with `torch.distributed.is_available()` to catch cases like the Jetson torch build, where distributed is disabled; a sketch of that pattern is below.
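For reference, here is a minimal sketch of that guard pattern (the `dist_rank` helper is illustrative, not an apex API):

```python
import torch

def dist_rank() -> int:
    # torch.distributed.is_available() returns False when torch was
    # built without distributed support (as in the Jetson builds), so
    # check it before calling any other torch.distributed function.
    if torch.distributed.is_available() and torch.distributed.is_initialized():
        return torch.distributed.get_rank()
    return 0

print(dist_rank())  # 0 on a single-device setup with distributed disabled
```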
Thanks.