Forward Transfer Issue #13
Hi! I have just checked my logs and I don't seem to have saved that experiment. I found the imagenet100 run with random features instead, and it reached 15.34% after linear evaluation. For CIFAR you can just run a linear evaluation without loading the pre-trained checkpoint. You can do that with solo-learn as well if you prefer https://github.com/vturrisi/solo-learn. I think the random seed does not matter that much, but you can run multiple times and average if you find it has high variance.
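For anyone trying to reproduce this random-feature baseline, here is a minimal sketch of a linear evaluation on a frozen, randomly initialized backbone in plain PyTorch. The ResNet-18 backbone, the CIFAR-100 class count, and the optimizer settings are my own assumptions for illustration, not the exact configuration used for the paper (CIFAR-specific backbone tweaks, if any, are omitted):

```python
import torch
import torch.nn as nn
import torchvision

# Randomly initialized backbone: no pre-trained checkpoint is loaded.
backbone = torchvision.models.resnet18(weights=None)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()            # expose features instead of logits

# Freeze the backbone so only the linear classifier is trained.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

classifier = nn.Linear(feat_dim, 100)  # 100 classes for CIFAR-100 (assumption)
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def linear_eval_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():              # random features, never updated
        feats = backbone(images)
    loss = criterion(classifier(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the backbone never receives gradients, training this classifier during pretraining (online) or afterwards (offline) gives the same result up to augmentation noise, which is the point made in the reply further down.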
Thanks for the reply. Since there are no scripts for linear eval on CIFAR-100 in this project, if we understand correctly, the results of Table 2 in the paper come from the online linear evaluation. So we would like to confirm whether the results of Table 2 are online or offline. Thank you again!
If the backbone is randomly initialized and frozen, there is no difference between online and offline evaluation (except for the augmentations, which should not matter much), since the frozen backbone produces the same features in both cases.
Thanks for the reply!
Hi!
Hello! From the paper:
Thanks for your reply, this is very detailed!
IIRC all the other parameters were left at their default values from the respective paper/code.
We ran Barlow Twins + LUMP on CIFAR-100 and only got an accuracy of 55.9%, which is 1.9% lower than the result reported in the paper. We wonder whether we introduced a discrepancy when migrating the LUMP code to your project, e.g., in the buffer construction or in the augmentations applied to the replayed samples. The transform we use for buffered samples is:

```python
from PIL import Image
from torchvision import transforms

# Augmentation applied to samples drawn from the replay buffer.
transform = transforms.Compose(
    [
        transforms.ToPILImage(),
        transforms.RandomResizedCrop(32, scale=(0.08, 1.0), ratio=(3.0 / 4.0, 4.0 / 3.0), interpolation=Image.BICUBIC),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261)),
    ]
)
```

We would like to check if this is the same as yours, thanks!
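Since the gap might also come from the replay step rather than from this transform: as far as I understand the LUMP method, replayed buffer samples are combined with the current batch via mixup before the self-supervised loss is computed. The sketch below is only my reading of that step; the per-sample Beta(α, α) sampling and the α value are assumptions, not the exact code in either repository:

```python
import torch

def lump_mixup(current_x: torch.Tensor, buffer_x: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Interpolate current-task images with replayed buffer images (LUMP-style mixup).

    Both tensors are expected to have shape (B, C, H, W) and to be augmented already.
    """
    # One mixing coefficient per sample, drawn from Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample((current_x.size(0),))
    lam = lam.view(-1, 1, 1, 1).to(current_x.device)
    return lam * current_x + (1.0 - lam) * buffer_x
```

Whether λ is drawn per sample or once per batch, and which augmentations the buffered images go through before mixing, are the kind of details worth double-checking against the original LUMP code.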
Hello,
Hi! We are following your excellent work.
We would like to know more details about your experiments on CIFAR-100 for calculating Forward Transfer, such as how the accuracy of the random model on each task is obtained. If we understand correctly, since the random seed is fixed, the accuracy of the random model should be fixed as well. Would it be possible to provide the accuracy of the random model on the five tasks for reference?
Thanks!
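For reference, the forward-transfer measure commonly used in the continual learning literature (following Lopez-Paz & Ranzato) compares the accuracy on task i after training on the previous tasks against the accuracy obtained with a randomly initialized, frozen backbone. Assuming the paper follows this formulation (an assumption on my part), it reads:

```latex
% a_{i-1, i} : linear-eval accuracy on task i after training on tasks 1..i-1
% r_i        : linear-eval accuracy on task i with a random, frozen backbone
\mathrm{FT} = \frac{1}{T-1} \sum_{i=2}^{T} \left( a_{i-1,\,i} - r_i \right)
```

Under a fixed seed the r_i values should indeed be close to deterministic, up to data-loader ordering and non-deterministic GPU kernels, which is presumably why running multiple times and averaging is suggested above.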