I would like to express my admiration for your outstanding work. The methods and examples you provided are highly intuitive and leave a lasting impression. The effectiveness of your approach is evident, and I am genuinely inspired by your contributions to the field.
I have a few questions and would greatly appreciate your clarification:
1. LDS Calculation in examples/cifar_quickstart.ipynb:
In the final step of the notebook where LDS is calculated, I noticed that even after training numerous models, the LDS value remains very low. Could you please explain the reasoning behind this observation? Is it an expected outcome, or could there be something I am overlooking?
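For reference, here is my understanding of how the LDS is computed, as a minimal sketch. I am assuming `scores` is an (n_train, n_val) TRAK score matrix, `masks` is an (n_models, n_train) boolean array marking which training points each subset contains, and `margins` is an (n_models, n_val) array of the outputs of models trained on those subsets; these names are mine, not from the notebook, so please correct me if the actual computation differs.

```python
import numpy as np
from scipy.stats import spearmanr

def compute_lds(scores, masks, margins):
    """Sketch of the linear datamodeling score (LDS).

    For each validation example, the predicted output on a subset is the
    sum of the attribution scores of the training points in that subset;
    the LDS is the Spearman rank correlation between these predictions
    and the actual outputs of the retrained models, averaged over
    validation examples.
    """
    # (n_models, n_val): predicted output for each (subset, val example)
    preds = masks.astype(float) @ scores
    # Per-example Spearman correlation against the true margins
    rhos = [spearmanr(preds[:, j], margins[:, j])[0]
            for j in range(scores.shape[1])]
    return float(np.mean(rhos))
```

With only a handful of retrained models per subset mask, the per-example rank correlations are very noisy, which is one reason I suspect the value stays low, but I would appreciate confirmation.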
2. Comparison Between Image and Text Examples:
While the results on image datasets are visually striking and highly effective, I noticed that some examples in the QNLI dataset seemed less convincing. I understand that textual data is inherently more diverse, richer in hidden semantic information, and more subjective. Could you provide additional QNLI examples to better showcase the effectiveness of your approach in the text domain, similar to the pre-computed TRAK scores for CIFAR-10 (Google Colab notebook)?
3. Reproducibility of LDS on QNLI:
For QNLI, I am particularly interested in understanding the process of calculating the LDS metric. As it requires a subset mask and models trained on the subsets (similar to the approach in cifar_quickstart.ipynb), would it be possible for you to share the training process and code for calculating LDS on QNLI? This would be immensely helpful for reproducing the results in your paper efficiently.
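In the meantime, I have been generating subset masks myself along these lines; the subset fraction and model count are my assumptions rather than values from the paper, so I would like to know what you actually used.

```python
import numpy as np

def make_subset_masks(n_train, n_models, frac=0.5, seed=0):
    """Generate random training-subset masks for the LDS evaluation.

    Each row is a boolean mask selecting a random `frac` fraction of the
    training set; one model would then be trained per row.
    """
    rng = np.random.default_rng(seed)
    subset_size = int(frac * n_train)
    masks = np.zeros((n_models, n_train), dtype=bool)
    for i in range(n_models):
        idx = rng.choice(n_train, size=subset_size, replace=False)
        masks[i, idx] = True
    return masks
```

Knowing the fraction, the number of subsets, and the number of models trained per subset for QNLI would already go a long way toward reproducing the paper's numbers.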
Thank you again for your remarkable work and for making your research open to the community. I sincerely hope you can provide some insights into these questions.
Looking forward to your response!