
Clarification on the Role of the Predictor Network g in the CaSSLe Framework #18

Open
hedjazi opened this issue Jul 4, 2024 · 0 comments

hedjazi commented Jul 4, 2024

Hi,

Thank you very much for your excellent work. I have a theoretical question regarding the role of the predictor network g as described in section 5, "The CaSSLe Framework":

Now, our goal is to ensure that z contains at least as much information as (and ideally more than) z̄. Instead of enforcing the two feature vectors to be similar, and hence discouraging the new model from learning new concepts, we propose to use a predictor network g to project the representations from the new feature space to the old one. If the predictor is able to perfectly map from one space to the other, then it implies that z is at least as powerful as z̄.
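For reference, this is how I currently understand the mechanism in code. It is only a minimal sketch of my reading of the paper, assuming a PyTorch-style setup: the names `new_encoder`, `old_encoder`, and `predictor_g`, as well as the dimensions, are illustrative, and I use a negative cosine similarity as a stand-in for whichever self-supervised loss the base method uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative predictor g: maps the new feature space back to the old one.
# The 256/4096 dimensions are placeholders, not the paper's exact values.
predictor_g = nn.Sequential(
    nn.Linear(256, 4096),
    nn.BatchNorm1d(4096),
    nn.ReLU(),
    nn.Linear(4096, 256),
)

def distillation_loss(x, new_encoder, old_encoder, predictor_g):
    """Distill the old representation into the new one on current-task data only.

    `new_encoder` is the trainable model f^t, `old_encoder` is a frozen copy of
    f^{t-1} saved at the end of the previous task. Both are queried on the same
    current batch x, so no replay buffer of old data is involved.
    """
    z = new_encoder(x)              # representation in the new feature space
    with torch.no_grad():
        z_bar = old_encoder(x)      # frozen target in the old feature space
    p = predictor_g(z)              # project new space -> old space via g
    # Stand-in loss: negative cosine similarity between g(z) and z_bar.
    return -F.cosine_similarity(p, z_bar.detach(), dim=-1).mean()
```

If this reading is correct, only g and the new encoder receive gradients, and the old encoder merely provides targets on the current data.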

Could you please clarify how being able to project from the new feature space to the old one with g ensures that z does not lose information from the old task? Additionally, if this is the case, how do you enforce it without access to any old data (replay), especially on datasets such as DomainNet, where visual features vary greatly between domains?

Thank you.
