
Where your PRM? #30

Closed
ZM-J opened this issue Dec 13, 2021 · 1 comment
ZM-J commented Dec 13, 2021

In your paper you say that the PRM combines \alpha_{l-1} and \alpha_l' to generate a better \alpha_l.

The idea is cool. However, in the code I couldn't find anything like the PRM.

In trainer.py line 185, you get the prediction results alpha_pred_os1, alpha_pred_os4, and alpha_pred_os8 from the model. However, alpha_pred_os4 and alpha_pred_os8 are produced directly by the refine_OS4 and refine_OS8 modules in the decoder, each of which takes only ONE input.

So, where is your g_l, i.e., the guidance from your paper? I don't see anything like g_l, except in the loss computation via utils.get_unknown_tensor_from_pred.

Where your PRM? WHERE YOUR PRM? HAIYAA...

@ZM-J ZM-J changed the title Where your PRN? Where your PRM? Dec 13, 2021
ZM-J commented Dec 14, 2021

So the PRM is only applied during inference, i.e., in infer.py lines 25–30.

During training, nothing related to the PRM is used.

OK.
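For anyone landing here later, the inference-time refinement described in the paper can be sketched as follows. This is a minimal NumPy illustration, not the repo's actual code: the guidance mask g_l is assumed to mark the uncertain region of the coarser prediction (alpha strictly between two thresholds, chosen here for illustration), and the fused output keeps the coarse prediction where it is confident while taking the finer prediction inside g_l.

```python
import numpy as np

def unknown_mask(alpha, lo=0.01, hi=0.99):
    """Guidance mask g_l: 1 where the coarser prediction is uncertain
    (alpha strictly between lo and hi), 0 where it is confident.
    Thresholds are illustrative, not the repo's exact values."""
    return ((alpha > lo) & (alpha < hi)).astype(alpha.dtype)

def progressive_refine(alpha_coarse, alpha_fine):
    """Fuse the two predictions: keep the coarse alpha in confident
    regions, take the finer alpha inside the guidance mask g_l."""
    g = unknown_mask(alpha_coarse)
    return alpha_coarse * (1.0 - g) + alpha_fine * g

# Toy example: OS8 -> OS4 refinement on a 1-D strip of alpha values.
alpha_os8 = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
alpha_os4 = np.array([0.1, 0.3, 0.6, 0.7, 0.9])
alpha_ref = progressive_refine(alpha_os8, alpha_os4)
print(alpha_ref)  # -> [0.  0.3 0.6 0.7 1. ]  (endpoints kept from OS8)
```

The confident endpoints (0.0 and 1.0) survive from the coarse prediction, while the uncertain middle values are replaced by the finer ones, which matches why no guidance appears in the training loop: during training, each scale is supervised independently and the fusion happens only at inference.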

@ZM-J ZM-J closed this as completed Dec 14, 2021