In your paper, you say the PRM combines `\alpha_{l-1}` and `\alpha_l'` to generate a better `\alpha_l`. The idea is cool. However, I didn't find anything like the PRM in the code; what I expected was something like the fusion sketched below.
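To be concrete, here is a minimal sketch of what I understood the PRM to be from the paper. This is my own reading, and the names `guidance_mask` and `prm_fuse` are mine, not from your repository:

```python
import torch

def guidance_mask(alpha_prev: torch.Tensor) -> torch.Tensor:
    """Self-guidance mask g_l: 1 where the previous-level prediction
    is uncertain (0 < alpha < 1), 0 where it is confident."""
    return ((alpha_prev > 0) & (alpha_prev < 1)).float()

def prm_fuse(alpha_prev: torch.Tensor, alpha_cur_raw: torch.Tensor) -> torch.Tensor:
    """Progressive refinement as I read the paper: keep the confident
    regions of alpha_{l-1}, and take the raw prediction alpha_l' only
    in the uncertain region selected by g_l."""
    g = guidance_mask(alpha_prev)
    return alpha_cur_raw * g + alpha_prev * (1.0 - g)
```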
In `trainer.py` line 185, you get the prediction results `alpha_pred_os1`, `alpha_pred_os4`, and `alpha_pred_os8` from the model. However, `alpha_pred_os4` and `alpha_pred_os8` come directly from the `refine_OS4` and `refine_OS8` modules in the decoder, and each of them uses only ONE input.
So where is your `g_l`, i.e., the guidance from your paper? I don't see anything like `g_l` except for the loss masking via `utils.get_unknown_tensor_from_pred`. With the three outputs, I would have expected a cascade like the one sketched below.
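For example, with the three predictions from line 185, something roughly like this coarse-to-fine chain (again a sketch under my reading of the paper, reusing the hypothetical `prm_fuse` from above):

```python
# Assuming alpha_pred_os8 / alpha_pred_os4 / alpha_pred_os1 are already
# at the same spatial resolution (upsample them first if they are not):
alpha_os4 = prm_fuse(alpha_pred_os8, alpha_pred_os4)  # g from the OS8 prediction
alpha_os1 = prm_fuse(alpha_os4, alpha_pred_os1)       # g from the refined OS4 result
```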
Where is your PRM? WHERE IS YOUR PRM? HAIYAA...