Hello wangermeng,
I want to use this model as a backbone in a larger model, so I need to extract the four feature maps from the different stages.
Checking model.summary(), I cannot find any layer whose output shape matches the paper's reference, e.g. (H/32, W/32, C1) for F1.
I have found that after every attention block the output is:
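As a sketch of what I am after, intermediate stage outputs could be exposed by wrapping the built model in a new tf.keras.Model. The stage layer names here are placeholders, not the actual names in PVT.py; they would have to be read from model.summary():

```python
import tensorflow as tf

def make_backbone(model, stage_names):
    """Wrap a built Keras model so it returns one output per stage.

    stage_names: hypothetical layer names marking the end of each
    stage; the real names must be taken from model.summary().
    """
    outputs = [model.get_layer(name).output for name in stage_names]
    return tf.keras.Model(inputs=model.input, outputs=outputs)
```

For example, `make_backbone(model, ["stage1_out", "stage2_out", "stage3_out", "stage4_out"])` would yield a model returning the four feature maps.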
(None, 3136, 64) >> (None, 56, 56, 64)
(None, 784, 128) >> (None, 28, 28, 128)
(None, 196, 320) >> (None, 14, 14, 320)
(None, 50, 512) >> (None, 7, 7, 512) — but 50 tokens cannot be reshaped to 7×7.
Shouldn't the last stage output be (None, 49, 512)?
The line responsible for this extra token is where a 1 is added to the token count in the last stage:
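For what it's worth, converting a stage's token sequence back to a spatial map could look like the helper below. `tokens_to_feature_map` is a hypothetical function, not part of PVT.py; the `extra_tokens` argument assumes any non-spatial token (e.g. a class token) is prepended to the sequence:

```python
import tensorflow as tf

def tokens_to_feature_map(x, extra_tokens=0):
    """Reshape a (batch, N, C) token sequence to (batch, H, W, C).

    extra_tokens: number of leading non-spatial tokens (e.g. a class
    token in the last stage); they are dropped before reshaping.
    """
    if extra_tokens:
        x = x[:, extra_tokens:, :]  # drop the prepended token(s)
    n, c = x.shape[1], x.shape[2]
    h = w = int(n ** 0.5)
    assert h * w == n, "token count must be a perfect square"
    return tf.reshape(x, (-1, h, w, c))
```

With this, a (None, 50, 512) output and `extra_tokens=1` would give the expected (None, 7, 7, 512) map.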
PVT-tensorflow2/model/PVT.py
Line 123 in 0fdc147