Description
Hello~ I recently read your brilliant paper, but I am confused about the back-propagation (BP) problem mentioned in the introduction:
"Moreover, this would also hinder the back-propagation for the prediction module, which needs to calculate the probability distribution of whether to keep the token even if it is finally eliminated."
My understanding is that the deleted tokens do not participate in subsequent attention calculations, so there is no information exchange with them, and they are also irrelevant to the loss calculation. Therefore, it seems that directly deleting these tokens during training should not affect the correct backpropagation of gradients. I am a bit confused about this statement in the paper and would appreciate it if you could clarify any misunderstanding.
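To make my question concrete, here is a minimal PyTorch sketch of how I picture the two cases. This is my own toy example, not the paper's code: `predictor` and the top-k selection are hypothetical stand-ins for the prediction module and the token-elimination step.

```python
# Minimal sketch (my own, not the paper's code) contrasting hard token
# deletion with soft masking. `predictor` is a hypothetical keep-score head.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = torch.randn(1, 4, 8, requires_grad=True)  # (batch, tokens, dim)
predictor = nn.Linear(8, 1)                        # hypothetical prediction module

keep_prob = torch.sigmoid(predictor(hidden)).squeeze(-1)  # (1, 4) keep probabilities

# Case 1: hard deletion. Index selection is non-differentiable, so the
# surviving tensor no longer depends on keep_prob at all.
keep_idx = keep_prob.topk(k=2, dim=1).indices[0]
pruned = hidden[:, keep_idx, :]       # dropped tokens vanish from the graph
loss_hard = pruned.mean()
loss_hard.backward()
print(predictor.weight.grad)          # None: no gradient reaches the predictor

# Case 2: soft masking. Every token stays, scaled by its keep probability,
# so keep_prob (and hence the predictor) stays in the compute graph.
masked = hidden * keep_prob.unsqueeze(-1)
loss_soft = masked.mean()
loss_soft.backward()
print(predictor.weight.grad)          # populated: gradient reaches the predictor
```

The first print shows `None` (no gradient ever flows to the predictor under hard deletion), while the second shows a populated gradient. This contrast is exactly what I am trying to understand in relation to the quoted sentence.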