Hi,
In the paper "Transferable Representation Learning with Deep Adaptation Networks", you use an entropy loss (corresponding to Equation 8 in the paper) to minimize the uncertainty of the predicted labels on the target data.
I found the corresponding implementation, defined as EntropyLoss() in loss.py. According to the paper, the total loss is composed of three parts: the classification loss, the MMD loss, and the entropy loss.
What confuses me is that in train.py you do add the MMD loss and the classification loss together, but you never actually add the entropy loss. Am I missing something, or is this intentional?
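For concreteness, here is a minimal sketch of what I understand Equation 8 to be, written in PyTorch (the names `entropy_loss`, `target_logits`, and the weights `gamma`/`lam` are my own illustration, not taken from your code):

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    """Entropy of the predicted class distribution,
    H(p) = -sum_c p_c * log(p_c), averaged over the batch.
    Minimizing it sharpens predictions on unlabeled target data."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

# My reading of the paper's total objective (weights are hypothetical):
# total_loss = classifier_loss + lam * mmd_loss + gamma * entropy_loss(target_logits)
```

If this matches your intent, I would expect a term like the last line in train.py, which is what I could not find.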
Looking forward to hearing from you soon.
Thank you,
Ke