
questions about content_loss function #8

@RenaissanceXX

Description

I have the following questions about this function.
For example, real_A is AF and fake_B is HE; correspondingly, real_B is HE and fake_A is AF.
Why do real_A and fake_A use different thresholds?

Here’s the translated function and comments:

def content_loss(self):
    # Excerpt from the model class; assumes `import torch` and
    # `import numpy as np` at module level.
    L1_function = torch.nn.L1Loss()

    # Collapse each image to a single-channel mean-intensity map.
    real_A_mean = torch.mean(self.real_A, dim=1, keepdim=True)
    real_B_mean = torch.mean(self.real_B, dim=1, keepdim=True)
    fake_A_mean = torch.mean(self.fake_A, dim=1, keepdim=True)
    fake_B_mean = torch.mean(self.fake_B, dim=1, keepdim=True)

    # Rescale each threshold from [0, 255] pixel units into the images'
    # [-1, 1] range, subtract it, and scale by 100 so the sigmoid below
    # acts as a near-hard binarization around the threshold.
    real_A_normal = (real_A_mean - (self.opt.threshold_A / 127.5 - 1)) * 100
    real_B_normal = (real_B_mean - (self.opt.threshold_B / 127.5 - 1)) * 100

    # Note the swap: fake_A (paired with real_B below) uses threshold_B,
    # and fake_B (paired with real_A below) uses threshold_A.
    fake_A_normal = (fake_A_mean - (self.opt.threshold_B / 127.5 - 1)) * 100
    fake_B_normal = (fake_B_mean - (self.opt.threshold_A / 127.5 - 1)) * 100

    # Soft foreground masks; the B-side masks use an inverted sigmoid.
    real_A_sigmoid = torch.sigmoid(real_A_normal)
    real_B_sigmoid = 1 - torch.sigmoid(real_B_normal)

    fake_A_sigmoid = torch.sigmoid(fake_A_normal)
    fake_B_sigmoid = 1 - torch.sigmoid(fake_B_normal)

    # Compare each real image's mask with the mask of the fake image
    # generated from it.
    content_loss_A = L1_function(real_A_sigmoid, fake_B_sigmoid)
    content_loss_B = L1_function(fake_A_sigmoid, real_B_sigmoid)

    # Loss weight decays exponentially as training progresses.
    content_loss_rate = 50 * np.exp(-(self.opt.counter / self.opt.data_size))
    content_loss = (content_loss_A + content_loss_B) * content_loss_rate
    return content_loss
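
To see the pairing concretely, here is a self-contained sketch of the soft-mask comparison for one content pair. The threshold values (200 and 120) and the random tensors standing in for AF/HE images are hypothetical, and `soft_mask` is a helper I factored out for illustration; it is not part of the repo. The point it shows: both images in a content pair (real_A and fake_B) share the same threshold, which is why real_A and fake_A end up with different ones.

```python
import torch

torch.manual_seed(0)

def soft_mask(img, threshold, invert=False):
    """Soft binarization, as in content_loss.

    img is assumed to lie in [-1, 1]; threshold is in [0, 255] pixel units.
    """
    mean = torch.mean(img, dim=1, keepdim=True)
    shifted = (mean - (threshold / 127.5 - 1)) * 100  # sharp sigmoid ~ hard threshold
    s = torch.sigmoid(shifted)
    return 1 - s if invert else s

# Hypothetical thresholds and random 4x3x8x8 batches in place of real images.
threshold_A, threshold_B = 200.0, 120.0
real_A = torch.rand(4, 3, 8, 8) * 2 - 1  # AF-domain input
fake_B = torch.rand(4, 3, 8, 8) * 2 - 1  # HE-styled image generated from real_A

# Both sides of the pair use threshold_A; the sigmoid is inverted on the
# HE side, matching real_A_sigmoid vs. fake_B_sigmoid in content_loss.
mask_real_A = soft_mask(real_A, threshold_A)
mask_fake_B = soft_mask(fake_B, threshold_A, invert=True)
content_loss_A = torch.nn.L1Loss()(mask_real_A, mask_fake_B)
print(mask_real_A.shape, float(content_loss_A))
```

Since the masks lie in [0, 1], the resulting L1 term is bounded by 1 per pair.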

Key question summary:

  • real_A and fake_A use different thresholds (threshold_A vs. threshold_B).
  • The code appears to swap thresholds between the real and fake pairs:
    fake_A uses threshold_B, and fake_B uses threshold_A.
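
Separately, the content_loss_rate factor weights the loss with an exponential decay over training. A quick numeric sketch (with a hypothetical data_size of 1000) shows the weight falling from 50 toward 0 as counter grows:

```python
import numpy as np

data_size = 1000  # hypothetical dataset size
rates = {c: 50 * np.exp(-c / data_size) for c in (0, 1000, 3000)}
for counter, rate in rates.items():
    print(counter, round(rate, 3))  # 50.0 at the start, ~18.394 after one pass
```

So the content constraint is strong early on and is gradually relaxed as the generators converge.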
