
About tamper localization #2

@Maybeetw

Description

First of all, many thanks to the authors for open-sourcing this novel work!

However, I have a question about the tamper localization part that I'd like to ask the authors.

In `watermark_embedder.py`, at line 345:

```python
def get_tamper_loc_latent(self, tlt, reversed_tlt, latent_size, tamper_confidence=0.5, optimize=True, return_initial_tl=False):
    tamper_loc_latent = (reversed_tlt != tlt).astype(int)
    tamper_loc_latent = torch.from_numpy(tamper_loc_latent).to(self.device)
    tamper_loc_latent = self.shuffle(tamper_loc_latent)
    ....
```

Why is the decoded `tamper_loc_latent` shuffled here, via `tamper_loc_latent = self.shuffle(tamper_loc_latent)`? Doesn't this scramble its original ordering in the latent? This tensor is then fed into the network to get the refined `tamper_loc_latent`, but when the latent accuracy is computed, no `inverse_shuffle` is applied to it, which I don't quite understand.

If the refined map really were in the correct latent order at that point, then in the function below, when the copyright watermark is computed, `pred_notamper_loc_latent = self.inverse_shuffle(pred_notamper_loc_latent)` applies `inverse_shuffle` to it again. So it appears that the network-refined `tamper_loc_latent` is in fact still shuffled. Why, then, can this shuffled result be compared directly against `gt_loc` in the latent to compute accuracy?

```python
def calc_watermark(self, wm_len, wm_repeat, pred_tamper_loc_latent=None, with_tamper_loc=True):
    # if 'int' not in str(pred_tamper_loc_latent.dtype):
    #     pred_tamper_loc_latent = (pred_tamper_loc_latent - torch.min(pred_tamper_loc_latent)) / (torch.max(pred_tamper_loc_latent) - torch.min(pred_tamper_loc_latent))
    latent_len = wm_repeat.size(0)
    wm_repeat_times = latent_len // wm_len
    complete_wm_len = wm_len * wm_repeat_times
    remain_wm_len = latent_len - complete_wm_len

    if with_tamper_loc == False or pred_tamper_loc_latent is None:
        pred_tamper_loc_latent = torch.zeros_like(wm_repeat)

    pred_notamper_loc_latent = 1 - pred_tamper_loc_latent.view(-1)
    pred_notamper_loc_latent = self.inverse_shuffle(pred_notamper_loc_latent)
    ....
```
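To make the question concrete, here is a minimal sketch of the round-trip behavior I would expect, assuming `self.shuffle` / `self.inverse_shuffle` apply a fixed permutation and its inverse (the permutation, seed, and helper functions below are hypothetical, not taken from the repo; NumPy stands in for torch):

```python
import numpy as np

rng = np.random.default_rng(0)   # hypothetical fixed seed standing in for the repo's saved permutation
perm = np.asarray(rng.permutation(8))  # the fixed permutation assumed to back self.shuffle

def shuffle(x):
    # reorder a latent-ordered vector into shuffled order
    return x[perm]

def inverse_shuffle(x):
    # undo shuffle(): put each shuffled element back at its latent position
    out = np.empty_like(x)
    out[perm] = x
    return out

tamper_loc = np.array([1, 1, 0, 0, 0, 0, 0, 0])  # toy tamper map in latent order
restored = inverse_shuffle(shuffle(tamper_loc))
assert np.array_equal(restored, tamper_loc)  # shuffle then inverse_shuffle is the identity
```

Under this assumption, a map that has been shuffled but not inverse-shuffled is only comparable element-wise to a ground-truth map that was shuffled with the same permutation, which is the crux of my question above.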
