Unexpected key(s) in state_dict: "head.weight", "head.bias", "fc_norm.weight", "fc_norm.bias".
size mismatch for pos_embed: copying a param with shape torch.Size([1, 197, 1024]) from checkpoint, the shape in current model is torch.Size([1, 257, 1024]).
checkpoint = torch.load(ckpt_path, map_location='cpu')
model.load_state_dict(checkpoint['model'])
I am loading mage-vitl-ft.pth, but it fails with the errors above. Do we need conversion scripts?
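The two errors suggest (a) the checkpoint contains a classification head the current model lacks, and (b) its positional embedding was trained for a 197-token sequence (14×14 patches + class token) while the model expects 257 (16×16 + class token). A common workaround, rather than a conversion script, is to drop the head keys and bicubically resize pos_embed before loading with strict=False. The sketch below assumes a timm-style ViT state dict with a leading class token; `adapt_checkpoint` is a hypothetical helper name, and the exact MAGE layout is an assumption:

```python
import torch
import torch.nn.functional as F

def adapt_checkpoint(state, new_grid=16):
    """Drop the fine-tuning head and resize pos_embed to the target grid.

    Assumes a timm-style ViT state dict where pos_embed is [1, 1 + g*g, dim]
    with a leading class token. Key names match the error message; the rest
    is an assumption about the checkpoint layout.
    """
    # (a) remove keys the current model does not have
    for k in ('head.weight', 'head.bias', 'fc_norm.weight', 'fc_norm.bias'):
        state.pop(k, None)

    # (b) interpolate the positional embedding grid (14x14 -> 16x16)
    pos = state['pos_embed']                      # e.g. [1, 197, 1024]
    cls_tok, grid = pos[:, :1], pos[:, 1:]        # split off class token
    old = int(grid.shape[1] ** 0.5)               # 14 for a 197-token embed
    grid = grid.reshape(1, old, old, -1).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=(new_grid, new_grid),
                         mode='bicubic', align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, -1)
    state['pos_embed'] = torch.cat([cls_tok, grid], dim=1)  # [1, 257, dim]
    return state

# usage (paths as in the question):
# checkpoint = torch.load(ckpt_path, map_location='cpu')
# model.load_state_dict(adapt_checkpoint(checkpoint['model']), strict=False)
```

Loading with strict=False also surfaces any remaining missing/unexpected keys in the returned tuple, which is worth inspecting to confirm nothing substantive was skipped.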