Description
Is there a requirement for the format of the DreamBooth model here? Loading the checkpoint below fails with the following traceback:
```
load dreambooth model from models/ff/anyloraCheckpoint_bakedvaeBlessedFp16.safetensors
Traceback (most recent call last):
  File "/root/AnimateDiff-MotionDirector/train.py", line 1038, in <module>
    main(name=name, use_wandb=args.wandb, **config)
  File "/root/AnimateDiff-MotionDirector/train.py", line 623, in main
    validation_pipeline = load_weights(
  File "/root/AnimateDiff-MotionDirector/animatediff/utils/util.py", line 143, in load_weights
    animation_pipeline.text_encoder = convert_ldm_clip_checkpoint(dreambooth_state_dict)
  File "/root/AnimateDiff-MotionDirector/animatediff/utils/convert_from_ckpt.py", line 726, in convert_ldm_clip_checkpoint
    text_model.load_state_dict(text_model_dict)
  File "/root/miniconda3/envs/animatediff-motiondirector/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
	Unexpected key(s) in state_dict: "text_model.embeddings.position_ids".
```
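
For context, this error is usually not about the checkpoint format itself: a `.safetensors` DreamBooth checkpoint is fine. Recent transformers releases no longer keep `text_model.embeddings.position_ids` as a persistent buffer in `CLIPTextModel`, so a converted state dict that still carries that key fails the strict `load_state_dict` call. A minimal workaround sketch is shown below; the helper name and the CLIP config id are my assumptions, not the repository's actual code.

```python
# Hypothetical sketch of a convert_ldm_clip_checkpoint-style loader.
# Assumption: the text encoder is the standard CLIP-L/14 text model;
# adjust the config id if the repo uses a different one.
from transformers import CLIPTextConfig, CLIPTextModel


def load_clip_text_encoder(text_model_dict):
    config = CLIPTextConfig.from_pretrained("openai/clip-vit-large-patch14")
    text_model = CLIPTextModel(config)

    # Newer transformers compute position_ids on the fly instead of storing
    # them as a persistent buffer, so the key in an older converted state
    # dict is "unexpected". It is safe to drop: the buffer is just
    # arange(max_position_embeddings).
    text_model_dict.pop("text_model.embeddings.position_ids", None)

    text_model.load_state_dict(text_model_dict)
    return text_model
```

Equivalently, calling `text_model.load_state_dict(text_model_dict, strict=False)` at the line shown in the traceback (`convert_from_ckpt.py`, line 726) ignores the extra key, or pinning an older transformers release that still ships the `position_ids` buffer avoids editing the code at all.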
