[wip] switch to transformers main again. #12976
base: main
Conversation
| - "tests/pipelines/test_pipelines_common.py" | ||
| - "tests/models/test_modeling_common.py" | ||
| - "examples/**/*.py" | ||
| - ".github/**.yml" |
Temporary. For this PR.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
logger.addHandler(stream_handler)

@unittest.skipIf(is_transformers_version(">=", "4.57.5"), "Size mismatch")
Internal discussion: https://huggingface.slack.com/archives/C014N4749J9/p1768474502541349
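For context, skips like the one above are gated on the installed `transformers` version. Below is a minimal standalone sketch of that kind of version comparison; `is_version` is a hypothetical helper written for illustration (the real `is_transformers_version` in `diffusers.utils` reads the installed package's version rather than taking it as an argument, and handles pre-release tags):

```python
import operator

# Hypothetical standalone version gate, similar in spirit to
# diffusers.utils.is_transformers_version. Here the installed version is
# passed in explicitly so the sketch is self-contained.
_OPS = {"<": operator.lt, "<=": operator.le, "==": operator.eq,
        "!=": operator.ne, ">": operator.gt, ">=": operator.ge}

def _as_tuple(version: str) -> tuple:
    # "4.57.5" -> (4, 57, 5); pre-release suffixes are ignored in this sketch
    return tuple(int(part) for part in version.split("."))

def is_version(installed: str, operation: str, required: str) -> bool:
    return _OPS[operation](_as_tuple(installed), _as_tuple(required))

# The skipIf above triggers for transformers >= 4.57.5:
print(is_version("4.57.5", ">=", "4.57.5"))  # True -> test skipped
print(is_version("4.56.2", ">=", "4.57.5"))  # False -> test runs
```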
torch.nn.ConvTranspose2d,
torch.nn.ConvTranspose3d,
torch.nn.Linear,
torch.nn.Embedding,
Happening because of the way weight loading is done in v5.
model = AutoModel.from_pretrained(
    "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="text_encoder", use_safetensors=False
)
Internal discussion: https://huggingface.slack.com/archives/C014N4749J9/p1768462040821759
input_ids = (
    input_ids["input_ids"] if not isinstance(input_ids, list) and "input_ids" in input_ids else input_ids
)
Internal discussion: https://huggingface.slack.com/archives/C014N4749J9/p1768537424692669
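The guard in the diff above exists because the tokenizer call can hand back either a dict-like `BatchEncoding` containing an `"input_ids"` entry or an already-extracted list of ids. A minimal sketch of the same normalization, using a plain dict as a stand-in for `BatchEncoding` (which behaves like a mapping):

```python
# Sketch of the normalization above: accept either a mapping-style tokenizer
# output with an "input_ids" key, or a plain list of token ids, and always
# return the list of ids.
def normalize_input_ids(input_ids):
    if not isinstance(input_ids, list) and "input_ids" in input_ids:
        return input_ids["input_ids"]
    return input_ids

print(normalize_input_ids({"input_ids": [101, 2003, 102]}))  # [101, 2003, 102]
print(normalize_input_ids([101, 2003, 102]))                 # [101, 2003, 102]
```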
 inputs = {
     "prompt": "dance monkey",
-    "negative_prompt": "",
+    "negative_prompt": "bad",
Otherwise, the corresponding tokenizer outputs:

    negative_prompt=[' ']
    prompt=[' ']
    text_input_ids=tensor([], size=(1, 0), dtype=torch.int64)

which leads to:

    RuntimeError: cannot reshape tensor of 0 elements into shape [1, 0, -1, 8] because the unspecified dimension size -1 can be any value and is ambiguous

if is_transformers_version("<=", "4.58.0"):
    token = tokenizer._added_tokens_decoder[token_id]
    tokenizer._added_tokens_decoder[last_special_token_id + key_id] = token
    del tokenizer._added_tokens_decoder[token_id]
elif is_transformers_version(">", "4.58.0"):
    token = tokenizer.added_tokens_decoder[token_id]
    tokenizer.added_tokens_decoder[last_special_token_id + key_id] = token
    del tokenizer.added_tokens_decoder[token_id]
if is_transformers_version("<=", "4.58.0"):
    tokenizer._added_tokens_encoder[token.content] = last_special_token_id + key_id
elif is_transformers_version(">", "4.58.0"):
    tokenizer.added_tokens_encoder[token.content] = last_special_token_id + key_id
Still doesn't solve the following issue:

    FAILED tests/pipelines/test_pipelines.py::DownloadTests::test_textual_inversion_unload - AttributeError: CLIPTokenizer has no attribute _added_tokens_decoder. Did you mean: 'added_tokens_decoder'?

Internal thread: https://huggingface.slack.com/archives/C014N4749J9/p1768536480412119
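One way to avoid duplicating the version branch at every call site (a hedged sketch, not what this PR does) is a single accessor that tries the private attribute used by older `transformers` and falls back to the public one. The stand-in tokenizer classes below are hypothetical:

```python
# Hypothetical accessor: prefer the old private attribute when it exists
# (transformers <= 4.58.0), otherwise fall back to the public attribute
# used by newer versions (> 4.58.0).
def added_tokens_decoder_of(tokenizer):
    if hasattr(tokenizer, "_added_tokens_decoder"):
        return tokenizer._added_tokens_decoder
    return tokenizer.added_tokens_decoder

class OldStyleTokenizer:  # stand-in for transformers <= 4.58.0
    def __init__(self):
        self._added_tokens_decoder = {49408: "<cat-toy>"}

class NewStyleTokenizer:  # stand-in for transformers > 4.58.0
    def __init__(self):
        self.added_tokens_decoder = {49408: "<cat-toy>"}

print(added_tokens_decoder_of(OldStyleTokenizer())[49408])  # <cat-toy>
print(added_tokens_decoder_of(NewStyleTokenizer())[49408])  # <cat-toy>
```

One caveat: on real tokenizers the public `added_tokens_decoder` may be a property that returns a copy, in which case in-place mutation (as in the diff above) might not round-trip; that would need checking against the actual `transformers` release.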
@DN6 I have fixed a majority of the issues that were being caused by v5 / main.
What does this PR do?
This PR is to assess if we can move to `transformers` main again for our CI. This will also help us migrate to `transformers` v5 successfully.

Notes

For the

    FAILED tests/pipelines/test_pipelines.py::DownloadTests::test_textual_inversion_unload - AttributeError: CLIPTokenizer has no attribute _added_tokens_decoder. Did you mean: 'added_tokens_decoder'?

error, see this internal discussion.