
Conversation


@dependabot dependabot bot commented on behalf of github Nov 20, 2021

Bumps transformers from 4.9.1 to 4.12.5.

Release notes

Sourced from transformers' releases.

v4.12.5: Patch release

Reverts a commit that introduced other issues:

  • Revert "Experimenting with adding proper get_config() and from_config() methods (#14361)"

v4.12.4: Patch release

  • Fix gradient_checkpointing backward compatibility (#14408)
  • [Wav2Vec2] Make sure that gradient checkpointing is only run if needed (#14407)
  • Experimenting with adding proper get_config() and from_config() methods (#14361)
  • enhance rewrite state_dict missing _metadata (#14348)
  • Support for TF >= 2.7 (#14345)
  • improve rewrite state_dict missing _metadata (#14276)
  • Fix of issue #13327: Wrong weight initialization for TF t5 model (#14241)

v4.12.3: Patch release

  • Add PushToHubCallback in main init (#14246)
  • Supports huggingface_hub >= 0.1.0

v4.12.2: Patch release

Fixes an issue with the image segmentation pipeline and PyTorch's inference mode.

v4.12.1: Patch release

Enables torch 1.10.0

v4.12.0: TrOCR, SEW & SEW-D, Unispeech & Unispeech-SAT, BARTPho

TrOCR and VisionEncoderDecoderModel

One new model is released as part of the TrOCR implementation: TrOCRForCausalLM, in PyTorch. It comes along with a new VisionEncoderDecoderModel class, which allows you to mix and match any vision Transformer encoder with any text Transformer as the decoder, similar to the existing SpeechEncoderDecoderModel class.

The TrOCR model was proposed in TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models, by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.

The TrOCR model consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform optical character recognition in an end-to-end manner.

Compatible checkpoints can be found on the Hub: https://huggingface.co/models?other=trocr

SEW & SEW-D

SEW and SEW-D (Squeezed and Efficient Wav2Vec) were proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.

SEW and SEW-D models use a Wav2Vec-style feature encoder and introduce temporal downsampling to reduce the length of the input to the transformer encoder. SEW-D additionally replaces the transformer encoder with a DeBERTa one. Both models achieve significant inference speedups without sacrificing speech recognition quality.

Compatible checkpoints are available on the Hub: https://huggingface.co/models?other=sew and https://huggingface.co/models?other=sew-d
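The temporal-downsampling idea can be illustrated with a minimal, self-contained sketch (pure Python, not the actual SEW implementation): average-pooling the frame features along time with a stride shortens the sequence the transformer encoder has to process, which is where the inference speedup comes from.

```python
def downsample(frames, stride=2):
    """Average-pool a sequence of feature vectors along time with the
    given stride, shortening the sequence the encoder must process.
    Illustrative only; SEW's real feature encoder is a conv stack.
    `frames` is a list of equal-length feature vectors (lists of floats)."""
    pooled = []
    for start in range(0, len(frames), stride):
        window = frames[start:start + stride]
        # Element-wise mean over the time window.
        pooled.append([sum(col) / len(window) for col in zip(*window)])
    return pooled

# A toy 6-step sequence of 2-dim features is halved to 3 steps at stride 2.
features = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0],
            [7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]
short = downsample(features, stride=2)
# short == [[2.0, 3.0], [6.0, 7.0], [10.0, 11.0]]
```

Since self-attention cost grows quadratically with sequence length, halving the sequence roughly quarters the attention compute.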

DistilHuBERT

... (truncated)

Commits
  • ef3cec0 Release: v4.12.5
  • a5211fc Revert "Experimenting with adding proper get_config() and from_config() metho...
  • 527c763 Release: v4.12.4
  • 6f40723 Fix gradient_checkpointing backward compatibility (#14408)
  • db242ae [Wav2Vec2] Make sure that gradient checkpointing is only run if needed (#14407)
  • e99a231 Experimenting with adding proper get_config() and from_config() methods (#14361)
  • 341a059 enhance rewrite state_dict missing _metadata (#14348)
  • 6bf2027 Support for TF >= 2.7 (#14345)
  • c8206b4 improve rewrite state_dict missing _metadata (#14276)
  • b6b97c3 Fix of issue #13327: Wrong weight initialization for TF t5 model (#14241)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
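As a rough illustration of how comment commands like the ones above could be matched (a hypothetical sketch, not Dependabot's actual implementation):

```python
import re

# Hypothetical matcher for "@dependabot <command>" comments; not
# Dependabot's actual code. Command phrases mirror the list above.
COMMANDS = [
    "rebase", "recreate", "merge", "squash and merge", "cancel merge",
    "reopen", "close", "ignore this major version",
    "ignore this minor version", "ignore this dependency",
]

def parse_command(comment):
    """Return the matched command phrase, or None for ordinary comments."""
    m = re.match(r"@dependabot\s+(.+?)\s*$", comment.strip())
    if not m:
        return None
    body = m.group(1).lower()
    return body if body in COMMANDS else None

parse_command("@dependabot rebase")            # -> "rebase"
parse_command("@dependabot squash and merge")  # -> "squash and merge"
parse_command("thanks!")                       # -> None
```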

Bumps [transformers](https://github.com/huggingface/transformers) from 4.9.1 to 4.12.5.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.9.1...v4.12.5)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
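The `update-type` above, `version-update:semver-minor`, reflects that 4.9.1 → 4.12.5 changes the minor version component. A minimal sketch of how such a bump could be classified (illustrative only, not Dependabot's code):

```python
def classify_bump(old, new):
    """Classify a version bump as semver major/minor/patch by comparing
    dotted numeric components. Illustrative only, not Dependabot's code."""
    o = [int(p) for p in old.split(".")]
    n = [int(p) for p in new.split(".")]
    for level, (a, b) in zip(("major", "minor", "patch"), zip(o, n)):
        if a != b:
            return f"version-update:semver-{level}"
    return "no-update"

classify_bump("4.9.1", "4.12.5")  # -> "version-update:semver-minor"
```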

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) label Nov 20, 2021

dependabot bot commented on behalf of github Dec 11, 2021

Superseded by #84.

@dependabot dependabot bot closed this Dec 11, 2021
@dependabot dependabot bot deleted the dependabot/pip/python/requirements/tune/transformers-4.12.5 branch December 11, 2021 08:06