This course teaches you about natural language processing (NLP) using libraries from the Hugging Face ecosystem (🤗 Transformers, 🤗 Datasets, 🤗 Tokenizers, and 🤗 Accelerate) as well as the Hugging Face Hub.
- Chapters 1 to 4 provide an introduction to the main concepts of the 🤗 Transformers library. By the end of this part of the course, you will be familiar with how Transformer models work and will know how to use a model from the Hugging Face Hub, fine-tune it on a dataset, and share your results on the Hub (a minimal sketch of this workflow follows this list).
- Chapters 5 to 8 provide a deep dive into the 🤗 Datasets and 🤗 Tokenizers libraries, a walkthrough of the common NLP use cases covered by the 🤗 Transformers library, and guidance on debugging and on searching the official documentation (see the dataset-processing sketch below).
- Chapter 9 provides an introduction to Gradio, a framework for building machine learning applications, and dives into building Gradio Interfaces and Blocks (a tiny Interface example appears below).
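As a taste of the Chapters 1 to 4 workflow, here is a minimal sketch of loading a pretrained model from the Hub with the 🤗 Transformers `pipeline` API. The checkpoint name is just an illustrative choice, not one mandated by the course.

```python
from transformers import pipeline

# Download an example checkpoint from the Hugging Face Hub and build
# a ready-to-use sentiment-analysis pipeline around it.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Run inference on a sample sentence.
print(classifier("This course is great!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```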
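In the same spirit, a short sketch of the kind of dataset processing covered in the 🤗 Datasets and 🤗 Tokenizers chapters: loading a dataset from the Hub and tokenizing it in batches. The GLUE/SST-2 dataset and the BERT tokenizer are assumptions made for illustration.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the training split of an example dataset from the Hub.
dataset = load_dataset("glue", "sst2", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Dataset.map applies the tokenizer over the whole split in batches.
tokenized = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True),
    batched=True,
)

print(tokenized[0]["input_ids"][:10])
```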
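And for Chapter 9, a minimal Gradio Interface, assuming nothing beyond the basic `gr.Interface` API:

```python
import gradio as gr

def greet(name):
    # Plain Python function that the UI will wrap.
    return f"Hello, {name}!"

# Interface wires a function to input/output components;
# launch() serves it as a local web app.
demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()
```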
Additional chapters are in development and will be released soon.
I am working through the PyTorch version of the Hugging Face course. The TensorFlow version's syntax is very similar, and I am familiar with TensorFlow as well (I hold the TensorFlow Developer Certificate).