Many characters across (and sometimes within) shows have very similar appearances, and it can at times be confusing who is who. This is especially true when fan art gets involved, muddying the waters even more! The Jupyter Notebook in this repo contains a proof-of-concept example that can resolve this with a custom PyTorch Convolutional Neural Network (CNN) used to classify anime character images from two curated datasets originally sourced from the paper *AniWho: A Quick and Accurate Way to Classify Anime Character Faces in Images*. The dataset used here has four classes, with 50 images each.
The notebook is a full walkthrough covering how the data was loaded, how the model was built and its layer dimensions calculated, training, and performance validation. It also includes visualizations for assessing the dataset, overall model performance, and prediction accuracies.
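To illustrate the layer-size bookkeeping such a build involves, here is a minimal two-block CNN sketch. The channel counts, 64x64 input size, and layer choices are assumptions for illustration, not the notebook's actual architecture:

```python
import torch
import torch.nn as nn

class AnimeCNN(nn.Module):
    """Hypothetical minimal CNN for a 4-class image classification task."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 64x64 -> 64x64
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 32x32 -> 32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        # Flattened feature size: 32 channels * 16 * 16 spatial positions.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = AnimeCNN()
logits = model(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 4])
```

The comments track how each layer changes the spatial dimensions, which is exactly the arithmetic needed to size the final linear layer correctly.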
The custom CNN achieved 90% accuracy on the test set. The example images below show how similar some of the characters are and further illustrate the model's performance.
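Test-set accuracy of this kind is typically computed as the fraction of argmax predictions that match the labels. A generic sketch (the function name and the `(images, labels)` loader format are assumptions):

```python
import torch

def accuracy(model, loader):
    """Fraction of samples whose argmax prediction matches the label."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total
```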
The model's predictive performance is also reflected in the exceptionally low training and validation losses, as illustrated below.
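Loss curves like these come from recording the mean loss once per epoch. A minimal sketch of such a loop (the function name, loader format, and optimizer choice are assumptions, not the notebook's exact code):

```python
import torch
import torch.nn as nn

def train_epoch(model, loader, optimizer, criterion):
    """Run one training epoch and return the mean loss over all samples."""
    model.train()
    total_loss, n = 0.0, 0
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * labels.size(0)
        n += labels.size(0)
    return total_loss / n

# Recording one value per epoch produces the data behind a loss curve:
#   losses = [train_epoch(model, train_loader, optimizer, criterion)
#             for _ in range(num_epochs)]
```

The same pattern, run with `model.eval()` and `torch.no_grad()` and without the optimizer steps, yields the validation curve.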
