Author: Andrew Feng
This project implements a multi-label disease classifier for retinal fundus images using a fine-tuned ResNet-50 model. It explores classification performance on a medical dataset through training, evaluation, and fine-tuning pipelines. The goal is to detect key retinal diseases effectively while addressing class imbalance and real-world diagnostic challenges.
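A common way to handle the class imbalance mentioned above is to weight each disease class by its negative-to-positive ratio in the loss (e.g. via the `pos_weight` argument of PyTorch's `BCEWithLogitsLoss`). A minimal sketch in plain Python; the class names and counts below are illustrative, not the project's actual dataset statistics:

```python
# Hypothetical per-class positive counts for a multi-label dataset.
# These numbers are illustrative only, not real dataset statistics.
label_counts = {"DR": 800, "glaucoma": 150, "cataract": 50}
num_samples = 1000

# pos_weight[c] = (#negatives) / (#positives): rare classes get larger
# weights, which counteracts imbalance in a BCE-style multi-label loss.
pos_weight = {
    name: (num_samples - count) / count
    for name, count in label_counts.items()
}

print(pos_weight)  # cataract, the rarest class, gets the largest weight
```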
The dataset used in this project is available here:
🔗 Google Drive Dataset
After cloning this repository, download the entire data/ folder from the link above and place it inside the root directory of the project (Fundus-Classifier/). This is required to run the code successfully and reproduce results.
```shell
git clone https://github.com/Andyrooooo16/Fundus-Classifier.git
cd Fundus-Classifier
```
Download the data/ folder from the Google Drive link, then place it directly inside the project root:
```
Fundus-Classifier/
├── data/
├── src/
├── outputs/
└── README.md
```
All source code is located in the src/ directory:
```
Fundus-Classifier/
└── src/
    ├── crop_pictures.py
    ├── dataset.py
    ├── disease_breakdown.py
    ├── model.py
    ├── train_model.py
    ├── evaluate.py
    ├── finetuning_model.py
    ├── evaluate_finetuned.py
    └── test_finetuned.py
```
Run `crop_pictures.py` on the Training, Validation, and Test datasets to generate cropped images for model training, evaluation, and testing.
Run `dataset.py` to preprocess all three subsets of data.
Run `disease_breakdown.py` to generate three visualization graphs in your browser showing the dataset composition and the top disease classes in each subset.
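The kind of breakdown that `disease_breakdown.py` charts reduces to counting how often each label occurs across the multi-label annotations of a subset. A minimal sketch in plain Python (the annotations are made-up examples; the actual script presumably renders interactive charts rather than printing):

```python
from collections import Counter

# Hypothetical multi-label annotations: one list of disease labels per image.
annotations = [
    ["DR"],
    ["DR", "glaucoma"],
    ["cataract"],
    ["DR"],
]

# Flatten the lists and count label occurrences across the subset.
counts = Counter(label for labels in annotations for label in labels)

# Top disease classes, most frequent first.
for label, n in counts.most_common():
    print(label, n)
```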
The `model.py` file contains the pretrained ResNet-50 model with adjusted hyperparameters for initial training.
Run `train_model.py` to train the multi-label classifier using the cropped Training Dataset and the model from `model.py`. This will save `best_fundus_model.pth` in the `Fundus-Classifier/outputs/Training` folder.
Run `evaluate.py` to assess the performance of `best_fundus_model.pth`. Evaluation metrics will be saved in the `Fundus-Classifier/outputs/Evaluation` folder.
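Evaluating a multi-label model differs from computing single-label accuracy: each class probability is thresholded independently, and metrics such as micro-averaged F1 pool the results over all (image, class) pairs. A minimal sketch of micro-F1 in plain Python (the probabilities and labels are illustrative; `evaluate.py` presumably reports more metrics than this):

```python
THRESHOLD = 0.5  # a common default cut-off for sigmoid outputs

# Hypothetical per-image class probabilities and ground-truth labels
# for a 3-class problem (illustrative values only).
probs = [[0.9, 0.2, 0.7], [0.1, 0.8, 0.4]]
truth = [[1, 0, 1], [0, 1, 1]]

preds = [[1 if p >= THRESHOLD else 0 for p in row] for row in probs]

# Micro-averaged F1: pool true/false positives and false negatives
# over all (image, class) pairs before computing precision and recall.
tp = sum(p and t for pr, tr in zip(preds, truth) for p, t in zip(pr, tr))
fp = sum(p and not t for pr, tr in zip(preds, truth) for p, t in zip(pr, tr))
fn = sum((not p) and t for pr, tr in zip(preds, truth) for p, t in zip(pr, tr))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # → 0.857
```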
Run `finetuning_model.py` to fine-tune the model. This will generate `fine_tuned_model.pth`.
Run `evaluate_finetuned.py` to evaluate the fine-tuned model. Results will be saved under `Fundus-Classifier/outputs/Fine_Tuned`.
Run `test_finetuned.py` to perform the final evaluation of the fine-tuned model on the test dataset. Final evaluation metrics will be saved in the `Fundus-Classifier/outputs/Test` folder.