This repository contains experiments for tomato status (ripeness) classification using deep learning and different image preprocessing techniques. The goal is to evaluate how preprocessing methods affect model performance when classifying tomatoes into ripe, unripe, and rotten categories. The experiments were conducted for academic purposes.
The following models are compared (a minimal transfer-learning sketch follows the list):
- ResNet50 (transfer learning)
- VGG16 (transfer learning)
- Custom CNN
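Below is a hedged sketch of how such a transfer-learning classifier can be set up. It assumes a Keras/TensorFlow workflow with a frozen ImageNet backbone and a three-class softmax head; the layer sizes, dropout rate, and optimizer settings are illustrative and not taken from the notebooks.

```python
# Minimal transfer-learning sketch (assumed Keras/TensorFlow setup;
# layer sizes and hyperparameters are illustrative, not from the notebooks).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_resnet50_classifier(num_classes=3, input_shape=(224, 224, 3)):
    # Frozen ImageNet backbone used as a fixed feature extractor
    base = ResNet50(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False

    # Small classification head for ripe / unripe / rotten
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping `ResNet50` for `VGG16` (also in `tensorflow.keras.applications`) gives the analogous VGG16 variant.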
The following preprocessing strategies are evaluated:
- No preprocessing (original images)
- Morphological processing
  - Grayscale conversion
  - Gaussian blur
  - Thresholding
  - Morphological opening and closing
- CLAHE + K-Means segmentation (see the sketch after this list)
  - LAB color space conversion
  - K-Means clustering (k = 3)
  - CLAHE for contrast enhancement
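A minimal sketch of the two preprocessing pipelines is shown below, assuming an OpenCV/NumPy implementation; the kernel sizes, CLAHE clip limit, and the use of Otsu thresholding are illustrative choices, not necessarily the exact parameters used in the notebooks.

```python
# Hedged sketch of the preprocessing pipelines (assumes OpenCV + NumPy;
# parameter values are illustrative, not taken from the notebooks).
import cv2
import numpy as np

def morphology_preprocess(bgr_image):
    # Grayscale conversion followed by Gaussian blur
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Thresholding (Otsu assumed here) to separate tomato from background
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening and closing to remove noise and fill holes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

def clahe_kmeans_preprocess(bgr_image, k=3):
    # LAB color space conversion; CLAHE applied to the lightness channel
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.merge((clahe.apply(l), a, b))

    # K-Means clustering (k = 3) over LAB pixel values
    pixels = enhanced.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)

    # Rebuild a segmented image from the cluster centers
    segmented = centers[labels.flatten()].astype(np.uint8).reshape(enhanced.shape)
    return cv2.cvtColor(segmented, cv2.COLOR_LAB2BGR)
```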
The training and evaluation notebooks are organized as follows:

model training and evaluation/
├── CNN CLAHE + kmeans.ipynb
├── Resnet50 original.ipynb
├── Resnet50 morphology.ipynb
├── Resnet50 CLAHE + kmeans.ipynb
└── VGG16 CLAHE + kmeans.ipynb
Key results:
- Best preprocessing method: CLAHE + K-Means segmentation
- Best performing model: VGG16
- Highest test accuracy: 95.09%
Dataset (a minimal loading sketch follows the list):
- Tomato images from a public dataset and self-collected data
- Classes: ripe, unripe, rotten
- Dataset is not included in this repository
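Since the dataset is not shipped with the repository, the sketch below shows one assumed way to load a class-per-folder image directory with Keras; the directory path, image size, and batch size are placeholders, not the project's actual configuration.

```python
# Assumed dataset layout: <data_dir>/ripe, <data_dir>/unripe, <data_dir>/rotten.
# Path, image size, and batch size are placeholders, not the project's settings.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",               # hypothetical path; dataset is not in this repo
    label_mode="categorical",   # one-hot labels for the 3 classes
    image_size=(224, 224),      # matches the assumed ResNet50/VGG16 input size
    batch_size=32,
)
```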
This project was developed for an academic Computer Vision course to study the impact of image preprocessing on deep learning–based classification.


