This repository contains a Streamlit application and supporting code for a multimodal deep learning model that classifies brain tumor presence based on CT and/or MRI scans.
- Multimodal classifier that supports CT, MRI, or both inputs.
- Dual DenseNet201 architecture (feature-level fusion).
- Trained using the PyTorch deep learning framework.
You can run the app locally using:

```bash
streamlit run app.py
```

Or you can try out the app on Streamlit Cloud here.
```
brain-tumor-multimodal-app/
├── app.py                        # Streamlit interface for predictions
├── model.py                      # Model architecture
├── utils.py                      # Helper functions
├── requirements.txt              # Dependencies
├── README.md                     # This documentation
├── inference_examples/           # Example CT and MRI inputs (optional)
└── notebooks/
    └── training-notebook.ipynb   # Kaggle-style notebook with training pipeline
```
The model was trained using the public Kaggle dataset: 📂 Brain Tumor Multimodal Image CT and MRI Dataset
- Format: `ImageFolder` (`Healthy/`, `Tumour/`)
- CT and MRI stored in separate parent folders
- Samples randomly paired by label category (see the pairing sketch below)
- Two DenseNet201 encoders for CT and MRI images
- Global average pooling and feature fusion
- Fully connected classifier for binary prediction
- Softmax output for confidence scoring
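A minimal PyTorch sketch of this architecture is shown below. The class name, hidden layer size, and dropout rate are assumptions for illustration and may not match the released checkpoint exactly; see `model.py` for the actual definition.

```python
import torch
import torch.nn as nn
from torchvision import models


class MultimodalBrainTumorModel(nn.Module):
    """Two DenseNet201 encoders (CT + MRI) with feature-level fusion and a binary classifier."""

    def __init__(self, num_classes=2):
        super().__init__()
        # Separate DenseNet201 feature extractors for each modality.
        self.ct_encoder = models.densenet201(weights="DEFAULT").features
        self.mri_encoder = models.densenet201(weights="DEFAULT").features
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        # DenseNet201 yields 1920 feature channels per modality; fuse by concatenation.
        self.classifier = nn.Sequential(
            nn.Linear(1920 * 2, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, ct, mri):
        ct_feat = torch.flatten(self.pool(self.ct_encoder(ct)), 1)
        mri_feat = torch.flatten(self.pool(self.mri_encoder(mri)), 1)
        fused = torch.cat([ct_feat, mri_feat], dim=1)  # feature-level fusion
        logits = self.classifier(fused)
        return torch.softmax(logits, dim=1)  # class probabilities for confidence scoring
```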
The pretrained model is hosted on Hugging Face: 📍 Hugging Face Repo
```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="lukmanaj/brain-tumor-multimodal",
    filename="multimodal_brain_tumor_model.pth"
)
```

| Epoch | Train Loss | Accuracy |
|---|---|---|
| 1 | 0.1552 | 94.82% |
| 5 | 0.0368 | 98.78% |
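Once downloaded, the checkpoint can be loaded into the architecture defined in `model.py`. A minimal sketch, assuming the class is named `MultimodalBrainTumorModel` (as in the sketch above) and the file stores a plain `state_dict`:

```python
import torch

from model import MultimodalBrainTumorModel  # class name assumed; see model.py

model = MultimodalBrainTumorModel()
state_dict = torch.load(model_path, map_location="cpu")
model.load_state_dict(state_dict)
model.eval()  # inference mode, as used by the Streamlit app
```

Calling `model(ct_tensor, mri_tensor)` on batched, preprocessed inputs then returns class probabilities for Healthy vs. Tumour.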
- For educational and research purposes only.
- Not suitable for clinical diagnosis or real-world deployment without further validation.
Aliyu, L. (2025). Brain Tumor Classification using Multimodal Deep Learning.
Feel free to contribute or open an issue for improvements or questions.