Improvising adversarial attack against prediction of neural network
Updated Aug 22, 2025 - Python
Fine-tuned resnet34 and mobilenetv2 on the Caltech-101 dataset, tested FGSM attacks, and used XAI techniques to understand both models' behaviour, then implemented two defensive measures against the attacks.
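The FGSM attack mentioned above is a single sign-of-gradient step. As a minimal, self-contained sketch (not the repo's code, which attacks the fine-tuned PyTorch models), the idea can be shown in NumPy against a logistic-regression "network" whose input gradient is known in closed form:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One FGSM step: perturb the input x by eps in the sign of the
    gradient of the loss w.r.t. x, which increases the model's loss."""
    z = y * (w @ x + b)                      # classification margin
    # d/dx of log(1 + exp(-z)) = -y * sigmoid(-z) * w  (closed form)
    grad_x = -y * (1.0 / (1.0 + np.exp(z))) * w
    return x + eps * np.sign(grad_x)

def loss(x, y, w, b):
    """Logistic loss of the linear model on example (x, y)."""
    return np.log1p(np.exp(-y * (w @ x + b)))

# Illustrative weights and input (hypothetical values, not from the repo)
w, b = np.array([1.0, -2.0, 0.5]), 0.1
x = np.array([0.2, -0.1, 0.4])
y = 1.0
x_adv = fgsm_attack(x, y, w, b, eps=0.1)
print(loss(x, y, w, b), loss(x_adv, y, w, b))  # adversarial loss is larger
```

For a linear model the perturbation shrinks the margin by `eps * sum(|w|)` exactly, so the loss increase is guaranteed rather than merely first-order.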
A quantum-classical (hybrid) neural network combined with an adversarial attack mechanism. The core libraries employed are Quantinuum's pytket and pytket-qiskit; torchattacks is used for the white-box, targeted, compounded adversarial attacks.
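A targeted white-box attack, as used here via torchattacks, differs from the untargeted case only in the sign of the step: instead of increasing the loss on the true label, it decreases the loss on an attacker-chosen target label. A hedged NumPy sketch on a linear model (the repo attacks a hybrid quantum-classical network; every name and value below is illustrative):

```python
import numpy as np

def targeted_fgsm(x, y_target, w, b, eps):
    """Targeted FGSM step: move *against* the gradient of the loss on
    the attacker-chosen target label, pushing the model toward it."""
    z = y_target * (w @ x + b)
    grad_x = -y_target * (1.0 / (1.0 + np.exp(z))) * w
    return x - eps * np.sign(grad_x)   # minus: descend the target-label loss

def loss(x, y, w, b):
    """Logistic loss of the linear model on example (x, y)."""
    return np.log1p(np.exp(-y * (w @ x + b)))

# Hypothetical weights; the clean input is classified as +1
w, b = np.array([1.0, -2.0, 0.5]), 0.1
x = np.array([0.2, -0.1, 0.4])
x_adv = targeted_fgsm(x, y_target=-1.0, w=w, b=b, eps=0.1)
```

After the step, the loss on the target label `-1` is strictly lower than before, i.e. the input has been nudged toward the attacker's chosen class.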