Want to confuse a neural network?
Try FGSM (Fast Gradient Sign Method) – a technique where adding carefully selected “noise” forces the model to misclassify what it “sees” in an image.
I wanted to observe what happens inside the neural network during such an attack, so I created an interactive visualization:
- Draw an apple or a pear – the model will try to recognize what’s in the image.
- Add “attack noise.”
- Watch what happens inside each layer.
Bonus: Try drawing an image that can resist this type of attack.
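
For the curious, the "attack noise" in FGSM is just the sign of the loss gradient with respect to the input pixels, scaled by a small epsilon. Here's a minimal PyTorch sketch of the idea; the function name, epsilon value, and model are illustrative, not the exact code behind the demo:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.05):
    """Return an adversarial copy of `image` perturbed with FGSM."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss w.r.t. the true label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()

    # Step in the direction of the gradient's sign, scaled by epsilon,
    # then keep pixel values in the valid [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Even with epsilon small enough that the noise is barely visible, this single gradient step is often enough to flip the model's prediction.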