Adversarial attack — a specially crafted input, or a perturbation (mask) applied to the input of a machine learning model, designed to make the model produce a wrong output. It is a way to manipulate the output, to ‘fool’ the algorithm.
«Attacking Machine Learning with Adversarial Examples» on the OpenAI blog covers the basics and provides some examples.
OpenAI blog article: https://blog.openai.com/adversarial-example-research/
#adversarialattack #openai #novice #beginner
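To make the idea concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attacks of this kind and one discussed in the OpenAI post. PyTorch and the names model/image/label are illustrative assumptions, not code from the article:

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # Perturb `image` so that `model` is more likely to misclassify it.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

A small epsilon keeps the perturbation nearly imperceptible to humans while still flipping the model's prediction.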
A new attack on neural networks that can alter the very purpose of the network.
A surprising adversarial attack in which a single perturbation applied to all input images can "reprogram" a poorly-defended neural network to perform a different task entirely, e.g. turn an ImageNet classifier into a network that counts squares.
arXiv: https://arxiv.org/pdf/1806.11146.pdf
#Goodfellow #gbrain #adversarialattack
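In essence, the attacker optimizes one shared perturbation (the "adversarial program") against a frozen victim model. Here is a minimal sketch of that training loop, assuming PyTorch and assuming the data loader already embeds the new task's inputs and remaps its labels onto the victim's output classes (both handled more carefully in the paper):

import torch
import torch.nn.functional as F

def learn_program(victim_model, data_loader, input_shape, steps=1000, lr=0.05):
    # Freeze the victim network: only the program W is trained.
    for p in victim_model.parameters():
        p.requires_grad_(False)
    W = torch.zeros(input_shape, requires_grad=True)  # one program for all inputs
    optimizer = torch.optim.Adam([W], lr=lr)
    for step, (x, y) in enumerate(data_loader):
        adv_x = torch.tanh(x + W)          # keep perturbed pixels bounded
        logits = victim_model(adv_x)       # victim's weights stay fixed
        loss = F.cross_entropy(logits, y)  # y: labels of the NEW task
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step >= steps:
            break
    return W.detach()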