AI Engineering Glossary

Adversarial Attacks

Adversarial Attacks involve making small, often imperceptible changes to a model's input in order to fool or mislead it. For example, altering a few pixels in an image can cause a neural network to misclassify it, even though the image looks unchanged to the human eye. The concept highlights how sensitive models can be to their input data, and it contrasts with robustness, which measures a model's resistance to exactly these kinds of perturbations.
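To make this concrete, below is a minimal sketch of one well-known attack, the Fast Gradient Sign Method (FGSM), written in PyTorch. The model, loss function, and epsilon value are illustrative assumptions, not part of any specific library API; the key idea is simply to nudge each input element a small step in the direction that increases the loss.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    """Sketch of FGSM: perturb input x so the model's loss on label y increases,
    with each element changed by at most epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step every input element by +/- epsilon, following the sign of the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed image inside the valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

Because the perturbation is bounded by a small epsilon per pixel, the adversarial image typically looks identical to the original to a human observer, yet the model's prediction can flip entirely.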

