adversarial examples in conventional machine learning
Adversarial examples in conventional ML models have been discussed for decades. ML-based systems built on handcrafted features are the primary targets, such as spam filters, intrusion detection, biometric authentication, and fraud detection. For example, spammers often modify emails by inserting characters into trigger words to evade detection.
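The character-insertion evasion can be illustrated with a hypothetical toy spam filter whose handcrafted feature is a count of known spam keywords (the keyword list and threshold are illustrative assumptions, not taken from any real filter):

```python
import re

# Hypothetical handcrafted-feature spam filter: the feature is the
# number of known spam keywords found in the message.
SPAM_KEYWORDS = {"free", "winner", "prize"}

def is_spam(text: str) -> bool:
    words = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for w in words if w in SPAM_KEYWORDS)
    return hits >= 2  # simple threshold on the keyword-count feature

original = "You are a winner! Claim your free prize now."
evasive  = "You are a w.i.n.n.e.r! Claim your f-r-e-e prize now."

print(is_spam(original))  # unmodified spam triggers the keyword feature
print(is_spam(evasive))   # inserted characters break keyword matching
```

Because the attacker knows which feature the filter extracts (keyword matches), a few inserted characters change the feature value without changing the message's meaning to a human reader.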
Crafting adversarial examples against conventional ML requires knowledge of the feature-extraction pipeline, whereas DL models usually take raw data as input.
- DL models usually operate directly on raw inputs without explicit feature extraction. Consequently, adversarial attacks on DL primarily perturb the raw input data itself to fool the model.
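A minimal sketch of raw-input perturbation, in the spirit of gradient-sign attacks such as FGSM, using a toy differentiable model `sigmoid(w·x + b)` as a stand-in for a trained network (the weights, input, and step size below are all hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-in for a trained model: score = sigmoid(w . x + b).
w = np.array([1.0, -2.0, 0.5, 3.0])   # hypothetical trained weights
b = 0.0
x = np.array([0.2, -0.1, 0.4, 0.3])   # raw input to attack

# For this linear model the gradient of the score w.r.t. the raw
# input x is just w, so a gradient-sign step perturbs x directly --
# no feature-extraction stage needs to be reverse-engineered.
eps = 0.4
x_adv = x - eps * np.sign(w)          # push the score downward

print(sigmoid(w @ x + b))      # original score, above 0.5
print(sigmoid(w @ x_adv + b))  # adversarial score, below 0.5
```

The key contrast with the spam-filter case is that the perturbation is computed from gradients on the raw input itself, which is why most DL attacks (FGSM, PGD, and variants) operate in input space.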