Robustness
The ability of an AI model to maintain reliable performance when faced with unexpected inputs, adversarial attacks, data distribution changes, or edge cases.
Why It Matters
Robust models are essential for real-world deployment. A model that works perfectly in the lab but fails on slightly unusual inputs is dangerous in production.
Example
A self-driving car's vision system correctly identifying a stop sign even when it is partially obscured by snow, tilted at an angle, or covered by a sticker.
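A simple way to probe robustness in code is to measure accuracy on clean inputs and again on randomly perturbed copies of the same inputs. The sketch below uses a toy one-feature threshold classifier and Gaussian noise as a stand-in for real-world corruptions like snow or stickers; the model, data, and noise level are illustrative assumptions, not a real perception pipeline.

```python
import random

random.seed(0)

def predict(x):
    """Toy stand-in classifier: class 1 if the feature exceeds 0."""
    return int(x > 0)

# Synthetic inputs whose true label follows the same rule the model learned.
inputs = [random.uniform(-1, 1) for _ in range(1000)]
labels = [int(x > 0) for x in inputs]

def accuracy(xs):
    return sum(predict(x) == y for x, y in zip(xs, labels)) / len(labels)

clean_acc = accuracy(inputs)
# Perturb each input with Gaussian noise (a crude proxy for occlusion, tilt, etc.).
noisy = [x + random.gauss(0, 0.3) for x in inputs]
noisy_acc = accuracy(noisy)
print(clean_acc, noisy_acc)  # accuracy degrades on perturbed inputs
```

The gap between the two numbers is one concrete measure of (non-)robustness: a more robust model keeps `noisy_acc` close to `clean_acc`.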
Think of it like...
Like a good bridge that handles not just normal traffic but also storms, earthquakes, and overloaded trucks — it is designed to perform under adverse conditions.
Related Terms
Adversarial Attack
An input deliberately crafted to fool an AI model into making incorrect predictions. Adversarial examples often look normal to humans but cause models to fail spectacularly.
Adversarial Training
A defense technique where adversarial examples are included in the training data to make the model more robust against attacks. The model learns to handle both normal and adversarial inputs.
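A minimal version of this idea can be sketched in a few lines: at every update step, train on the clean example and on its FGSM-perturbed counterpart. The dataset, epsilon, and learning rate below are illustrative assumptions for a toy logistic-regression model.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Craft an adversarial example against the current model."""
    err = predict(w, b, x) - y
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(err * wi) for xi, wi in zip(x, w)]

def grad_step(w, b, x, y, lr):
    """One gradient-descent step on the cross-entropy loss."""
    err = predict(w, b, x) - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b - lr * err

random.seed(0)
# Tiny synthetic dataset: class 1 near (1, 1), class 0 near (-1, -1).
data = [([random.gauss(m, 0.3), random.gauss(m, 0.3)], y)
        for y, m in [(1, 1.0), (0, -1.0)] for _ in range(20)]

w, b, eps, lr = [0.0, 0.0], 0.0, 0.3, 0.5
for _ in range(50):
    for x, y in data:
        w, b = grad_step(w, b, x, y, lr)                       # clean input
        w, b = grad_step(w, b, fgsm(w, b, x, y, eps), y, lr)   # adversarial input

# The trained model should classify even freshly perturbed inputs correctly.
robust_acc = sum((predict(w, b, fgsm(w, b, x, y, eps)) > 0.5) == (y == 1)
                 for x, y in data) / len(data)
print(robust_acc)
```

Note that the adversarial examples are regenerated against the current weights at every step, so the model is always defending against attacks on its latest self.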
Generalization
A model's ability to perform well on new, unseen data that was not part of its training set. Generalization is the ultimate goal of machine learning — learning patterns, not memorizing examples.
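The gap between memorizing and generalizing can be made concrete by comparing training accuracy with accuracy on held-out data. In the sketch below, a pure memorizer (nearest-neighbor lookup over the training set) scores perfectly on data it has seen but is overtaken on unseen data by a simple rule that captures the underlying pattern; the dataset and 10% label noise are illustrative assumptions.

```python
import random

random.seed(1)

def make_data(n, noise=0.1):
    """True rule: label = 1 if x > 0, with a fraction of labels flipped."""
    data = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        y = int(x > 0)
        if random.random() < noise:
            y = 1 - y
        data.append((x, y))
    return data

train, test = make_data(200), make_data(200)

def nn_predict(x):
    """Memorizer: copy the label of the nearest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def rule_predict(x):
    """Simple rule matching the underlying pattern."""
    return int(x > 0)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# The memorizer is perfect on data it has already seen...
print(accuracy(nn_predict, train))
# ...but on unseen data the simple rule generalizes better,
# because the memorizer also learned the label noise.
print(accuracy(rule_predict, test), accuracy(nn_predict, test))
```

The memorizer's perfect training score is misleading: it has fit the noise along with the pattern, which is exactly the failure mode generalization is meant to avoid.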