General
A/B Testing
A controlled experiment comparing two versions (A and B) of a system, feature, or model to determine which performs better. Users are randomly assigned to each version and outcomes are measured.
Why It Matters
A/B testing provides statistical evidence (not proof) of which model or feature performs better on real users. It prevents deploying changes that seem better in theory but hurt metrics in practice.
Example
Showing 50% of users recommendations from Model A and 50% from Model B, then measuring which group has higher click-through and purchase rates over two weeks.
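The comparison above can be sketched with a standard two-proportion z-test, which checks whether the difference in click-through rates is statistically significant. This is a minimal stdlib-only sketch; the click counts are hypothetical, and a production test would also account for sample-size planning and multiple metrics.

```python
import math

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Return (z, two-sided p-value) for H0: rate A == rate B."""
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    # Pooled rate under the null hypothesis that both groups share one rate.
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: Model A gets 120 clicks out of 2400 impressions,
# Model B gets 156 clicks out of 2400 impressions.
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If p falls below the chosen significance level (commonly 0.05), the observed difference is unlikely to be random noise and Model B can be declared the winner.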
Think of it like...
Like a taste test where people try two versions of a recipe without knowing which is which — the blind comparison reveals the true preference.