Privacy-Preserving ML
Machine learning techniques that train models or make predictions while protecting the privacy of individual data points. Includes federated learning, differential privacy, and homomorphic encryption.
Why It Matters
Privacy-preserving ML enables AI development in sensitive domains such as healthcare and finance, where regulations (e.g., GDPR, HIPAA) restrict or prohibit sharing raw data.
Example
Training a medical AI across 50 hospitals using federated learning and differential privacy, gaining the benefits of pooled data without any hospital sharing patient records.
Think of it like...
Like a sealed ballot election — everyone's individual vote (data) is private, but the collective result (model) is still accurate.
Related Terms
Federated Learning
A decentralized training approach where a model is trained across multiple devices or organizations without sharing raw data. Each participant trains locally and only shares model updates.
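The core loop can be sketched in a few lines. This is a minimal, illustrative federated-averaging (FedAvg-style) simulation, not a production protocol: the function names, the linear-regression task, and the three simulated participants are all assumptions made for the example. The key property it demonstrates is that only model weights, never raw data, cross the participant boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, private_data, lr=0.1):
    """One step of local gradient descent on a participant's private data.

    Only the resulting weights leave the participant, never (X, y).
    """
    X, y = private_data
    grad = X.T @ (X @ global_weights - y) / len(y)  # mean-squared-error gradient
    return global_weights - lr * grad

def federated_round(global_weights, participants):
    """Server step: average the locally trained weights (FedAvg)."""
    updates = [local_update(global_weights, data) for data in participants]
    return np.mean(updates, axis=0)

# Three simulated participants, each holding private data from the same
# underlying model (hypothetical setup for illustration).
true_w = np.array([2.0, -1.0])
participants = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    participants.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, participants)
# w now approximates true_w, yet no participant ever shared raw data
```

Real systems (e.g., cross-device federated learning) add secure aggregation, client sampling, and multiple local epochs per round, but the data-stays-local structure is the same.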
Differential Privacy
A mathematical framework that provides provable privacy guarantees when analyzing or learning from data. It ensures that the output of any analysis is approximately the same whether or not any individual's data is included.
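A concrete instance of that guarantee is the Laplace mechanism: for a query whose answer changes by at most 1 when one individual's record is added or removed (a counting query), adding noise drawn from Laplace(1/ε) yields ε-differential privacy. The sketch below is illustrative; the dataset and the `private_count` helper are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (one record changes the answer by
    at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(predicate(r) for r in records)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset: ages of ten individuals.
ages = [34, 45, 29, 62, 51, 38, 47, 55, 41, 33]
answer = private_count(ages, lambda a: a >= 40, epsilon=1.0)
# answer is near the true count (6), but the noise makes the output
# almost equally likely whether or not any one person's age is included
```

Smaller ε means stronger privacy but noisier answers; choosing ε is the central accuracy-versus-privacy trade-off in deployed DP systems.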
Data Privacy
The right of individuals to control how their personal information is collected, used, stored, and shared. In AI, data privacy concerns arise from training data, user interactions, and model outputs.
GDPR
General Data Protection Regulation — the European Union's comprehensive data protection law that gives individuals control over their personal data and imposes strict obligations on organizations handling that data.