Risk Assessment
The systematic process of identifying, analyzing, and evaluating risks associated with an AI system, weighing both the likelihood and the impact of potential harms.
Why It Matters
Risk assessment is required for high-risk systems under the EU AI Act and is becoming standard practice more broadly. Its results determine what safeguards, testing, and monitoring an AI system needs.
Example
Evaluating a hiring AI by assessing risks of bias (high likelihood, high impact), data privacy violations (medium likelihood, high impact), and system downtime (low impact).
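The likelihood-times-impact logic in the example above can be sketched as a simple scoring function. This is a minimal illustration, not a standard methodology: the three-level ordinal scale, the function names, and the assumed likelihood for downtime are all hypothetical.

```python
# Hypothetical likelihood x impact risk matrix,
# assuming a simple ordinal scale (1 = low, 2 = medium, 3 = high).
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Score a risk as likelihood x impact, giving a value from 1 to 9."""
    return LEVELS[likelihood] * LEVELS[impact]

def prioritize(risks: dict) -> list:
    """Rank named risks from highest to lowest score."""
    scored = [(name, risk_score(l, i)) for name, (l, i) in risks.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Ratings for bias and privacy follow the hiring-AI example above;
# the downtime likelihood is an assumption added for illustration.
hiring_ai_risks = {
    "bias": ("high", "high"),
    "data privacy violation": ("medium", "high"),
    "system downtime": ("low", "low"),
}

print(prioritize(hiring_ai_risks))
```

Ranking risks this way makes the prioritization explicit: bias (score 9) demands the strongest safeguards, while downtime (score 1) falls to the bottom of the list.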
Think of it like...
Like an insurance underwriter assessing a building — they evaluate every potential risk, its likelihood, and its potential damage to determine appropriate protections.
Related Terms
AI Governance
The frameworks, policies, processes, and organizational structures that guide the responsible development, deployment, and monitoring of AI systems within organizations and across society.
EU AI Act
The European Union's comprehensive regulatory framework for artificial intelligence, establishing rules based on risk levels. It categorizes AI systems from minimal to unacceptable risk with corresponding compliance requirements.
Compliance
The process of ensuring AI systems meet regulatory requirements, industry standards, and organizational policies. AI compliance is becoming increasingly complex as regulations proliferate.
AI Safety
The research field focused on ensuring AI systems operate reliably, predictably, and without causing unintended harm. It spans from technical robustness to long-term existential risk concerns.
Responsible AI
An approach to developing and deploying AI that prioritizes ethical considerations, fairness, transparency, accountability, and societal benefit throughout the entire AI lifecycle.