FLOPS
Floating Point Operations Per Second — a measure of computing speed that quantifies how many floating-point calculations a processor can perform each second. It is the standard unit for rating AI hardware performance.
Why It Matters
FLOPS is the benchmark for comparing AI hardware capability and estimating training costs. It is how the industry measures and plans compute investments.
Example
An NVIDIA H100 GPU delivers roughly 4 petaFLOPS (4 quadrillion operations per second) at the low-precision formats (such as FP8) commonly used for AI computation.
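To see how a FLOPS rating turns into a planning number, here is a minimal back-of-envelope sketch in Python. All inputs are illustrative assumptions (the utilization fraction and the total-compute figure are hypothetical, not measured values):

```python
# Back-of-envelope sketch: how long might a training run take on hardware
# with a given peak FLOPS rating? Numbers are illustrative assumptions.

PEAK_FLOPS = 4e15          # ~4 petaFLOPS, the H100 figure quoted above
UTILIZATION = 0.4          # assumed fraction of peak actually achieved
TOTAL_FLOPS_NEEDED = 1e23  # hypothetical total compute for a training run

seconds = TOTAL_FLOPS_NEEDED / (PEAK_FLOPS * UTILIZATION)
days = seconds / 86_400    # seconds per day
print(f"{days:.1f} GPU-days")
```

Real runs divide this across many accelerators, so wall-clock time is the GPU-days figure divided by the (effective) number of chips.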
Think of it like...
Like horsepower for computers — it measures raw computational muscle, telling you how much mathematical work a chip can do in a given time.
Related Terms
GPU
Graphics Processing Unit — originally designed for rendering graphics, GPUs excel at the parallel mathematical operations needed for training and running AI models. They are the primary hardware for modern AI.
TPU
Tensor Processing Unit — Google's custom-designed chip specifically optimized for machine learning workloads. TPUs are designed for matrix operations that are fundamental to neural network computation.
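The matrix operations mentioned above dominate neural network cost, which is why chips are rated in FLOPS at all. A standard rule of thumb: multiplying an (m × k) matrix by a (k × n) matrix costs about 2·m·n·k FLOPs (one multiply and one add per term). A quick sketch with an illustrative layer size:

```python
# Sketch: FLOP count of a single dense matrix multiply.
# An (m, k) x (k, n) matmul costs ~2*m*n*k FLOPs (multiply + add per term).

def matmul_flops(m: int, k: int, n: int) -> int:
    """Approximate FLOPs for one (m, k) x (k, n) matrix multiply."""
    return 2 * m * n * k

# Hypothetical transformer-layer-sized matmul:
print(matmul_flops(4096, 4096, 4096))  # 2 * 4096**3 FLOPs
```

At ~1.4 × 10¹¹ FLOPs per such matmul, a chip rated in petaFLOPS can execute thousands of them per second — the workload GPUs and TPUs are built around.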
Compute
The computational resources (processing power, memory, time) required to train or run AI models. Total compute is measured in FLOPs (floating-point operations — a quantity, as distinct from FLOPS, a rate) and is a primary constraint and cost in AI development.
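Total training compute is often estimated with a widely used heuristic: FLOPs ≈ 6 × parameters × training tokens. A minimal sketch, with hypothetical model and dataset sizes:

```python
# Sketch of a common heuristic (a rule of thumb, not an exact law):
# training compute ~ 6 * parameter count * number of training tokens.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6*N*D rule of thumb."""
    return 6 * params * tokens

# Hypothetical 7-billion-parameter model trained on 2 trillion tokens:
flops = training_flops(params=7e9, tokens=2e12)
print(f"{flops:.2e} FLOPs")  # 8.40e+22
```

Dividing such an estimate by a chip's effective FLOPS (peak rating times realized utilization) gives the kind of training-cost projection described under "Why It Matters."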
Hardware Acceleration
Using specialized hardware (GPUs, TPUs, FPGAs, ASICs) to speed up AI computation compared to general-purpose CPUs. Accelerators are optimized for the specific math operations used in neural networks.