ASIC
Application-Specific Integrated Circuit — a chip designed for a single specific purpose. In AI, ASICs like Google's TPUs are designed exclusively for neural network operations.
Why It Matters
ASICs achieve maximum efficiency by sacrificing generality. They can be 10-100x more efficient than GPUs for the specific workloads they target.
Example
Google's TPU is an ASIC designed specifically for tensor operations: it cannot run general-purpose software, but it excels at the exact math neural networks need.
Think of it like...
Like a Formula 1 car versus a minivan — the F1 car is useless for grocery shopping but unbeatable on a racetrack because it is built for one thing only.
Related Terms
TPU
Tensor Processing Unit — Google's custom-designed chip specifically optimized for machine learning workloads. TPUs are designed for matrix operations that are fundamental to neural network computation.
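The matrix operations mentioned above can be illustrated with a minimal sketch. This is plain Python standing in for what a TPU's matrix unit computes in dedicated hardware; it is not TPU code, just the underlying math of a single neural-network layer.

```python
# At its core, a neural-network layer computes output = inputs @ weights,
# a matrix multiply. TPUs accelerate exactly this operation in hardware;
# this pure-Python version only shows the arithmetic involved.

def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n), both lists of rows."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

inputs = [[1.0, 2.0]]           # 1 x 2 activation vector
weights = [[0.5, -1.0, 0.0],    # 2 x 3 weight matrix
           [0.25, 0.0, 2.0]]
print(matmul(inputs, weights))  # [[1.0, -1.0, 4.0]]
```

Every multiply-accumulate in the inner `sum` is what a TPU performs in parallel across a large grid of hardware units rather than one at a time.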
GPU
Graphics Processing Unit — originally designed for rendering graphics, GPUs excel at the parallel mathematical operations needed for training and running AI models. They are the primary hardware for modern AI.
Hardware Acceleration
Using specialized hardware (GPUs, TPUs, FPGAs, ASICs) to speed up AI computation compared to general-purpose CPUs. Accelerators are optimized for the specific math operations used in neural networks.
Compute
The computational resources (processing power, memory, time) required to train or run AI models. Compute is measured in FLOPs (floating-point operations) and is a primary constraint and cost in AI development.
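As a rough illustration of how FLOPs are counted (a common convention, not something this glossary specifies): multiplying an m x k matrix by a k x n matrix costs about 2 * m * n * k floating-point operations, one multiply and one add per accumulated term.

```python
def matmul_flops(m, k, n):
    # One multiply + one add per accumulated term: 2 * m * n * k FLOPs.
    return 2 * m * n * k

# A single 4096 x 4096 by 4096 x 4096 multiply, a size typical of a
# large transformer layer, already costs over 100 billion FLOPs.
flops = matmul_flops(4096, 4096, 4096)
print(f"{flops:.3e}")
```

Multiplying such counts across every layer, every token, and every training step is how total training compute estimates (often quoted in the 1e23-1e25 FLOP range for frontier models) are built up.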