Random Forest
An ensemble learning method that builds multiple decision trees during training and outputs the majority vote (classification) or average prediction (regression) of all the trees. The 'forest' of diverse trees is more robust than any single tree.
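The two ingredients above (bootstrap-sampled trees plus a majority vote) can be sketched in pure Python. This is a minimal illustration, not a production implementation: the toy 1-D dataset is hypothetical, and depth-1 "stumps" stand in for full decision trees.

```python
import random
from collections import Counter

random.seed(0)

# Toy 1-D dataset: label is 1 when x > 5, with one noisy label at x=4
# (hypothetical data for illustration).
X = [1, 2, 3, 4, 6, 7, 8, 9]
y = [0, 0, 0, 1, 1, 1, 1, 1]

def train_stump(Xs, ys):
    """A depth-1 'tree': pick the threshold that minimizes training error."""
    best_t, best_err = None, float("inf")
    for t in Xs:
        err = sum((x > t) != bool(label) for x, label in zip(Xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def train_forest(X, y, n_trees=25):
    """Each stump is trained on a bootstrap sample (drawn with replacement),
    which is what makes the trees in the forest diverse."""
    forest = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, x):
    """Classification: majority vote across all stumps."""
    votes = Counter(int(x > t) for t in forest)
    return votes.most_common(1)[0][0]

forest = train_forest(X, y)
print(predict(forest, 9))  # votes from 25 stumps decide the class
print(predict(forest, 1))
```

A real random forest additionally samples a random subset of features at each split, which further decorrelates the trees; with a single feature there is nothing to subsample, so this sketch relies on bootstrapping alone.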
Why It Matters
Random forests are among the most reliable ML algorithms for tabular data. They work well out of the box with little tuning, are widely used in production, and offer feature-importance scores that help with interpretation, though an ensemble of hundreds of trees is harder to inspect than a single tree.
Example
A bank using a random forest of 500 decision trees to evaluate loan applications — each tree considers different factors, and the majority vote determines approval.
Think of it like...
Like asking 500 different experts for their opinion and going with the majority answer — individual experts might be wrong, but the crowd is usually right.
Related Terms
Decision Tree
A supervised learning algorithm that makes predictions by learning a series of if-then-else decision rules from the data. It creates a tree-like structure where each internal node tests a feature and each leaf provides a prediction.
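The if-then-else structure described above can be written out directly. This is a hand-built sketch echoing the loan example; the features, thresholds, and labels are invented for illustration, whereas a real tree would learn them from data.

```python
def predict(applicant):
    # Each internal node tests one feature; each leaf returns a prediction.
    # Features and thresholds are hypothetical, not learned from real data.
    if applicant["income"] > 50_000:
        if applicant["debt_ratio"] < 0.4:
            return "approve"
        return "review"
    if applicant["credit_score"] > 700:
        return "review"
    return "deny"

print(predict({"income": 60_000, "debt_ratio": 0.2, "credit_score": 680}))
# "approve"
```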
Ensemble Learning
A strategy that combines multiple models to produce better predictions than any single model alone. Ensemble methods leverage the diversity of different models to reduce errors.
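The error-reduction claim can be seen with a simulated example: averaging several independently noisy predictors. The five "models" here are just a true value plus random noise, an assumption made purely to keep the sketch self-contained.

```python
import random

random.seed(1)

true_value = 10.0
# Five hypothetical models, each with independent Gaussian noise.
predictions = [true_value + random.gauss(0, 2) for _ in range(5)]
ensemble = sum(predictions) / len(predictions)

for p in predictions:
    print(f"model error: {abs(p - true_value):.2f}")
print(f"ensemble error: {abs(ensemble - true_value):.2f}")
```

Because the mean always lies between the smallest and largest prediction, the ensemble's error can never exceed the worst individual model's, and when the models' errors are independent it is typically much smaller.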
Gradient Boosting
An ensemble technique that builds models sequentially, where each new model focuses on correcting the errors made by previous models. It combines many weak learners into a single strong learner.
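The "each new model corrects the previous errors" loop can be sketched for squared-error regression. To keep it tiny, the weak learner here is deliberately crude: it predicts the mean residual everywhere (real gradient boosting fits a small tree to the residuals instead). The toy targets are invented for illustration.

```python
# Toy regression targets (hypothetical data).
y = [3.0, 5.0, 7.0, 9.0]

def weak_learner(residuals):
    """Weakest possible model: predict the mean residual everywhere.
    A stand-in for the shallow trees used in practice."""
    mean = sum(residuals) / len(residuals)
    return lambda: mean

prediction = [0.0] * len(y)
learning_rate = 0.5
for _ in range(20):
    # For squared error, the residuals are the negative gradient of the loss.
    residuals = [t - p for t, p in zip(y, prediction)]
    model = weak_learner(residuals)
    prediction = [p + learning_rate * model() for p in prediction]

print(prediction)  # every entry converges toward the mean of y (6.0)
```

Each pass shrinks the remaining error by the learning rate, so the ensemble of many weak steps converges to a strong fit, here the best constant predictor; with tree-based weak learners, the same loop fits non-constant structure as well.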