In-Context Learning
An LLM's ability to learn new tasks from examples or instructions provided within the prompt, without any weight updates or fine-tuning. The model adapts its behavior based on the context given.
Why It Matters
In-context learning is what makes LLMs so versatile: you can teach them new tasks on the fly just by showing examples, making them instantly adaptable.
Example
Provide an LLM with three examples of English-to-Pirate translation in the prompt, and it immediately translates new sentences into pirate speak, with no training required.
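The pirate-translation example above can be sketched in code. This is a minimal, hypothetical illustration: it only assembles the few-shot prompt text an LLM would receive; no model API is called, and the function name and demonstration pairs are invented for illustration.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a prompt from (input, output) demonstration pairs.

    The model never sees a weight update; it infers the task purely
    from the pattern of demonstrations in the prompt text.
    """
    lines = ["Translate English to Pirate speak.", ""]
    for english, pirate in examples:
        lines.append(f"English: {english}")
        lines.append(f"Pirate: {pirate}")
        lines.append("")
    lines.append(f"English: {query}")
    lines.append("Pirate:")  # the model completes from here
    return "\n".join(lines)

# Three demonstrations, as in the example above
demos = [
    ("Hello, friend!", "Ahoy, matey!"),
    ("Where is the treasure?", "Where be the booty?"),
    ("I am very tired.", "I be weary to me bones, arr."),
]
print(build_few_shot_prompt(demos, "Good morning to you."))
```

The resulting string would be sent as the prompt to any text-completion model; the model's continuation after the final `Pirate:` is the translation.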
Think of it like...
Like a quick study who watches a few examples of a card trick and can immediately replicate it, learning the pattern in real time from the demonstrations.
Related Terms
Few-Shot Learning
A technique where a model learns to perform a task from only a few examples provided in the prompt. Instead of training on thousands of examples, the model generalizes from just 2-5 demonstrations.
Zero-Shot Learning
A model's ability to perform a task it was never explicitly trained on or shown examples of. The model applies its general knowledge and reasoning to handle entirely new task types.
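The distinction between zero-shot and few-shot prompting comes down to whether demonstrations appear in the prompt. A small sketch, using an invented sentiment-classification task for illustration (the model weights would be identical in both cases; only the prompt text differs):

```python
# Zero-shot: instruction only, no demonstrations.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: the same instruction plus labeled demonstrations,
# letting the model infer the expected format and labels.
few_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: I love this phone.\n"
    "Sentiment: positive\n"
    "Review: Shipping took forever.\n"
    "Sentiment: negative\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

print(zero_shot)
print("---")
print(few_shot)
```

In practice, few-shot prompts tend to help most when the task format or label set is ambiguous from the instruction alone.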
Prompt Engineering
The practice of designing and optimizing input prompts to get the best possible output from AI models. It involves crafting instructions, providing examples, and structuring queries to guide the model toward desired responses.
Large Language Model
A type of AI model trained on massive amounts of text data that can understand and generate human-like text. LLMs use transformer architecture and typically have billions of parameters, enabling them to perform a wide range of language tasks.