What Is a Context Window?
A context window is the maximum amount of text (measured in tokens) that a language model can process in a single interaction. It determines how much information you can provide to the model and how long a response it can generate.
Context window sizes have grown dramatically: GPT-3 shipped with roughly 2K tokens, GPT-4 Turbo offered 128K tokens, and Gemini 1.5 Pro reached 1M tokens. Larger context windows enable processing entire documents, codebases, or conversation histories.
However, larger context windows come with costs: compute for standard self-attention scales quadratically with context length, API pricing scales with the number of tokens processed, model accuracy degrades in the "middle" of long contexts (the "lost in the middle" phenomenon), and latency increases with context size.
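To see where the quadratic term comes from: standard self-attention computes a score for every pair of tokens in the window. The minimal sketch below is illustrative only (it counts pairwise score entries, not any particular model's actual FLOPs):

```python
# Standard self-attention scores every token against every other token,
# so the score matrix grows quadratically with context length.

def attention_matrix_entries(context_tokens: int) -> int:
    """Number of query-key score entries in standard self-attention."""
    return context_tokens * context_tokens

for n in (4_000, 32_000, 128_000):
    print(f"{n:>7,} tokens -> {attention_matrix_entries(n):,} score entries")

# Going from 4K to 128K tokens multiplies the token count by 32,
# but the score-matrix entries by 1,024 (32 squared).
```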
Tokens are the unit of measurement: roughly 1 token ≈ 0.75 words in English. A 128K context window can hold approximately 96,000 words, roughly the length of a novel. But filling the full context window on every query is expensive (tokens × price per token).
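As a back-of-the-envelope illustration, the snippet below applies the 0.75 words-per-token heuristic and a hypothetical flat per-token rate. Real tokenizers (such as OpenAI's tiktoken) give exact counts, and actual prices vary by provider and by input vs. output tokens:

```python
WORDS_PER_TOKEN = 0.75       # rough heuristic for English text; varies by tokenizer
PRICE_PER_TOKEN = 0.00001    # hypothetical rate (USD); check your provider's pricing

def words_capacity(context_tokens: int) -> int:
    """Approximate English words that fit in a given context window."""
    return int(context_tokens * WORDS_PER_TOKEN)

def query_cost(tokens_used: int) -> float:
    """Cost of one query at the assumed flat per-token rate."""
    return tokens_used * PRICE_PER_TOKEN

print(words_capacity(128_000))          # 96000 -- roughly a novel
print(f"${query_cost(128_000):.2f}")    # $1.28 for a fully packed window
```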
Why It Matters
Context window size determines what's possible with your AI application. Too small and you can't provide enough context for accurate responses. Too large and you're paying for unused capacity. Optimizing context usage is a key lever for AI cost management.
Frequently Asked Questions
What is a context window in AI?
The context window is the maximum amount of text a language model can process at once, measured in tokens. It determines how much information you can include in a prompt.
Does a larger context window cost more?
Yes. Inference cost scales with context length. Under linear per-token pricing, a query using 100K tokens costs roughly 25x as much as one using 4K tokens (100K / 4K = 25). Optimize context usage to manage costs.
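The arithmetic, assuming a flat per-token rate:

```python
# With linear per-token pricing, the cost ratio equals the token ratio.
small_query, large_query = 4_000, 100_000
print(large_query / small_query)   # 25.0 -> ~25x the cost
```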