Context Engineering

Context Engineering is the practice of designing, versioning, and optimizing the information provided to an LLM or AI agent to produce consistent, high-quality outputs. (LLM outputs are not strictly deterministic; the goal is to reduce variance and steer behavior reliably.)

Core Pillars

  1. Static Context: System prompts, core logic, and behavioral guardrails.
  2. Dynamic Context: Real-time data, file states, and user intent.
  3. Retrieval Context: RAG-based data fetched from external knowledge bases.
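The three pillars above can be sketched as a single prompt-assembly step. This is an illustrative example, not a prescribed API: the `ContextBundle` class and its field names are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class ContextBundle:
    static: str            # system prompt, core logic, guardrails
    dynamic: str           # real-time data, file states, user intent
    retrieved: list[str]   # RAG snippets from external knowledge bases

    def render(self) -> str:
        """Assemble the three context layers into one prompt string."""
        docs = "\n".join(f"- {d}" for d in self.retrieved)
        return (
            f"{self.static}\n\n"
            f"[Live state]\n{self.dynamic}\n\n"
            f"[Retrieved]\n{docs}"
        )

bundle = ContextBundle(
    static="You are a support agent. Never reveal internal order IDs.",
    dynamic="User asks about order status; current status: shipped.",
    retrieved=["Shipping FAQ: standard orders arrive in 3-5 business days."],
)
prompt = bundle.render()
```

Keeping the layers separate until render time makes each one independently versionable and testable, which the lifecycle below depends on.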

The Context Development Lifecycle (CDLC)

Similar to DevOps, Context Engineering requires:

  • Versioning: Tracking changes to prompts.
  • Testing: Using “evals” to measure performance.
  • Observability: Monitoring how context influences agent decisions.

SubAgents as Context Boundary Enforcement

The most powerful application of context engineering at scale is SubAgent Delegation: instead of engineering a single massive context window, you engineer which context each subagent receives. The orchestrator’s context contract becomes: “provide the minimal, maximally-relevant context per agent.” This transforms context engineering from a single-window optimization problem into an architecture of isolated, well-scoped cognitive units.

Custom subagents make this concrete: the tools field enforces which inputs the agent can pull in; the description field constrains which tasks trigger it; and the system prompt body defines the cognitive frame. A memory: field can add a persistent knowledge layer that accumulates across sessions without inflating the live context window.
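A rough sketch of how an orchestrator might scope context per subagent. The `SubAgent` dataclass and keyword-based `route` function are illustrative assumptions, not any framework's actual API; real routing would typically be done by the orchestrating model itself.

```python
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    name: str
    description: str                 # which tasks trigger this agent
    tools: list[str]                 # enforced input surface
    system_prompt: str               # cognitive frame
    memory: dict = field(default_factory=dict)  # persists across sessions

def route(task: str, agents: list[SubAgent]) -> SubAgent:
    """Naive keyword routing: pick the first agent whose description
    shares a word with the task. A stand-in for model-driven delegation."""
    task_words = set(task.lower().split())
    return next(
        a for a in agents
        if task_words & set(a.description.lower().split())
    )

reviewer = SubAgent(
    name="code-reviewer",
    description="review code diffs for defects",
    tools=["read_file", "grep"],
    system_prompt="You review diffs. Report defects only.",
)
researcher = SubAgent(
    name="doc-researcher",
    description="research documentation questions",
    tools=["web_search"],
    system_prompt="You answer strictly from fetched documentation.",
)
chosen = route("please review this diff", [reviewer, researcher])
```

The point of the sketch: each subagent carries only its own tools, frame, and memory, so no single context window has to hold everything at once.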

Related: LLM Observability, AI-Augmented SDLC, Claude SubAgents
