The Context
Engineering Playbook

How to turn Claude Code from a chatbot into a co-founder


by BriefDay · briefday.ai

01 / The Problem

Context > Prompts


"The hottest new programming language is English."

Andrej Karpathy

Most people write thin prompts and get thin results. They type a sentence or two into Claude, get a generic answer, and conclude that AI is overhyped.

They're wrong. But not for the reason you'd think.

The model is the same whether you're paying $20/month or building a daily operating system that runs your business. The difference is context: the quality, depth, and structure of the information you give the AI before it writes a single word.

Prompt engineering is writing a better question. Context engineering is giving the AI everything it needs to think like you do. One gets you clever answers. The other gets you a co-founder who never sleeps.

The gap in practice

Thin prompt
Write a LinkedIn post about
AI in music.
Rich context
ROLE: Ghostwriter for a music exec
OBJECTIVE: Position as industry voice
CONTEXT: Platforms penalizing AI content
VOICE: "AI isn't the threat. Losing
control is."
FORMAT: Under 1,200 chars, no jargon

Same model. Same price. Completely different output. The thin prompt gets you a generic post anyone could write. The rich context gets you something that sounds like you and moves your business forward.

Key insight

You don't need a better AI. You need a better way to talk to the one you already have. That's what this playbook teaches.

02 / The Framework

The RORE Framework


Every effective AI task contains four components. Get them right, and you'll rarely need to ask for a redo.

Component | Purpose | Question It Answers
Role | Who is the AI acting as? | What expertise should I bring?
Objective | Strategic context (the WHY) | Why does this matter? What's the goal behind the goal?
Requirements | Specific constraints and outputs | What exactly needs to be delivered?
Examples | Show, don't tell | What does good look like?

Why this order matters

  1. Role sets the mental model and expertise lens. An AI acting as a "senior financial analyst" thinks differently than a "helpful assistant."
  2. Objective provides strategic autonomy. When the AI knows WHY you need something, it makes better trade-off decisions without asking.
  3. Requirements constrain the solution space. Without them, the AI guesses at format, length, and what to include.
  4. Examples disambiguate edge cases. One good example eliminates ten paragraphs of explanation.

The test

If the AI asks zero clarifying questions and the first output is 80%+ usable, your context was good. If it asks "what do you mean by..." or misses the point entirely, your context had gaps.
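To see the framework mechanically, the four components can be assembled into a prompt with a few lines of code. This is a minimal sketch for illustration; `build_rore_prompt` is a hypothetical helper, not part of any tool.

```python
def build_rore_prompt(role, objective, requirements, examples=None):
    """Assemble a RORE-structured prompt string.

    Hypothetical helper: role and objective are sentences,
    requirements is a list of constraints, examples is an
    optional list of sample outputs.
    """
    sections = [
        f"ROLE: {role}",
        f"OBJECTIVE: {objective}",
        "REQUIREMENTS:\n" + "\n".join(f"- {r}" for r in requirements),
    ]
    if examples:
        sections.append("EXAMPLES:\n" + "\n\n".join(examples))
    return "\n\n".join(sections)


prompt = build_rore_prompt(
    role="Senior e-commerce analyst specializing in DTC brands",
    objective="Analyze Q4 performance to support vendor renegotiations",
    requirements=["Format: 1-page memo", "Must include: YoY comps"],
)
```

The point is not the code; it is that the four components are separable fields you fill in the same order every time.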

03 / Step 1

Role: Set the Expertise Lens


The role tells the AI who to be. Not "a helpful assistant" (the default, and the worst), but a specific persona with specific expertise and specific values.

Template

ROLE: You are a [expertise level] [domain] specialist
with experience in [relevant context].
Your perspective prioritizes [values/trade-offs].

Comparison

Weak role
You are a helpful assistant.

Generic. The AI defaults to beginner-level explanations and safe, middle-of-the-road answers.

Strong role
You are a senior e-commerce
analyst specializing in DTC brands.
Your perspective prioritizes
data-driven insights over hype.

Specific. The AI now filters everything through DTC expertise and avoids surface-level takes.

Advanced: roles that stack

You can combine multiple expertise areas. "A senior product manager with a background in fintech, experienced in B2B SaaS pricing" gives the AI permission to draw from multiple domains when making recommendations.

You can also encode values directly. "Your perspective prioritizes revenue impact over vanity metrics" stops the AI from suggesting things that sound good but don't matter.

Key insight

The more specific the role, the better the output. "Helpful assistant" is a 2 out of 10. "Senior financial analyst specializing in SaaS metrics for Series B companies" is an 8. The specificity gives the AI a decision-making framework it can actually use.

04 / Step 2

Objective: Give It the WHY


This is where most prompts fail. People say WHAT they want but never explain WHY they want it. Without the why, the AI can't make judgment calls. It doesn't know which details matter. It doesn't know what to emphasize.

Template

OBJECTIVE: [What we're trying to achieve]

BUSINESS CONTEXT: [Why this matters]

STRATEGIC GOAL: [The bigger picture this serves]

Comparison

No context
Analyze our Q4 platform
performance.

The AI dumps raw numbers. It doesn't know what you'll use them for.

With context
OBJECTIVE: Analyze Q4 performance
CONTEXT: Preparing for vendor
renegotiations. Need leverage showing
our portfolio's cross-platform value.
GOAL: Negotiate better terms.

Now the AI emphasizes growth trends, engagement depth, and revenue metrics that demonstrate leverage.

The mechanism

Business context changes which metrics the AI highlights, which comparisons it draws, and which recommendations it makes. Without it, you get a technically correct report that misses the point. With it, you get a document you can actually use in the meeting.

The strategic goal layer adds another dimension. If the AI knows this analysis feeds into a negotiation, it frames findings as leverage points. If it's for an investor update, it frames the same data as growth proof. Same numbers, different story.

Key insight

With business context, the AI knows which metrics matter, which to bury, and which to lead with. It stops reporting and starts advising.

05 / Step 3

Requirements: Set the Boundaries


Requirements are the guardrails. Without them, the AI guesses at format, length, tone, and what to include. You get a coin-flip on whether the output is useful.

Template

REQUIREMENTS:
- Format: [Output structure]
- Length: [Word count or page limits]
- Must include: [Non-negotiable elements]
- Must avoid: [Prohibited content or approaches]
- Success looks like: [Measurable criteria]

Comparison

Vague
Make it good and comprehensive.

"Good" means nothing. "Comprehensive" usually means too long.

Specific
Format: Exec summary (1 page) +
detailed analysis (3-5 pages)
Must include: YoY comps, top 10
assets, revenue attribution
Must avoid: Raw data dumps
Success: CFO can cite 3 metrics
in a board meeting

Clear boundaries. The AI knows exactly what to deliver and when it's done.

The success criteria trick

"Success looks like" is the most underused field. It forces you to define the finish line before you start. And it gives the AI a self-check: "Would a CFO be able to cite three metrics from this? If not, I need to restructure."

Common requirements to consider

Category | Examples
Format | Table, bullet points, executive memo, slide deck outline
Length | Under 500 words, 1 page, 3 paragraphs
Audience | Board of directors, engineering team, end users
Tone | Conversational, formal, technical, persuasive
Exclusions | No jargon, no speculation, no competitor names

06 / Step 4

Examples: Show, Don't Tell


One good example beats ten paragraphs of explanation. Examples set the quality bar, disambiguate vague instructions, and show the AI what "good" actually looks like in your world.

Template

EXAMPLE INPUT:
[Sample input similar to actual task]

EXAMPLE OUTPUT:
[Ideal response demonstrating quality bar]

NOTE: [What to learn from this example]

Comparison

No example
Write in my voice.

The AI has no idea what your voice sounds like. It guesses, and guesses wrong.

With example
Write in my voice. Example:
"AI isn't the threat. Losing
control is. The tools that promise
efficiency today become the
dependencies that constrain you
tomorrow."

NOTE: Short sentences. Direct
opinions. No hedging.

Now the AI matches your sentence length, directness, and perspective. Night and day.

What makes a good example

Three parts: an input that mirrors the actual task, an output at the quality bar you expect, and a note naming exactly what to imitate. Without the note, the AI can copy the wrong thing, matching your topic instead of your tone.

When to use multiple examples

One example is usually enough for straightforward tasks. Use two or three when the output format varies (e.g., different types of analysis for different scenarios) or when the quality bar is hard to describe in words. More than three is usually context overload.
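The one-to-three-example guideline is easy to encode as a small formatting helper. A minimal sketch; `format_examples` is a hypothetical name, and the cap of three mirrors the guidance above.

```python
def format_examples(pairs, note=None):
    """Format (input, output) pairs as few-shot examples.

    Hypothetical helper: caps at three examples, since more
    is usually context overload, and appends an optional NOTE
    naming what the AI should learn from them.
    """
    blocks = []
    for sample_input, sample_output in pairs[:3]:  # cap at three
        blocks.append(
            f"EXAMPLE INPUT:\n{sample_input}\n\n"
            f"EXAMPLE OUTPUT:\n{sample_output}"
        )
    if note:
        blocks.append(f"NOTE: {note}")
    return "\n\n".join(blocks)
```
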

Key insight

Examples are your highest-leverage context. They compress paragraphs of instructions into a single demonstration. If you only have time to add one thing to a prompt, make it an example of great output.

07 / Advanced

Progressive Disclosure


Not everything belongs upfront. Dumping 5,000 words of background before the actual task is as bad as providing no context at all. Structure your context in layers.

Three layers

Layer 1: Core Context (Always Include)

Role definition, primary objective, output format, critical constraints. This is the minimum viable context for any task. If you only provide one layer, make it this one.

Layer 2: Supporting Context (Include When Relevant)

Business rationale, background information, related prior work, stakeholder preferences. Include this for complex tasks where the AI needs to make judgment calls.

Layer 3: Reference Material (Available on Request)

Full documentation, historical examples, edge case handling, detailed specifications. Don't paste these in. Point to them. "For detailed specs, see [document]."

Implementation

# CORE CONTEXT (always loaded)
ROLE: Senior product strategist
OBJECTIVE: Evaluate launch timing for Q2
FORMAT: Decision memo, 1 page
CONSTRAINT: Must ship before competitor

# SUPPORTING CONTEXT (if needed)
BACKGROUND: We tested with 200 beta users
in Jan. NPS was 42. Three bugs remain.
STAKEHOLDER: CEO wants speed. CTO wants stability.

# REFERENCE (don't paste, point to)
Full beta report: /docs/beta-q1.pdf
Competitor timeline: /research/comp-analysis.md
For edge cases, ask.

Why layers matter

AI models have finite context windows. Every word you include competes for attention with every other word. Layer 1 context gets processed with the highest fidelity. Layers 2 and 3 progressively dilute that attention. Front-load what matters most.

Rule of thumb

If you can reference it instead of including it, reference it. If you can summarize it in three bullets instead of pasting the full document, summarize it. Save the context window for the work itself.
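The layering rule can also be sketched in code: always include the core, add supporting context only when the task needs it, and pass references as pointers rather than pasted documents. `assemble_context` is a hypothetical helper for illustration, not part of any tool.

```python
def assemble_context(core, supporting=None, references=None):
    """Assemble layered context per progressive disclosure.

    Hypothetical helper: core is always included, supporting
    is included only when provided, and references become
    pointers instead of pasted content.
    """
    parts = ["# CORE CONTEXT\n" + core]
    if supporting:
        parts.append("# SUPPORTING CONTEXT\n" + supporting)
    if references:
        pointers = "\n".join(f"See: {path}" for path in references)
        parts.append("# REFERENCE (pointers only)\n" + pointers)
    return "\n\n".join(parts)
```

Because Layer 3 is stored as paths, the full documents never compete for attention in the context window.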

08 / Templates

Ready-to-Use Task Templates


Copy these, fill in the blanks, and use them today. Each one applies the full RORE structure.

Research Task

ROLE: Research analyst in [domain]

OBJECTIVE: Investigate [topic] to
support [business decision]

REQUIREMENTS:
- Sources: [preferred, recency]
- Must answer: [specific questions]
- Output: [report format, length]

SUCCESS: Reader can [action]
based on findings.

Content Creation

ROLE: Writer for [persona],
audience: [who reads this]

OBJECTIVE: Create [type] that
[intended impact/positioning]

REQUIREMENTS:
- Tone: [specific tone]
- Length: [word/char limit]
- Avoid: [anti-patterns]

EXAMPLE: [Sample in voice]

Analysis Task

ROLE: [Domain] analyst, expert
in [specific area]

OBJECTIVE: Analyze [subject] to
determine [insight/decision]

REQUIREMENTS:
- Metrics: [specific KPIs]
- Compare: [benchmarks, periods]
- Audience: [who reads this]

SUCCESS: Enables [decision].

Code / Build Task

ROLE: Senior engineer in [stack]

OBJECTIVE: Implement [feature] to
enable [user benefit]

REQUIREMENTS:
- Output: [production/prototype]
- Must include: [tests, docs]
- Must avoid: [anti-patterns]
- Style: [code style ref]

EXAMPLE: [Code snippet]

Anti-patterns to avoid

Anti-Pattern | What Goes Wrong | Fix
Command without context | AI guesses at purpose | Add the OBJECTIVE layer
No success criteria | No way to evaluate output | Define "success looks like"
No examples for ambiguous tasks | Quality is a coin flip | Show one good output
Context overload | Signal buried in noise | Use progressive disclosure
Assuming shared context | "Do the usual" fails | Be explicit every time
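The first two anti-patterns can be caught mechanically before you send a prompt. A minimal sketch, assuming your prompts use the ROLE/OBJECTIVE/REQUIREMENTS headers from this playbook; `missing_components` is a hypothetical helper.

```python
# Headers this playbook's templates expect in every task prompt.
REQUIRED_HEADERS = ["ROLE:", "OBJECTIVE:", "REQUIREMENTS:"]


def missing_components(prompt):
    """Return the RORE headers a draft prompt is missing."""
    return [header for header in REQUIRED_HEADERS if header not in prompt]
```

A non-empty result means the prompt is a command without context or lacks constraints, exactly the failure modes in the table above.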

09 / Putting It All Together

From Playbook to Operating System


This playbook gives you the framework. You can apply RORE today, to any AI tool, and see immediate improvement. But doing it manually for every task is overhead. What if the context was always there, loaded automatically, refined over time?

That's what a properly configured Claude Code setup does. And that's what BriefDay is.

How BriefDay applies context engineering

RORE Component | BriefDay Implementation
Role | CLAUDE.md defines who the AI is, your expertise level, your decision-making style, and how it should think. Loaded every session, automatically.
Objective | CONTEXT.md carries your full business background, active projects, and priorities. The AI always knows WHY.
Requirements | 24 skills with built-in constraints, voice guidelines, format rules, and quality gates. No guessing.
Examples | Voice samples, past outputs, templates. The AI learns your style from real examples, not descriptions.

Beyond single tasks

Applied manually, RORE improves one task at a time. Persisted in files and loaded automatically, it compounds: every session starts with full context, and every refinement carries forward.

Ready to build your own operating system?

BriefDay is a pre-configured Claude Code setup with the RORE framework, 24 skills, memory management, and daily workflows built in. Set it up once, use it every day.

Get BriefDay ($67) →

Or start with the free starter kit to try the framework first.