Constraints

Constraints defines the hard boundaries of a context — what the output must contain, what it must never contain, how it must be formatted, the explicit output contract, and the tone and style requirements.

Unlike Guidance (which shapes identity) and Directive (which defines the task), constraints are guardrails: non-negotiable requirements applied to every response.

Import

from mycontext import Constraints
# or
from mycontext.foundation import Constraints

Constructor

Constraints(
    must_include: list[str] | None = None,
    must_not_include: list[str] | None = None,
    format_rules: list[str] | None = None,
    output_contract: str | None = None,
    style_guide: str | None = None,
    max_length: int | None = None,
    language: str | None = None,
    output_schema: list[dict] | None = None,
    # Quality controls (auto-suggested by PromptArchitect) — new in 0.11.0
    verbosity: "minimal" | "standard" | "detailed" | None = None,
    communication_posture: "direct" | "collaborative" | "educational" | None = None,
    answer_first: bool | None = None,
    forbidden_phrases: list[str] | None = None,
    self_check: list[str] | None = None,
)

All fields are optional. Use only what you need. The five quality control fields are auto-suggested by PromptArchitect when you use build() or improve() — you can override any of them.

Fields

| Field | Type | Description |
| --- | --- | --- |
| output_contract | str \| None | Explicit statement of what the final response must look like — rendered first in section ⑦ |
| style_guide | str \| None | Tone, voice, and prose style — separate from structural format rules |
| must_include | list[str] \| None | Elements that must appear in the output |
| must_not_include | list[str] \| None | Elements that must never appear — rendered as positive redirects for Anthropic/Gemini |
| format_rules | list[str] \| None | Output formatting and structure requirements |
| max_length | int \| None | Maximum output length (tokens or chars — interpreted by the LLM) |
| language | str \| None | Required output language (e.g., "en", "Spanish") |
| output_schema | list[dict] \| None | Structured field schema for JSON responses — [{"name": str, "type": str}] |
| verbosity | "minimal" \| "standard" \| "detailed" \| None | Output detail level. Auto-suggested by PromptArchitect based on task complexity |
| communication_posture | "direct" \| "collaborative" \| "educational" \| None | Interaction tone. Auto-suggested based on audience |
| answer_first | bool \| None | When True, the LLM states its conclusion before its reasoning. Auto-suggested: True for decisions, False for tutorials |
| forbidden_phrases | list[str] \| None | Phrases the LLM must never use. Extends the built-in anti-boilerplate list |
| self_check | list[str] \| None | Domain-specific verification questions the LLM must confirm before finalizing |

output_contract — explicit response shape

output_contract is a single binding statement that describes exactly what the response must look like. It renders before all other constraint content in the ## OUTPUT FORMAT section — the highest-attention position for format instructions.

Constraints(
    output_contract="Return ONLY a ranked bullet list of findings. Each finding: metric name, delta %, magnitude (σ), likely cause. No preamble, no conclusion.",
)

Why it matters. Format instructions buried in format_rules arrive later in the prompt, where they receive less attention. An output_contract at the start of section ⑦ is the last thing the LLM reads before it begins generating — maximizing compliance.

style_guide — tone separate from format

style_guide captures prose voice, register, and style requirements — distinct from structural format_rules. This separation allows you to change tone without changing format, and vice versa.

Constraints(
    style_guide="Formal, third-person, present tense. No hedging language (avoid: probably, might, could be).",
    format_rules=["Use bullet points", "Include a one-line summary at the top"],
)

format_rules = structure ("Use bullet points"). style_guide = voice ("Formal, third-person").

must_not_include — positive rendering

must_not_include renders differently depending on the target provider. When provider_hint="anthropic" or "gemini" is set on the Context, items are reframed as positive redirects rather than negative prohibitions:

# Generic / OpenAI
Must NOT include:
- speculation
- vague language

# Anthropic / Gemini
Exclude the following (use alternatives where needed):
- Omit speculation.
- Omit vague language.

Anthropic's prompt engineering guide specifically documents that positive framing produces better constraint compliance than bare negation — the model is given a direction, not just a boundary. You do not set this manually; it is handled automatically by provider_hint on Context.
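The reframing can be pictured with a minimal sketch. This is not the library's code — render_must_not is a hypothetical standalone helper that reproduces the two renderings shown above:

```python
def render_must_not(items: list[str], provider: str = "generic") -> str:
    """Render must_not_include items, reframing them as positive
    redirects for providers that respond better to direction."""
    if provider in ("anthropic", "gemini"):
        lines = ["Exclude the following (use alternatives where needed):"]
        lines += [f"- Omit {item}." for item in items]
    else:
        lines = ["Must NOT include:"]
        lines += [f"- {item}" for item in items]
    return "\n".join(lines)

print(render_must_not(["speculation", "vague language"], provider="anthropic"))
# Exclude the following (use alternatives where needed):
# - Omit speculation.
# - Omit vague language.
```

In the real library this switch happens inside Constraints.render(), driven by provider_hint on the Context.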

Basic Usage

from mycontext import Constraints

constraints = Constraints(
    output_contract="Return ONLY a numbered list of findings. Each finding: severity, affected component, remediation. No prose summary.",
    style_guide="Technical, direct, imperative voice. No hedging.",
    must_include=[
        "severity rating (critical/high/medium/low)",
        "affected code snippet",
        "remediation steps with code example",
    ],
    must_not_include=[
        "generic security advice",
        "caveats about not being a licensed security auditor",
    ],
    format_rules=[
        "Use a markdown table for findings summary",
        "Code examples must be in Python",
        "Number all findings",
    ],
    max_length=2000,
    language="en",
)

How It Renders

constraints.render() produces the constraints block. The output_contract and style_guide always appear first:

CONSTRAINTS:

Output contract: Return ONLY a numbered list of findings. Each finding: severity, affected component, remediation. No prose summary.

Style: Technical, direct, imperative voice. No hedging.

Must include:
- severity rating (critical/high/medium/low)
- affected code snippet
- remediation steps with code example

Must NOT include:
- generic security advice
- caveats about not being a licensed security auditor

Format rules:
- Use a markdown table for findings summary
- Code examples must be in Python
- Number all findings

Maximum length: 2000

Language: en

In the assembled Context with research_flow=True:

  • output_contract renders in section ⑦ OUTPUT FORMAT — just before guard rails
  • style_guide renders in section ④ STYLE — alongside Guidance.style
  • must_not_include and must_include render in section ⑧ GUARD RAILS

Quality Controls (New in 0.11.0)

Five fields that control how the LLM communicates its response. When you use PromptArchitect.build() or improve(), these are auto-inferred from the task description. You can also set them manually.

verbosity — output detail level

Constraints(verbosity="minimal")    # Quick lookups, yes/no decisions
Constraints(verbosity="standard")   # Most analysis and planning tasks
Constraints(verbosity="detailed")   # Deep research, multi-factor analysis

Renders as a conciseness or thoroughness instruction in the prompt. Also auto-set by TransformationEngine.transform() based on complexity assessment.

communication_posture — interaction tone

Constraints(communication_posture="direct")         # Experienced audiences, brevity
Constraints(communication_posture="collaborative")  # Brainstorming, ideation
Constraints(communication_posture="educational")    # Tutorials, explanations

answer_first — conclusion ordering

Constraints(answer_first=True)    # State conclusion, then reasoning
Constraints(answer_first=False)   # Reasoning journey first, then conclusion

True for decisions, recommendations, and analysis tasks. False for tutorials, explorations, and step-by-step walkthroughs.

forbidden_phrases — anti-boilerplate

Constraints(
    forbidden_phrases=["it depends", "delve into", "it's worth noting that"],
)

Extends the built-in anti-boilerplate system. OutputEvaluator checks these alongside its default banned phrase list and penalizes the score when found.
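The evaluator's internals are not shown here; a minimal sketch of such a check, with a hypothetical default list and penalty weight:

```python
# Illustrative defaults, not OutputEvaluator's real banned list.
DEFAULT_BANNED = ["as an ai language model", "i hope this helps"]

def phrase_penalty(response: str, forbidden: list[str], per_hit: float = 0.1) -> float:
    """Sketch of an OutputEvaluator-style check: each banned phrase
    found in the response subtracts per_hit from a 1.0 score."""
    text = response.lower()
    banned = DEFAULT_BANNED + [f.lower() for f in forbidden]
    hits = sum(1 for phrase in banned if phrase in text)
    return max(0.0, 1.0 - per_hit * hits)

phrase_penalty("It depends on many factors.", ["it depends"])
# → 0.9
```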

self_check — domain-specific verification

Constraints(
    self_check=[
        "Did I distinguish correlation from causation?",
        "Did I include risks the user may not want to hear?",
        "Are at least 30% of ideas genuinely unconventional?",
    ],
)

Renders as a SELF-VERIFICATION block in the prompt. Every template ships with domain-specific defaults — for example, DataAnalyzer defaults to checks about statistical rigor and data gaps.
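The rendered block might look like the sketch below (render_self_check is a hypothetical helper; the library's exact wording may differ):

```python
def render_self_check(questions: list[str]) -> str:
    """Sketch of how self_check questions could render as a
    numbered SELF-VERIFICATION block."""
    lines = ["SELF-VERIFICATION (confirm each before finalizing):"]
    lines += [f"{i}. {q}" for i, q in enumerate(questions, 1)]
    return "\n".join(lines)

print(render_self_check(["Did I distinguish correlation from causation?"]))
# SELF-VERIFICATION (confirm each before finalizing):
# 1. Did I distinguish correlation from causation?
```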

Auto-suggestion via PromptArchitect

When you use PromptArchitect, all five fields are inferred from the task:

from mycontext.intelligence import PromptArchitect

arch = PromptArchitect(provider="openai")
result = arch.build("Analyze customer churn and identify at-risk segments")

# Inspect what was auto-suggested
ctx = result.improved_context
print(ctx.constraints.verbosity) # "detailed"
print(ctx.constraints.communication_posture) # "direct"
print(ctx.constraints.answer_first) # True
print(ctx.constraints.forbidden_phrases) # ["it depends", ...]
print(ctx.constraints.self_check) # ["Did I distinguish correlation...", ...]

# Override before execution
ctx.constraints.verbosity = "minimal"
response = ctx.execute(provider="openai")

Common Patterns

Explicit output contract

Constraints(
    output_contract="Return ONLY valid JSON matching this schema. No prose, no markdown wrapper.",
    output_schema=[
        {"name": "sentiment", "type": "str"},
        {"name": "confidence", "type": "float"},
        {"name": "reasoning", "type": "str"},
    ],
)

Tone control

Constraints(
    style_guide="Executive audience — lead with the so-what, not the how. No technical jargon. Max 3 bullet points per finding.",
    output_contract="Return a 3-part structure: Key Finding (1 sentence), Evidence (2-3 bullets), Recommended Action (1 sentence).",
)

Content safety

Constraints(
    must_not_include=[
        "specific medical diagnoses",
        "medication dosage recommendations",
        "statements that could be construed as professional medical advice",
    ],
    must_include=["a recommendation to consult a licensed physician"],
    style_guide="Empathetic, plain language. Avoid clinical jargon without explanation.",
)

Response length control

Constraints(
    output_contract="Response must not exceed 500 words. Lead with the conclusion — no preamble.",
    format_rules=["Use bullet points, not prose paragraphs"],
)

Multi-language output

Constraints(
    language="Spanish",
    must_include=["technical terms in both Spanish and English on first use"],
)

Code review standards

Constraints(
    output_contract="Return findings grouped by severity: Critical → High → Medium → Low. Each finding: one-sentence summary, code snippet, fix.",
    must_include=[
        "security impact assessment",
        "performance implications",
        "suggested refactoring with code diff",
    ],
    must_not_include=[
        "praise for good code — focus only on improvements",
        "style suggestions unrelated to correctness",
    ],
    style_guide="Direct, technical, no preamble. Write as a senior engineer in a PR review.",
)

Using with Context

from mycontext import Context, Guidance, Directive, Constraints

ctx = Context(
    guidance=Guidance(
        role="Financial analyst specializing in startup metrics",
        goal="Surface the revenue story in 3 actionable findings",
        rules=["Base every claim on the provided data", "Flag missing data explicitly"],
    ),
    directive=Directive(
        content="Analyze the monthly recurring revenue trend for Q3–Q4.",
        priority=8,
    ),
    constraints=Constraints(
        output_contract="Return exactly 3 findings in bullet form. Each: metric, value, trend direction, implication.",
        style_guide="Concise, CFO-audience. No financial jargon without explanation.",
        must_include=["MoM growth rate", "churn contribution", "net new MRR"],
        must_not_include=["speculation about future performance", "comparisons to competitors"],
        format_rules=["Use a table for month-by-month breakdown"],
        max_length=800,
    ),
    knowledge="MRR data: Jul $120k, Aug $132k, Sep $145k, Oct $138k, Nov $150k, Dec $167k",
    research_flow=True,
)

result = ctx.execute(provider="openai")

Provider-Aware Rendering

Constraints.render() accepts a provider parameter that adjusts how must_not_include items are phrased. You do not call this directly — set provider_hint on the Context and it is applied automatically:

ctx = Context(
    constraints=Constraints(must_not_include=["speculation", "vague language"]),
    research_flow=True,
    provider_hint="anthropic",  # → positive reframe applied automatically
)

| Provider | must_not_include rendering |
| --- | --- |
| openai / generic | Must NOT include: speculation |
| anthropic / gemini | Omit speculation. (positive redirect) |

See Provider-Aware Assembly →.

Combining with Directive.constraints

There are two places to express constraints:

| | Constraints class | Directive.constraints |
| --- | --- | --- |
| Scope | Output-wide guardrails | Task-specific focus |
| Position in prompt | Sections ⑦–⑧ | Appended to Directive content |
| Examples | "Return ONLY JSON", "Must include severity" | "Focus on SQL injection only", "Ignore CSS files" |

Use both when needed:

ctx = Context(
    directive=Directive(
        content="Review the authentication module.",
        constraints=["Focus only on auth logic — ignore utility functions"],  # task scope
    ),
    constraints=Constraints(
        output_contract="Return findings as a numbered list. No preamble.",
        must_include=["CVSS score", "PoC exploit outline"],
    ),
)

Integration with CrewAI

Constraints.must_include is used to derive expected_output when exporting to CrewAI:

crew = ctx.to_crewai()
print(crew["expected_output"])
# → "Output must include: CVSS score, PoC exploit outline"
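The derivation is simple string assembly. A minimal sketch consistent with the output shown above (derive_expected_output is a hypothetical helper, not the library's API):

```python
def derive_expected_output(must_include: list[str]) -> str:
    # Joins the required elements into a CrewAI expected_output string.
    return "Output must include: " + ", ".join(must_include)

derive_expected_output(["CVSS score", "PoC exploit outline"])
# → "Output must include: CVSS score, PoC exploit outline"
```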

Best Practices

Use output_contract for the shape, format_rules for the details. The contract states the overall form ("Return ONLY a bullet list"); format rules specify the internal structure ("Each bullet: metric, delta, cause").

Separate style_guide from format_rules. Style is about voice; format is about structure. Mixing them makes both harder to maintain.

must_not_include = your rejection criteria. Use this to suppress unwanted content: disclaimers, hedging language, off-topic digressions, speculation, and hallucination-prone patterns.

Don't over-constrain. Each constraint narrows the response space. Too many constraints — especially contradictory ones — cause LLMs to produce awkward or incomplete responses. Keep constraints to what's genuinely non-negotiable.

max_length is a soft limit. Most LLMs treat it as a target rather than a hard cap. Reinforce it with an output_contract: "Response must not exceed 500 words." is stronger than max_length=500 alone.
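Because most models treat the limit as a target, a client-side check is a useful backstop for re-prompting or truncating overruns. A minimal sketch (whitespace word counting is approximate; swap in a tokenizer if your limit is in tokens):

```python
def within_word_limit(response: str, limit: int) -> bool:
    """Verify a soft word limit after generation; whitespace
    splitting is an approximation of the true word count."""
    return len(response.split()) <= limit

within_word_limit("Short answer.", 500)
# → True
```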

API Reference

| Method / Field | Type | Description |
| --- | --- | --- |
| output_contract | str \| None | Explicit output shape statement — rendered first in section ⑦. |
| style_guide | str \| None | Tone, voice, and prose style — rendered in section ④ STYLE. |
| must_include | list[str] \| None | Required output elements — rendered in section ⑧ GUARD RAILS. |
| must_not_include | list[str] \| None | Forbidden elements — rendered as positive redirects for Anthropic/Gemini. |
| format_rules | list[str] \| None | Structural format requirements. |
| output_schema | list[dict] \| None | JSON field schema — [{"name": str, "type": str}]. |
| max_length | int \| None | Maximum length (1 or higher). |
| language | str \| None | Required response language. |
| verbosity | "minimal" \| "standard" \| "detailed" \| None | Output detail level — auto-suggested by PromptArchitect. |
| communication_posture | "direct" \| "collaborative" \| "educational" \| None | Interaction tone — auto-suggested by PromptArchitect. |
| answer_first | bool \| None | Conclusion-first ordering — auto-suggested by PromptArchitect. |
| forbidden_phrases | list[str] \| None | Banned phrases — checked by OutputEvaluator. |
| self_check | list[str] \| None | Domain-specific verification questions — renders as SELF-VERIFICATION block. |
| render(provider="generic") | str | Produces the formatted constraints block. provider adjusts must_not_include phrasing. |

Next: Patterns →