GuidanceOptimizer

GuidanceOptimizer audits Guidance objects in SDK templates for three structural weaknesses and rewrites only the weak rules using an LLM. Strong, binding rules are kept exactly as written.

Why it exists

Template rules written with suggestive language (should, try to, ideally) are statistically less likely to be followed by LLMs than binding rules (must, always, never). GuidanceOptimizer automates the audit and rewrite, closing this gap without requiring a full template rebuild.

Import

```python
from mycontext.intelligence import GuidanceOptimizer
```

The three weakness patterns it targets

| Pattern | Example weak rule | Example rewritten |
|---|---|---|
| Suggestive modal | "Try to find patterns" | "Identify and quantify every pattern — report the metric and its value" |
| Vague directive | "Be accurate" | "Every numeric claim must reference the exact figure from the source data" |
| Under-specified | "Check data" | "Verify completeness of every required field before proceeding" |

Rules that are already binding (must, always, never, do not) are left untouched.

audit() — inspect without rewriting

```python
from mycontext.foundation import Guidance
from mycontext.intelligence import GuidanceOptimizer

guidance = Guidance(
    role="Data analyst",
    rules=[
        "Try to look for patterns",
        "You should mention limitations",
        "Be accurate",
        "Every claim must cite the specific data point that supports it",
    ],
)

opt = GuidanceOptimizer()
audit = opt.audit(guidance)

print(audit.summary())
# Rules: 4 total | 1 binding | 3 weak | Strength: 28%

for r in audit.rule_audits:
    print(f"  [{r.status}] {r.rule[:60]} — {r.weakness_type or 'ok'}")
# [WEAK] Try to look for patterns — suggestive_modal
# [WEAK] You should mention limitations — suggestive_modal
# [WEAK] Be accurate — vague_directive
# [BINDING] Every claim must cite the specific data ... — ok
```

audit() makes no LLM calls, so it is instant and cost-free.

optimize() — audit + rewrite

```python
opt = GuidanceOptimizer(provider="openai", model="gpt-4o-mini")
result = opt.optimize(guidance)

print(result.summary())
# Rule strength: 28% → 91% (+63%) | 3/4 rules rewritten

print(result.optimized_guidance.rules)
# [
#   "Identify and quantify every pattern — report the metric and its value.",
#   "Must explicitly state each data gap: what is absent and what it prevents.",
#   "Every numeric claim must reference the exact figure from the dataset.",
#   "Every claim must cite the specific data point that supports it.",  # ← unchanged
# ]
```

Return types

GuidanceAuditResult

| Attribute | Type | Description |
|---|---|---|
| `rule_audits` | `list[RuleAudit]` | Per-rule analysis |
| `rule_strength_score` | `float` | Fraction of binding rules (0–1) |
| `weak_count` | `int` | Number of weak rules |
| `binding_count` | `int` | Number of already-binding rules |
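
Per the table, `rule_strength_score` is the binding fraction; note that the worked example above prints 28% for 1 binding rule out of 4, so the SDK may round or weight differently than the plain ratio sketched here:

```python
# Plain binding fraction, as the table defines it. The example output of 28%
# suggests the SDK's actual scoring may not be exactly this formula.
binding_count = 1
weak_count = 3
rule_strength_score = binding_count / (binding_count + weak_count)
print(f"{rule_strength_score:.0%}")  # 25%
```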

RuleAudit

| Attribute | Type | Description |
|---|---|---|
| `rule` | `str` | Original rule text |
| `status` | `"BINDING" \| "WEAK"` | Classification |
| `weakness_type` | `str \| None` | `"suggestive_modal"`, `"vague_directive"`, `"under_specified"`, or `None` |

OptimizedGuidance

| Attribute | Type | Description |
|---|---|---|
| `original_guidance` | `Guidance` | Input |
| `optimized_guidance` | `Guidance` | Rewritten output (binding rules kept, weak rules replaced) |
| `audit` | `GuidanceAuditResult` | Audit result |
| `before_score` | `float` | Rule strength before (0–1) |
| `after_score` | `float` | Rule strength after (0–1) |
| `score_delta` | `float` | Improvement |
| `rewrites` | `list[RuleRewrite]` | Per-rule original → rewritten pairs |

Use with any template

```python
import dataclasses

from mycontext.templates.free.analysis import DataAnalyzer
from mycontext.intelligence import GuidanceOptimizer

template = DataAnalyzer()
ctx = template.build_context(dataset_description="Monthly sales by region")

opt = GuidanceOptimizer(provider="openai", model="gpt-4o-mini")
result = opt.optimize(ctx.guidance)

# Swap the upgraded guidance into a copy of the context
upgraded_ctx = dataclasses.replace(ctx, guidance=result.optimized_guidance)
response = upgraded_ctx.execute(provider="openai")
```

See also