suggest_patterns()
suggest_patterns() recommends the optimal cognitive patterns for a given question and suggests a workflow chain order. Three modes: keyword matching (zero LLM calls), LLM-powered selection, or hybrid.
from mycontext.intelligence import suggest_patterns
result = suggest_patterns(
"Why did our churn spike 40% this quarter?",
mode="hybrid",
llm_provider="openai",
)
print(result.suggested_chain)
# → ['root_cause_analyzer', 'data_analyzer', 'decision_framework']
Function Signature
suggest_patterns(
question: str,
include_enterprise: bool = True,
suggest_chain: bool = True,
max_patterns: int = 5,
mode: str = "keyword",
llm_provider: str = "openai",
temperature: float = 0,
model: str | None = None,
**llm_kwargs,
) -> SuggestionResult
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| question | str | required | Question or problem description |
| include_enterprise | bool | True | Include enterprise patterns. If False, enterprise patterns still appear but carry a license note. |
| suggest_chain | bool | True | Order suggestions as a workflow chain |
| max_patterns | int | 5 | Maximum patterns to suggest |
| mode | str | "keyword" | Selection mode: "keyword", "llm", or "hybrid" |
| llm_provider | str | "openai" | LLM provider for "llm" or "hybrid" mode |
| temperature | float | 0 | Temperature for LLM selection (0 = deterministic) |
| model | str \| None | None | Override model name |
Returns: SuggestionResult
Three Modes
Mode 1: "keyword" (instant, 0 LLM calls)
Matches question keywords against a curated map of all 87 patterns. Fast, deterministic, and free.
result = suggest_patterns(
"Why is our conversion rate dropping?",
mode="keyword",
)
print(result.source) # "keyword"
print(result.suggested_patterns[0].name) # "root_cause_analyzer"
print(result.suggested_patterns[0].confidence) # 0.85
Best for:
- Real-time suggestions
- High-volume usage
- Deterministic routing pipelines
- Cost-sensitive applications
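To illustrate the idea behind keyword matching, here is a toy sketch: score each pattern by keyword overlap with the question. The `KEYWORD_MAP` contents and the scoring formula are invented for illustration; the library's curated map and confidence calculation are internal and may differ.

```python
# Toy sketch of keyword-mode scoring (illustrative only, not the library's code).
KEYWORD_MAP = {
    "root_cause_analyzer": {"why", "cause", "dropping", "spike", "crash"},
    "decision_framework": {"should", "decide", "choose", "build", "buy"},
    "scenario_planner": {"future", "forecast", "plan", "scenario"},
}

def toy_suggest(question: str, max_patterns: int = 5) -> list[tuple[str, float]]:
    words = set(question.lower().replace("?", "").split())
    scored = []
    for name, keywords in KEYWORD_MAP.items():
        hits = len(words & keywords)
        if hits:
            scored.append((name, hits / len(keywords)))  # crude confidence
    scored.sort(key=lambda p: p[1], reverse=True)
    return scored[:max_patterns]

print(toy_suggest("Why is our conversion rate dropping?"))
# → [('root_cause_analyzer', 0.4)]
```

Because matching is a pure dictionary lookup, the same question always yields the same suggestions, which is what makes this mode safe for deterministic routing pipelines.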
Mode 2: "llm" (deep, 1 LLM call)
Sends the full pattern catalog to an LLM along with the question. The LLM reasons about domains, reasoning types, and pipeline order.
result = suggest_patterns(
"We're seeing a 40% churn increase and need to understand why and what to do",
mode="llm",
llm_provider="openai",
temperature=0,
)
print(result.llm_reasoning[:200]) # LLM's raw reasoning
print(result.suggested_chain)
Best for:
- Complex, multi-domain questions
- When quality matters more than latency
- Novel question types
Mode 3: "hybrid" (recommended, 1 LLM call)
Runs keyword analysis first, then sends keyword suggestions as hints to the LLM. The LLM can confirm, refine, or override keyword choices.
result = suggest_patterns(
"Should we build or buy our analytics infrastructure?",
mode="hybrid",
llm_provider="openai",
)
print(result.source) # "hybrid"
Best for:
- Production applications
- High-quality suggestions with reasonable cost
- When you want the LLM to catch what keywords miss
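Since hybrid mode makes one LLM call, production code may want to degrade gracefully when that call fails. A hypothetical wrapper (not part of the library) that falls back to the free keyword mode might look like this; `suggest_patterns` is passed in as a parameter so the sketch stays self-contained:

```python
# Hypothetical resilience wrapper: if the LLM call behind "hybrid" raises
# (network error, quota exhaustion), fall back to the zero-cost "keyword" mode.
def suggest_with_fallback(suggest_patterns, question: str):
    try:
        return suggest_patterns(question, mode="hybrid")
    except Exception:
        return suggest_patterns(question, mode="keyword")
```

Both modes return the same `SuggestionResult` shape, so callers downstream do not need to know which path was taken (the `source` field records it).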
SuggestionResult
Every call returns a SuggestionResult dataclass:
@dataclass
class SuggestionResult:
question: str
suggested_patterns: list[PatternSuggestion] # Ordered suggestions
suggested_chain: list[str] | None # Workflow chain order
reasoning: str # Why these patterns
source: str # "keyword" | "llm" | "hybrid"
llm_reasoning: str | None # Raw LLM output (llm/hybrid only)
PatternSuggestion
@dataclass
class PatternSuggestion:
name: str # Pattern name (snake_case)
category: str # "free" | "enterprise"
reason: str # Why it was selected
confidence: float # 0.0 to 1.0
chain_position: int | None # Position in workflow chain
Export Formats
SuggestionResult exports to multiple formats for integration:
result = suggest_patterns("Why did our server crash?", mode="keyword")
# Markdown (for notebooks, dashboards)
print(result.to_markdown())
# JSON (for APIs, storage)
json_str = result.to_json()
# → {"question": "...", "suggested_patterns": [...], "suggested_chain": [...]}
# YAML (for config files, pipelines)
yaml_str = result.to_yaml()
# XML (for XML-based systems)
xml_str = result.to_xml()
# Dict (for Python processing)
d = result.to_dict()
# Round-trip
loaded = SuggestionResult.from_json(json_str)
loaded2 = SuggestionResult.from_dict(d)
Workflow Chains
When suggest_chain=True (default), patterns are ordered into a logical workflow pipeline:
investigation/analysis → reasoning/comparison → synthesis/decision
The chain ordering priority:
1. temporal_sequence_analyzer: establish the timeline first
2. historical_context_mapper: historical context
3. root_cause_analyzer / diagnostic_root_cause_analyzer: diagnosis
4. causal_reasoner: causal chain
5. differential_diagnoser: differential diagnosis
6. future_scenario_planner: future states
7. pattern_recognition_engine: patterns
8. cross_domain_synthesizer / holistic_integrator: synthesis
result = suggest_patterns("Why is our product stagnating?", mode="hybrid")
print(result.suggested_chain)
# → ['root_cause_analyzer', 'data_analyzer', 'scenario_planner']
# │ diagnosis │ evidence │ future options
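The ordering priority above can be applied mechanically: earlier entries in the priority list come earlier in the chain. The library's ordering logic is internal; this sketch simply mirrors the documented list, sorting unknown patterns to the end:

```python
# Sketch of the chain-ordering priority applied to a set of suggested names.
CHAIN_PRIORITY = [
    "temporal_sequence_analyzer", "historical_context_mapper",
    "root_cause_analyzer", "diagnostic_root_cause_analyzer",
    "causal_reasoner", "differential_diagnoser",
    "future_scenario_planner", "pattern_recognition_engine",
    "cross_domain_synthesizer", "holistic_integrator",
]

def order_chain(names: list[str]) -> list[str]:
    rank = {n: i for i, n in enumerate(CHAIN_PRIORITY)}
    # Patterns not in the priority list sort to the end, keeping relative order.
    return sorted(names, key=lambda n: rank.get(n, len(CHAIN_PRIORITY)))

print(order_chain(["causal_reasoner", "temporal_sequence_analyzer", "root_cause_analyzer"]))
# → ['temporal_sequence_analyzer', 'root_cause_analyzer', 'causal_reasoner']
```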
Enterprise Pattern Handling
When include_enterprise=False, enterprise patterns still appear in suggestions but with a license note — so you know they exist and can get a license:
result = suggest_patterns(
"Complex multi-domain business question",
include_enterprise=False,
mode="keyword",
)
for pattern in result.suggested_patterns:
print(f"{pattern.name} [{pattern.category}]: {pattern.reason}")
# → decision_framework [enterprise]: ... Requires enterprise license.
# → root_cause_analyzer [free]: ...
Examples
Simple Root Cause Analysis
result = suggest_patterns(
"Why is our API response time increasing?",
mode="keyword",
max_patterns=3,
)
# Suggested: root_cause_analyzer, step_by_step_reasoner
Complex Strategic Decision
result = suggest_patterns(
"Should we expand into European markets given Brexit uncertainty, competitive pressure, and our limited runway?",
mode="hybrid",
llm_provider="anthropic",
max_patterns=4,
)
# Likely: scenario_planner, risk_assessor, decision_framework, stakeholder_mapper
Code Review Request
result = suggest_patterns(
"Review my Python authentication code for security issues",
mode="keyword",
)
# Suggested: code_reviewer, risk_assessor
Using Results to Execute
After getting suggestions, execute the top pattern:
from mycontext.intelligence import suggest_patterns
from mycontext.templates.free.reasoning import RootCauseAnalyzer
result = suggest_patterns("Why did our conversion drop?", mode="keyword")
top_pattern = result.suggested_patterns[0].name
if top_pattern == "root_cause_analyzer":
rca_result = RootCauseAnalyzer().execute(
provider="openai",
problem="Conversion rate dropped 30% after the redesign",
)
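The `if`/`elif` dispatch above generalizes to a lookup table keyed by pattern name; the library's `get_pattern_class()` (see API Reference) performs this lookup for you. In the standalone sketch below, a local registry and a stub class stand in for the real templates:

```python
# Generalized dispatch: resolve a suggested name to a class and execute it.
class RootCauseAnalyzer:  # stand-in for the real template class
    def execute(self, **kwargs):
        return f"analyzing: {kwargs.get('problem')}"

REGISTRY = {"root_cause_analyzer": RootCauseAnalyzer}

def run_top_pattern(name: str, **kwargs):
    cls = REGISTRY.get(name)
    if cls is None:
        raise KeyError(f"unknown or unlicensed pattern: {name}")
    return cls().execute(**kwargs)

print(run_top_pattern("root_cause_analyzer", problem="Conversion dropped 30%"))
# → analyzing: Conversion dropped 30%
```

The `None` guard mirrors `get_pattern_class()`'s documented behavior of returning `None` for names that are unknown or not licensed.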
assess_complexity() — Should You Even Use Templates?
Before suggesting patterns, check if templates will actually add value:
from mycontext.intelligence import assess_complexity
assessment = assess_complexity(
"What is the capital of France?",
provider="openai",
)
print(assessment.complexity) # "low"
print(assessment.recommendation) # "raw"
print(assessment.reasoning) # "Simple factual question..."
assessment2 = assess_complexity(
"Should we migrate our 10-year monolith to microservices given our team structure?",
provider="openai",
)
print(assessment2.complexity) # "high"
print(assessment2.recommendation) # "integrated"
print(assessment2.domains) # ["technical", "organizational", "strategic"]
print(assessment2.reasoning_type) # "strategic"
ComplexityResult:
@dataclass
class ComplexityResult:
complexity: str # "low", "medium", "high"
domains: list[str] # Knowledge domains involved
reasoning_type: str # "diagnostic", "comparative", "strategic", etc.
recommendation: str # "raw", "single_template", "integrated"
reasoning: str # Why this recommendation
best_template: str | None # For "single_template" recommendation
API Reference
suggest_patterns()
def suggest_patterns(
question: str,
include_enterprise: bool = True,
suggest_chain: bool = True,
max_patterns: int = 5,
mode: str = "keyword",
llm_provider: str = "openai",
temperature: float = 0,
model: str | None = None,
**llm_kwargs,
) -> SuggestionResult
assess_complexity()
def assess_complexity(
question: str,
provider: str = "openai",
temperature: float = 0,
model: str | None = None,
**kwargs,
) -> ComplexityResult
get_pattern_class()
def get_pattern_class(
pattern_name: str,
include_enterprise: bool = True,
) -> type | None
Returns the pattern class for a given name, or None if not found / not licensed.