Not a developer? No problem.
The mycontext web app brings context engineering to everyone — no code required. Build, compose, and measure contexts through a visual interface.
- Context Studio — 9-step guided wizard to build structured prompts
- Context Copilot — describe your goal in plain English, AI builds the context
- Cognitive Studio — browse and use all 87 cognitive patterns visually
- Chain Composer — compose multi-pattern analysis pipelines
- Quality Metrics — score contexts and outputs before you ship
Why mycontext-ai
Capabilities that don't exist in any other open-source prompt engineering library.
87 Cognitive Patterns
Research-backed patterns implementing real cognitive frameworks — Five Whys, Socratic method, systems archetypes, ethical reasoning — grounded in 150+ peer-reviewed papers.
13 Export Formats
Build once, run anywhere. Export to OpenAI, Anthropic, Gemini, LangChain, CrewAI, AutoGen, DSPy, Semantic Kernel, YAML, JSON, XML, and more.
Measurable Quality
Score contexts on 6 dimensions. Evaluate LLM outputs on 5 dimensions. Prove templates work with the Context Amplification Index (CAI).
Async-Native Execution
ctx.aexecute() is a native coroutine — no thread pools, no blocking. Fan out multiple LLM calls in parallel with asyncio.gather.
Token-Budget Assembly
assemble_for_model() fits any context precisely within a model's window using tiktoken-accurate counting. No silent overflow, no over-truncation.
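The no-overflow guarantee comes down to priority-ordered packing against a hard token budget. A minimal sketch of the idea, using a whitespace split as a stand-in for tiktoken-accurate counting — `count_tokens` and `fit_to_budget` are illustrative names, not the library's API:

```python
# Illustrative sketch of token-budget assembly: sections are added in
# priority order, and the first section that would overflow is truncated
# to exactly the remaining budget. A whitespace split stands in for a
# real tokenizer such as tiktoken.

def count_tokens(text: str) -> int:
    return len(text.split())

def fit_to_budget(sections: list[str], budget: int) -> str:
    """Assemble sections into one context without exceeding `budget` tokens."""
    assembled: list[str] = []
    remaining = budget
    for section in sections:
        n = count_tokens(section)
        if n <= remaining:
            assembled.append(section)
            remaining -= n
        else:
            # Truncate to fit rather than silently overflow.
            assembled.append(" ".join(section.split()[:remaining]))
            break
    return "\n".join(assembled)

context = fit_to_budget(
    [
        "You are a risk analyst.",
        "Assess launch risk for the Q3 release.",
        "Background: " + "detail " * 50,
    ],
    budget=20,
)
print(count_tokens(context))  # 20 — fills the budget, never exceeds it
```

A real implementation would also decide *which* sections are safe to truncate; the point here is only the hard-cap packing loop.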
7 Framework Integrations
Drop into LangChain, LlamaIndex, CrewAI, AutoGen, DSPy, Semantic Kernel, or Google ADK. Dedicated helpers for each framework.
Intelligence Layer
Auto-transform questions into perfect contexts. Pattern suggestion, multi-template fusion, chain orchestration, and complexity routing — all automatic.
Production-Ready Reliability
Template injection prevention, structured logging, Pydantic-validated LLM output, execution tracing, retry logic, and in-process caching — built in.
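The retry logic mentioned here is the standard exponential-backoff pattern. A self-contained sketch of that pattern — illustrative only, not mycontext's actual implementation:

```python
import time

# Generic retry-with-exponential-backoff decorator: re-invoke the wrapped
# function on failure, doubling the delay each attempt, and re-raise once
# the attempt budget is exhausted.
def retry(attempts: int = 3, base_delay: float = 0.01):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_llm_call():
    # Simulated provider that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient provider error")
    return "ok"

print(flaky_llm_call())  # ok — succeeded on the third attempt
```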
One call. Perfect context.
Don't know which pattern fits? The intelligence layer analyzes your question, selects the optimal cognitive pattern, builds the context, and executes — automatically.
- Auto-selects from 87 patterns via keyword, LLM, or hybrid matching
- Fuses multiple patterns when your question spans domains
- Builds multi-step workflow chains for complex analysis
- Routes to the optimal cost/quality tier
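The keyword tier of auto-selection can be pictured as an overlap score between the question and each pattern's vocabulary. A toy sketch: the pattern names mirror the docs, but the keyword sets and the `select_pattern` helper are illustrative assumptions, not the library's matcher:

```python
# Toy keyword-tier pattern selection: score each pattern by how many of
# its keywords appear in the question, then pick the best-scoring one.
PATTERN_KEYWORDS = {
    "root_cause_analyzer": {"why", "cause", "failure", "regression", "deploy"},
    "socratic_method": {"assumptions", "challenge", "beliefs", "question"},
    "systems_archetypes": {"feedback", "loop", "system", "dynamics"},
}

def select_pattern(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    return max(PATTERN_KEYWORDS, key=lambda p: len(PATTERN_KEYWORDS[p] & words))

print(select_pattern("Why did API response times triple after last deploy?"))
# root_cause_analyzer
```

The LLM and hybrid tiers replace the overlap score with a model-based classifier; the selection interface stays the same.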
```python
import asyncio
from mycontext import Context
from mycontext.intelligence import smart_execute

# One call — auto-selects the right cognitive pattern,
# builds the context, and executes
response, meta = smart_execute(
    "Why did API response times triple after last deploy?",
    provider="openai",
)
print(meta["templates_used"])  # ['root_cause_analyzer']

# Or run multiple contexts concurrently — true async fan-out
async def parallel():
    ctx1 = Context(guidance="Risk analyst", directive="Assess launch risk.")
    ctx2 = Context(guidance="Data analyst", directive="Review Q3 trends.")
    r1, r2 = await asyncio.gather(
        ctx1.aexecute(provider="openai"),
        ctx2.aexecute(provider="anthropic"),
    )
    return r1.response, r2.response
```
```python
from mycontext import Context
from mycontext.intelligence import QualityMetrics, ContextAmplificationIndex

# A context and question to evaluate
ctx = Context(guidance="Risk analyst", directive="Assess launch risk.")
question = "Why did API response times triple after last deploy?"

# Score any context on 6 dimensions
metrics = QualityMetrics()
score = metrics.evaluate(ctx)
print(f"Quality: {score.overall:.2f}")  # 0.87

# Prove templates work with CAI
cai = ContextAmplificationIndex(provider="openai")
result = cai.measure(question, template_name="root_cause_analyzer")
print(f"CAI: {result.cai_overall:.2f}x")  # 1.42x — 42% better output
```
No more guessing.
Other tools score prompts. mycontext scores prompts and outputs — and proves that templates produce measurably better results.
- Quality Metrics — 6 dimensions: clarity, completeness, specificity, relevance, structure, efficiency
- Output Evaluator — 5 dimensions: instruction following, reasoning depth, actionability, structure compliance, cognitive scaffolding
- CAI — Context Amplification Index proves templates produce better output with a single number
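At its core, an amplification index is a ratio: output quality with the template divided by output quality without it. A minimal sketch with illustrative scores — the real CAI evaluates actual LLM outputs rather than taking scores as inputs:

```python
# Sketch of the idea behind a context amplification index: values above
# 1.0 mean the templated context produced measurably better output than
# the bare question. The two scores below are illustrative placeholders.
def amplification_index(templated_score: float, baseline_score: float) -> float:
    return templated_score / baseline_score

cai = amplification_index(templated_score=0.85, baseline_score=0.60)
print(f"{cai:.2f}x")  # 1.42x — the templated output scored ~42% higher
```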
At a Glance
What sets mycontext-ai apart from typical prompt libraries.
| Capability | mycontext-ai | Typical prompt libraries |
|---|---|---|
| Cognitive patterns | 87 research-backed | 10–20 generic |
| Zero-cost generic prompts | 87 pre-authored | None |
| Prompt compilation | 3-tier pipeline | None |
| Async-native execution | aexecute() coroutine | Sync only or manual |
| Token-budget assembly | tiktoken-accurate | None or char-based |
| Validated structured output | Pydantic + instructor | None |
| Prompt injection prevention | safe_format_template | None |
| Context quality scoring | 6 dimensions | None |
| Output quality scoring | 5 dimensions | None |
| Template effectiveness proof | CAI metric | None |
| Export formats | 13 | 1–2 |
| Framework integrations | 7 | 0–1 |
| Research citations | 150+ papers | 0–5 |
This is our soft launch. Context engineering is just getting started.
87 patterns today. Research-validated templates, agent memory, grounded RAG, and a lot more on the way.
