SocraticQuestioner

Category: Specialized | Module: mycontext.templates.free.specialized

Applies the classical Socratic method to examine statements, claims, and positions. Generates one to three penetrating questions per category, depending on depth, across six categories: clarification, assumptions, evidence, alternative perspectives, implications, and meta-questions. Reveals what's taken for granted and what needs examination.

When to Use

  • Strategic decision examination before committing
  • Evaluating business proposals or plans
  • Critical thinking training
  • Examining AI-generated conclusions
  • Reviewing arguments or recommendations
  • Philosophy, ethics, or policy analysis
  • Challenging your own team's assumptions

Quick Start

from mycontext.templates.free.specialized import SocraticQuestioner

questioner = SocraticQuestioner()

ctx = questioner.build_context(
    statement="We should implement AI in our hiring process",
    depth="thorough",
)
result = ctx.execute(provider="openai")
print(result.response)

Methods

build_context(statement, context="", depth="detailed")

Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| statement | str | required | The claim, belief, or position to examine |
| context | str | "" | Background context for the statement |
| depth | str | "detailed" | Depth of inquiry |

execute(provider, statement, context="", depth="detailed", **kwargs)

result = questioner.execute(
    provider="openai",
    statement="Remote work is more productive than office work",
    depth="thorough",
)

Six Categories of Socratic Questions

1. Clarifying Questions

Ensure the statement is precisely understood before examining it.

  • What exactly do you mean by "implement AI"?
  • What specific part of hiring — screening, interviewing, selection?
  • Does "we" mean all roles or specific ones?

2. Probing Assumptions

Surface what's taken for granted without evidence.

  • What are you assuming about AI's ability to judge candidates?
  • Are you assuming current hiring is unbiased?
  • What would change if those assumptions were wrong?

3. Probing Reasons and Evidence

Test the factual foundation of the statement.

  • What evidence shows AI improves hiring outcomes?
  • What data do you have about current hiring quality?
  • What would change your view on this?

4. Questioning Viewpoints

Surface alternative perspectives.

  • How would a rejected candidate view this?
  • What would an employment lawyer say?
  • What does research on algorithmic bias show?

5. Probing Implications

Follow the claim to its consequences.

  • If implemented, what legal obligations arise?
  • If it works, what happens to human hiring expertise?
  • What precedent does this set for other AI decisions?

6. Questioning the Question

Examine whether this is even the right question.

  • Why is this the proposed solution? What problem are we solving?
  • Is "implement AI" the right level of specificity?
  • What better question should we be asking?

Depth Levels

| Depth | Questions per category | Use case |
| --- | --- | --- |
| "basic" | 1 | Quick assumption check |
| "detailed" | 2 | Standard Socratic examination |
| "thorough" | 3 | Deep, comprehensive inquiry |
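To make the depth semantics concrete, here is a minimal, self-contained sketch of how a depth setting scales the total inquiry. The names below are illustrative only, not the library's actual internals:

```python
# Hypothetical sketch: how a depth setting could scale question volume.
# These constants mirror the table above; the library's real internals
# may differ.

QUESTIONS_PER_CATEGORY = {"basic": 1, "detailed": 2, "thorough": 3}

CATEGORIES = [
    "Clarifying Questions",
    "Probing Assumptions",
    "Probing Reasons and Evidence",
    "Questioning Viewpoints",
    "Probing Implications",
    "Questioning the Question",
]

def total_questions(depth: str) -> int:
    """Total questions generated across all six categories."""
    return QUESTIONS_PER_CATEGORY[depth] * len(CATEGORIES)
```

So a "basic" run yields 6 questions in total, "detailed" yields 12, and "thorough" yields 18.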

Examples

Strategic Decision

result = questioner.execute(
    provider="openai",
    statement="We should pivot from B2B to B2C to access a larger market",
    depth="thorough",
)

Technical Claim

result = questioner.execute(
    provider="anthropic",
    statement="Microservices are always better than monoliths for scalability",
    depth="detailed",
)

Policy Examination

result = questioner.execute(
    provider="gemini",
    statement="Mandatory office attendance 3 days per week will improve team collaboration",
    context="Post-pandemic policy proposal for a 200-person technology company",
    depth="thorough",
)

AI Output Verification

# Use Socratic questioning to critically examine LLM outputs
llm_conclusion = "Your product should target enterprise customers because they have higher LTV"

result = questioner.execute(
    provider="openai",
    statement=llm_conclusion,
    context="Early-stage startup, $0 revenue, B2C product currently",
    depth="detailed",
)

The Synthesis Section

After the six categories, the pattern synthesizes the most important insights the questions collectively reveal — what they expose about the statement's strengths, weaknesses, and blind spots.
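If you want to work with the synthesis programmatically, a simple string split can pull it out of the response. This is a sketch under an assumption not stated in the docs: that the model's output marks the section with a heading containing the word "Synthesis". Adjust the marker to whatever your actual output uses.

```python
# Illustrative helper, not part of the library's API.
# Assumes the response text contains a heading line with the word
# "Synthesis" separating the questions from the closing synthesis.

def extract_synthesis(response: str, marker: str = "Synthesis") -> str:
    """Return the text after the first line containing `marker`, or ""."""
    lines = response.splitlines()
    for i, line in enumerate(lines):
        if marker.lower() in line.lower():
            return "\n".join(lines[i + 1:]).strip()
    return ""
```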

Chaining with Other Patterns

Combine with RiskAssessor for complete decision examination:

from mycontext.templates.free.specialized import SocraticQuestioner, RiskAssessor

statement = "We should acquire a competitor to accelerate growth"

# Step 1: Examine assumptions
soc_result = SocraticQuestioner().execute(
    provider="openai",
    statement=statement,
    depth="thorough",
)

# Step 2: Assess risks of proceeding
risk_result = RiskAssessor().execute(
    provider="openai",
    decision=statement,
    context=soc_result.response[:500],
    depth="comprehensive",
)

API Reference

| Method | Returns | Description |
| --- | --- | --- |
| build_context(statement, context, depth) | Context | Assembled context |
| execute(provider, statement, context, depth, **kwargs) | ProviderResponse | Execute questioning |
| generic_prompt(statement, context_section, depth) | str | Zero-cost prompt string |
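To show what a zero-cost prompt string is good for, here is a standalone sketch in the spirit of generic_prompt: it makes no provider call and just returns a string you can paste into any LLM. The exact wording the library produces will differ; everything below is illustrative.

```python
# Hypothetical prompt builder mirroring generic_prompt's signature.
# No API call, no cost: the output is a plain prompt string.

def socratic_prompt(statement: str, context_section: str = "",
                    depth: str = "detailed") -> str:
    per_category = {"basic": 1, "detailed": 2, "thorough": 3}[depth]
    categories = [
        "clarification", "assumptions", "evidence",
        "alternative perspectives", "implications", "meta-questions",
    ]
    parts = [f"Examine this statement: {statement}"]
    if context_section:
        parts.append(f"Context: {context_section}")
    parts.append(
        f"Ask {per_category} Socratic question(s) in each category: "
        + ", ".join(categories) + ". End with a short synthesis."
    )
    return "\n\n".join(parts)
```

The design point is that the six categories and the depth-to-count mapping live entirely in the prompt, so the same string works with any provider, or with no SDK at all.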