RCP-001-001-016-PROMPTFWKS-WITHIN-CRAFT – Prompt Framework Intelligence Assistant for CRAFT

The Prompt Framework Intelligence Assistant embeds expertise on 20 proven prompt engineering frameworks (Chain of Thought, Tree of Thoughts, ReAct, Few-Shot, etc.) to analyze prompts and suggest optimal structures for dramatically improved clarity and AI response quality. It matches task types to frameworks, explains why specific patterns help, offers restructuring, and tracks effectiveness – making expert-level prompt engineering accessible within CRAFT.

Recipe Name: RCP-001-001-016-PROMPTFWKS-WITHIN-CRAFT – Prompt Framework Intelligence Assistant Recipe
Analyzes prompts and suggests optimal prompt crafting frameworks to improve communication clarity. Embeds knowledge of 20 popular frameworks with intelligent recommendations based on task type and context.
Multi-Recipe Combo Stage: Single Recipe
Recipe Category: CFT-FWK-COOKBK-CORE – CRAFT CORE Cookbook
Recipe Subcategory: Blogging with A.I., Brainstorming with A.I.
Recipe Difficulty: Easy
Recipe Tags: Foundational | Introduced in the POC

How To Start
 

A Note From The Author of CRAFT
  • After hundreds (perhaps thousands) of hours of using these recipes, I rarely need to call any of the CORE Cookbook recipes aside from Recipe RCP-001-001-002-HANDOFF-SNAPSHOT, but when I do, they are essential to the functioning of CRAFT. The A.I. also reads all of these recipes at the start of each session, quietly in the background. Even if you never call a recipe directly, the A.I. knows all of them, which helps it understand what CRAFT is and how it works.
    Even if you rarely use these recipes directly, they are still working for you and are essential to the CRAFT Framework.
STEP 1: Understand the Recipe Purpose
  • This recipe analyzes your prompts and suggests which of
    20 prompt engineering frameworks would improve clarity
    and AI response quality. It integrates with COM for
    automatic detection of improvement opportunities.
    The recipe contains an embedded knowledge base of:
    Chain-of-Thought, Tree of Thoughts, ReAct,
    Self-Consistency, Least-to-Most, Maieutic,
    Generated Knowledge, Prompt Chaining,
    Directional Stimulus, Role Prompting,
    Few-Shot Learning, Zero-Shot CoT,
    Structured Output, Analogical, Emotion,
    Meta-Cognitive, Contrastive, Recursive,
    Constitutional AI, and Synthetic Prompting
STEP 2: Trigger the Recipe
  • The recipe can be triggered in three ways:
    AUTOMATIC via COM:
    When CRAFT-OPERATIONS-MANAGER detects low prompt
    clarity (score below 7) or high ambiguity, it
    automatically suggests this recipe.
    MANUAL via Directive:
    #H->AI::Directive: (Analyze this prompt for
    framework improvement: [your prompt here])
    PATTERN-BASED:
    Using phrases like "help me phrase", "better way
    to ask", or "how should I ask" triggers analysis.
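As an illustrative sketch (not part of the recipe itself), the pattern-based trigger could be detected with a simple substring check. The names TRIGGER_PHRASES and is_framework_request are hypothetical, chosen only to mirror the phrases listed above:

```python
# Hypothetical sketch of the PATTERN-BASED trigger in STEP 2.
# The phrase list mirrors the examples given in the step above.
TRIGGER_PHRASES = (
    "help me phrase",
    "better way to ask",
    "how should i ask",
)

def is_framework_request(message: str) -> bool:
    """Return True when the message contains a trigger phrase."""
    text = message.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)
```

A message like "Is there a better way to ask this?" would trigger analysis, while an ordinary factual question would not.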
STEP 3: Review the Analysis
  • The recipe provides:
    PROMPT ANALYSIS
    Clarity score (1-10)
    Complexity assessment (low/medium/high)
    Task type identification
    Ambiguity points
    Improvement potential percentage
    FRAMEWORK RECOMMENDATION
    Primary recommended framework
    Why this framework matches your prompt
    Effectiveness percentage
    Restructured prompt example
    ALTERNATIVES
    Top 2-3 alternative frameworks
    Relevance scores for each
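The report above can be pictured as a data structure. This is a hypothetical sketch for orientation only; the field names simply mirror the lists in STEP 3 and are not defined by the recipe:

```python
# Hypothetical shape of the STEP 3 analysis report.
# Field names mirror the PROMPT ANALYSIS and FRAMEWORK
# RECOMMENDATION lists above; they are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FrameworkRecommendation:
    name: str                    # primary recommended framework
    rationale: str               # why it matches the prompt
    effectiveness_pct: float     # e.g. 85.0
    restructured_prompt: str     # example restructure

@dataclass
class PromptAnalysis:
    clarity_score: int           # 1-10
    complexity: str              # "low" | "medium" | "high"
    task_type: str
    ambiguity_points: list = field(default_factory=list)
    improvement_potential_pct: float = 0.0
```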
STEP 4: Apply or Skip
  • After reviewing the recommendation:
    TO APPLY:
    Confirm you want the restructured prompt applied.
    The AI will reformulate your original prompt using
    the recommended framework pattern.
    TO SKIP:
    If your prompt is already well-structured or you
    prefer your original phrasing, simply decline.
    No changes will be made.
STEP 5: Learn the Framework (Optional)
  • When learning_mode is enabled (default), the recipe
    explains:
    How the framework works mechanically
    CRAFT-specific applications
    Example patterns to follow
    Tips for effective usage
    Set learning_mode to False for recommendations
    without detailed explanations.

How AI Reads This Recipe

When processing this recipe, the AI assistant:
1. Loads the embedded PROMPT_FRAMEWORKS knowledge base
containing 20 frameworks with trigger patterns,
effectiveness scores, and CRAFT contexts.
2. Analyzes the user prompt for clarity score,
complexity level, and task type identification.
3. Calculates relevance scores for each framework:
+0.3 for trigger pattern matches
+0.5 for task context alignment
+0.2 for complexity level match
+0.4 for CRAFT context relevance
4. Selects frameworks scoring above 0.5 threshold
and sorts by combined score times effectiveness.
5. Generates recommendation using PROMPT_FWK comment
types for COM integration.
6. Offers alternatives if multiple frameworks apply.
7. Explains framework mechanics when learning mode on.
8. Asks user permission before applying changes.
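The scoring in steps 3-4 can be sketched in a few lines. This is a minimal illustration with a two-entry knowledge base (the real recipe embeds 20 frameworks), and it omits the +0.4 CRAFT-context bonus for brevity; all names here are illustrative:

```python
# Minimal sketch of the relevance scoring in steps 3-4:
# +0.3 per trigger-pattern match, +0.5 for task-context
# alignment, +0.2 for complexity match; keep scores > 0.5
# and sort by score times effectiveness.
FRAMEWORKS = {
    "CHAIN_OF_THOUGHT": {
        "trigger_patterns": ["analyze step by step", "work through"],
        "complexity": "low",
        "effectiveness": 0.85,
    },
    "REACT": {
        "trigger_patterns": ["debug this", "troubleshoot"],
        "complexity": "medium",
        "effectiveness": 0.88,
    },
}
TASK_PREFS = {"debugging": ["REACT", "CHAIN_OF_THOUGHT"]}

def score_frameworks(prompt, task_context, prompt_complexity):
    prompt = prompt.lower()
    results = []
    for fid, fw in FRAMEWORKS.items():
        score = 0.0
        # +0.3 for each trigger pattern found in the prompt
        score += 0.3 * sum(p in prompt for p in fw["trigger_patterns"])
        # +0.5 when the task context prefers this framework
        if fid in TASK_PREFS.get(task_context, []):
            score += 0.5
        # +0.2 when complexity levels match
        if fw["complexity"] == prompt_complexity:
            score += 0.2
        if score > 0.5:  # threshold from step 4
            results.append((fid, score, fw["effectiveness"]))
    # sort by combined score times effectiveness
    results.sort(key=lambda r: r[1] * r[2], reverse=True)
    return results
```

For a prompt like "please debug this failing recipe" in a debugging context, ReAct clears the threshold while Chain-of-Thought (task-context bonus only, exactly 0.5) does not.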
PROCESSING PRIORITY
Priority   Description
--------   -----------
CRITICAL   Load framework knowledge base
CRITICAL   Assess prompt clarity accurately
HIGH       Match appropriate framework
HIGH       Calculate relevance scores
MEDIUM     Generate restructured example
LOW        Provide learning explanations
COM INTEGRATION
This recipe uses custom comment types:
#AI->H::PROMPT_FWK::Recommendation:
#AI->H::PROMPT_FWK::Alternatives:
#AI->H::PROMPT_FWK::Learning:
#AI->H::PROMPT_FWK::Status:
#AI->H::PROMPT_FWK::Opportunity:
These are detected by CRAFT-OPERATIONS-MANAGER
(RCP-001-001-015) for automated suggestions.
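A sketch of how these comment types might be detected follows. The regex is an assumption for illustration, not the actual CRAFT-OPERATIONS-MANAGER implementation:

```python
import re

# Hypothetical detector for the PROMPT_FWK comment types
# listed above; the pattern is an assumption, not CRAFT's
# actual parser.
PROMPT_FWK_RE = re.compile(
    r"#AI->H::PROMPT_FWK::"
    r"(Recommendation|Alternatives|Learning|Status|Opportunity):"
)

def find_prompt_fwk_comments(text):
    """Return the PROMPT_FWK subtypes found in a transcript."""
    return PROMPT_FWK_RE.findall(text)
```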

When to Use This Recipe

IDEAL USE CASES
When user prompts seem unclear or could benefit
from more structure.
When tackling complex tasks that would benefit
from structured reasoning approaches.
When learning prompt engineering techniques
through practical examples.
When COM detects low prompt clarity scores
during automatic monitoring.
When working on specific task types:
Recipe creation (use Tree of Thoughts)
Debugging (use ReAct)
Documentation (use Generated Knowledge)
Analysis (use Chain-of-Thought)
Validation (use Self-Consistency)
WHEN NOT TO USE THIS RECIPE
For simple, clear prompts that need no improvement.
When rapid response is more important than
optimal prompt structure.
When user explicitly prefers conversational
style over structured prompts.
For quick factual queries with obvious intent.

Recipe FAQ

Q: How many frameworks does this recipe know?
A: The recipe embeds knowledge of 20 popular prompt
engineering frameworks including Chain-of-Thought,
Tree of Thoughts, ReAct, and more.
Q: Will this recipe always suggest changes?
A: No. If your prompt is already well-structured with
a clarity score above threshold, the recipe confirms
no changes are needed.
Q: Can I disable the learning explanations?
A: Yes, set learning_mode to False for suggestions
without detailed framework explanations.
Q: How does task context improve recommendations?
A: Providing task_context allows the recipe to weight
frameworks that are preferred for that specific
type of work (10 task contexts mapped).
Q: Does this integrate with COM?
A: Yes, CRAFT-OPERATIONS-MANAGER can automatically
trigger this recipe when low prompt clarity is
detected.
Q: What task contexts are supported?
A: recipe_creation, debugging, documentation, analysis,
workflow, creative, validation, learning,
optimization, and testing.

Actual Recipe Code

(Copy This Plaintext Code To Use)
# ===========================================================
# START RECIPE: RCP-001-001-016-PROMPTFWKS-v2.00a
# ===========================================================
PROMPTFWKS_WITHIN_CRAFT_RECIPE = Recipe(
recipe_id="RCP-001-001-016-PROMPTFWKS-v2.00a",
title="Prompt Framework Intelligence Assistant",
description="Analyzes prompts and suggests optimal prompt
crafting frameworks to improve communication clarity
and effectiveness. Embeds knowledge of 20 popular
frameworks with intelligent recommendations based
on task type.",
category="CAT-001",
subcategory="SUBCAT-001-Communication",
difficulty="medium",
parameters={
"user_prompt": {
"type": "string",
"required": True,
"description": "Prompt to analyze"
},
"task_context": {
"type": "string",
"required": False,
"description": "Task type context"
},
"learning_mode": {
"type": "boolean",
"required": False,
"default": True,
"description": "Show explanations"
},
"auto_suggest": {
"type": "boolean",
"required": False,
"default": True,
"description": "Auto-suggest mode"
}
},
prompt_template='''
#H->AI::Directive: (Analyze prompt for framework
improvement opportunities)
#H->AI::Context: (You are the Prompt Framework
Intelligence Assistant)
# -------------------------------------------------------
# EMBEDDED FRAMEWORK KNOWLEDGE BASE
# -------------------------------------------------------
PROMPT_FRAMEWORKS = {
"CHAIN_OF_THOUGHT": {
"name": "Chain-of-Thought (CoT)",
"description": "Break down complex problems
into sequential reasoning steps",
"best_for": [
"mathematical problems",
"logical reasoning",
"complex analysis",
"multi-step tasks"
],
"trigger_patterns": [
"calculate",
"analyze step by step",
"explain your reasoning",
"work through"
],
"example_pattern": "Let us think step by step:
1) First… 2) Then… 3) Finally…",
"craft_contexts": [
"recipe debugging",
"complex function creation",
"workflow design"
],
"effectiveness": 0.85,
"complexity": "low"
},
"TREE_OF_THOUGHTS": {
"name": "Tree of Thoughts (ToT)",
"description": "Explore multiple reasoning
paths and evaluate them",
"best_for": [
"creative problem solving",
"strategic planning",
"exploring alternatives",
"complex decisions"
],
"trigger_patterns": [
"explore options",
"consider alternatives",
"what are the possibilities",
"evaluate approaches"
],
"example_pattern": "Consider three approaches:
A) [approach]… B) [approach]…
C) [approach]… Evaluating each…",
"craft_contexts": [
"recipe architecture",
"framework design",
"optimization strategies"
],
"effectiveness": 0.90,
"complexity": "high"
},
"REACT": {
"name": "ReAct (Reasoning + Acting)",
"description": "Combine reasoning with action
steps iteratively",
"best_for": [
"debugging",
"interactive tasks",
"tool usage",
"real-time problem solving"
],
"trigger_patterns": [
"debug this",
"fix the error",
"troubleshoot",
"iterate until"
],
"example_pattern": "Thought: [reasoning] then
Action: [step] then Observation: [result]
then Repeat",
"craft_contexts": [
"error resolution",
"recipe testing",
"iterative improvement"
],
"effectiveness": 0.88,
"complexity": "medium"
},
"SELF_CONSISTENCY": {
"name": "Self-Consistency",
"description": "Generate multiple solutions
and find consensus",
"best_for": [
"validation",
"accuracy improvement",
"reducing hallucination",
"fact checking"
],
"trigger_patterns": [
"verify this",
"double check",
"ensure accuracy",
"validate"
],
"example_pattern": "Approach 1: [solution]…
Approach 2: [solution]…
Approach 3: [solution]…
Consensus: [final]",
"craft_contexts": [
"SOURCE-VALID enhancement",
"fact verification",
"quality assurance"
],
"effectiveness": 0.92,
"complexity": "medium"
},
"LEAST_TO_MOST": {
"name": "Least-to-Most",
"description": "Decompose complex problems
into simpler subproblems",
"best_for": [
"complex calculations",
"hierarchical tasks",
"learning new concepts",
"building understanding"
],
"trigger_patterns": [
"break this down",
"start simple",
"build up to",
"decompose"
],
"example_pattern": "Subproblem 1: [simple]…
Solve… Subproblem 2: [harder]… Solve…
Main problem: [complex]",
"craft_contexts": [
"learning CRAFT",
"complex recipe creation",
"framework understanding"
],
"effectiveness": 0.87,
"complexity": "medium"
},
"MAIEUTIC": {
"name": "Maieutic Prompting",
"description": "Use Socratic questioning to
reach deeper understanding",
"best_for": [
"deep analysis",
"uncovering assumptions",
"philosophical reasoning",
"critical thinking"
],
"trigger_patterns": [
"question assumptions",
"dig deeper",
"what is really happening",
"challenge this"
],
"example_pattern": "Initial claim: [X]. But is
[X] always true? What if [Y]? This
suggests…",
"craft_contexts": [
"framework philosophy",
"design decisions",
"architectural choices"
],
"effectiveness": 0.83,
"complexity": "high"
},
"GENERATED_KNOWLEDGE": {
"name": "Generated Knowledge",
"description": "Generate relevant knowledge
before answering",
"best_for": [
"knowledge-intensive tasks",
"fact-based responses",
"educational content",
"documentation"
],
"trigger_patterns": [
"explain",
"teach me about",
"document",
"what do we know about"
],
"example_pattern": "Background: [generate
context]… Given this knowledge:
[apply to question]",
"craft_contexts": [
"documentation creation",
"blog posts",
"educational content"
],
"effectiveness": 0.86,
"complexity": "low"
},
"PROMPT_CHAINING": {
"name": "Prompt Chaining",
"description": "Chain multiple prompts where
outputs feed into next inputs",
"best_for": [
"workflows",
"multi-stage processes",
"complex transformations",
"pipeline tasks"
],
"trigger_patterns": [
"then",
"after that",
"multi-step",
"workflow"
],
"example_pattern": "Step 1: [task] then Output
then Step 2: Using [output], [next task]
then Continue",
"craft_contexts": [
"recipe sequences",
"workflow automation",
"complex processes"
],
"effectiveness": 0.89,
"complexity": "medium"
},
"DIRECTIONAL_STIMULUS": {
"name": "Directional Stimulus",
"description": "Include hints or cues to
guide reasoning",
"best_for": [
"guided exploration",
"specific focus areas",
"avoiding tangents",
"targeted analysis"
],
"trigger_patterns": [
"focus on",
"specifically",
"pay attention to",
"emphasize"
],
"example_pattern": "Analyze [X], particularly
focusing on [specific aspect]. Key
consideration: [hint]",
"craft_contexts": [
"targeted debugging",
"specific improvements",
"focused analysis"
],
"effectiveness": 0.84,
"complexity": "low"
},
"ROLE_PROMPTING": {
"name": "Role Prompting",
"description": "Assign specific role or
expertise perspective",
"best_for": [
"expert analysis",
"specialized knowledge",
"perspective taking",
"professional output"
],
"trigger_patterns": [
"as a",
"from perspective of",
"expert in",
"act as"
],
"example_pattern": "As a [role/expert], analyze
this: [task]. Consider [role-specific
factors]",
"craft_contexts": [
"persona creation",
"specialized recipes",
"expert systems"
],
"effectiveness": 0.82,
"complexity": "low"
},
"FEW_SHOT": {
"name": "Few-Shot Learning",
"description": "Provide examples to demonstrate
desired pattern",
"best_for": [
"pattern matching",
"format specification",
"style mimicking",
"consistent output"
],
"trigger_patterns": [
"like this",
"following format",
"similar to",
"examples"
],
"example_pattern": "Example 1: [input] then
[output]. Example 2: [input] then [output].
Now: [actual task]",
"craft_contexts": [
"recipe examples",
"format templates",
"pattern establishment"
],
"effectiveness": 0.91,
"complexity": "low"
},
"ZERO_SHOT_COT": {
"name": "Zero-Shot Chain-of-Thought",
"description": "Add reasoning trigger without
examples",
"best_for": [
"quick analysis",
"simple reasoning tasks",
"when examples unavailable",
"general problem solving"
],
"trigger_patterns": [
"think about this",
"reason through",
"consider",
"lets think"
],
"example_pattern": "Let's think about this step
by step…",
"craft_contexts": [
"quick debugging",
"initial analysis",
"exploratory work"
],
"effectiveness": 0.80,
"complexity": "low"
},
"STRUCTURED_OUTPUT": {
"name": "Structured Output",
"description": "Define exact output format
and structure",
"best_for": [
"data extraction",
"consistent formatting",
"API responses",
"form filling"
],
"trigger_patterns": [
"format as",
"structure like",
"JSON",
"in this format"
],
"example_pattern": "Provide output in this
exact structure: { field1: value1,
field2: value2 }",
"craft_contexts": [
"recipe output formatting",
"data extraction",
"consistent responses"
],
"effectiveness": 0.93,
"complexity": "low"
},
"ANALOGICAL": {
"name": "Analogical Prompting",
"description": "Use analogies to explain or
solve problems",
"best_for": [
"explaining complex concepts",
"creative solutions",
"teaching",
"cross-domain transfer"
],
"trigger_patterns": [
"like",
"similar to",
"analogy",
"compare to"
],
"example_pattern": "This is like [familiar
concept] because [similarities]. So we
can apply [solution]…",
"craft_contexts": [
"framework explanations",
"teaching CRAFT",
"creative problem solving"
],
"effectiveness": 0.81,
"complexity": "medium"
},
"EMOTION_PROMPTING": {
"name": "Emotion Prompting",
"description": "Include emotional context or
stakes to improve engagement",
"best_for": [
"creative writing",
"persuasive content",
"user-facing messages",
"important communications"
],
"trigger_patterns": [
"important",
"crucial",
"this matters because",
"critical"
],
"example_pattern": "This is important because
[stakes]. Please ensure [quality aspect]
as [consequence]…",
"craft_contexts": [
"stakeholder communications",
"error handling messages",
"user guidance"
],
"effectiveness": 0.79,
"complexity": "low"
},
"METACOGNITIVE": {
"name": "Meta-Cognitive Prompting",
"description": "Ask AI to reflect on its own
reasoning process",
"best_for": [
"quality improvement",
"self-correction",
"confidence assessment",
"reasoning validation"
],
"trigger_patterns": [
"reflect on",
"are you sure",
"check your reasoning",
"reconsider"
],
"example_pattern": "After answering, reflect:
What assumptions did I make? What could
be wrong? How confident am I?",
"craft_contexts": [
"CONFIDENCE-CALIB enhancement",
"quality assurance",
"validation workflows"
],
"effectiveness": 0.86,
"complexity": "medium"
},
"CONTRASTIVE": {
"name": "Contrastive Prompting",
"description": "Compare correct and incorrect
examples",
"best_for": [
"error identification",
"best practices",
"quality standards",
"training"
],
"trigger_patterns": [
"right vs wrong",
"good vs bad",
"do vs dont",
"compare"
],
"example_pattern": "GOOD example: [correct].
BAD example: [incorrect]. The difference
is [key distinction]…",
"craft_contexts": [
"recipe best practices",
"error prevention",
"quality standards"
],
"effectiveness": 0.85,
"complexity": "medium"
},
"RECURSIVE": {
"name": "Recursive Prompting",
"description": "Apply same process iteratively
to refine results",
"best_for": [
"iterative improvement",
"refinement",
"optimization",
"progressive enhancement"
],
"trigger_patterns": [
"refine",
"improve on",
"iterate",
"make better"
],
"example_pattern": "First attempt: [result].
Improve by [criteria]. Second attempt:
[better]. Continue until [threshold]…",
"craft_contexts": [
"recipe optimization",
"content refinement",
"iterative development"
],
"effectiveness": 0.88,
"complexity": "high"
},
"CONSTITUTIONAL_AI": {
"name": "Constitutional AI",
"description": "Apply explicit principles or
rules to guide output",
"best_for": [
"ethical considerations",
"policy compliance",
"safety checks",
"principle-based decisions"
],
"trigger_patterns": [
"according to rules",
"must comply with",
"following principles",
"ensure safe"
],
"example_pattern": "Principle 1: [rule].
Principle 2: [rule]. Apply these when
generating response…",
"craft_contexts": [
"security validation",
"policy compliance",
"safe content generation"
],
"effectiveness": 0.90,
"complexity": "medium"
},
"SYNTHETIC_PROMPTING": {
"name": "Synthetic Prompting",
"description": "Generate test cases or
examples programmatically",
"best_for": [
"testing",
"edge case generation",
"data augmentation",
"comprehensive coverage"
],
"trigger_patterns": [
"generate test cases",
"create examples",
"edge cases",
"comprehensive testing"
],
"example_pattern": "Generate test cases
covering: [normal case], [edge case],
[error case], [boundary case]…",
"craft_contexts": [
"recipe testing",
"validation coverage",
"edge case handling"
],
"effectiveness": 0.84,
"complexity": "high"
}
}
# -------------------------------------------------------
# TASK CONTEXT MAPPINGS
# -------------------------------------------------------
TASK_FRAMEWORK_PREFERENCES = {
"recipe_creation": [
"TREE_OF_THOUGHTS",
"LEAST_TO_MOST",
"FEW_SHOT"
],
"debugging": [
"REACT",
"CHAIN_OF_THOUGHT",
"METACOGNITIVE"
],
"documentation": [
"GENERATED_KNOWLEDGE",
"STRUCTURED_OUTPUT",
"FEW_SHOT"
],
"analysis": [
"CHAIN_OF_THOUGHT",
"SELF_CONSISTENCY",
"MAIEUTIC"
],
"workflow": [
"PROMPT_CHAINING",
"LEAST_TO_MOST",
"DIRECTIONAL_STIMULUS"
],
"creative": [
"TREE_OF_THOUGHTS",
"ANALOGICAL",
"EMOTION_PROMPTING"
],
"validation": [
"SELF_CONSISTENCY",
"CONSTITUTIONAL_AI",
"CONTRASTIVE"
],
"learning": [
"LEAST_TO_MOST",
"GENERATED_KNOWLEDGE",
"ANALOGICAL"
],
"optimization": [
"RECURSIVE",
"METACOGNITIVE",
"TREE_OF_THOUGHTS"
],
"testing": [
"SYNTHETIC_PROMPTING",
"CONTRASTIVE",
"SELF_CONSISTENCY"
]
}
# -------------------------------------------------------
# STEP 1: ANALYZE PROMPT
# -------------------------------------------------------
#AI->H::Status: (Analyzing prompt for opportunities)
prompt_analysis = {
"clarity_score": assess_prompt_clarity(
"{user_prompt}"
),
"complexity": assess_complexity(
"{user_prompt}"
),
"task_type": identify_task_type(
"{user_prompt}",
"{task_context}"
),
"ambiguity_points": find_ambiguities(
"{user_prompt}"
),
"improvement_potential": calculate_potential()
}
# -------------------------------------------------------
# STEP 2: IDENTIFY APPLICABLE FRAMEWORKS
# -------------------------------------------------------
applicable_frameworks = []
for framework_id, framework in PROMPT_FRAMEWORKS.items():
score = 0
# Check trigger patterns (+0.3 each)
for pattern in framework["trigger_patterns"]:
if pattern.lower() in "{user_prompt}".lower():
score += 0.3
# Check task context alignment (+0.5)
if "{task_context}" in TASK_FRAMEWORK_PREFERENCES:
prefs = TASK_FRAMEWORK_PREFERENCES[
"{task_context}"
]
if framework_id in prefs:
score += 0.5
# Check complexity match (+0.2)
if matches_complexity(
framework["complexity"],
prompt_analysis["complexity"]
):
score += 0.2
# Check CRAFT context relevance (+0.4)
for craft_context in framework["craft_contexts"]:
prompt_lower = "{user_prompt}".lower()
context_lower = "{task_context}".lower()
if craft_context in prompt_lower:
score += 0.4
if craft_context == context_lower:
score += 0.4
# Add if score exceeds threshold
if score > 0.5:
applicable_frameworks.append({
"framework": framework_id,
"score": score,
"effectiveness": framework["effectiveness"],
"details": framework
})
# Sort by combined score times effectiveness
applicable_frameworks.sort(
key=lambda x: x["score"] * x["effectiveness"],
reverse=True
)
# -------------------------------------------------------
# STEP 3: GENERATE RECOMMENDATIONS
# -------------------------------------------------------
IF len(applicable_frameworks) > 0:
top_framework = applicable_frameworks[0]
alternatives = applicable_frameworks[1:3]
#AI->H::PROMPT_FWK::Recommendation: (
Detected opportunity to improve prompt clarity
using {top_framework["details"]["name"]}
CURRENT PROMPT ANALYSIS
Clarity: {prompt_analysis["clarity_score"]}/10
Complexity: {prompt_analysis["complexity"]}
Improvement Potential:
{prompt_analysis["improvement_potential"]}%
RECOMMENDED FRAMEWORK
{top_framework["details"]["name"]}
{top_framework["details"]["description"]}
WHY THIS FRAMEWORK
Best for: {top_framework["details"]["best_for"]}
Effectiveness:
{top_framework["effectiveness"]*100}%
Complexity:
{top_framework["details"]["complexity"]}
SUGGESTED RESTRUCTURE
{generate_restructured_prompt(
user_prompt,
top_framework
)}
)
IF len(alternatives) > 0:
#AI->H::PROMPT_FWK::Alternatives: (
Alternative frameworks to consider:
{format_alternatives(alternatives)}
)
IF {learning_mode}:
#AI->H::PROMPT_FWK::Learning: (
UNDERSTANDING
{top_framework["details"]["name"]}
This framework works by
{explain_framework_mechanism(top_framework)}.
In CRAFT contexts, useful for:
{top_framework["details"]["craft_contexts"]}
Example pattern:
{top_framework["details"]["example_pattern"]}
Tips for using this framework:
{generate_framework_tips(top_framework)}
)
#AI->H::Question: (Would you like me to apply
the {top_framework["details"]["name"]}
framework to restructure your prompt?)
ELSE:
#AI->H::PROMPT_FWK::Status: (Prompt is already
well-structured – no framework improvements
needed)
# -------------------------------------------------------
# STEP 4: TRACK USAGE PATTERNS
# -------------------------------------------------------
IF framework_applied:
track_framework_usage({
"framework": selected_framework,
"context": task_context,
"effectiveness": user_feedback,
"prompt_type": prompt_analysis["task_type"]
})
# -------------------------------------------------------
# HELPER FUNCTIONS
# -------------------------------------------------------
def assess_prompt_clarity(prompt):
"""
Assess prompt clarity on 1-10 scale.
Checks: specificity, objectives, constraints.
"""
clarity_score = 10
# Deduct for vague language
vague_terms = [
"something", "stuff", "things",
"maybe", "probably", "kind of"
]
for term in vague_terms:
if term in prompt.lower():
clarity_score -= 1
# Deduct for missing objectives
if not has_clear_objective(prompt):
clarity_score -= 2
# Deduct for undefined constraints
if has_ambiguous_scope(prompt):
clarity_score -= 1
return max(1, clarity_score)
def assess_complexity(prompt):
"""
Assess prompt complexity level.
Returns: low, medium, or high.
"""
word_count = len(prompt.split())
has_multiple_parts = (
"and" in prompt or "then" in prompt
)
if word_count < 20 and not has_multiple_parts:
return "low"
elif word_count < 50:
return "medium"
else:
return "high"
def identify_task_type(prompt, context):
"""
Identify the type of task from prompt content.
"""
if context:
return context
task_indicators = {
"recipe_creation": ["create", "build", "make"],
"debugging": ["fix", "error", "bug", "issue"],
"documentation": [
"document", "explain", "guide"
],
"analysis": ["analyze", "review", "assess"],
"workflow": ["workflow", "process", "steps"],
"creative": ["creative", "brainstorm", "ideas"],
"validation": ["validate", "verify", "check"],
"learning": ["learn", "understand", "teach"],
"optimization": [
"optimize", "improve", "better"
],
"testing": ["test", "edge case", "coverage"]
}
for task_type, indicators in task_indicators.items():
for indicator in indicators:
if indicator in prompt.lower():
return task_type
return "general"
def generate_restructured_prompt(original, framework):
"""
Apply framework pattern to original prompt.
Maintains intent while adding structure.
(Stub: the AI composes the restructured prompt
at run time from the pattern and the original
wording.)
"""
pattern = framework["details"]["example_pattern"]
restructured_prompt = ...  # composed by the AI
return restructured_prompt
def explain_framework_mechanism(framework):
"""
Provide a clear explanation of how the framework
works. (Stub: the explanation is generated by
the AI at run time.)
"""
explanation = ...  # generated by the AI
return explanation
def format_alternatives(alternatives):
"""
Format alternative frameworks for display.
"""
formatted = []
for alt in alternatives:
formatted.append(
f"{alt['details']['name']}: "
f"Score {alt['score']:.2f}, "
f"Effectiveness {alt['effectiveness']*100}%"
)
return formatted
def generate_framework_tips(framework):
"""
Generate usage tips for the framework.
"""
best_for_item = framework['details']['best_for'][0]
complexity = framework['details']['complexity']
contexts = framework['details']['craft_contexts']
tips = [
f"Use when: {best_for_item}",
f"Complexity: {complexity}",
f"CRAFT contexts: {contexts}"
]
return tips
#H->AI::OnError: (If no frameworks apply, continue
with original prompt without modification)
'''
)
# ===========================================================
# END RECIPE: RCP-001-001-016-PROMPTFWKS-v2.00a
# ===========================================================
