How To Start
A Note From The Author of CRAFT
- After hundreds (perhaps thousands) of hours of using these recipes, I rarely need to use any of the CORE Cookbook recipes aside from Recipe RCP-001-001-002-HANDOFF-SNAPSHOT, but when I do, they are essential to the functioning of CRAFT. Also, the A.I. reads all of these recipes at the start of each session. This happens quietly in the background. Even if you never call a recipe directly, the A.I. knows all of them, and that knowledge helps it understand what CRAFT is and how it works. Even if you rarely use these recipes yourself, they are still working for you and are essential to the CRAFT Framework.
STEP 1: UNDERSTAND VALIDATION MODES
- This recipe operates in four modes:
  AUTOMATIC: Silent validation during responses
  STRICT: Thorough checking of all claim types
  LENIENT: Check only critical claims
  RESEARCH_PROMPT: Generate research queries
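The mode names above map directly onto the recipe's `validation_mode` parameter. As a minimal sketch (not part of the recipe code itself), a caller could guard that parameter like this; the `normalize_mode` helper and its fallback behavior are assumptions for illustration:

```python
# The four modes declared by the recipe's validation_mode parameter.
VALIDATION_MODES = {"automatic", "strict", "lenient", "research_prompt"}

def normalize_mode(mode="automatic"):
    """Lower-case the requested mode; fall back to the 'automatic' default
    if the caller passes something the recipe does not recognize."""
    mode = mode.strip().lower()
    return mode if mode in VALIDATION_MODES else "automatic"
```

Falling back to "automatic" matches the recipe's declared default rather than raising an error.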
STEP 2: IDENTIFY CLAIM TYPES
- The recipe scans content for these claim categories:
  STATISTICS AND NUMBERS:
  - Percentages, rates, amounts
  - Growth figures, comparisons
  - Rankings, scores
  TEMPORAL CLAIMS:
  - Specific dates and timeframes
  - "Current" or "latest" assertions
  - Historical sequences
  ATTRIBUTED STATEMENTS:
  - Quotes from individuals
  - Claims about organizations
  - Research findings
  TECHNICAL SPECIFICATIONS:
  - Feature lists and capabilities
  - Performance metrics
  - Compatibility claims
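The scan above can be sketched in code. This is a hypothetical illustration, not the recipe's actual mechanism (the recipe relies on the A.I. reading the content): naive regex patterns, one per category, where the pattern details are assumptions:

```python
import re

# Hypothetical per-category patterns; real claim detection would need
# far richer heuristics than these regexes.
CLAIM_PATTERNS = {
    "statistics": re.compile(r"\b\d+(?:\.\d+)?\s*%|\b\d+(?:,\d{3})+\b"),
    "dates": re.compile(r"\b(?:19|20)\d{2}\b|\b(?:current|latest)\b", re.I),
    "quotes": re.compile(r"\"[^\"]+\"\s+(?:said|according to)|according to", re.I),
    "technical_specs": re.compile(r"\b(?:supports|compatible with|up to \d+)\b", re.I),
}

def identify_claims(content, claim_types=("statistics", "dates", "quotes", "technical_specs")):
    """Return the claim categories that appear to be present in the content."""
    return [t for t in claim_types if CLAIM_PATTERNS[t].search(content)]
```

The `claim_types` default mirrors the recipe's `claim_types` parameter default.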
STEP 3: VERIFY AGAINST SOURCES
- For each identified claim, check sources in order:
  1. ATTACHED DOCUMENTS (highest priority)
     - Note specific file, section, line number
     - Confidence: 90-100%
  2. PREVIOUS RESEARCH PDFs
     - Check if relevant PDF already attached
     - Confidence: 85-95%
  3. GENERAL KNOWLEDGE (use cautiously)
     - Only for widely-known facts
     - Confidence: 60-80%
  4. NO SOURCE AVAILABLE
     - Flag for Deep Research
     - Confidence: 0%
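The cascade above can be sketched as a priority walk over source tiers. This is an illustrative sketch, assuming a simple `available_sources` mapping and using the midpoint of each confidence band; the function and dict shape are not part of the recipe:

```python
# Source tiers in priority order, with the confidence bands from the steps above.
SOURCE_TIERS = [
    ("attached_documents", (90, 100)),
    ("previous_research_pdfs", (85, 95)),
    ("general_knowledge", (60, 80)),
]

def verify_claim(claim, available_sources):
    """available_sources maps tier name -> bool (does that tier cover the claim?).
    Returns the first matching tier with the midpoint of its confidence band."""
    for tier, (low, high) in SOURCE_TIERS:
        if available_sources.get(tier):
            return {"claim": claim, "source": tier, "confidence": (low + high) // 2}
    # No source anywhere: flag for Deep Research with 0% confidence.
    return {"claim": claim, "source": None, "confidence": 0}
```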
STEP 4: ASSIGN CONFIDENCE LEVELS
- Format output based on source quality:
  SOURCED CLAIMS:
  #AI->H::Note: (Claim: "[claim]" - Source: [file])
  #AI->H::BestGuess::ConfidenceLevel:[X]%: (Verified)
  COMMON KNOWLEDGE:
  #AI->H::Note: (Claim: "[claim]" - Common knowledge)
  #AI->H::BestGuess::ConfidenceLevel:70%: (No source)
  UNSOURCED CLAIMS:
  #AI->H::Caution: (Claim: "[claim]" - No source found)
  #AI->H::Status: (Marking as unsourced assertion)
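A sketch of how the three output shapes above could be produced. The `#AI->H::` comment syntax is CRAFT's own; the 50% threshold separating "common knowledge" from "unsourced" is an assumption chosen to match the confidence bands described in the FAQ below:

```python
def format_claim(claim, confidence, source=None):
    """Render one claim in the CRAFT #AI->H:: comment style used above."""
    if source:
        return (f'#AI->H::Note: (Claim: "{claim}" - Source: {source})\n'
                f'#AI->H::BestGuess::ConfidenceLevel:{confidence}%: (Verified)')
    if confidence >= 50:  # assumed cutoff for "common knowledge" claims
        return (f'#AI->H::Note: (Claim: "{claim}" - Common knowledge)\n'
                f'#AI->H::BestGuess::ConfidenceLevel:{confidence}%: (No source)')
    return (f'#AI->H::Caution: (Claim: "{claim}" - No source found)\n'
            f'#AI->H::Status: (Marking as unsourced assertion)')
```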
STEP 5: GENERATE DEEP RESEARCH PROMPT
- When critical claims are unsourced, generate a prompt:
  DEEP RESEARCH REQUEST: [Topic]
  Priority: [Critical/Helpful/Optional]
  Primary Questions:
  1. [Specific claim to verify]
  2. [Related fact needing verification]
  3. [Context question for coverage]
  Source Priority:
  1. Academic/peer-reviewed sources
  2. Government/official statistics
  3. Industry reports
  4. Reputable news outlets
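Filling that template is straightforward string assembly. A minimal sketch, assuming a topic string and a list of unverified claims (the `build_research_prompt` helper is hypothetical, not part of the recipe):

```python
def build_research_prompt(topic, claims, priority="Critical"):
    """Fill the Deep Research template above from a topic and claim list."""
    questions = "\n".join(f"{i}. {c}" for i, c in enumerate(claims, 1))
    return (
        f"DEEP RESEARCH REQUEST: {topic}\n"
        f"Priority: {priority}\n"
        f"Primary Questions:\n{questions}\n"
        "Source Priority:\n"
        "1. Academic/peer-reviewed sources\n"
        "2. Government/official statistics\n"
        "3. Industry reports\n"
        "4. Reputable news outlets"
    )
```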
STEP 6: FORMAT VALIDATION REPORT
- Provide a summary:
  VALIDATION SUMMARY:
  - Total claims identified: [X]
  - Sourced claims: [Y] (avg confidence: [Z]%)
  - Unsourced claims: [A]
  - Research needed: [Yes/No]
  #AI->H::RequestingFeedback: (Run Deep Research?)
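The summary fields above can be computed from the per-claim results. A sketch, assuming each result is a dict of the shape `{"claim", "source", "confidence"}` (an assumption for illustration, not the recipe's internal representation):

```python
def validation_summary(results):
    """Aggregate per-claim results into the VALIDATION SUMMARY block above."""
    sourced = [r for r in results if r["source"]]
    unsourced = [r for r in results if not r["source"]]
    avg = round(sum(r["confidence"] for r in sourced) / len(sourced)) if sourced else 0
    return (
        "VALIDATION SUMMARY:\n"
        f"- Total claims identified: {len(results)}\n"
        f"- Sourced claims: {len(sourced)} (avg confidence: {avg}%)\n"
        f"- Unsourced claims: {len(unsourced)}\n"
        f"- Research needed: {'Yes' if unsourced else 'No'}"
    )
```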
When to Use This Recipe
Use this recipe when accuracy is critical, such as research reports, factual content, or technical documentation. The recipe can run automatically during responses or be invoked explicitly for thorough validation. It is particularly valuable when making claims that could affect user decisions.
Recipe FAQ
Q1: Does SOURCE-VALID access the internet to verify claims?
A: No. SOURCE-VALID checks claims against your project files, uploaded documents, and the AI's training knowledge. It does not access real-time data or external databases. When it cannot verify a claim from available sources, it generates a Deep Research prompt for you to run separately.
Q2: What's the difference between "sourced," "inferred," and "unsourced" claims?
A: Sourced claims (85-100% confidence) come directly from project files, uploaded documents, or the AI's training knowledge with clear attribution. Inferred claims (50-84% confidence) are logical conclusions based on available information but not explicitly stated in sources. Unsourced claims (<50% confidence) have no verifiable source and require Deep Research or user confirmation.
Q3: Can I adjust how strict the validation is?
A: Yes. Use validation_mode parameter with four options: "automatic" (balanced checking for everyday use), "strict" (flags even inferred claims, best for public content), "lenient" (only flags statistics and quotes, good for brainstorming), or "research_prompt" (skips validation and generates research prompts immediately).
Q4: What happens when SOURCE-VALID finds unsourced claims?
A: The AI flags them in the validation report, explains why they couldn't be verified, assigns a low confidence score, and generates a structured Deep Research prompt. You then decide whether to remove the claims, run Deep Research to verify them, or accept them with appropriate caveats.
Q5: Should I run SOURCE-VALID on every response?
A: Use SOURCE-VALID whenever factual accuracy matters - especially for external communications, documentation, reports, or decisions. For casual brainstorming or internal discussions, lenient mode or skipping validation may be appropriate. The recipe integrates well with content creation workflows by validating drafts before finalizing.
Q: What is the difference between validation modes?
A: Automatic runs silently, Strict checks everything, Lenient checks only critical claims, and Research_prompt generates queries for Deep Research.
Q: How does confidence assignment work?
A: Confidence reflects source quality: attached docs 90%+, previous research 85%+, general knowledge 60-80%, no source = 0% (flagged for research).
Q: What triggers a Deep Research recommendation?
A: Critical unsourced claims that are central to the discussion or could affect user decisions.
Q: Why mark claims as unsourced instead of guessing?
A: Transparency. Users can decide whether to verify claims themselves or use Deep Research.
Actual Recipe Code
(Copy This Plaintext Code To Use)
# =========================================================
# START RECIPE-ID: RCP-001-001-007-SOURCE-VALID-v1.00a
# =========================================================

SOURCE_VALIDATOR_RECIPE = Recipe(
    recipe_id="RCP-001-001-007-SOURCE-VALID-v1.00a",
    title="Factual Claim Validator with Deep Research Integration",
    description="Checks factual claims against available sources, assigns confidence levels, flags unsourced assertions, and generates Deep Research prompts when needed",
    category="CAT-Foundational",
    subcategory="SUBCAT-Base-Cookbook",
    difficulty="medium",
    parameters={
        "content": {
            "type": "string",
            "required": True,
            "description": "Content to validate for factual claims"
        },
        "validation_mode": {
            "type": "string",
            "required": False,
            "default": "automatic",
            "options": ["automatic", "strict", "lenient", "research_prompt"],
            "description": "How aggressive to be in validation"
        },
        "claim_types": {
            "type": "list",
            "required": False,
            "default": ["statistics", "dates", "quotes", "technical_specs"],
            "description": "Types of claims to validate"
        }
    },
    prompt_template="""
#H->AI::Directive: (Validate factual claims in '{content}')
#H->AI::Context: (Mode: {validation_mode}, Checking: {claim_types})

STEP 0: POLICY PRE-CHECK
========================
Check if content involves:
- Claims about AI capabilities/limitations
- Political statements or statistics
- Medical/legal/financial facts
- Personal information about individuals

IF policy-sensitive:
#AI->H::PolicyCaution: (Fact-checking {topic} requires careful handling)
Proceed with extra verification requirements

STEP 1: IDENTIFY FACTUAL CLAIMS
===============================
Scan content for:

STATISTICS & NUMBERS:
- Percentages, rates, amounts
- Growth figures, comparisons
- Rankings, scores

TEMPORAL CLAIMS:
- Specific dates, timeframes
- "Current" or "latest" assertions
- Historical sequences

ATTRIBUTED STATEMENTS:
- Quotes from individuals
- Claims about what organizations said/did
- Research findings

TECHNICAL SPECIFICATIONS:
- Feature lists, capabilities
- Performance metrics
- Compatibility claims

STEP 2: SOURCE VERIFICATION
===========================
For each identified claim:

CHECK AVAILABLE SOURCES:
1. Attached documents (highest priority)
   - Note specific file, section, line number
   - Confidence: 90-100%
2. Previous Deep Research PDFs
   - Check if relevant PDF already attached
   - Confidence: 85-95%
3. General knowledge (use cautiously)
   - Only for widely-known facts
   - Confidence: 60-80%
4. No source available
   - Flag for Deep Research
   - Confidence: 0%

STEP 3: ASSIGN CONFIDENCE LEVELS
================================
For each claim, assign confidence based on:

SOURCED CLAIMS:
#AI->H::Note: (Claim: "{claim}" - Source: {file}, {section})
#AI->H::BestGuess::ConfidenceLevel:{X}%: (Based on source quality)

COMMON KNOWLEDGE:
#AI->H::Note: (Claim: "{claim}" - Common knowledge)
#AI->H::BestGuess::ConfidenceLevel:{70}%: (No specific source)

UNSOURCED CLAIMS:
#AI->H::Caution: (Claim: "{claim}" - No source found)
#AI->H::Status: (Marking as unsourced assertion)

STEP 4: DEEP RESEARCH TRIGGERS
==============================
Identify claims needing Deep Research:

PRIORITY LEVELS:
- Critical: Core claims central to discussion
- Helpful: Supporting facts that enhance accuracy
- Optional: Interesting but non-essential details

If critical unsourced claims found:
#AI->H::Status: (Deep Research recommended for fact verification)
#AI->H::Note: (Generating comprehensive research prompt)

STEP 5: GENERATE DEEP RESEARCH PROMPT
=====================================
When {validation_mode} == "research_prompt" OR critical claims unsourced:

DEEP RESEARCH REQUEST: [Topic from claims]
Priority: [Critical/Helpful/Optional]

Primary Questions:
1. [Specific claim to verify]
2. [Related fact needing verification]
3. [Context question for comprehensive coverage]

Additional Context to Include:
- Historical data on topic (if relevant)
- Current statistics and trends
- Authoritative sources on subject
- Common misconceptions to address
- Related topics for anticipatory coverage

Time Range: [Relevant period]

Source Priority:
1. Academic/peer-reviewed sources
2. Government/official statistics
3. Industry reports
4. News from reputable outlets

Anticipatory Coverage:
[List related questions likely to arise]

STEP 6: FORMAT VALIDATION REPORT
================================
VALIDATION SUMMARY:
- Total claims identified: [X]
- Sourced claims: [Y] (average confidence: [Z]%)
- Unsourced claims: [A]
- Research needed: [Yes/No]

DETAILED RESULTS:
[For each claim, show source and confidence]

If Deep Research needed:
#AI->H::RequestingFeedback: (Would you like to run Deep Research on the unsourced claims? Priority: [level])
#AI->H::Note: (Copy the research prompt above for Deep Research)

#H->AI::OnError: (If unable to determine source, always mark as unsourced rather than guessing)
""")

# USAGE EXAMPLES:
# ===============
# Automatic validation during response:
#   AI: "90% of developers use AI tools"
#   Recipe triggers, checks for source, assigns confidence
#
# Generate research prompt:
#   execute_recipe({
#       "content": "Claims about CRAFT adoption rates",
#       "validation_mode": "research_prompt"
#   })
#
# Strict validation mode:
#   execute_recipe({
#       "content": draft_response,
#       "validation_mode": "strict",
#       "claim_types": ["statistics", "quotes"]
#   })

# =========================================================
# END RECIPE-ID: RCP-001-001-007-SOURCE-VALID-v1.00a
# =========================================================
