RCP-001-001-006-LIMITATION-ACK – AI Limitation Acknowledgment Protocol

Transparent disclosure of AI capabilities and constraints, ensuring users understand what the AI can and cannot do. This recipe promotes honest communication about knowledge boundaries, potential errors, and areas where human expertise is essential.

Recipe Name: RCP-001-001-006-LIMITATION-ACK – AI Limitation Acknowledgment Protocol
Recipe ID: RCP-001-001-006-LIMITATION-ACK
Automatically detects when AI is asked to do something
beyond its capabilities. Provides clear explanations of
limitations, distinguishes between cannot/should not/can
simulate, and offers practical alternatives. Maintains
updateable AI-specific limitation sets for accurate
detection across different AI platforms.
Multi-Recipe Combo Stage: Single Recipe
Recipe Category: CFT-FWK-COOKBK-CORE – CRAFT CORE Cookbook
Recipe Subcategory: Blogging with A.I., Brainstorming with A.I.
Recipe Difficulty: Easy
Recipe Tags: Foundational | Introduced in the POC

How To Start
 

A Note From The Author of CRAFT
  • After hundreds (perhaps thousands) of hours of using these recipes, I rarely need to call any of the CORE Cookbook recipes aside from Recipe RCP-001-001-002-HANDOFF-SNAPSHOT, but when I do, they are essential to the functioning of CRAFT. Also, the A.I. reads all of these recipes at the start of each session. This happens quietly in the background. Even if you never call a recipe explicitly, the A.I. knows all of them, and they help the A.I. understand what CRAFT is and how it works.
    Even if you rarely need to use these recipes, they are still working for you and are essential to the CRAFT Framework.
STEP 1: UNDERSTAND THE THREE MODES
  • This recipe operates in three modes:
    AUTOMATIC: Detects limitations in user requests
    LIST_LIMITATIONS: Shows current AI capabilities
    UPDATE_LIMITATIONS: Generates updated limitation set
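The three modes above can be sketched as a simple dispatcher. This is a hypothetical illustration; the function name and return strings are not part of the CRAFT specification.

```python
# Hypothetical dispatcher for the three operating modes.
# Names and messages are illustrative, not CRAFT-defined.
def run_limitation_recipe(request: str, ai_model: str,
                          check_mode: str = "automatic") -> str:
    if check_mode == "automatic":
        return f"Scanning request for limitation triggers: {request!r}"
    if check_mode == "list_limitations":
        return f"Listing current limitations for {ai_model}"
    if check_mode == "update_limitations":
        return f"Generating updated limitation set for {ai_model}"
    raise ValueError(f"Unknown check_mode: {check_mode}")
```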
STEP 2: RECOGNIZE LIMITATION TRIGGERS
  • The recipe scans requests for these triggers:
    FILE SYSTEM: save, read, delete files
    NETWORK: fetch, API calls, web browsing
    REAL-TIME DATA: current prices, weather, news
    CODE EXECUTION: run, execute, test in real env
    DATA PERSISTENCE: remember, store, save for later
    IMAGE OPERATIONS: see, analyze, generate images
    SYSTEM ACCESS: install, configure, access OS
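The trigger scan in Step 2 amounts to a keyword lookup per category. A minimal sketch, assuming simple substring matching (the keyword sets here are abbreviated examples, not the full trigger lists):

```python
# Abbreviated trigger keywords per category (illustrative only).
TRIGGERS = {
    "FILE_SYSTEM": {"save", "delete file"},
    "NETWORK": {"fetch", "api call", "browse"},
    "REAL_TIME_DATA": {"current price", "weather", "news"},
    "CODE_EXECUTION": {"run", "execute"},
    "DATA_PERSISTENCE": {"remember", "store"},
    "SYSTEM_ACCESS": {"install", "configure"},
}

def scan_triggers(request: str) -> list[str]:
    """Return the trigger categories whose keywords appear in the request."""
    text = request.lower()
    return [cat for cat, keys in TRIGGERS.items()
            if any(key in text for key in keys)]
```

Real detection would need word-boundary matching to avoid false positives; this sketch only shows the category-per-keyword-set shape.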
STEP 3: UNDERSTAND LIMITATION CATEGORIES
  • Limitations fall into three categories:
    TECHNICAL IMPOSSIBILITIES:
    Things the AI literally cannot do due to platform
    architecture (e.g., save files, access internet).
    POLICY RESTRICTIONS:
    Things the AI should not do due to safety policies
    (e.g., harmful content, bypassing security).
    SIMULATION CAPABILITIES:
    Things the AI cannot actually do but can demonstrate
    conceptually (e.g., simulated code execution).
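The three categories map naturally onto an enumeration. A hypothetical sketch; the enum and the example category assignments are illustrative, mirroring the examples in Step 3:

```python
from enum import Enum

class LimitationKind(Enum):
    CANNOT = "technical impossibility"
    SHOULD_NOT = "policy restriction"
    CAN_SIMULATE = "simulation capability"

# Illustrative mapping from trigger categories to limitation kinds,
# following the Step 3 examples (not an authoritative table).
CATEGORY_KIND = {
    "NETWORK": LimitationKind.CANNOT,        # cannot access internet
    "SYSTEM_ACCESS": LimitationKind.CANNOT,  # cannot touch the OS
    "FILE_SYSTEM": LimitationKind.CAN_SIMULATE,
    "CODE_EXECUTION": LimitationKind.CAN_SIMULATE,
}
```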
STEP 4: AUTOMATIC DETECTION RESPONSES
  • When a limitation is detected automatically:
    FOR TECHNICAL IMPOSSIBILITY:
    #AI->H::Caution: (Cannot [action] – technical limit)
    #AI->H::Note: (This is a platform restriction)
    FOR POLICY RESTRICTION:
    #AI->H::Caution: (Should not [action] – policy limit)
    #AI->H::Note: (This protects safety and ethics)
    FOR SIMULATION POSSIBLE:
    #AI->H::Note: (Cannot actually [action], can simulate)
    #AI->H::Question: (Want me to demonstrate?)
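The response templates above can be generated mechanically once the category is known. A minimal sketch, assuming the CRAFT comment strings shown in Step 4 (the function name is hypothetical):

```python
def limitation_response(kind: str, action: str) -> list[str]:
    """Format the Step 4 responses using CRAFT comment syntax."""
    if kind == "technical":
        return [f"#AI->H::Caution: (Cannot {action} – technical limit)",
                "#AI->H::Note: (This is a platform restriction)"]
    if kind == "policy":
        return [f"#AI->H::Caution: (Should not {action} – policy limit)",
                "#AI->H::Note: (This protects safety and ethics)"]
    # Default: simulation is possible.
    return [f"#AI->H::Note: (Cannot actually {action}, can simulate)",
            "#AI->H::Question: (Want me to demonstrate?)"]
```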
STEP 5: PROVIDE ALTERNATIVES
  • For each limitation, offer practical alternatives:
    CANNOT SAVE FILE:
    "I can display content for you to copy and save."
    CANNOT ACCESS URL:
    "Use Go to [URL] command to provide content."
    CANNOT EXECUTE CODE:
    "I can simulate execution or show expected output."
    CANNOT ACCESS REAL-TIME:
    "Provide current data, or I can use estimates."
    NO ALTERNATIVE EXISTS:
    "This requires [capability] I do not have."
STEP 6: LIST LIMITATIONS MODE
  • When asked to list current limitations:
    #AI->H::Status: (Current limitations for [AI model])
    TECHNICAL IMPOSSIBILITIES:
    [numbered list]
    POLICY RESTRICTIONS:
    [numbered list]
    CAN SIMULATE:
    [numbered list]
    Knowledge Cutoff: [date]
    Token Limit: [limit]
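The list-limitations report above can be rendered from a limitation set like the `CLAUDE_LIMITATIONS` structure in the recipe code. A minimal sketch (the function name is an assumption):

```python
def format_limitation_report(ai_model: str, limits: dict) -> str:
    """Render a limitation set as the Step 6 report."""
    lines = [f"#AI->H::Status: (Current limitations for {ai_model})"]
    sections = [("TECHNICAL IMPOSSIBILITIES", "technical_impossibilities"),
                ("POLICY RESTRICTIONS", "policy_restrictions"),
                ("CAN SIMULATE", "simulation_capabilities")]
    for heading, key in sections:
        lines.append(heading + ":")
        lines += [f"{i}. {item}"
                  for i, item in enumerate(limits.get(key, []), 1)]
    lines.append(f"Knowledge Cutoff: {limits.get('knowledge_cutoff', 'unknown')}")
    lines.append(f"Token Limit: {limits.get('token_limit', 'unknown')}")
    return "\n".join(lines)
```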
STEP 7: VERIFICATION PROTOCOL
  • Before stating a limitation, always:
    1. Test if truly limited (do not assume)
    2. Be specific about what aspect is limited
    3. Distinguish cannot/should not/can simulate
    4. Check for recent capability updates

How AI Reads This Recipe

When this recipe executes, the AI performs these operations:
1. TRIGGER SCAN: Checks user request for keywords that
indicate potentially limited operations.
2. LIMITATION LOOKUP: References the AI-specific limitation
set to categorize the detected limitation.
3. CATEGORY ASSIGNMENT: Determines if the limitation is
technical, policy, or simulation-possible.
4. ALTERNATIVE SEARCH: Identifies practical workarounds
that achieve the user's underlying goal.
5. RESPONSE FORMATTING: Provides clear explanation using
appropriate CRAFT comment syntax.
6. VERIFICATION: Tests assumptions before stating limits
and distinguishes between cannot/should not/can simulate.
The recipe maintains updateable limitation sets for Claude,
ChatGPT, and Gemini to ensure accurate detection.
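The six operations above chain into a single pass over the request. A compact end-to-end sketch, with a deliberately tiny trigger table (the keywords and the all-simulatable assumption are illustrative):

```python
def acknowledge_limitations(request: str) -> list[str]:
    """Minimal end-to-end sketch of the six operations above."""
    # 1. TRIGGER SCAN (two example keywords only)
    triggers = {"save": "save files", "execute": "execute code"}
    hits = [need for word, need in triggers.items()
            if word in request.lower()]
    # 2-3. LOOKUP + CATEGORY ASSIGNMENT (here: treat both as simulatable)
    # 4-5. ALTERNATIVE SEARCH + RESPONSE FORMATTING in CRAFT syntax
    return [f"#AI->H::Note: (Cannot actually {need}, can simulate)"
            for need in hits]
```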

When to Use This Recipe

This recipe runs automatically when the AI detects requests
that exceed its capabilities. Use list_limitations mode when
you want to understand what your AI can and cannot do. Use
update_limitations mode periodically to refresh the AI's
understanding of its own capabilities.
"""
WPRM_FIELD_HOW_AI_READS = """
When this recipe executes, the AI performs these operations:
1. TRIGGER SCAN: Checks user request for keywords that
indicate potentially limited operations.
2. LIMITATION LOOKUP: References the AI-specific limitation
set to categorize the detected limitation.
3. CATEGORY ASSIGNMENT: Determines if the limitation is
technical, policy, or simulation-possible.
4. ALTERNATIVE SEARCH: Identifies practical workarounds
that achieve the user's underlying goal.
5. RESPONSE FORMATTING: Provides clear explanation using
appropriate CRAFT comment syntax.
6. VERIFICATION: Tests assumptions before stating limits
and distinguishes between cannot/should not/can simulate.
The recipe maintains updateable limitation sets for Claude,
ChatGPT, and Gemini to ensure accurate detection.

Recipe FAQ

Q: Why is it important for AI to acknowledge limitations?
A: Acknowledging limitations prevents dangerous overreliance on AI, sets realistic expectations, and helps users know when to seek human expertise. It’s especially critical for medical, legal, financial, or safety-related queries where mistakes could cause harm.
Q: What types of limitations does this recipe address?
A: The recipe covers knowledge cutoff dates (training data limits), domain expertise gaps (specialized fields requiring certification), task impossibilities (things AI cannot do like real-time data access), reasoning limitations (complex logic or creative leaps), and potential biases or errors in responses.
Q: How does this differ from the Confidence Calibration recipe?
A: While Confidence Calibration shows certainty levels for provided information, Limitation Acknowledgment identifies what the AI fundamentally cannot or should not attempt. It’s about capability boundaries rather than confidence in specific answers.
Q: Will constant limitation warnings make the AI less useful?
A: No, the recipe uses smart disclosure – only mentioning limitations when relevant. For routine tasks within AI capabilities, no warnings appear. Limitations are acknowledged specifically when users might assume capabilities that don’t exist.
Q: What should I do when the AI acknowledges a limitation?
A: When the AI identifies a limitation, it will suggest alternatives: consulting human experts, using specialized tools, verifying with authoritative sources, or reframing the request within AI capabilities. Use these suggestions to find the right resource for your needs.
Q: What is the difference between cannot and should not?
A: Cannot means technically impossible due to platform
architecture. Should not means policy restriction for
safety or ethical reasons.
Q: How often should I update limitation sets?
A: Update when you notice new capabilities or after major
AI model updates. The sets can become outdated.
Q: Why does the AI offer simulations?
A: Simulations help demonstrate concepts even when actual
execution is not possible. They show expected behavior.
Q: What if the AI is wrong about a limitation?
A: Test it. The verification protocol encourages testing
before assuming. AI capabilities change over time.

Actual Recipe Code

(Copy This Plaintext Code To Use)
# ===========================================================
# RECIPE: RCP-001-001-006-LIMITATION-ACK-v2.00a
# AI Limitation Detection and Alternative Solutions
# ===========================================================
LIMITATION_ACKNOWLEDGMENT = Recipe(
    recipe_id="RCP-001-001-006-LIMITATION-ACK-v2.00a",
    title="AI Limitation Detection and Alternatives",
    description="Detects limits and provides alternatives",
    category="CAT-001-CORE",
    difficulty="medium",
    version="2.00a",
    parameters={
        "request": {
            "type": "string",
            "required": True,
            "description": "Request to analyze"
        },
        "ai_model": {
            "type": "string",
            "required": True,
            "options": ["Claude", "ChatGPT", "Gemini", "Other"],
            "description": "Current AI model"
        },
        "check_mode": {
            "type": "string",
            "required": False,
            "default": "automatic",
            "options": ["automatic", "list_limitations", "update_limitations"],
            "description": "Operation mode"
        }
    },
    prompt_template="""
#H->AI::Directive: (Analyze for limitations)
#H->AI::Context: (AI: {ai_model}, Mode: {check_mode})

# -----------------------------------------------------------
# STEP 0: POLICY PRE-CHECK
# -----------------------------------------------------------
Scan for sensitive categories:
- Platform capabilities/limitations
- Security/vulnerability research
- Personal data handling
- Political topics

IF potential_conflict_detected:
    #AI->H::PolicyCaution: (Topic may trigger policies)
    #AI->H::RecommendedChange: (Focus on [safe aspect])

# -----------------------------------------------------------
# AI-SPECIFIC LIMITATION SETS
# -----------------------------------------------------------
CLAUDE_LIMITATIONS = {
    "technical_impossibilities": [
        "Cannot save files or persist data",
        "Cannot execute code in real environment",
        "Cannot access real-time data",
        "Cannot make actual API calls",
        "Cannot access user file system"
    ],
    "policy_restrictions": [
        "Should not provide medical/legal advice",
        "Should not assist with harmful activities",
        "Should not generate copyrighted content"
    ],
    "simulation_capabilities": [
        "Can simulate code execution",
        "Can simulate API responses",
        "Can simulate file operations"
    ],
    "knowledge_cutoff": "Training data dependent",
    "token_limit": "~100,000 tokens"
}

# -----------------------------------------------------------
# AUTOMATIC MODE
# -----------------------------------------------------------
IF {check_mode} == "automatic":

    STEP 1: SCAN FOR LIMITATION TRIGGERS
    Check if request involves:
    [ ] File system operations
    [ ] Network requests
    [ ] Real-time data
    [ ] Code execution
    [ ] Data persistence
    [ ] Image operations
    [ ] System access

    STEP 2: CATEGORIZE LIMITATION TYPE
    IF technical_impossibility:
        #AI->H::Caution: (Cannot [action] – technical)
        #AI->H::Note: (Platform restriction)
    IF policy_restriction:
        #AI->H::Caution: (Should not [action] – policy)
        #AI->H::Note: (Protects safety and ethics)
    IF simulation_possible:
        #AI->H::Note: (Cannot actually do, can simulate)
        #AI->H::Question: (Want a demonstration?)

    STEP 3: PROVIDE ALTERNATIVES
    Cannot save file:
        #AI->H::RecommendedChange: (Display to copy)
    Cannot access URL:
        #AI->H::RecommendedChange: (Use Go to [URL])
    Cannot execute code:
        #AI->H::RecommendedChange: (Simulate output)
    Cannot access real-time:
        #AI->H::RecommendedChange: (Provide data)
    No alternative:
        #AI->H::Note: (Requires [capability] unavailable)

# -----------------------------------------------------------
# LIST LIMITATIONS MODE
# -----------------------------------------------------------
IF {check_mode} == "list_limitations":
    #AI->H::Status: (Limitations for {ai_model})
    TECHNICAL IMPOSSIBILITIES:
    [formatted numbered list]
    POLICY RESTRICTIONS:
    [formatted numbered list]
    CAN SIMULATE:
    [formatted numbered list]
    Knowledge Cutoff: [date]
    Token Limit: [limit]

# -----------------------------------------------------------
# UPDATE LIMITATIONS MODE
# -----------------------------------------------------------
IF {check_mode} == "update_limitations":
    #AI->H::Status: (Generating updated limitation set)
    Test capabilities and generate:
    {AI_MODEL}_LIMITATIONS = {
        "technical_impossibilities": [test each],
        "policy_restrictions": [check current],
        "simulation_capabilities": [verify each],
        "knowledge_cutoff": "[detected]",
        "token_limit": "[detected]"
    }
    #AI->H::Note: (Copy to update limitation set)

# -----------------------------------------------------------
# VERIFICATION PROTOCOL
# -----------------------------------------------------------
When stating a limitation:
1. Test if truly limited (do not assume)
2. Be specific about what aspect
3. Distinguish cannot/should not/can simulate
4. Check for capability updates

#H->AI::OnError: (If unsure, test first then report)
"""
)
# ===========================================================
# END RECIPE: RCP-001-001-006-LIMITATION-ACK-v2.00a
# ===========================================================
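For readers who want to experiment with the template mechanics outside the CRAFT framework, the sketch below shows how the `{ai_model}` and `{check_mode}` placeholders substitute at run time. The `Recipe` class here is a minimal stand-in of my own, not the framework's actual class; only the placeholder-substitution idea is taken from the recipe above.

```python
from dataclasses import dataclass, field

@dataclass
class Recipe:
    """Minimal stand-in for the CRAFT Recipe class (assumed structure)."""
    recipe_id: str
    title: str
    prompt_template: str
    parameters: dict = field(default_factory=dict)

    def render(self, **values) -> str:
        # Substitute {param} placeholders in the template.
        out = self.prompt_template
        for name, value in values.items():
            out = out.replace("{" + name + "}", str(value))
        return out

demo = Recipe(
    recipe_id="RCP-001-001-006-LIMITATION-ACK-v2.00a",
    title="AI Limitation Detection and Alternatives",
    prompt_template="#H->AI::Context: (AI: {ai_model}, Mode: {check_mode})",
)
```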
