Transparent disclosure of AI capabilities and constraints, ensuring users understand what the AI can and cannot do. This recipe promotes honest communication about knowledge boundaries, potential errors, and areas where human expertise is essential.

How To Start
- After hundreds (perhaps thousands) of hours of using these recipes, I rarely need any of the CORE Cookbook recipes aside from Recipe RCP-001-001-002-HANDOFF-SNAPSHOT, but when I do, they are essential to the functioning of CRAFT. The A.I. also reads all of these recipes at the start of each session; this happens quietly in the background. Even if you never call a recipe directly, the A.I. knows all of them, which helps it understand what CRAFT is and how it works. So even if you rarely use these recipes, they are still working for you and remain essential to the CRAFT Framework.
- This recipe operates in three modes:
  - AUTOMATIC: Detects limitations in user requests
  - LIST_LIMITATIONS: Shows current AI capabilities
  - UPDATE_LIMITATIONS: Generates an updated limitation set
- The recipe scans requests for these triggers:
  - FILE SYSTEM: save, read, delete files
  - NETWORK: fetch, API calls, web browsing
  - REAL-TIME DATA: current prices, weather, news
  - CODE EXECUTION: run, execute, test in a real environment
  - DATA PERSISTENCE: remember, store, save for later
  - IMAGE OPERATIONS: see, analyze, generate images
  - SYSTEM ACCESS: install, configure, access the OS
- Limitations fall into three categories:
  - TECHNICAL IMPOSSIBILITIES: Things the AI literally cannot do because of platform architecture (e.g., save files, access the internet).
  - POLICY RESTRICTIONS: Things the AI should not do because of safety policies (e.g., harmful content, bypassing security).
  - SIMULATION CAPABILITIES: Things the AI cannot actually do but can demonstrate conceptually (e.g., simulated code execution).
- When a limitation is detected automatically:
  - FOR TECHNICAL IMPOSSIBILITY: #AI->H::Caution: (Cannot [action] – technical limit) and #AI->H::Note: (This is a platform restriction)
  - FOR POLICY RESTRICTION: #AI->H::Caution: (Should not [action] – policy limit) and #AI->H::Note: (This protects safety and ethics)
  - FOR SIMULATION POSSIBLE: #AI->H::Note: (Cannot actually [action], can simulate) and #AI->H::Question: (Want me to demonstrate?)
- For each limitation, offer practical alternatives:
  - CANNOT SAVE FILE: "I can display content for you to copy and save."
  - CANNOT ACCESS URL: "Use the Go to [URL] command to provide content."
  - CANNOT EXECUTE CODE: "I can simulate execution or show expected output."
  - CANNOT ACCESS REAL-TIME DATA: "Provide current data, or I can use estimates."
  - NO ALTERNATIVE EXISTS: "This requires [capability] I do not have."
- When asked to list current limitations, respond with:
  - #AI->H::Status: (Current limitations for [AI model])
  - TECHNICAL IMPOSSIBILITIES: [numbered list]
  - POLICY RESTRICTIONS: [numbered list]
  - CAN SIMULATE: [numbered list]
  - Knowledge Cutoff: [date]
  - Token Limit: [limit]
- Before stating a limitation, always:
  1. Test whether it is truly a limit (do not assume)
  2. Be specific about what aspect is limited
  3. Distinguish cannot / should not / can simulate
  4. Check for recent capability updates
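The AUTOMATIC mode described above (scan the request for trigger keywords, then classify each hit into one of the three limitation categories) can be sketched in Python. This is a minimal illustration, not part of the recipe itself: the keyword lists come from the trigger table above, but the function name, data layout, and category mapping are assumptions, and the naive substring matching is only for demonstration.

```python
# Hypothetical sketch of the AUTOMATIC trigger scan. Keywords follow the
# recipe's trigger table; the mapping to limitation classes is an assumption.
TRIGGERS = {
    "FILE SYSTEM": ["save", "read", "delete"],
    "NETWORK": ["fetch", "api", "browse"],
    "REAL-TIME DATA": ["current price", "weather", "news"],
    "CODE EXECUTION": ["run", "execute", "test"],
    "DATA PERSISTENCE": ["remember", "store", "save for later"],
    "IMAGE OPERATIONS": ["see", "analyze image", "generate image"],
    "SYSTEM ACCESS": ["install", "configure", "access os"],
}

# Assumed mapping of trigger categories to the three limitation classes.
CLASSIFICATION = {
    "FILE SYSTEM": "TECHNICAL",
    "NETWORK": "TECHNICAL",
    "REAL-TIME DATA": "TECHNICAL",
    "CODE EXECUTION": "SIMULATION",
    "DATA PERSISTENCE": "TECHNICAL",
    "IMAGE OPERATIONS": "SIMULATION",
    "SYSTEM ACCESS": "TECHNICAL",
}

def scan_request(text: str) -> list[tuple[str, str]]:
    """Return (trigger_category, limitation_class) pairs found in a request.

    Uses naive case-insensitive substring matching for illustration only.
    """
    lowered = text.lower()
    found = []
    for category, keywords in TRIGGERS.items():
        if any(kw in lowered for kw in keywords):
            found.append((category, CLASSIFICATION[category]))
    return found
```

A request like "please save this file and run the tests" would flag both a FILE SYSTEM trigger (technical impossibility) and a CODE EXECUTION trigger (simulation possible), while a purely creative request flags nothing.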
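Once a limitation is classified, the disclosure step above pairs a category-specific `#AI->H::` message with a practical alternative when one exists. A rough sketch of that pairing, assuming the message templates from the recipe (the dictionaries and the `disclose` function are illustrative, not part of CRAFT):

```python
# Disclosure templates taken from the recipe's #AI->H:: notation; the
# structure and function below are illustrative assumptions.
MESSAGES = {
    "TECHNICAL": [
        "#AI->H::Caution: (Cannot {action} – technical limit)",
        "#AI->H::Note: (This is a platform restriction)",
    ],
    "POLICY": [
        "#AI->H::Caution: (Should not {action} – policy limit)",
        "#AI->H::Note: (This protects safety and ethics)",
    ],
    "SIMULATION": [
        "#AI->H::Note: (Cannot actually {action}, can simulate)",
        "#AI->H::Question: (Want me to demonstrate?)",
    ],
}

# A few of the recipe's suggested alternatives, keyed by action.
ALTERNATIVES = {
    "save files": "I can display content for you to copy and save.",
    "access URL": "Use the Go to [URL] command to provide content.",
    "execute code": "I can simulate execution or show expected output.",
}

def disclose(action: str, limitation_class: str) -> str:
    """Build the disclosure block for one detected limitation."""
    lines = [m.format(action=action) for m in MESSAGES[limitation_class]]
    alternative = ALTERNATIVES.get(action)
    if alternative:
        lines.append(f"#AI->H::Note: ({alternative})")
    return "\n".join(lines)
```

For example, `disclose("execute code", "SIMULATION")` yields the "cannot actually, can simulate" note, the demonstration offer, and the simulated-execution alternative, matching the flow described in the bullets above.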
How AI Reads This Recipe
When to Use This Recipe
Recipe FAQ
Q: What is the difference between "cannot" and "should not"?
A: Cannot means technically impossible due to platform architecture. Should not means a policy restriction for safety or ethical reasons.
Q: How often should I update limitation sets?
A: Update when you notice new capabilities or after major AI model updates. The sets can become outdated.
Q: Why does the AI offer simulations?
A: Simulations help demonstrate concepts even when actual execution is not possible. They show expected behavior.
Q: What if the AI is wrong about a limitation?
A: Test it. The verification protocol encourages testing before assuming. AI capabilities change over time.
