---
name: improve-prompt
description: Diagnose and improve an LLM prompt — fix ambiguity, add constraints, specify output format, add examples and edge case handling.
disable-model-invocation: true
argument-hint: "[prompt-text-or-file]"
context: fork
agent: prompt-engineer
---

# Improve Prompt

Diagnose and improve the provided prompt.

## Input

$ARGUMENTS

## Steps

1. **Diagnose the current prompt:**
   - Ambiguity: vague instructions, unclear scope
   - Missing output format: no schema or structure specified
   - Weak constraints: "try to", "avoid if possible"
   - No examples: complex tasks without few-shot
   - Missing edge cases: no error/fallback handling
   - No safety rules: missing refusal/deferral instructions
   - Token bloat: redundant or filler text

2. **Improve the prompt following these principles:**
   - Constraints before instructions (what NOT to do first)
   - Explicit output schema with required fields and types
   - 2-3 representative examples for complex tasks
   - Edge case handling (empty input, malicious input, ambiguous request)
   - Refusal rules for user-facing prompts
   - Remove every sentence that doesn't change model behavior

3. **Verify via context7** — check target model capabilities and best practices

4. **Output:**

   ```markdown
   ## Diagnosis
   - [Bullet list of issues found in the original prompt]

   ## Improved Prompt
   [Clean, copy-ready prompt with clear sections]

   ## Changes Made
   - [What changed and why — 3-5 items max]

   ## Usage Notes
   - [Model, temperature, any caveats — only if non-obvious]
   ```
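
## Example

A minimal before/after sketch of the principles in step 2 (hypothetical prompt text, for illustration only — not from any real project):

```markdown
**Before:**
Summarize this article. Try to keep it short.

**After:**
Do not add opinions or facts not present in the article.
Summarize the article below in exactly 3 bullet points, max 20 words each.
If the input is empty or is not an article, reply only: "No article provided."
```

Note how the vague hedge ("try to keep it short") becomes a hard constraint, the output format is explicit, and the empty-input edge case has a defined fallback.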