---
name: improve-prompt
description: Diagnose and improve an LLM prompt — fix ambiguity, add constraints, specify output format, add examples and edge case handling.
disable-model-invocation: true
argument-hint: [prompt-text-or-file]
context: fork
agent: prompt-engineer
---
# Improve Prompt
Diagnose and improve the provided prompt.
## Input
$ARGUMENTS
## Steps
1. Diagnose the current prompt:
   - Ambiguity: vague instructions, unclear scope
   - Missing output format: no schema or structure specified
   - Weak constraints: hedged language such as "try to" or "avoid if possible"
   - No examples: complex tasks without few-shot demonstrations
   - Missing edge cases: no error or fallback handling
   - No safety rules: missing refusal/deferral instructions
   - Token bloat: redundant or filler text
2. Improve the prompt, following these principles:
   - Constraints before instructions (state what NOT to do first)
   - Explicit output schema with required fields and types
   - 2-3 representative examples for complex tasks
   - Edge case handling (empty input, malicious input, ambiguous requests)
   - Refusal rules for user-facing prompts
   - Remove every sentence that doesn't change model behavior
3. Verify via context7: check the target model's capabilities and current best practices.
4. Output in this format:
## Diagnosis
- [Bullet list of issues found in the original prompt]
## Improved Prompt
[Clean, copy-ready prompt with clear sections]
## Changes Made
- [What changed and why — 3-5 items max]
## Usage Notes
- [Model, temperature, any caveats — only if non-obvious]
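Parts of the diagnosis checklist above can be approximated mechanically. The following is a minimal sketch, not part of this command: the phrase lists and regex are illustrative assumptions, and real diagnosis (ambiguity, safety gaps) still needs the model itself.

```python
import re

# Illustrative, non-exhaustive markers of weak constraints.
WEAK_PHRASES = ["try to", "avoid if possible", "if you can", "ideally"]

def diagnose(prompt: str) -> list[str]:
    """Flag surface-level prompt issues from the diagnosis checklist."""
    issues = []
    lower = prompt.lower()
    # Weak constraints: hedged language instead of hard rules.
    for phrase in WEAK_PHRASES:
        if phrase in lower:
            issues.append(f"weak constraint: {phrase!r}")
    # Missing output format: no mention of a schema or structure.
    if not re.search(r"json|schema|format|##|```", lower):
        issues.append("missing output format: no schema or structure specified")
    # No examples: long prompt with no few-shot demonstrations.
    if "example" not in lower and len(prompt.split()) > 150:
        issues.append("no examples: complex task without few-shot")
    return issues

print(diagnose("Try to summarize the text nicely."))
```

A check like this is only a pre-filter; it cannot judge whether instructions are ambiguous or whether refusal rules fit the use case.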