- Prompts with explicit output/response contract keywords: 61 (1.4%)
- Prompts whose first line starts with an imperative verb: 883 (20.0%)
- Median prompt length: 153 words
- Very short prompts (<80 words): 16
- Very long prompts (>500 words): 0
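For reproducibility, here is a minimal sketch of how stats like these could be computed over a loaded corpus. The `CONTRACT_KEYWORDS` pattern and `IMPERATIVE_VERBS` set are illustrative assumptions, not the exact heuristics behind the numbers above.

```python
import re
import statistics

# Illustrative heuristics: the keyword pattern and verb set here are
# assumptions, not the exact rules used to produce the reported stats.
CONTRACT_KEYWORDS = re.compile(
    r"\b(output format|respond with|return (?:a|an|only)|expected output)\b", re.I
)
IMPERATIVE_VERBS = {"check", "ensure", "verify", "use", "avoid", "prefer", "add", "remove"}

def corpus_stats(prompts: list[str]) -> dict:
    lengths = [len(p.split()) for p in prompts]
    imperative = 0
    for p in prompts:
        first_line = p.strip().splitlines()[0] if p.strip() else ""
        words = first_line.split()
        if words and words[0].lower() in IMPERATIVE_VERBS:
            imperative += 1
    return {
        "contract_keywords": sum(bool(CONTRACT_KEYWORDS.search(p)) for p in prompts),
        "imperative_first_line": imperative,
        "median_words": statistics.median(lengths),
        "very_short": sum(n < 80 for n in lengths),
        "very_long": sum(n > 500 for n in lengths),
    }
```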
## Why this matters for agent skills
Agent skills work best when prompts include clear trigger conditions, deterministic checklist-style actions, and an expected response format. The current corpus is strong on examples, but weaker on explicit execution contracts (what an agent should output) and standardized structure.
## Recommended improvements
### Standardize a skill wrapper during export
When converting to SKILL.md, inject standard sections such as *When to apply*, *Review checklist*, and *Expected output*. Keep the original reviewer text under a *Source guidance* section to preserve nuance.
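A minimal sketch of what that export wrapper could look like; `wrap_skill` and its signature are hypothetical, chosen to match the section names proposed above.

```python
def wrap_skill(name: str, source_text: str, when: str,
               checklist: list[str], expected: str) -> str:
    """Render a SKILL.md body with the standardized wrapper sections.
    Hypothetical helper; the section layout follows the proposal above."""
    items = "\n".join(f"- [ ] {item}" for item in checklist)
    return (
        f"# {name}\n\n"
        f"## When to apply\n{when}\n\n"
        f"## Review checklist\n{items}\n\n"
        f"## Expected output\n{expected}\n\n"
        f"## Source guidance\n{source_text}\n"
    )
```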
### Introduce an optional strict-mode linter
Fail the export if a prompt lacks imperative guidance, a bullet or checklist structure, or an output contract.
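A sketch of what those checks could look like in practice; `lint_strict` and its regexes are illustrative heuristics, not a finalized spec.

```python
import re

IMPERATIVE_OPENERS = {"check", "ensure", "verify", "use", "avoid", "prefer", "reject"}

def lint_strict(prompt: str) -> list[str]:
    """Return failure reasons; an empty list means the prompt passes.
    Checks are illustrative heuristics, not a finalized spec."""
    failures = []
    words = prompt.strip().split()
    if not words or words[0].lower() not in IMPERATIVE_OPENERS:
        failures.append("missing imperative guidance")
    if not re.search(r"^\s*(?:[-*]|\d+\.)\s", prompt, re.M):
        failures.append("missing bullets/checklist")
    if not re.search(r"\b(?:output|respond|return|format)\b", prompt, re.I):
        failures.append("missing output contract")
    return failures
```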
### Segment by prompt length
Split long prompts into concise must-follow rules plus optional rationale.
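One possible heuristic, assuming that sentences opening with an imperative verb carry the must-follow rules; `segment_prompt` and the `RULE_OPENERS` set are illustrative, not a prescribed algorithm.

```python
import re

RULE_OPENERS = {"check", "ensure", "verify", "use", "avoid",
                "prefer", "never", "always", "do", "don't"}

def segment_prompt(prompt: str) -> tuple[list[str], list[str]]:
    """Split a prompt's sentences into must-follow rules and optional
    rationale, using an imperative-opening heuristic (an assumption)."""
    rules, rationale = [], []
    for sentence in re.split(r"(?<=[.!?])\s+", prompt.strip()):
        words = sentence.split()
        if not words:
            continue
        if words[0].lower().strip(",:") in RULE_OPENERS:
            rules.append(sentence)
        else:
            rationale.append(sentence)
    return rules, rationale
```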
### Leverage JSON datasets for synthesis
Mine recurring comment patterns from the JSON datasets and auto-generate candidate checklist statements to backfill weak prompts.
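A sketch of the mining step, assuming the dataset is a JSON list of records each carrying a `comments` list with `body` fields; that schema and the `candidate_checklist` helper are assumptions.

```python
import json
from collections import Counter

def candidate_checklist(path: str, min_count: int = 5) -> list[str]:
    """Promote comment lines that recur across reviews into candidate
    checklist statements. Schema and threshold are assumptions."""
    counts: Counter[str] = Counter()
    with open(path) as f:
        for record in json.load(f):
            for comment in record.get("comments", []):
                text = comment.get("body", "").strip().lower()
                if text:
                    counts[text] += 1
    return [text for text, n in counts.most_common() if n >= min_count]
```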
### Add confidence metadata
Include metadata like `evidence_count` and `source_repos` in generated skills to help agents prioritize stronger signals.
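For example, the metadata could travel as YAML frontmatter on the generated skill. The `with_metadata` helper and the frontmatter placement are assumptions; only the field names come from the proposal above.

```python
def with_metadata(skill_md: str, evidence_count: int,
                  source_repos: list[str]) -> str:
    """Prepend YAML frontmatter carrying confidence signals.
    Assumes source_repos is non-empty; placement is an assumption."""
    repos = "\n".join(f"  - {r}" for r in source_repos)
    return (
        "---\n"
        f"evidence_count: {evidence_count}\n"
        f"source_repos:\n{repos}\n"
        "---\n\n"
        f"{skill_md}"
    )
```

Agents consuming the skill can then weight guidance backed by many review comments across several repositories above guidance seen only once.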