feat: improve skill scores for 5 SDD and context skills #47

rohan-tessl wants to merge 1 commit into masseater:master from
Conversation
Hey @masseater 👋

I ran your skills through `tessl skill review` at work and found some targeted improvements. Here's the full before/after:

| Skill | Before | After | Change |
|-------|--------|-------|--------|
| refactor-context-file | 16% | 87% | +71% |
| next | 16% | 84% | +68% |
| quality-check | 16% | 81% | +65% |
| sdd-utils | 16% | 80% | +64% |
| sync | 16% | 78% | +62% |

This PR is intentionally scoped to 5 skills to keep it reviewable — more skills can be improved in follow-ups or via automated review on future PRs.

<details>
<summary>Changes summary</summary>

**All 5 skills** had a common blocker: the `name` field used colon-prefixed format (e.g. `sdd:next`) which fails tessl's kebab-case validation, preventing the LLM judge from running at all. This single fix unlocked scoring for every skill.

Additional improvements per skill:

- **refactor-context-file**: Rewrote description with "Use when..." trigger clause, restructured workflow with numbered steps, added output destination table, improved third-person voice
- **next**: Added descriptive "Use when..." description, restructured state detection table with clearer column headers, improved section naming
- **quality-check**: Expanded with concrete sub-agent invocation examples, added output report template, linked to agent definition files, added validation step
- **sdd-utils**: Added "Use when..." description expanding the SDD acronym context, renamed table columns for clarity, added skill heading
- **sync**: Restructured into clear numbered workflow with status criteria table, added argument inference priority chain, added TDD step criteria, improved important notes section

</details>

Honest disclosure — I work at @tesslio where we build tooling around skills like these. Not a pitch - just saw room for improvement and wanted to contribute.

Want to self-improve your skills? Just point your agent (Claude Code, Codex, etc.) at [this Tessl guide](https://docs.tessl.io/evaluate/optimize-a-skill-using-best-practices) and ask it to optimize your skill. Ping me - [@rohan-tessl](https://github.com/rohan-tessl) - if you hit any snags.

Thanks in advance 🙏
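To illustrate the kebab-case blocker described above, here is a minimal sketch of the kind of name check involved. The regex is an assumption chosen for illustration, not tessl's actual validation rule:

```python
import re

# Assumed kebab-case rule: lowercase alphanumeric words separated by single hyphens.
KEBAB_CASE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_skill_name(name: str) -> bool:
    """Return True if a skill name matches the assumed kebab-case pattern."""
    return bool(KEBAB_CASE.match(name))

# Colon-prefixed names like "sdd:next" fail; plain kebab-case names pass.
for name in ["sdd:next", "context:refactor-context-file", "next", "refactor-context-file"]:
    print(f"{name}: {'ok' if is_valid_skill_name(name) else 'invalid'}")
```

Under this pattern the colon makes the old names invalid, which matches the described symptom: validation fails before any scoring can run.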
📝 Walkthrough

Five skill documentation files are being updated: skill names have plugin-style prefixes removed.

Sequence Diagram(s): Skipped — changes are documentation updates and metadata/naming refactors without new control flow or significant feature additions.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
plugins/sdd/skills/next/SKILL.md (1)
Lines 31-43: ⚠️ Potential issue | 🟡 Minor

Example output language is inconsistent with the rest of the file. This section is still in Japanese while the surrounding documentation is English; consider translating it to keep the skill output spec uniform.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/sdd/skills/next/SKILL.md` around lines 31 - 43, The "Example Output" block in SKILL.md is in Japanese while the rest of the file is English; update that example to English to match the surrounding documentation by replacing the Japanese text in the fenced code block under "## Example Output" with an English translation (e.g., "## Next step: auth-feature", "Current state: Phase 2 implementation (task 2.3 completed)", "**Recommended command:** /sdd:phase implement auth-feature 2.4", "**Reason:** Phase 2 task 2.4 \"Authentication token validation\" is not yet complete"); ensure the formatting and phrasing follow the existing English style used elsewhere in the file.
🧹 Nitpick comments (2)
plugins/sdd/skills/next/SKILL.md (1)
Line 4: Use a single placeholder name across arguments and command templates.

`[spec]` and `{taskname}` appear to refer to the same value; unify to one label for clarity. Also applies to: lines 14, 23-29
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/sdd/skills/next/SKILL.md` at line 4: The placeholders in SKILL.md are inconsistent: the argument-hint uses "[spec]" while command templates use "{taskname}"; pick one placeholder name and apply it consistently across the file (e.g., rename all occurrences of "{taskname}" to "{spec}" or vice versa), updating the argument-hint line and every command/template instance mentioned (including the blocks at lines referenced around 14 and 23-29) so the same identifier is used everywhere for the same value.

plugins/sdd/skills/sync/SKILL.md (1)

Line 4: Unify `spec` vs `taskname` terminology.

The file mixes `[spec]` and `[taskname]` for the same concept. Standardizing on one term will reduce prompt ambiguity. Also applies to: lines 13, 21, 35-36
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/sdd/skills/sync/SKILL.md` at line 4, The file mixes the two terms "[spec]" and "[taskname]"—standardize on one term (use "[taskname]") by replacing the argument-hint line argument-hint: "[spec] [phaseNumber]" and all other occurrences of "[spec]" with "[taskname]" and updating any descriptive text or examples that reference "spec" so they consistently use "taskname" (check the other occurrences of the literal tokens "[spec]" and "[taskname]" in the document and make them uniform).
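A quick sketch of how the placeholder inconsistency flagged above could be audited mechanically. The regex and the sample document are illustrative assumptions, not part of the skill files:

```python
import re

def placeholder_tokens(text: str) -> set[str]:
    """Collect bracketed and braced placeholder tokens, e.g. [spec] or {taskname}."""
    return set(re.findall(r"\[[a-zA-Z]+\]|\{[a-zA-Z]+\}", text))

# Hypothetical excerpt mixing both placeholder styles, mirroring the finding.
doc = """
argument-hint: "[spec] [phaseNumber]"
Run: /sdd:phase implement {taskname} 2.4
"""

tokens = placeholder_tokens(doc)
print(sorted(tokens))  # mixed bracket/brace styles indicate the inconsistency to fix
```

Seeing both `[spec]` and `{taskname}` in the output is the signal that one label should be chosen and applied everywhere.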
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 02280c69-e93f-4732-bbf1-ef2bf2f69112
📒 Files selected for processing (5)

- plugins/context/skills/refactor-context-file/SKILL.md
- plugins/sdd/skills/next/SKILL.md
- plugins/sdd/skills/quality-check/SKILL.md
- plugins/sdd/skills/sdd/utils/SKILL.md
- plugins/sdd/skills/sync/SKILL.md
```diff
 ---
-name: context:refactor-context-file
 description: コンテキストファイル(CLAUDE.md/AGENTS.md)をClaude Codeの各機能に分解・リファクタリングする
+name: refactor-context-file
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify stale references to the old skill name across the repo.
rg -n --no-heading -C2 'context:refactor-context-file'
```

Repository: masseater/claude-code-plugin
Length of output: 4096
Update all dependent skill/docs references after renaming this skill.

The rename in this SKILL.md file is correct, but stale references to `context:refactor-context-file` remain in active skill definitions and context documentation. These will break chained skill calls:

- `plugins/context/AGENTS.md` line 16 (skill table)
- `plugins/context/skills/refactor-context-file/SKILL.md` line 20 (usage instructions)
- `plugins/context/skills/refactor-project-context/SKILL.md` lines 4 and 72 (allowed-tools and workflow steps)

Update all references to align with the new name.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/context/skills/refactor-context-file/SKILL.md` at line 2, Search the
repo for any uses of the old skill identifier "context:refactor-context-file"
and update them to the new skill id that matches SKILL.md's "name:
refactor-context-file" (i.e., ensure all chained skill calls, allowed-tools
entries and usage examples reference the exact same skill id string as declared
in SKILL.md); specifically update the three documented occurrences in the active
skill definitions and docs so the identifier in plugins/context/AGENTS.md,
plugins/context/skills/refactor-context-file/SKILL.md usage section, and
plugins/context/skills/refactor-project-context/SKILL.md allowed-tools and
workflow steps all match the SKILL.md name.
If `$ARGUMENTS` is empty, output an error and stop:

```
エラー: ファイルパスを引数に指定してください
使用方法: /context:refactor-context-file [file-path]
```
Update the command example to the new skill name.
The usage text still says /context:refactor-context-file, but the skill was renamed to refactor-context-file. This will mislead users and likely fail invocation.
Suggested fix:

```diff
-使用方法: /context:refactor-context-file [file-path]
+Usage: /refactor-context-file [file-path]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/context/skills/refactor-context-file/SKILL.md` around lines 16 - 21,
The usage example in SKILL.md still shows the old invocation
`/context:refactor-context-file`; update the usage text to match the new skill
name `refactor-context-file` (e.g., replace `/context:refactor-context-file
[file-path]` with the correct command for the renamed skill
`refactor-context-file [file-path]`) so the error message and usage string align
with the current skill name.
```
エラー: ファイルパスを引数に指定してください
使用方法: /context:refactor-context-file [file-path]
```
Add a language identifier to the fenced code block.
This block triggers markdownlint MD040 (fenced-code-language).
Suggested fix:

````diff
-```
+```text
 エラー: ファイルパスを引数に指定してください
 使用方法: /context:refactor-context-file [file-path]
````
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/context/skills/refactor-context-file/SKILL.md` around lines 18 - 21,
The fenced code block in SKILL.md is missing a language identifier, triggering
markdownlint MD040; update the block surrounding the two-line message (the
triple-backtick fence that contains "エラー: ファイルパスを引数に指定してください" and "使用方法:
/context:refactor-context-file [file-path]") to include a language tag such as
"text" (i.e., change ``` to ```text) so the markdown linter recognizes the code
block language.
```
| **Complete** | Implementation files exist, tests exist (TDD Red/Green confirmed), git log has relevant commits, or already marked complete |
| **In progress** | Partial files exist, marked in-progress, or conversation history shows work started |
| **Not started** | No implementation files, no conversation mentions |

`specs/`ディレクトリ内のタスクをリスト表示し、ユーザーに選択を求めます。

### 4. Determine TDD Step

## 実行手順

- **Red**: test file exists
- **Green**: implementation exists and tests pass
- **Refactor**: code meets review standards (no `any` types, proper type definitions)
```
Contradictory test-status criteria in sync logic.
The workflow says the command does not run tests, but completion/TDD criteria require “tests pass” / “Red/Green confirmed.” Please switch these to evidence-based signals (e.g., existing test files, CI/status notes, conversation evidence) rather than execution outcomes.
Suggested doc fix:

```diff
-| **Complete** | Implementation files exist, tests exist (TDD Red/Green confirmed), git log has relevant commits, or already marked complete |
+| **Complete** | Implementation files exist, test artifacts/evidence exist, relevant commits exist, or task is already marked complete |
 ...
-- **Green**: implementation exists and tests pass
+- **Green**: implementation exists and available evidence indicates tests passed (e.g., recorded CI/result context)
```

Also applies to: lines 64-65
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/sdd/skills/sync/SKILL.md` around lines 44 - 52, Update the SKILL.md
text to remove execution-dependent phrases and make test-status criteria
evidence-based: in the "Complete/In progress/Not started" table and the
"Determine TDD Step" section (references: the table rows and the
"Red/Green/Refactor" bullets), change "tests pass" and "Red/Green confirmed" to
criteria like "test files exist", "CI status shows passing tests", or
"conversation/commit evidence of tests" and change "Red" and "Green" definitions
to "Red: test file exists" and "Green: implementation exists with evidence of
passing tests (e.g., CI badge, commit message, or maintainer confirmation)".
Ensure the wording on lines noted (around the table and the TDD bullets)
consistently uses evidence-based signals and update the duplicate mention at the
other cited location (lines ~64-65).
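The evidence-based classification the comment asks for could be sketched as follows. The evidence fields and thresholds are hypothetical, chosen only to illustrate mapping non-execution signals onto the three states:

```python
from dataclasses import dataclass

@dataclass
class TaskEvidence:
    """Hypothetical evidence signals gathered without running any tests."""
    impl_files_exist: bool     # implementation files present on disk
    test_files_exist: bool     # test files present (the "Red" signal)
    relevant_commits: bool     # git log mentions the task
    marked_complete: bool      # task already flagged complete in the spec

def classify(e: TaskEvidence) -> str:
    """Map evidence to the Complete / In progress / Not started states."""
    if e.marked_complete or (e.impl_files_exist and e.test_files_exist and e.relevant_commits):
        return "Complete"
    if e.impl_files_exist or e.test_files_exist or e.relevant_commits:
        return "In progress"
    return "Not started"

print(classify(TaskEvidence(True, True, True, False)))    # → Complete
print(classify(TaskEvidence(True, False, False, False)))  # → In progress
print(classify(TaskEvidence(False, False, False, False))) # → Not started
```

The point of the sketch is that every branch depends on observable artifacts (files, commits, markers), never on executing the test suite, which is what the review comment requests.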