Context
The reproduce skill's entire process assumes you can run the application interactively: install dependencies, follow the reported steps, capture screenshots/logs, try variations. For UI bugs in environments without a browser (which is the common case on ACP), none of this applies.
This was identified during testing of the skill invocation cleanup (PR #107). The agent handling a React UI bug "confirmed via code analysis" — tracing the code paths rather than reproducing in a running UI. This was pragmatically correct but deviated from the skill's instructions, which have no provision for this approach.
Problem
The reproduce skill prescribes a single reproduction model: set up environment → follow reproduction steps → capture output. This doesn't cover:
- UI bugs where reproduction requires a browser (React, Angular, etc.)
- Infrastructure bugs where reproduction requires a running cluster
- Race conditions where reproduction requires specific timing or load
- Environment-specific bugs where the CI/sandbox environment differs from production
In all these cases, the agent either (a) skips reproduction entirely, (b) claims "confirmed via code analysis" without guidance on what that means, or (c) follows the skill's steps and produces a misleading report.
Proposal
Add a "Code-Level Reproduction" section to the reproduce skill for cases where the bug is observable from code analysis alone. This would include:
- When code-level reproduction is appropriate: The bug is in deterministic logic (not timing/state), the affected code path can be traced statically, and the environment doesn't support interactive reproduction
- What code-level reproduction looks like: Trace the execution path from entry point to failure, identify the specific code that produces the wrong behavior, verify the conditions under which it triggers, document the analysis with file:line references
- What it does NOT substitute for: Performance bugs, intermittent/race conditions, UX issues where the experience matters (not just correctness), anything where the bug might be in the integration between components rather than in a single code path
- Documentation: The reproduction report should clearly state "Code-level reproduction" and explain why interactive reproduction wasn't feasible
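To make the documentation requirement concrete, a reproduction report section under this proposal might look like the sketch below. All file paths, line numbers, and symptom details are hypothetical, invented purely for illustration:

```markdown
## Reproduction

**Method:** Code-level reproduction. The sandbox provides no browser, so the
React UI could not be run interactively.

**Trace:**
1. Entry point: `src/components/ItemForm.tsx:42` — `handleSubmit` reads
   `state.items` and dispatches `removeItem`.
2. `src/store/items.ts:17` — the reducer slices with an off-by-one bound,
   dropping the last element whenever `items.length === 1`.
3. Trigger condition: submitting the form with exactly one item, matching
   the reporter's "my only item disappears" symptom.

**Why interactive reproduction wasn't feasible:** no browser is available in
the ACP sandbox; the bug is in deterministic reducer logic, so the failing
path can be verified statically.
```

The explicit "Method" line and feasibility note are what distinguish this from an unqualified "confirmed via code analysis" claim.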
Scope
workflows/bugfix/.claude/skills/reproduce/SKILL.md — add code-level reproduction path