feat: Local DevEx: Component Hot Reloading, component benchmarks for warm/cold builds #1095
Conversation
Adds a build-time benchmark harness that measures cold-install and warm-rebuild times for every component, compares baseline vs candidate refs, and emits human/TSV/JSON reports. Key implementation details:

- Proper signal handling: Ctrl+C recursively kills process trees (`bench_kill_tree` via `pgrep`), cleans up git worktrees and temp dirs, then re-raises SIGINT so `make` sees exit code 130.
- All `cd` calls in `bench-manifest.sh` are guarded with `|| return 1` (benchmark functions) or `|| return 0` (cleanup functions) per shellcheck SC2164.
- CI workflow (`.github/workflows/component-benchmarks.yml`) runs self-tests and full benchmarks on `workflow_dispatch` or the `benchmark` label.
- Self-test suite (`tests/bench-test.sh`) validates syntax, function coverage, report generation, and ANSI suppression (8 tests).
- `CLAUDE.md`, the dev-cluster SKILL, and `.env.local.example` updated with benchmark usage guidance.

Made-with: Cursor
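The kill-tree and SIGINT re-raise behavior described above can be sketched roughly as follows. Only `bench_kill_tree` is named in this PR; the trap handler, `cleanup` body, and `BENCH_TMPDIR` variable here are illustrative assumptions, not the actual implementation in `scripts/benchmarks/`:

```shell
#!/usr/bin/env bash
# Sketch: recursive process-tree kill plus SIGINT re-raise (illustrative).

# Recursively kill a process and all of its descendants, children first.
bench_kill_tree() {
  local pid=$1 child
  for child in $(pgrep -P "$pid"); do
    bench_kill_tree "$child"
  done
  kill "$pid" 2>/dev/null || true
}

cleanup() {
  # Placeholder: the real script also removes its git worktrees.
  rm -rf "${BENCH_TMPDIR:-/tmp/bench.$$}"
}

on_sigint() {
  local child
  for child in $(pgrep -P $$); do   # kill our descendants, not ourselves
    bench_kill_tree "$child"
  done
  cleanup
  trap - INT                        # restore default SIGINT disposition...
  kill -INT $$                      # ...and re-raise so make sees exit 130
}
trap on_sigint INT
```

Re-raising the signal (rather than `exit 130`) matters because `make` distinguishes a child that was killed by SIGINT from one that merely exited with status 130.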
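The SC2164 guard pattern mentioned in the description looks roughly like this; the function names and bodies below are hypothetical, not taken from `bench-manifest.sh`:

```shell
#!/usr/bin/env bash
# Guarded-cd pattern (shellcheck SC2164): never run a step from the wrong
# directory if cd fails. Function names here are illustrative.

bench_component() {
  local dir=$1
  # In a benchmark function, a failed cd must abort the measurement:
  cd "$dir" || return 1
  # ...build and timing steps would run here, safely inside "$dir"
}

bench_cleanup() {
  local dir=$1
  # In a cleanup function, a missing directory is fine to skip silently:
  cd "$dir" || return 0
  rm -f ./*.tmp
}
```

Without the guard, a failed `cd` would leave the function running its build (or `rm`) in whatever directory the caller happened to be in.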
…ain permissions Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
ktdreyer
left a comment
concept LGTM - ready to merge when you are.
Merge Queue Status
This pull request spent 26 seconds in the queue, including 4 seconds running CI.
Why
Every contributor's first experience with the platform is a cold install — cloning the repo and waiting for dependencies to download and compile. Today there's no systematic way to know whether that experience is getting better or worse. This PR establishes a measurable baseline for contributor setup time and incremental rebuild speed across all components, so regressions are caught before they land and improvements can be tracked over time.
Jira Story: RHOAIENG-55731
Summary
- Adds a benchmark harness (`scripts/benchmarks/`) that measures cold-install and warm-rebuild times for every component, compares baseline vs candidate git refs, and emits human/TSV/JSON reports
- Ctrl+C handling kills the full process tree, cleans up git worktrees and temp dirs, and re-raises SIGINT so `make` sees exit code 130
- All `cd` calls in `bench-manifest.sh` are guarded with `|| return 1` (benchmark functions) or `|| return 0` (cleanup functions) to prevent wrong-directory builds (shellcheck SC2164)

Changes
- `scripts/benchmarks/component-bench.sh`
- `scripts/benchmarks/bench-manifest.sh`
- `scripts/benchmarks/README.md`
- `.github/workflows/component-benchmarks.yml` (runs on `workflow_dispatch` or the `benchmark` label)
- `tests/bench-test.sh`
- `Makefile` (new `make benchmark` target)
- `CLAUDE.md`
- `.claude/skills/dev-cluster/SKILL.md`
- `.env.local.example`, `.gitignore`

Test plan
- `shellcheck` clean on both shell scripts (only pre-existing false-positive SC2034/SC1091/SC2155 remain)
- `bash tests/bench-test.sh` passes 8/8 locally
- Workflow verified with `act` (nektos/act v0.2.86 + Podman): self-tests pass, benchmarks execute correctly in container
- Ctrl+C during `make benchmark` kills all children and exits cleanly
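For illustration, emitting one benchmark result row in the three report formats the harness supports could look like the sketch below. The `emit_row` helper and its field names are assumptions for this example, not the harness's actual interface:

```shell
#!/usr/bin/env bash
# Sketch: one result row rendered as human-readable, TSV, or JSON output.
# Helper name and fields are hypothetical.

emit_row() {
  local fmt=$1 component=$2 cold=$3 warm=$4
  case "$fmt" in
    human) printf '%-20s cold=%ss warm=%ss\n' "$component" "$cold" "$warm" ;;
    tsv)   printf '%s\t%s\t%s\n' "$component" "$cold" "$warm" ;;
    json)  printf '{"component":"%s","cold_s":%s,"warm_s":%s}\n' \
                  "$component" "$cold" "$warm" ;;
  esac
}
```

For example, `emit_row tsv dashboard 92.4 3.1` prints one tab-separated row, which makes the TSV report easy to diff between baseline and candidate runs.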