misc: Store perf data uncompressed and remove score fallback #762

jensen-yan merged 1 commit into xs-dev
Conversation
📝 Walkthrough

The gem5 performance template workflow was modified to replace tar.gz archive compression with direct directory copying, update size reporting to reference the copied directory, remove fallback artifact saving on failure, and enable the upload step to continue on errors.
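The last change the walkthrough mentions — letting the upload step continue on errors — corresponds to GitHub Actions' `continue-on-error` step option. A minimal sketch of what such a step could look like (the step name and `run` body are illustrative assumptions, not taken from the actual workflow):

```yaml
      - name: Upload performance data
        if: always()
        continue-on-error: true
        run: |
          mkdir -p "$TARGET_DIR"
          cp -a "$GITHUB_WORKSPACE/util/xs_scripts/test/spec_all" "$TARGET_DIR/"
```

With `continue-on-error: true`, a failure in this step is recorded but does not fail the overall job.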
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 3 passed
Change-Id: Ie93b309f48546cce494c910b2570609b5bc38b6d
🧹 Nitpick comments (2)
.github/workflows/gem5-perf-template.yml (2)
209-209: Storage impact consideration.

Switching from tar.gz compression to direct copy will significantly increase storage usage (typically 5-10x depending on compressibility of the data). With 200 runs retained across multiple benchmark types, ensure the NFS storage at `/nfs/home/share/gem5_ci/performance_data` has adequate capacity for the increased footprint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/gem5-perf-template.yml at line 209, The direct cp of "$GITHUB_WORKSPACE/util/xs_scripts/test/spec_all" to "$TARGET_DIR" will blow up NFS storage; change the step to create a compressed tarball and copy that instead (e.g., tar -czf spec_all.tar.gz -C "$GITHUB_WORKSPACE/util/xs_scripts/test" spec_all then mv/cp the tarball to "$TARGET_DIR"), or at minimum add a pre-copy free-space check against the target mount (/nfs/home/share/gem5_ci/performance_data) and fail with a clear message if insufficient space; update the pipeline step that currently runs the cp command so it produces a compressed artifact and/or validates available capacity before copying.
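The tarball-plus-free-space alternative described in the prompt can be sketched as a small POSIX shell function. The function name, the argument layout, and the 1 GiB default threshold are illustrative assumptions; only the `tar -czf ... -C ... spec_all` invocation mirrors the review's suggestion:

```shell
# archive_spec_all SRC_PARENT TARGET_DIR [MIN_FREE_KB]
# Compress the spec_all directory found under SRC_PARENT into a tarball
# in TARGET_DIR, but only after checking that the target mount has at
# least MIN_FREE_KB kibibytes free (default ~1 GiB). Fails with a clear
# message when space is insufficient.
archive_spec_all() {
  src_parent=$1
  target_dir=$2
  min_free_kb=${3:-1048576}

  # POSIX `df -Pk` reports available space in 1 KiB blocks in column 4.
  free_kb=$(df -Pk "$target_dir" | awk 'NR==2 {print $4}')
  if [ "$free_kb" -lt "$min_free_kb" ]; then
    echo "Error: only ${free_kb} KiB free on $target_dir, need ${min_free_kb}" >&2
    return 1
  fi

  # Produce a compressed artifact instead of copying the raw directory tree.
  tar -czf "$target_dir/spec_all.tar.gz" -C "$src_parent" spec_all
}
```

Compared with a raw `cp -a`, this bounds the NFS footprint and makes the failure mode (no space) explicit rather than letting the copy die partway through.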
207-244: Consider defensive check for missing `spec_all` directory.

Since this step runs with `if: always()`, it may execute when earlier steps have failed and `spec_all` doesn't exist. The `cp -a` on line 209 would fail, and subsequently `du -sh` on line 239 would also fail, potentially causing confusing error output. Consider adding a check before archiving:
🛡️ Proposed defensive check

```diff
       # Create target directory
       mkdir -p "$TARGET_DIR"

+      SPEC_ALL_DIR="$GITHUB_WORKSPACE/util/xs_scripts/test/spec_all"
+      if [ ! -d "$SPEC_ALL_DIR" ]; then
+        echo "Warning: spec_all directory not found, skipping archive"
+        echo "### Archive output" >> $GITHUB_STEP_SUMMARY
+        echo "- Status: Skipped (no data to archive)" >> $GITHUB_STEP_SUMMARY
+        exit 0
+      fi
+
       # Archive entire spec_all directory without compression
       echo "Archiving performance data to $TARGET_DIR"
-      cp -a "$GITHUB_WORKSPACE/util/xs_scripts/test/spec_all" "$TARGET_DIR/"
+      cp -a "$SPEC_ALL_DIR" "$TARGET_DIR/"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/gem5-perf-template.yml around lines 207 - 244, Add a defensive existence check for the spec_all source and guard the archive-size calculation: before calling cp -a for "$GITHUB_WORKSPACE/util/xs_scripts/test/spec_all", verify that the directory exists and only run cp if it does (otherwise log a warning and skip copying); likewise, when computing ARCHIVE_SIZE with du -sh "$TARGET_DIR/spec_all", check that "$TARGET_DIR/spec_all" exists and set ARCHIVE_SIZE to a sensible default or a "not found" message if it doesn't, so ARCHIVE_SIZE and subsequent steps won't fail; update the blocks around TARGET_DIR/spec_all and the ARCHIVE_SIZE assignment to use these checks.
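The guarded size report this prompt asks for can likewise be sketched as a shell helper. `report_archive_size` and the exact "not found" fallback string are hypothetical names for illustration; the `du -sh` call and the missing-directory fallback follow the review's suggestion:

```shell
# report_archive_size DIR
# Print the human-readable size of DIR via `du -sh`, or a "not found"
# placeholder when the directory is missing, so later summary steps
# never fail on an absent archive.
report_archive_size() {
  dir=$1
  if [ -d "$dir" ]; then
    # du -sh prints "<size>\t<path>"; keep only the size column.
    du -sh "$dir" | awk '{print $1}'
  else
    echo "not found"
  fi
}
```

A step summary could then use `ARCHIVE_SIZE=$(report_archive_size "$TARGET_DIR/spec_all")` and report the placeholder instead of erroring out when earlier steps produced no data.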
🚀 Coremark Smoke Test Results

✅ Difftest smoke test passed!
Summary by CodeRabbit
Bug Fixes
Chores