Upgrade the TensorRT container version (#1112)
Signed-off-by: ajrasane <131806219+ajrasane@users.noreply.github.com>
📝 Walkthrough: This pull request updates TensorRT Docker image versions across the CI/CD configuration and example documentation. The workflow configuration moves from version 26.01 to 26.02, while the example docs move from 25.08 to 26.02. One documentation addition includes a note on CUDA 12 compatibility for onnxruntime-gpu.
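As a hedged sketch of how the updated tag would be exercised (the image name comes from the docs touched by this PR; the `trtexec --help` invocation is only an illustrative smoke test, not a command this PR adds):

```shell
# Illustrative smoke test for the updated container tag. Assumes Docker and
# the NVIDIA Container Toolkit are installed; degrades gracefully otherwise.
IMAGE=nvcr.io/nvidia/tensorrt:26.02-py3
if command -v docker >/dev/null 2>&1; then
  # --gpus all requires the NVIDIA runtime on the host.
  docker run --rm --gpus all "$IMAGE" trtexec --help
else
  echo "docker not found; run this on a GPU host with the NVIDIA runtime"
fi
```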
🧹 Nitpick comments (1)
examples/onnx_ptq/README.md (1)
Lines 25-27: Document the specific `onnxruntime-gpu` versions tested with each TensorRT container tag. The note recommends `25.06-py3` for `onnxruntime-gpu` compatibility, but the reasoning is unclear. The latest stable `onnxruntime-gpu` (v1.24.4) requires CUDA 12.x, and the recommended `26.02-py3` container includes CUDA 13.1.1, which maintains forward compatibility with CUDA 12.x applications. Rather than suggesting version downgrades, explicitly list which `onnxruntime-gpu` versions (or version ranges) have been tested with each recommended container tag to avoid drift and provide clear guidance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/onnx_ptq/README.md` around lines 25-27, update the README note to specify which onnxruntime-gpu versions were tested with each TensorRT container tag: explicitly list tested versions or version ranges for nvcr.io/nvidia/tensorrt:26.02-py3 and nvcr.io/nvidia/tensorrt:25.06-py3, and reference the package name onnxruntime-gpu (e.g., tested v1.24.x, v1.25.x, etc. as applicable); mention the CUDA compatibility rationale (CUDA 13.1.1 in 26.02 is forward-compatible with CUDA 12.x apps) so readers aren't advised to downgrade, and add a short "Tested with" line or table near the existing recommendation for clarity.
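The compatibility concern above can be probed at runtime. This is a minimal sketch (not part of the PR, and the helper name is our own) that reports whether the installed `onnxruntime` build exposes the CUDA execution provider, returning `None` when the package is absent:

```python
def cuda_ep_available():
    """Return True/False when onnxruntime is installed, None when it is not."""
    try:
        import onnxruntime as ort
    except ImportError:
        return None  # onnxruntime-gpu is not installed in this environment
    # The CUDA execution provider is only listed when the GPU build and a
    # compatible CUDA runtime are both present on the host.
    return "CUDAExecutionProvider" in ort.get_available_providers()

print(cuda_ep_available())
```

Inside a container, this gives a quick yes/no before committing to a full quantization run.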
📒 Files selected for processing (4)
- .github/workflows/example_tests.yml
- examples/diffusers/README.md
- examples/onnx_ptq/README.md
- examples/torch_onnx/README.md
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@ Coverage Diff @@
##             main    #1112      +/-   ##
==========================================
- Coverage   70.23%   70.18%   -0.06%
==========================================
  Files         227      228       +1
  Lines       25909    25952      +43
==========================================
+ Hits        18198    18214      +16
- Misses       7711     7738      +27
```

☔ View full report in Codecov by Sentry.
What does this PR do?
Type of change: Container version update
Testing: Unit and integration tests pass
Before your PR is "Ready for review"
- Make sure you read and follow the Contributor guidelines and that your commits are signed (`git commit -s -S`).
- Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded `trust_remote_code=True`, `torch.load(..., weights_only=False)`, `pickle`, etc.).
- CONTRIBUTING.md: ✅ / ❌ / N/A

Summary by CodeRabbit
- Documentation
- Chores