Conversation
Greptile Overview

Greptile Summary

This PR addresses CI/CD infrastructure issues by upgrading CUDA to 13.0, simplifying the build process, and updating GitHub Actions.

Key Changes:
- Upgrade the CI containers and the PyTorch/JAX dependency installs to CUDA 13.0
- Simplify the dependency-installation and build steps in .github/workflows/build.yml
- Update the GitHub Actions workflow configuration

Critical Issue: jax[cuda13] (and flax[cuda13]) does not appear to be a valid pip extra, so the JAX and All jobs are likely to fall back to a CPU-only install or fail.

Confidence Score: 2/5

Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant Core as Core Job
    participant PyTorch as PyTorch Job
    participant JAX as JAX Job
    participant All as All Job

    GH->>Core: Trigger build
    Core->>Core: Install deps in CUDA 13.0 container
    Core->>Core: Build with MAX_JOBS=1
    Core->>Core: Sanity check

    GH->>PyTorch: Trigger build
    PyTorch->>PyTorch: Install deps + PyTorch (CUDA 13.0)
    PyTorch->>PyTorch: Build with MAX_JOBS=1
    PyTorch->>PyTorch: Test import

    GH->>JAX: Trigger build
    JAX->>JAX: Install deps + jax[cuda13] ❌
    Note over JAX: Invalid extra syntax
    JAX--xJAX: Installation fails

    GH->>All: Trigger build
    All->>All: Install PyTorch + jax[cuda13] ❌
    Note over All: Invalid extra syntax
    All--xAll: Installation fails
```
Signed-off-by: Pawel Gadzinski <pgadzinski@nvidia.com>
.github/workflows/build.yml
Outdated
```diff
       root-reserve-mb: 4096
       temp-reserve-mb: 32
-      swap-size-mb: 10240
+      swap-size-mb: 4096
```
Verify that reduced memory allocation (root: 5120→4096 MB, swap: 10240→4096 MB) is sufficient for PyTorch builds to avoid OOM issues.
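One way to settle this is to log the actual headroom right before the heaviest build step; a minimal diagnostic sketch (the step placement and exact commands are suggestions, not part of this PR):

```bash
# Hypothetical diagnostic step: prints what is left after maximize-build-space has run.
df -h /          # remaining root filesystem space (affected by root-reserve-mb)
free -h          # RAM plus the configured swap
swapon --show    # size of the swap file created by swap-size-mb
```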
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install cmake==3.21.0 pybind11[global] ninja pydantic importlib-metadata>=1.0 packaging numpy einops onnxscript
          pip install torch --no-cache-dir
```
Missing --index-url https://download.pytorch.org/whl/cu130 which was present before. This may install CPU-only PyTorch instead of the CUDA version needed for testing.
Suggested change:
```diff
- pip install torch --no-cache-dir
+ pip install torch --no-cache-dir --index-url https://download.pytorch.org/whl/cu130
```
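A quick post-install check would make a silently installed CPU-only wheel fail the job immediately; a hedged sketch, assuming it runs in the same environment as the pip install above:

```bash
# torch.version.cuda is None for CPU-only wheels, so this exits non-zero in that case.
python -c "import torch, sys; print('torch', torch.__version__, 'cuda', torch.version.cuda); sys.exit(0 if torch.version.cuda else 1)"
```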
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install cmake==3.21.0 pybind11[global] einops onnxscript
          pip install torch --no-cache-dir
```
Missing --index-url https://download.pytorch.org/whl/cu130 which was present before. This may install CPU-only PyTorch instead of the CUDA version.
Suggested change:
```diff
- pip install torch --no-cache-dir
+ pip install torch --no-cache-dir --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install --no-cache-dir cmake==3.21.0 pybind11[global] ninja pydantic importlib-metadata>=1.0 packaging numpy einops onnxscript
          pip install --no-cache-dir torch
```
Missing --index-url https://download.pytorch.org/whl/cu130 when installing PyTorch. Without this, pip will install the default PyTorch from PyPI, which may be CPU-only or have incompatible CUDA version. This was present in the base commit for the all job and is needed here too.
Suggested change:
```diff
- pip install --no-cache-dir torch
+ pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install --no-cache-dir cmake==3.21.0 pybind11[global] einops onnxscript
          pip install --no-cache-dir torch
```
Missing --index-url https://download.pytorch.org/whl/cu130 when installing PyTorch. The base commit explicitly used this index URL to ensure CUDA 13.0 support matching the JAX container (see commit 4cf2f12). Without it, the default PyPI version will be installed, which may be CPU-only.
Suggested change:
```diff
- pip install --no-cache-dir torch
+ pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install --no-cache-dir cmake==3.21.0 pybind11[global] ninja pydantic importlib-metadata>=1.0 packaging numpy einops onnxscript
          pip install --no-cache-dir torch
```
Missing --index-url https://download.pytorch.org/whl/cu130 for PyTorch installation. The JAX container has CUDA 13.0, and commit 4cf2f12 explicitly added this index URL to ensure compatibility. Without it, pip may install a CPU-only or incompatible CUDA version from PyPI.
Suggested change:
```diff
- pip install --no-cache-dir torch
+ pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install --no-cache-dir cmake==3.21.0 pybind11[global] einops onnxscript
          pip install --no-cache-dir torch
```
Missing --index-url https://download.pytorch.org/whl/cu130 for PyTorch installation. This was explicitly added in commit 4cf2f12 to match the CUDA 13.0 version in the JAX container. Without it, the default PyPI version will be installed, which may be CPU-only or have incompatible CUDA version.
Suggested change:
```diff
- pip install --no-cache-dir torch
+ pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install --no-cache-dir cmake==3.21.0 pybind11[global] ninja pydantic importlib-metadata>=1.0 packaging numpy einops onnxscript
          pip install --no-cache-dir torch
```
Missing --index-url https://download.pytorch.org/whl/cu130 for PyTorch installation. This was explicitly added in commit 4cf2f12 to match CUDA 13.0 in the JAX container. Without it, pip installs the default PyPI version (likely CPU-only or wrong CUDA version).
Suggested change:
```diff
- pip install --no-cache-dir torch
+ pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install --no-cache-dir cmake==3.21.0 pybind11[global] einops onnxscript
          pip install --no-cache-dir torch
```
Missing --index-url https://download.pytorch.org/whl/cu130 for PyTorch installation. This was present in the base commit and is required to match CUDA 13.0 in the JAX container (see commit 4cf2f12).
Suggested change:
```diff
- pip install --no-cache-dir torch
+ pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install --no-cache-dir cmake==3.21.0 pybind11[global] ninja pydantic importlib-metadata>=1.0 packaging numpy einops onnxscript
          pip install --no-cache-dir torch
```
Missing --index-url https://download.pytorch.org/whl/cu130 for PyTorch. The JAX container has CUDA 13.0, but without this flag pip installs the default PyPI version (likely CPU-only or wrong CUDA version). This was explicitly added in commit 4cf2f12 (#2308) for this exact reason.
Suggested change:
```diff
- pip install --no-cache-dir torch
+ pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
      - name: 'Dependencies'
        run: |
          pip install --no-cache-dir cmake==3.21.0 pybind11[global] einops onnxscript
          pip install --no-cache-dir torch
```
Missing --index-url https://download.pytorch.org/whl/cu130 for PyTorch. Without this, pip installs the default PyPI version which may not match the JAX container's CUDA 13.0. This was present in the base commit for this exact job.
Suggested change:
```diff
- pip install --no-cache-dir torch
+ pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu130
```
.github/workflows/build.yml
Outdated
```yaml
        run: NVTE_USE_CCACHE=1 NVTE_CCACHE_BIN=sccache pip install --no-build-isolation . -v
        env:
          NVTE_FRAMEWORK: none
          MAX_JOBS: 1
          SCCACHE_GHA_ENABLED: "true"
          SCCACHE_CACHE_SIZE: "5G"
```
MAX_JOBS=1 removed from Core job. Check that the build completes successfully without this limit to prevent OOM issues.
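If dropping the hard limit does trigger OOM failures, one middle ground is to derive the job count from available memory instead of pinning it to 1; a rough sketch (the 4 GB-per-job heuristic is an assumption, not a measured value for this project):

```bash
# Roughly one compile job per 4 GB of RAM, never fewer than 1.
MEM_GB=$(free -g | awk '/^Mem:/{print $2}')
export MAX_JOBS=$(( MEM_GB / 4 > 0 ? MEM_GB / 4 : 1 ))
echo "Using MAX_JOBS=$MAX_JOBS"
```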
.github/workflows/build.yml
Outdated
```yaml
      image: nvcr.io/nvidia/cuda:12.1.0-devel-ubuntu22.04
      options: --user root
```
Check that CUDA 12.1.0 (downgraded from 12.8.0) doesn't break PyTorch compatibility or cause runtime issues
```diff
     container:
-      image: ghcr.io/nvidia/jax:jax
+      image: nvcr.io/nvidia/cuda:12.1.0-devel-ubuntu22.04
       options: --user root
```
Switched from ghcr.io/nvidia/jax:jax to base CUDA container - verify JAX[cuda12] install is compatible with CUDA 12.1 and includes all necessary dependencies
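An early step that inspects the base image would make such incompatibilities visible before the pip installs run; a minimal sketch, assuming it is added at the top of the job:

```bash
# Toolkit version shipped by nvcr.io/nvidia/cuda:12.1.0-devel-ubuntu22.04
nvcc --version
# The base devel image does not bundle cuDNN, which is why the job installs
# cudnn9-cuda-12 via apt; this lists the CUDA/cuDNN libraries actually present.
ldconfig -p | grep -E 'libcudart|libcudnn' || true
```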
.github/workflows/build.yml
Outdated
```bash
apt-get update
apt-get install -y git python3.9 pip cudnn9-cuda-12
pip install cmake==3.21.0 pybind11[global] ninja packaging
pip install jax
```
pip install jax installs CPU-only JAX by default. jax[cuda12] is needed to match the container's CUDA 12 runtime.
Suggested change:
```diff
- pip install jax
+ pip install jax[cuda12]
```
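Whichever extra ends up being used, a post-install check makes the CPU-only failure mode obvious in the logs; a hedged sketch (on a GPU-less build runner jax.default_backend() still reports cpu, so the more useful signal is whether the CUDA plugin packages were pulled in at all):

```bash
# A CUDA-enabled install pulls in jax-cuda*-plugin / jax-cuda*-pjrt packages; plain `pip install jax` does not.
pip list | grep -i -E 'jax[-_]cuda' || { echo "CUDA plugin for JAX not installed"; exit 1; }
python -c "import jax; print('jax', jax.__version__, 'default backend:', jax.default_backend())"
```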
Signed-off-by: Pawel Gadzinski <pgadzinski@nvidia.com>
```bash
apt-get update
apt-get install -y git python3.9 pip cudnn9-cuda-12
pip install cmake==3.21.0 pybind11[global] ninja packaging
pip install jax[cuda13] flax[cuda13]
```
Verify jax[cuda13] is a valid extra. JAX typically uses extras like jax[cuda12_local] or jax[cuda12_pip] (see build_tools/wheel_utils/build_wheels.sh:66). Also note that transformer_engine/jax/pyproject.toml:6 specifies jax[cuda12]. Check JAX documentation to confirm cuda13 is the correct syntax for CUDA 13.0.
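The extras a given jax release actually declares can be read straight from its package metadata instead of guessed; a minimal sketch, assuming jax is already installed in the environment:

```bash
# Prints the extras declared by the installed jax wheel (e.g. cuda12, ...).
# If cuda13 is not listed, pip only warns about the unknown extra and installs plain CPU-only jax.
python -c "from importlib.metadata import metadata; print(metadata('jax').get_all('Provides-Extra'))"
```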
```bash
apt-get install -y git python3.9 pip cudnn9-cuda-12
pip install cmake==3.21.0 pybind11[global] ninja pydantic importlib-metadata>=1.0 packaging numpy einops onnxscript
pip install torch --index-url https://download.pytorch.org/whl/cu130
pip install jax[cuda13] flax[cuda13]
```
Same concern as JAX job: verify jax[cuda13] is the correct syntax for CUDA 13.0 installation
Signed-off-by: Pawel Gadzinski <pgadzinski@nvidia.com>
```bash
apt-get update
apt-get install -y git python3.9 pip cudnn9-cuda-12
pip install cmake==3.21.0 pybind11[global] ninja packaging
pip install jax[cuda13] flax[cuda13]
```
jax[cuda13] syntax is likely invalid. JAX typically uses extras like jax[cuda12_local] or jax[cuda12_pip] (see build_tools/wheel_utils/build_wheels.sh:66). Also, transformer_engine/jax/pyproject.toml:6 specifies jax[cuda12]. This will fail to install the CUDA-enabled version.
Suggested change:
```diff
- pip install jax[cuda13] flax[cuda13]
+ pip install "jax[cuda12_pip]" "flax[cuda12_pip]"
```
```bash
apt-get install -y git python3.9 pip cudnn9-cuda-12
pip install cmake==3.21.0 pybind11[global] ninja pydantic importlib-metadata>=1.0 packaging numpy einops onnxscript
pip install torch --index-url https://download.pytorch.org/whl/cu130
pip install jax[cuda13] flax[cuda13]
```
Same issue as JAX job: jax[cuda13] and flax[cuda13] are invalid extras. Use jax[cuda12_pip] and flax[cuda12_pip] instead.
Suggested change:
```diff
- pip install jax[cuda13] flax[cuda13]
+ pip install jax[cuda12_pip] flax[cuda12_pip]
```
Description
This PR fixes the following issues:
Fixes # (issue)
Type of change
Checklist: