Closed

Force-pushed from 1a6141a to b3d34b4
tomtseng
commented
Jan 16, 2026
| @@ -1,5 +1,6 @@ | |||
| ARG PYTORCH_CUDA_VERSION=2.0.1-cuda11.7-cudnn8 | |||
| FROM pytorch/pytorch:${PYTORCH_CUDA_VERSION}-runtime | |||
| ARG PYTORCH_CUDA_VERSION=2.9.0-cuda12.8-cudnn9 | |||
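Since flash-attn needs nvcc at build time (see the comment below), the base-image tag would also move from `-runtime` to `-devel` alongside the version bump. A sketch of the resulting lines, assuming the rest of the Dockerfile is unchanged:

```dockerfile
# Sketch of the updated base-image lines. The -devel tag (an assumption
# about the final state, per the nvcc note below) ships nvcc, which
# flash-attn needs to compile; the -runtime tag does not.
ARG PYTORCH_CUDA_VERSION=2.9.0-cuda12.8-cudnn9
FROM pytorch/pytorch:${PYTORCH_CUDA_VERSION}-devel
```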
Needed to bump the CUDA version, or else flash-attn would have a compile error. Decided to bump the PyTorch version to the latest compatible version too.
- Alphabetized all dependencies (case-insensitive)
- Changed `flash-attention>=1.0.0` to `flash-attn>=2.8.3; platform_system == 'Linux'`
- Changed `platform_system != 'Darwin'` to `platform_system == 'Linux'` for vllm and bitsandbytes
- Docker image needs to be a `devel` image, since flash-attn needs nvcc
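The environment-marker changes would look roughly like this in the dependency list (the entries for vllm and bitsandbytes are shown without version pins, as those aren't given here):

```
bitsandbytes; platform_system == 'Linux'
flash-attn>=2.8.3; platform_system == 'Linux'
vllm; platform_system == 'Linux'
```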
- Needs nvcc (`devel` image)
- Need the PyTorch image version to match the PyTorch library version, and the CUDA version to match (12.8). The old version 2.6.0 doesn't have CUDA 12.8 support, so bumped torch to the latest compatible version (2.9.0; 2.9.1 doesn't work with the current version of vllm) and then updated the image version
- Fiddling with the Dockerfile
Force-pushed from b3d34b4 to 18b7ac7
The GCG refactor in #82 removed the reference to flash-attn, so this is no longer relevant. Though I may take some of the Dockerfile refactors and open a separate PR for that.
Changes
Our implementation of GCG uses flash attention. PR #53 added the flash attention package, but it's actually the wrong one: we want `flash-attn` instead of `flash_attention`. However, `flash-attn` is tricky to install because it needs to be compiled, so the Docker image needs to be updated.
It's not clear we care about GCG, and we can probably turn off flash attention in GCG, so this PR is low priority. I think I'll put it aside for now.
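The package mix-up is easy to hit because the PyPI name and the import name differ: the `flash-attn` distribution installs a module named `flash_attn`, while `flash_attention` is a different, unrelated package. A small hypothetical helper to check which module is actually importable in an environment:

```python
import importlib.util

def installed(module_name: str) -> bool:
    """Return True if `module_name` can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# The flash-attn PyPI package provides the `flash_attn` module;
# installing `flash_attention` would NOT make this True.
flash_attn_ok = installed("flash_attn")
```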
TODO: trying to build the Dockerfile, but compiling locally currently OOMs, even when I max out the Docker Desktop memory cap setting and set MAX_JOBS=1. Maybe compile on flamingo?
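One detail worth checking for the OOM, sketched under the assumption that flash-attn is installed via pip inside the Dockerfile: flash-attn's build reads the `MAX_JOBS` environment variable to cap parallel nvcc jobs, so it must be set in the image build itself, not just in the host shell:

```dockerfile
# Sketch (assumption: flash-attn is pip-installed inside the image).
# MAX_JOBS must be visible to the build stage for flash-attn's setup
# to see it; --no-build-isolation reuses the image's preinstalled torch.
ENV MAX_JOBS=1
RUN pip install --no-build-isolation "flash-attn>=2.8.3"
```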
Testing
TODO: build the image, launch a new devbox, and get this script to work: