
ROCm build broken #605

Merged: 1 commit into main on Jan 21, 2025
Conversation

@ericcurtin (Collaborator) commented Jan 21, 2025

The build flags changed in upstream llama.cpp so we were no longer building with ROCm acceleration.

Summary by Sourcery

Fix ROCm builds.

Build:

  • Use the -DGGML_HIP=ON build flag instead of -DGGML_HIPBLAS=1 (see the sketch after this list).
  • Remove unnecessary ROCm libraries after the build.
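
As a rough sketch, the change amounts to swapping the flag in the cmake configure step of container-images/scripts/build_llama_and_whisper.sh; the surrounding invocation below is illustrative, and only the two flag names come from this PR:

    # Illustrative configure step (the exact arguments in the script may differ)
    # Before: -DGGML_HIPBLAS=1   <- silently ignored by current upstream llama.cpp
    # After: the renamed upstream flag actually enables ROCm/HIP
    cmake -B build -DGGML_HIP=ON -DCMAKE_BUILD_TYPE=Release
    cmake --build build -j "$(nproc)"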

@sourcery-ai bot (Contributor) commented Jan 21, 2025

Reviewer's Guide by Sourcery

The build flags for llama.cpp were updated to enable ROCm acceleration. Additionally, the script was updated to remove more ROCm libraries.

Flow diagram for updated ROCm build configuration

graph TD
    A[Start Build] --> B{Container Type?}
    B -->|ROCm| C[Enable ROCm/HIP]
    B -->|CUDA| D[Enable CUDA]
    B -->|CPU| E[CPU Only Build]
    C --> F[Set -DGGML_HIP=ON]
    D --> G[Set -DGGML_CUDA=ON]
    F --> H[Clean Up ROCm Libraries]
    G --> I[Build Complete]
    E --> I
    H --> I
    style C fill:#f9f,stroke:#333
    style F fill:#f9f,stroke:#333

File-Level Changes

Change: Updated build flags to enable ROCm acceleration.
  • Details: Changed -DGGML_HIPBLAS=1 to -DGGML_HIP=ON.
  • Files: container-images/scripts/build_llama_and_whisper.sh

Change: Updated the script to remove more ROCm libraries.
  • Details: Removed the explicit listing of the rocblas and hipblaslt libraries; all libraries matching /opt/rocm-*/lib/*/library/*gfx9* are now removed instead.
  • Files: container-images/scripts/build_llama_and_whisper.sh
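
Based only on the description above, the cleanup step presumably boils down to something like the following; the exact command in the merged script may differ:

    # Remove the ROCm library files matching the gfx9 glob after the build
    # (illustrative reconstruction from the change description, not the script)
    rm -rf /opt/rocm-*/lib/*/library/*gfx9*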


@sourcery-ai bot (Contributor) left a comment

Hey @ericcurtin - I've reviewed your changes and they look great!

Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


@ericcurtin (Collaborator, Author) commented Jan 21, 2025

@maxamillion @bmahabirbu upstream llama.cpp changed its build flags for ROCm. As a result, we had been building without ROCm here for a little while: the code still built, just without ROCm acceleration.

@ericcurtin (Collaborator, Author) commented Jan 21, 2025

This is basically what was happening:

CMake Warning:
  Manually-specified variables were not used by the project:

    GGML_HIPBLAS

@ericcurtin (Collaborator, Author) commented

@rhatdan we probably need new container images once we get the green PRs merged here; ROCm is broken.

@bmahabirbu (Collaborator) commented

Really good catch! I wonder if there is a way to catch this sort of error. I can look into editing the Makefile to check for mismatched arguments.

@ericcurtin (Collaborator, Author) commented

> Really good catch! I wonder if there is a way to catch this sort of error. I can look into editing the Makefile to check for mismatched arguments.

If you figure something out, great, that would be appreciated. I briefly tried to make CMake treat this as an error, but didn't figure it out.

@ericcurtin (Collaborator, Author) commented

I thought of something to check for CMake warnings, @bmahabirbu. I'd prefer something built into CMake, as that would be more elegant, but this should work.
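
The check itself isn't shown in the thread. One plausible shell approach, sketched here as an assumption rather than what was actually merged, is to scan the configure output for the unused-variable warning and fail the build:

    # Fail the build when CMake reports manually-specified variables that were
    # never used (illustrative sketch; not necessarily the merged approach)
    set -o pipefail
    cmake -B build -DGGML_HIP=ON 2>&1 | tee cmake-configure.log
    if grep -q "Manually-specified variables were not used" cmake-configure.log; then
        echo "error: CMake ignored one or more -D flags" >&2
        exit 1
    fi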

@ericcurtin (Collaborator, Author) commented

This build may actually fail; it might surface other hidden warnings elsewhere.

Commit message:

    The build flags changed in upstream llama.cpp so we were no longer
    building with ROCm acceleration.

    Signed-off-by: Eric Curtin <[email protected]>
@maxamillion (Collaborator) left a comment

LGTM!

This is great, thank you for catching this! This might explain why I wasn't seeing any GPU utilization on my Ryzen APU when watching nvtop ... I had chalked it up to nvtop oddities.

@ericcurtin
Copy link
Collaborator Author

> LGTM!
>
> This is great, thank you for catching this! This might explain why I wasn't seeing any GPU utilization on my Ryzen APU when watching nvtop ... I had chalked it up to nvtop oddities.

Could have been! If you want to test this before it gets pushed, there is a flag for manually passing a container image to RamaLama (you could build the image locally).
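
For illustration, that local workflow could look roughly like this; the Containerfile path, image tag, and model name are hypothetical, and the exact override flag should be checked against ramalama --help:

    # Build a local ROCm image (Containerfile path and tag are made-up examples)
    podman build -t localhost/ramalama-rocm:test \
        -f container-images/rocm/Containerfile .

    # Point RamaLama at the locally built image; --image is assumed here to be
    # the override flag (verify with: ramalama --help)
    ramalama --image localhost/ramalama-rocm:test run tinyllama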

@rhatdan merged commit fc0428b into main on Jan 21, 2025 (12 checks passed).
@ericcurtin deleted the fix-rocm-build branch on January 21, 2025 at 14:24.