Conversation

@guan404ming
Member

Why

  • Resolve todo in python/tvm/relax/pipeline.py

How

  • Check specific GPU targets first (cuda/rocm/metal/adreno)
  • If none match, use BackendDispatcher.is_gpu_target() to detect other GPU targets
  • Fall back to gpu_generic for unrecognized GPUs (vulkan, webgpu, etc.)
  • Raise an error only for unsupported non-GPU targets (see the sketch after this list)
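
A minimal sketch of this dispatch order, assuming a hypothetical helper name (_get_pipeline_kind) and import path; only BackendDispatcher.is_gpu_target() is taken from the PR itself, and the actual code in python/tvm/relax/pipeline.py may differ:

from tvm.relax.backend import BackendDispatcher  # assumed import path

def _get_pipeline_kind(target):
    # 1. Dedicated pipelines for the explicitly supported GPU backends.
    #    (Adreno is usually an opencl target with a device tag; its
    #    detection is elided here.)
    if target.kind.name in ("cuda", "rocm", "metal"):
        return target.kind.name
    # 2. Anything else the dispatcher recognizes as a GPU
    #    (vulkan, webgpu, ...) falls back to the generic pipeline.
    if BackendDispatcher.is_gpu_target(target):
        return "gpu_generic"
    # 3. Unsupported non-GPU targets are a hard error.
    raise ValueError(f"No default pipeline for target: {target}")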

@gemini-code-assist
Contributor

Summary of Changes

Hello @guan404ming, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request improves the Relax pipeline's target dispatching by adding a gpu_generic fallback, resolving a previously identified TODO. GPU targets without a dedicated backend pipeline (such as Vulkan or WebGPU) now use the generic GPU pipeline instead of triggering an error, while strict error handling is preserved for genuinely unsupported non-GPU targets.

Highlights

  • Resolved TODO: Addressed a pending task in python/tvm/relax/pipeline.py to support gpu_generic.
  • GPU Generic Fallback: Implemented a mechanism to use the gpu_generic pipeline passes for GPU targets (e.g., Vulkan, WebGPU) that are not explicitly recognized by specific backends (CUDA, ROCm, Metal, Adreno).
  • Improved Target Dispatching: The system now checks for specific GPU targets first, then uses BackendDispatcher.is_gpu_target() for a broader GPU check, and raises an error only if the target is neither.
  • New Test Coverage: Added unit tests to confirm the gpu_generic fallback behavior for various pipeline functions and correct error handling for unsupported non-GPU targets (a hedged test sketch follows below).
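
A hedged sketch of what the fallback test might look like; the target strings follow the review summary below, but the call signature of get_default_pipeline and the exact test structure are assumptions, not the PR's actual code:

import pytest
import tvm
from tvm import relax

@pytest.mark.parametrize("target_str", ["vulkan", "webgpu"])
def test_gpu_generic_fallback(target_str):
    # GPU targets with no dedicated pipeline should hit the
    # gpu_generic fallback instead of raising an error.
    target = tvm.target.Target(target_str)
    relax.pipeline.get_default_pipeline(target)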



@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a fallback to a generic GPU pipeline for unrecognized GPU targets, resolving a TODO. The changes are implemented across several pipeline dispatch functions, using BackendDispatcher.is_gpu_target() to identify generic GPU targets after specific ones have been checked. This is a clean solution that improves target support. The accompanying tests for both the new fallback mechanism (vulkan, webgpu) and the error-raising for unsupported non-GPU targets (hexagon, c) are comprehensive and well-written. I have one suggestion to improve the maintainability of the new tests by reducing code duplication.

Comment on lines +121 to +128
"pipeline_func",
[
relax.pipeline.library_dispatch_passes,
relax.pipeline.legalize_passes,
relax.pipeline.dataflow_lower_passes,
relax.pipeline.finalize_passes,
relax.pipeline.get_default_pipeline,
],

Severity: medium

This list of pipeline functions is duplicated in test_non_gpu_target_raises_error below. To improve maintainability and avoid this duplication, consider extracting the list into a module-level constant and reusing it in both pytest.mark.parametrize decorators.

For example:

PIPELINE_FUNCS_FOR_TESTING = [
    relax.pipeline.library_dispatch_passes,
    relax.pipeline.legalize_passes,
    relax.pipeline.dataflow_lower_passes,
    relax.pipeline.finalize_passes,
    relax.pipeline.get_default_pipeline,
]


@pytest.mark.parametrize("pipeline_func", PIPELINE_FUNCS_FOR_TESTING)
# ...
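
If the suggestion is adopted, the error-path test can then reuse the same constant. A minimal sketch, assuming the imports and PIPELINE_FUNCS_FOR_TESTING from the snippet above; the exception type and the call signature of the pipeline functions are assumptions, not the PR's actual test code:

@pytest.mark.parametrize("pipeline_func", PIPELINE_FUNCS_FOR_TESTING)
@pytest.mark.parametrize("target_str", ["hexagon", "c"])
def test_non_gpu_target_raises_error(pipeline_func, target_str):
    # Non-GPU targets without a pipeline should raise rather than
    # silently fall back to gpu_generic; ValueError is assumed here.
    with pytest.raises(ValueError):
        pipeline_func(tvm.target.Target(target_str))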

@guan404ming marked this pull request as ready for review on December 26, 2025, 13:20
@guan404ming
Member Author

guan404ming commented Dec 26, 2025

cc @tlopex @mshr-h
