
Schedule loop domains such that reshape transforms are cancelled #3679

Merged
merged 7 commits into from
Jan 10, 2025

Conversation

naoyam
Collaborator

@naoyam naoyam commented Jan 8, 2025

This PR adds a scheduling primitive, cancelReshapeInLoopDomains(TensorView* from_tv), which effectively cancels, in the loop domains, all reshape transforms appearing between from_tv and the fusion outputs. Please see the code comment for a motivating example.

This could be used to remove the restriction on interfering reshapes in reduction/normalization fusions.

@naoyam
Collaborator Author

naoyam commented Jan 8, 2025

!test

@naoyam
Collaborator Author

naoyam commented Jan 9, 2025

!test

@naoyam naoyam marked this pull request as ready for review January 9, 2025 02:01
@naoyam naoyam requested a review from jacobhinkle January 9, 2025 02:01
@@ -62,7 +66,48 @@ void scheduleLoopDomainsLike(
// LoopDomainSchedulingTest.ScheduleLoopDomainsBy1 for more examples.
void scheduleLoopDomainsBy(
const std::vector<TensorView*>& tvs,
Expr* transform);
Expr* transform,
Direction replay_dir = Direction::Undefined);
Collaborator Author


Added an optional parameter to restrict the replay direction.

@naoyam
Collaborator Author

naoyam commented Jan 9, 2025

!test

Collaborator

@jacobhinkle jacobhinkle left a comment


LGTM. Just one small clarifying question.

Two review threads on csrc/scheduler/tools/loop_domain_scheduler.h (outdated, resolved)
// as t0, which could minimize strided accesses.
//
// This scheduling is not always feasible. Specifically, if a reshape
// output iter domain is resized, the loop domain needs to keep using
Collaborator


Could you give an example of what the output would look like if some transforms can't be cancelled? For example, if we had a further tensor

// t4 = pad(t3) // [i1, i0*i2 + 2]

Then we cannot cancel anything. Presumably, if the reshape decomposes into a part that is cancellable and a part that is not, we will cancel the cancellable part.

Collaborator Author


How about this test?

https://github.com/NVIDIA/Fuser/pull/3679/files#diff-add3baa66fa88dd28b1baec00ec023373d88630908bff6583a0a4d61379e17cbR705

The reshape for tv1 is not cancelled because the following slice depends on it, whereas the tv3 reshape is cancelled.

@naoyam
Collaborator Author

naoyam commented Jan 10, 2025

!build

@naoyam
Collaborator Author

naoyam commented Jan 10, 2025

!build

@naoyam naoyam merged commit 05ec62b into main Jan 10, 2025
14 of 15 checks passed
@naoyam naoyam deleted the cancel_reshape branch January 10, 2025 18:56
2 participants