
Streamk v0.2 #646

Closed · wants to merge 51 commits

Conversation

xiaohuguo2023 (Member)

streamk v0.2:

  • new streamk tuning script to reduce compile and profiling time

  • use load/store cache modifiers to reimplement the spin lock (see the sketch below)

  • add a CI test for the streamk kernel

  • able to use streampipelineV2
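A minimal sketch of that cache-modifier handshake (assuming a flat partials buffer `P` and one flag per workgroup in `locks`; the kernels below are illustrative, not the PR's actual streamk kernel):

```python
import triton
import triton.language as tl


@triton.jit
def producer(P, locks, BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    # Write the partial tile through to memory so other workgroups can see it.
    tl.store(P + offs, tl.full((BLOCK, ), 1.0, tl.float32), cache_modifier=".wt")
    # Publish the flag once the data store has been issued.
    tl.store(locks + pid, 1, cache_modifier=".wt")


@triton.jit
def consumer(P, locks, out, BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    # Spin on a volatile, cache-bypassing load until the flag is visible.
    while tl.load(locks + pid, cache_modifier=".cv", volatile=True) != 1:
        pass
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    acc = tl.load(P + offs, cache_modifier=".cv")
    tl.store(out + offs, acc)
```

The `.wt` (write-through) store makes the partial results visible to other workgroups, and the `.cv` load forces the spinning reader to bypass stale cache lines; note the review comments below, which report that these modifiers do not work on gfx90a.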

micmelesse and others added 30 commits July 17, 2024 05:04
Add Perf Kernels

This is a combination of 2 commits.

Add Perf Kernels

Add Perf Kernels

This is a combination of 6 commits.

add perf-kernels

fix formatting issues

fix unused variables and other bugs

fix other issues

remove scripts

save

check changes

format

save

save

try

pre-commit check

save
Change all block pointers to tensor pointers

Block pointers are intended for NVIDIA TMAs. They can be useful for regular loads as well, but they are not well supported.

Also cleaned up some code I came across along the way and updated the comment at the top.
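For illustration, a tensor-pointer access is plain pointer arithmetic built from strides (a hedged sketch with made-up names, not the PR's kernel):

```python
import triton
import triton.language as tl


@triton.jit
def copy_tile(X, Y, stride_m, stride_n, BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    offs_m = tl.arange(0, BLOCK_M)
    offs_n = tl.arange(0, BLOCK_N)
    # Tensor pointers: ordinary pointer arithmetic, no tl.make_block_ptr.
    ptrs = X + offs_m[:, None] * stride_m + offs_n[None, :] * stride_n
    tile = tl.load(ptrs)
    tl.store(Y + offs_m[:, None] * stride_m + offs_n[None, :] * stride_n, tile)
```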
Add support for layouts commonly used by users.

Add an option for the varlen / thd layout to specify equal context lengths for all batches, which is also a common use case.
* remove on push for Integration Tests

* rename

* add post merge test

* save

* dtype params

* skip bad config

* fix more stuff
Increase CI timeout
Couple of FA optimizations

Set the SM scale multiplication to a constexpr. Minor asm improvement.

Changed the acc scaling that adjusts for the softmax from division to
multiplication by the reciprocal (see the sketch below). ~10% perf improvement.

---------

Co-authored-by: Michael Melesse <[email protected]>
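A rough sketch of that division-to-reciprocal change (illustrative names; `l_i` stands for the per-row softmax normalizer, not the kernel's actual variables):

```python
import triton
import triton.language as tl


@triton.jit
def finalize_rows(acc_ptr, l_ptr, BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    offs_m = tl.arange(0, BLOCK_M)
    offs_n = tl.arange(0, BLOCK_N)
    acc = tl.load(acc_ptr + offs_m[:, None] * BLOCK_N + offs_n[None, :])
    l_i = tl.load(l_ptr + offs_m)
    # One reciprocal per row instead of a division per element.
    acc = acc * (1.0 / l_i)[:, None]
    tl.store(acc_ptr + offs_m[:, None] * BLOCK_N + offs_n[None, :], acc)
```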
* streamk v0.1

* remove unused variable

* fix format issues

* add README

* fix format issue

* change num_sms to num_cus
* Add explicit multiply-reduce GEMM kernel

* Remove `SPLIT_K` argument from kernel

* Remove `GROUP_SIZE_M` argument from kernel

* Remove conditional call to `tl.dot` from kernel

* Remove table with performance data from README
* Copy *tune_gemm* from `triton-mlir` branch to `main_perf` branch

The source commit in `triton-mlir` branch is the following one:
```
commit cf44637
Author: Lixun Zhang <[email protected]>
Date:   Tue Jul 23 14:22:01 2024 -0500

    [tuning] gemm tuning script v3.3 (#606)
```

*tune_gemm* was copied from the source branch directory `scripts/amd/gemm`
to the destination branch directory `python/perf-kernels/tune_gemm`.

The SHA-256 hashes of *tune_gemm* files are the following ones:
```
423aef1deb6c60f6578a1ecfc94d2473f8746b00d0368c553d31641fcfa5e354  README.md
46ab93978fee33f75df23332f12546dae7910478c391f08b7b1ebd415d8266b7  icache_flush.py
f18711544641b810a652e6a6629bfa2b613f6ade87399e88fdf05b81d4af58a4  matmul.py
84a1c80ede36d3154e51188276eda2d2d0f52ed4f496ff69349c390d83b8ec10  matmul_kernel.py
2812b40183637bc8d7e47d283c7d66b1792134a43de76f3eacf7b9b3e1c2431a  one_config.py
0ac09c33b0173cea06ddabbf9f4e3afa1816781dea4fdcce5894a7e7d6a80e19  rocprof_gemm.py
00eff41cf1c0bfc41d623e42b51706af67639fec76146741e2067d2a93e0148a  utils/file_generator.py
cb7afb773ccee835b00396cccf87e0d44fe513131161f031fae42453725b3c82  utils/utils.py
59f23811b660e49e566927853926a21f02a7014bb19c8ea67e6b382db6c59900  tune_gemm.py
e787f35d750b869f113b3c01692f64243a9cb8a71a18ade2f0465f614f7284e4  tune_gemm.sh
```

The files were kept as-is despite `pre-commit`'s intention to change them.

After that, references to the *tune_gemm* directory in code and documentation were
fixed to reflect its new location.
* Reformat *tune_gemm* files with Triton's pre-commit

The following command was executed to reformat the files:
```
$ pre-commit run --files \
    python/perf-kernels/tune_gemm/* \
    python/perf-kernels/tune_gemm/utils/*
```

* Fix *tune_gemm* issue with (1, 1) bias tensors

* Fix `ruff` F405 errors

Fix the following linter error:
F405 `identifier` may be undefined, or defined from star imports

* Fix `ruff` F841 errors

Fix the following linter error:
F841 Local variable `identifier` is assigned to but never used

* Fix minor issues in README file

* Add `--` to the `num_threads` argument.
* Replace the non-existent `--icahe` argument with the existing
  `--icache_flush` argument.

* Remove old files from *tune_gemm* V1

* Add dependency graph to README file

* Selectively disable `yapf` for parts of `one_config.py`
…ofv1 (#630)

* Change to rocprofv1

* Improve post-processing of rocprof results

- Set --iters=200 as the default. This is enough, since the time is stable
after the first few runs.
- Filter out kernel times that are too large, using the first kernel time
as the threshold. Something must be wrong with a kernel if its elapsedTime
is larger than the first run's; we need to investigate the reason, but for
now we just filter such runs out (see the sketch after this commit list).

* Add xcd-based pid remapping

* Enable EVEN_K=false for large gemms

* Update readme
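A plain-Python sketch of the outlier filter described above (the threshold rule only, not the actual post-processing script):

```python
def filter_kernel_times(times_ns):
    """Keep only runs no slower than the first kernel launch."""
    threshold = times_ns[0]
    return [t for t in times_ns if t <= threshold]


# The 5000 ns outlier is dropped; the first run sets the threshold.
print(filter_kernel_times([1000, 980, 5000, 990]))  # [1000, 980, 990]
```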
* Move utility tools from triton-mlir to main_perf branch

- Plot layout script
- occ.sh
- amdgcn-cfg

* yapf format

* More formats

* remove the executable bit from plot_layout.py

* Address `ruff` complaints

* Move tune_gemm to tools
Online softmax implementation (see the sketch below).
Additionally changes locks to use uint8 instead of int32
for a smaller memory footprint.
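For reference, the online-softmax recurrence looks like this in plain Python (a sketch of the algorithm itself, not the kernel code): process the row block by block, keeping a running max `m` and a running sum `l` that are rescaled whenever a larger max appears.

```python
import math


def online_softmax(xs, block=4):
    m, l, out = -math.inf, 0.0, []
    for i in range(0, len(xs), block):
        blk = xs[i:i + block]
        m_new = max(m, max(blk))
        scale = math.exp(m - m_new)  # rescales everything accumulated so far
        l = l * scale + sum(math.exp(x - m_new) for x in blk)
        out = [o * scale for o in out] + [math.exp(x - m_new) for x in blk]
        m = m_new
    return [o / l for o in out]


print(online_softmax([1.0, 2.0, 3.0, 4.0, 5.0]))  # matches a one-pass softmax
```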
@neoblizz neoblizz self-requested a review September 26, 2024 21:17
@neoblizz (Member) left a comment:

I put in some comments.

Review comment (Member):

Note: for gfx90a, loads/stores with cache modifiers do not work. Documented here: https://github.com/ROCm/triton-internal/issues/311

rm1 = tl.max_contiguous(tl.multiple_of(rm1, BLOCK_SIZE_M), BLOCK_SIZE_M)
rn1 = tl.max_contiguous(tl.multiple_of(rn1, BLOCK_SIZE_N), BLOCK_SIZE_N)
P_ = P + pid * BLOCK_SIZE_M * BLOCK_SIZE_N + rm1[:, None] * BLOCK_SIZE_N + rn1[None, :]
tl.store(P_, acc, cache_modifier=".wt")
Review comment (Member):

Note: for gfx90a, loads/stores with cache modifiers do not work. Documented here: https://github.com/ROCm/triton-internal/issues/311

# todo: try use tl.load once cache modifier landed upstream
while tl.atomic_cas(locks + next_pid, 1, 1) != 1:
while (end < tile_iter_end and next_pid < NUM_SMS):
while tl.load(locks + next_pid, cache_modifier=".cv", volatile=True) != 1:
Review comment (Member):

This also does not work on gfx90a: https://github.com/ROCm/triton-internal/issues/311

EVEN_K: tl.constexpr,
):
pid = tl.program_id(0)
pid = get_new_pid(pid, num_cus)
pid = (pid % 8) * (NUM_SMS // 8) + (pid // 8)
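An illustration of what the remapping on the last line does, with an assumed NUM_SMS of 16 (plain Python; the real value is the CU count):

```python
NUM_SMS = 16  # illustrative only
remap = [(pid % 8) * (NUM_SMS // 8) + (pid // 8) for pid in range(NUM_SMS)]
print(remap)  # [0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15]
```

Since consecutive workgroup ids are scheduled round-robin across the 8 XCDs, the remap hands each XCD a contiguous range of tiles, presumably for better cache locality per die.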
Review comment (Member):

This is only needed for gfx942, so we will actually remove it when the arch is gfx90a.

P = torch.zeros((num_cus, block_m * block_n), device="cuda", dtype=torch.float32)
triton_output = matmul(a, b, c, P, locks, num_cus, block_m, block_n, block_k, group_m, num_warps, num_stages,
waves_per_eu, mfmaInstrSize, kpack, EVEN_K)
locks = torch.zeros((num_sms, ), device="cuda", dtype=torch.int32)
Review comment (Member):

locks can be smaller than int32; we only need 1 byte, so uint8 should work.
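The suggested allocation, in line with the "uint8 instead of int32" change noted in the commit list above (the num_sms value here is illustrative):

```python
import torch

num_sms = 304  # illustrative CU count
locks = torch.zeros((num_sms, ), device="cuda", dtype=torch.uint8)
```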

micmelesse and others added 7 commits October 1, 2024 17:25
* Added regression tests to tune_gemm

* Add regression tests to pipelines

* Add missing imports

* Use warnings to signal that no performance comparison is found

* Split regression tests into separate file

* Disable github pipeline in favour of jenkins

* Improve output and skip tests if no performance reference can be found

* Add testcase for overall mean regression

* Extend parameters which can be adjusted for perf regression tests

* Switch to geometric mean for the overall result (see the sketch below)

* Always recompile kernels in perf regression tests if the user does not specify otherwise

* Report default values in exported result to support changing them in the future
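A sketch of the geometric-mean check from the last two bullets (function and values assumed, not the test file itself):

```python
import math


def geomean(ratios):
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))


# One measured/reference time ratio per tuned GEMM config; an overall
# value above 1.0 would flag a mean regression.
print(geomean([1.02, 0.97, 1.01]))  # roughly 1.0: no overall regression
```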
@xiaohuguo2023 (Member, Author):

Let's close this PR, as there are too many differences with the new main_perf and it's no longer safe to merge. I have created a new [PR](#652) for v0.2.
