
Add native conversions fp8<->fp32 #908

Merged

5 commits merged into develop on Sep 18, 2023

Conversation

geyyer
Contributor

@geyyer geyyer commented Sep 11, 2023

  • fp8 -> fp32
  • fp32 -> fp8, RNE (round-to-nearest-even)
  • fp32 -> fp8, SR (stochastic rounding)
  • bf8 -> fp32
  • fp32 -> bf8, RNE
  • fp32 -> bf8, SR
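
The conversions listed above can be modelled in plain software. Below is a minimal Python sketch, assuming the OCP E4M3 layout for fp8 (bias 7, max normal 448, code 0x7F as NaN); the repo's actual `f8_t` encoding and the native hardware conversion instructions this PR adds may use a different variant (e.g. FNUZ), so treat this purely as an illustration of RNE and SR rounding:

```python
import math

def fp8_e4m3_decode(code: int) -> float:
    """Decode an 8-bit E4M3 code (assumed OCP layout: 1 sign bit,
    4 exponent bits with bias 7, 3 mantissa bits; 0x7F/0xFF are NaN)."""
    sign = -1.0 if code & 0x80 else 1.0
    e = (code >> 3) & 0xF
    m = code & 0x7
    if e == 0xF and m == 0x7:
        return float("nan")             # all-ones mantissa at max exponent
    if e == 0:
        return sign * m * 2.0 ** -9     # subnormal, step 2^-9
    return sign * (1.0 + m / 8.0) * 2.0 ** (e - 7)

# All non-NaN codes, used to search for bracketing values.
_CODES = [c for c in range(256) if not math.isnan(fp8_e4m3_decode(c))]

def fp32_to_fp8_rne(x: float) -> int:
    """fp32 -> fp8, round-to-nearest-even: nearest representable value;
    on a tie, prefer the code with an even low bit (even mantissa)."""
    return min(_CODES, key=lambda c: (abs(fp8_e4m3_decode(c) - x), c & 1))

def fp32_to_fp8_sr(x: float, rng) -> int:
    """fp32 -> fp8, stochastic rounding: round up with probability equal
    to the fractional position of x between its two fp8 neighbours."""
    lo = max((c for c in _CODES if fp8_e4m3_decode(c) <= x),
             key=fp8_e4m3_decode, default=None)
    hi = min((c for c in _CODES if fp8_e4m3_decode(c) >= x),
             key=fp8_e4m3_decode, default=None)
    if lo is None:
        return hi                       # below the most negative value
    if hi is None or fp8_e4m3_decode(lo) == fp8_e4m3_decode(hi):
        return lo                       # above max, or exactly representable
    lo_v, hi_v = fp8_e4m3_decode(lo), fp8_e4m3_decode(hi)
    return hi if rng() < (x - lo_v) / (hi_v - lo_v) else lo
```

For example, 0.3 sits between the E4M3 neighbours 0.28125 and 0.3125: RNE always picks 0.3125 (the closer value), while SR picks 0.3125 with probability 0.6 and 0.28125 with probability 0.4, so the rounding error cancels in expectation.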

@geyyer geyyer marked this pull request as ready for review September 15, 2023 03:30
@geyyer geyyer requested a review from zjing14 September 15, 2023 03:30
@zjing14 zjing14 merged commit f17af2e into develop Sep 18, 2023
shaojiewang added a commit to shaojiewang/composable_kernel that referenced this pull request Oct 12, 2023
* Fixed fp8 gemm (ROCm#882)

* add generic instances; fixed init with fp8

* fixed comment

---------

Co-authored-by: Jing Zhang <[email protected]>

* added padding of K into gemm_v2r3 (ROCm#887)

* added kpad support into v2r3

* add generic instances

* fixed comments

* fixed mnk padding

* Update device_batched_gemm_xdl.hpp

---------

Co-authored-by: Jing Zhang <[email protected]>

* Redesign the DPP8 GEMM kernel to use warp-wise component (ROCm#863)

* Redesign the DPP8 GEMM kernel to use warp-wise component

* Review: Improve error messages

* Review: Remove unnecessary empty lines

* Review: Fix M, N per thread names

* Review: Rename mfma_input_type to dpp_input_type

* Review: Fix tensor adaptor; remove unnecessary element

* Review: Remove calls to dpp_gemm's MakeCDescriptor

* Review: Add blockwise doc, change function names to include dimension names

* Review: Remove duplicated code; Move Block2CtileMap alias to the top of the file

* Review: Add __restrict__ keywords

* Review: Use MatrixPadder for padding A, B, C matrices

* Review: Remove hardcoded datatypes

* Review: Change names from FloatX to XDataType

* Review: Introduce AK0 and BK0 instead of a single K0

* Review: Remove construction of dpp_datatypes object

* Review: Rename DppInstrRunner to DppLanegroupGemm

* [Navi3x] Add fp16/int8 wmma conv forward instances (ROCm#746)

* fix wmma gemm int8; add grouped conv int8 example

* Add int8 gemm-bilinear instances

* compile sanity check unknown

* Sanity pass + clang-format

* add int8 conv profiler instances

* solve merge conflict

---------

Co-authored-by: zjing14 <[email protected]>
Co-authored-by: Chao Liu <[email protected]>

* Enable DPP8 GEMM on Navi3 (ROCm#892)

* Add codeowners for documentation (ROCm#902)

Co-authored-by: samjwu <[email protected]>

* Add new instances and support for small cases in DPP8 GEMM (ROCm#896)

* clean up the workspace after every stage (ROCm#909)

* Refactor f8_t, add bf8_t (ROCm#792)

* Refactor f8_t to add bf8_t

* Add check_err impl for f8_t

* Update fp8 test

* Format

* Revert the fix

* Update vector_type implementation

* Add bf8 test

* Add bf8, use BitInt types

* Add bf8 conversion methods

* Update type_convert for fp8/bf8

* Add check_err fp8/bf8 support

* Add subnorm fp8 tests

* Add subnorm bf8 tests

* Fix conversion

* Add bf8 cmake bindings

* Add macros to enable build with disabled fp8/bf8

* Remove is_native method

* Update flag combination for mixed precision instances

* Add more flag checks

* Add another flag to a client example

* Add type traits, decouple f8/bf8 casting

* Clean up

* Decouple fp8 and bf8 flags

* Remove more redundant flags

* Remove leftover comments

* Add a switch to build DL kernels and build them with staging compiler. (ROCm#907)

* enable building DL kernels with the daily staging compiler

* move the DL_KERNELS flag to another function

* fixed fp8 issues (ROCm#894)

* fixed fp8 init and reference gemm

* Update host_tensor_generator.hpp

* fixed convert

* fixed reference gemm

* fixed comments

* fixed comments

* fixed ci

* fixed computeType

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add grouped conv bwd weight dl instances and new layout (ROCm#897)

* Add grouped conv bwd weight dl instances and new layout

* Add M and N padding

* Remove todo comment

* Enable grouped conv fwd dl k,c=1 generic instance

* Comment fixes

* [Cmake] Set cmake default build type Release and path to /opt/rocm (ROCm#914)

* change the cmake update method (ROCm#918)

* Add fp16/fp8 support into Grouped gemm FixedNK (ROCm#874)

* move all arguments into device

* add b2c_tile_map

* add examples

* add SetDeviceKernelArgs

* dedicated fixed_nk solution

* init client api

* add grouped_gemm_bias example

* add an instance

* add instances

* formatting

* fixed cmake

* Update EnableCompilerWarnings.cmake

* Update cmake-ck-dev.sh

* clean; fixed comments

* fixed comment

* add instances for fp32 output

* add instances for fp32 output

* add fp32 out client example

* fixed CI

* init commit for kbatch

* add splitk gridwise

* format

* fixed

* clean deviceop

* clean code

* finish splitk

* fixed instances

* change m_loops to tile_loops

* add setkbatch

* clean code

* add splitK+bias

* add instances

* opt mk_nk instances

* clean examples

* fixed CI

* remove zero

* finished non-zero

* clean

* clean code

* optimized global_barrier

* fixed ci

* fixed CI

* instance and client

* removed AddBias

* format

* fixed CI

* fixed CI

* move 20_grouped_gemm to 21_grouped_gemm

* clean

* formatting

* clean

* clean

* fixed computeType

---------

Co-authored-by: Jing Zhang <[email protected]>

* Stylistic improvements for grouped convolution code

Remove unnecessary ignoring

Update test/grouped_convnd_bwd_weight/test_grouped_convnd_bwd_weight.cpp

* Add native conversions fp8<->fp32 (ROCm#908)

* Add native conversions

* Add bf8 conversions
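
The "bf8 conversions" entry above refers to the E5M2 format (5 exponent bits, 2 mantissa bits). A decode sketch under the same assumption as before — OCP-style layout with bias 15, infinities at the max exponent — noting again that the repo's actual `bf8_t` may be a FNUZ variant with a different bias and no infinities:

```python
def bf8_e5m2_decode(code: int) -> float:
    """Decode an 8-bit E5M2 ("bf8") code, assuming the OCP layout:
    1 sign bit, 5 exponent bits (bias 15), 2 mantissa bits."""
    sign = -1.0 if code & 0x80 else 1.0
    e = (code >> 2) & 0x1F
    m = code & 0x3
    if e == 0x1F:                       # max exponent: inf (m=0) or NaN
        return sign * float("inf") if m == 0 else float("nan")
    if e == 0:
        return sign * m * 2.0 ** -16    # subnormal, step 2^-16
    return sign * (1.0 + m / 4.0) * 2.0 ** (e - 15)
```

Compared with E4M3, E5M2 trades one mantissa bit for exponent range: its largest finite value is 57344 versus 448, at the cost of coarser relative precision.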

* Fix DL GEMM instances with too large vector size (ROCm#901)

* Fix vector lengths of DL GEMM instances with padding
* Add checks for correctness of vector lengths in DL GEMM

* fix the ckprofiler package build in a loop (ROCm#926)

* update to rocm5.7 by default (ROCm#925)

* update to rocm5.7 by default

* fix jenkinsfile syntax

* fix the building of the amd-stg-open compiler (ROCm#927)

* Refactoring cmake files to build data types separately. (ROCm#932)

* refactor cmake files for the tests

* refactor cmake files for examples

* fix cmake for gemm example

* fix the cmake file for all examples

* add splitting by data types in gemm_splitk instance header

* rename test to reflect only dl instances are used

* clean up CI workspace, update cmake for instances

* change the jenkinsfile syntax

* build all instances except DL on gfx11

* move workspace cleanup after stages

* clean up workspace after every stage

* isolate data types in grouped_conv_fwd header

* isolate dl instances for grouped_conv2d_fwd

* fix syntax

* fix cmake and batchnorm instances

* fix typo

* fix reduction instances

* fix grouped_conv headers

* fix syntax

* replace parsing logic for instances, replace bfp16 with bf16

* fix the client examples build

* clean up DTYPES from instances cmake files

* update the parsing logic in cmake files

* make an exception for reduction kernels

* update a few remaining cmake files to handle DTYPES

* fix syntax

* fix cmake conflicts

* replace f8 with fp8 test name

* resolve conflicts for dpp instances

* Update naming (ROCm#937)

* Add 3d grouped conv fwd wmma instances (ROCm#935)

* Add 3d grouped conv fwd wmma instances

* Refactor fwd conv tests

* Split wmma instances for each specialization

* Minor stylistic fixes

* Resolve some data type issues and cmake policy. (ROCm#940)

* split the types in gemm_bilinear instances, add condition to cmake policy

* fix syntax

* split the data types in batchnorm examples

* fix the batchnorm_bwd test

* fix types in the batchnorm_bwd test

* split ckProfiler gfx9 package into gfx90 and gfx94 (ROCm#946)

* Add fp8 gemm instances (ROCm#920)

* Add fp8 gemm instances

* Update instance naming

* Fixed Gemmv2r3 kpad (ROCm#938)

* added kpad support into v2r3

* add generic instances

* fixed comments

* fixed mnk padding

* Update device_batched_gemm_xdl.hpp

* fixed kpad

---------

Co-authored-by: Jing Zhang <[email protected]>

* Use lower case for ckprofiler package. (ROCm#948)

* split ckProfiler gfx9 package into gfx90 and gfx94

* use lower case for package names

* Add multiple A/B support (ROCm#906)

* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add column to image kernel (ROCm#930)

* Add column to image kernel

* Minor fixes for dtypes and client examples

* Disable tests for disabled dtypes

* Disable add instances functions for disabled data types

* Minor stylistic fixes

* Revert "Disable add instances functions for disabled data types"

This reverts commit 728b869.

* Instances reduction

* Add comments in device_column_to_image_impl

* Update changelog and Copyrights

* Improve changelog

* Handle type conversions to a const datatype (ROCm#944)

* Handle type conversions to a const datatype

* Review: Handle X being const data type as well

* Review: Remove typo

* Fix gemm_splitk test, add hip_check_error after kernel calls in kernel_launch. (ROCm#951)

* Added error check after kernel launch (ROCm#919)

Co-authored-by: Xiaodong Wang <[email protected]>
Co-authored-by: Xiaodong Wang <[email protected]>

* remove M=0 test cases for test_gemm_splitk

---------

Co-authored-by: Xiaodong Wang <[email protected]>
Co-authored-by: Xiaodong Wang <[email protected]>

* Add grouped convolution changes to changelog (ROCm#952)

* Add grouped convolution changes to changelog

* Fix 0.2.0 ck release rocm version

* Suggested CHANGELOG.md edits

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

---------

Co-authored-by: Lisa <[email protected]>

* Add grouped conv bwd data wmma (ROCm#950)

* Add grouped conv bwd data wmma

* Fix copyrights

* Add instances with smaller NPerBlock

* Update interface test

* Minor stylistic fixes

* Minor stylistic fixes

* Add support for mixed precision in contraction scale and bilinear (ROCm#936)

* Extract common functionality to separate files

* Reference contraction: Remove incorrect consts from type_converts

* Reference contraction: Add missing type_convert for dst value

* Reference contraction: Fix incorrect order of B matrix dimensions

* Add support for mixed precision in contraction scale and bilinear

* Move using statements from instances to a common file

* Move using statements from examples to a common file

* Fix the order of B matrix dimensions across examples and profiler

* Fix the computation of error threshold

* Make ComputeDataType an optional argument

* Include possible DataType -> ComputeDataType casting error in the threshold

* Remove commented code

* add gfx942 target to the daily ckprofiler package (ROCm#955)

* Contraction multi abd (ROCm#957)

* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

* init commit for contraction_multi_ABD

* add examples

* add examples of multiA and broadcast

* update example

* fixed comments

* Update cmake-ck-dev.sh

* Update cmake-ck-dev.sh

* Add comments into the example

---------

Co-authored-by: Jing Zhang <[email protected]>

* get rid of gfx900/906, set rocm5.7 as default (ROCm#958)

* Add fp8 @ bf8 gemm support and example (ROCm#933)

* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format

* add generic instances (ROCm#947)

Co-authored-by: Jing Zhang <[email protected]>

* Fixed contraction issues (ROCm#960)

* add missing ComputeType

* fixed

* Update cmake-ck-dev.sh

---------

Co-authored-by: Jing Zhang <[email protected]>

* changed test for grouped_gemm to be random (ROCm#959)

Co-authored-by: Jing Zhang <[email protected]>

* 3d grouped conv fwd with input/output fp16 and comp fp8 (ROCm#931)

* add f8 comp instance

* fixed

* fixed comments

* rename

* fixed dtype

* format

* fixed CI

* fixed ci

* add missing ComputeType

* fixed ci

* fixed

* Update cmake-ck-dev.sh

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add conv bwd weight fp16 comp bf8 fp8 op, instances and example (ROCm#945)

* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format

* Update naming (ROCm#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

---------

Co-authored-by: Jing Zhang <[email protected]>

* Grouped conv bwd data with fp16 input and bf8fp8 comp (ROCm#962)

* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format

* Update naming (ROCm#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

* added f16_bf8f8 instance and client example

* fixed mfma selector

* format

---------

Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>

* remove example 60 (ROCm#963)

Co-authored-by: Jing Zhang <[email protected]>

* Revert "Add support for mixed precision in contraction scale and bilinear" (ROCm#967)

* Revert "Add support for mixed precision in contraction scale and bilinear (ROCm#936)"

This reverts commit f074850.

* revert commits ROCm#957 and ROCm#960

* Replace CMake `return` from later CMake (ROCm#970)

* Fixed f8_gemm NaN (ROCm#975)

* workaround nan problem by changing output to fp16

* enable f8/bf8 gemm tests on MI200

* workaround f16 to f8 conversion

---------

Co-authored-by: Jing Zhang <[email protected]>

* Fix MNKPadding in gridwise_gemm_xdlops_v2r3 (ROCm#981)

* Grouped Gemm with looping over the tiles. (ROCm#788)

* Introduce LocalBlockToCTileMap.

* Change the signature of CalculateBottomIndex(), which no longer accepts
any arguments. The B2C map, already passed as an argument to the kernel's
Run function, computes the block's local ID outside the kernel entry
point (the __global__ function). The LocalB2C map stores the local block
ID as a member.

* Use LocalBlockToCTile map in device ops.

* First draft of tile loop work distribution.

* Fix typo.

* Simplify kernel arguments.

Calculate descriptors & B2C maps on the device.

* Use looping kernel.

* Fix B2C constructor.

* Fix Navi21 errors.

* Calculate tile start/end in device kernel.

* Change Run API to accept user provided workspace buffer.

* Add new line at EOF.

* Move Gemm KernelArguments to device op interface.

* Remove unused code.

* Update API.

* Launch grid size which is min of occupancy vs tile count

* Get back to use constant memory for gemm descriptors.

* Remove unused code.

* Add default virtual method implementation.

* Update comments to conform with doxygen style.

* Fix doc style and unused parameters.

* Add thread cluster lengths to kernel name.

* Remove old splitk impl and replace it with tile looping one.

* Modify instances.

* set KPerBlock to 64
* maximize vector load size wherever possible

* Fix instances cluster lengths.

* Change comment style.

* Use 128b store where possible in instances.

* Update test cases, since KPerBlock has doubled.

* Update output stream operator for Sequence.

* Add pipeline version to GroupedGEMM device op type string.

* Fix pipeline version type logging.

* Fix input tensors type after merge.

* Fix compiler error.

* Fix output stream operator for Pipeline version.

* Store using 128b.

* Set of instances with kpb 32/64

* Limit number of instances

* Remove commented out instances.

* Fix function name.

* Limit the number of instances.

Add pipeline version to the regular instances

* Change thr cluster layout for reading B tensor.

* disabled failed instances

---------

Co-authored-by: Adam Osewski <[email protected]>
Co-authored-by: zjing14 <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>

* Revert "Grouped Gemm with looping over the tiles. (ROCm#788)" (ROCm#982)

This reverts commit a4f72a3.

* simplified buffer_load/store (ROCm#971)

* simplified buffer_load/store

* add bfp8/fp8

* fixed

* fixed buffer_load

* fixed buffer_store

---------

Co-authored-by: Jing Zhang <[email protected]>

---------

Co-authored-by: zjing14 <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>
Co-authored-by: Bartlomiej Wroblewski <[email protected]>
Co-authored-by: Haocong WANG <[email protected]>
Co-authored-by: Chao Liu <[email protected]>
Co-authored-by: Sam Wu <[email protected]>
Co-authored-by: samjwu <[email protected]>
Co-authored-by: Illia Silin <[email protected]>
Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Bartłomiej Kocot <[email protected]>
Co-authored-by: Jun Liu <[email protected]>
Co-authored-by: Xiaodong Wang <[email protected]>
Co-authored-by: Xiaodong Wang <[email protected]>
Co-authored-by: Lisa <[email protected]>
Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Lauren Wrubleski <[email protected]>
Co-authored-by: Adam Osewski <[email protected]>
Co-authored-by: Adam Osewski <[email protected]>
asroy added a commit that referenced this pull request Dec 1, 2023
* fix syntax (#890)

* Add contribution guidelines to the documentation (#843)

Add contribution guidelines to the documentation

* set warnings as errors in doxygen (#864)

* Fixed fp8 gemm (#882)

* add generic instances; fixed initi with fp8

* fixed comment

---------

Co-authored-by: Jing Zhang <[email protected]>

* added padding of K into gemm_v2r3 (#887)

* added kpad support into v2r3

* add generic instances

* fixed comments

* fixed mnk padding

* Update device_batched_gemm_xdl.hpp

---------

Co-authored-by: Jing Zhang <[email protected]>

* Redesign the DPP8 GEMM kernel to use warp-wise component (#863)

* Redesign the DPP8 GEMM kernel to use warp-wise component

* Review: Improve error messages

* Review: Remove unnecessary empty lines

* Review: Fix M, N per thread names

* Review: Rename mfma_input_type to dpp_input_type

* Review: Fix tensor adaptor; remove unnecessary element

* Review: Remove calls to dpp_gemm's MakeCDescriptor

* Review: Add blockwise doc, change function names to include dimension names

* Review: Remove duplicated code; Move Block2CtileMap alias to the top of the file

* Review: Add __restrict__ keywords

* Review: Use MatrixPadder for padding A, B, C matrices

* Review: Remove hardcoded datatypes

* Review: Change names from FloatX to XDataType

* Review: Introduce AK0 and BK0 instead of a single K0

* Review: Remove construction of dpp_datatypes object

* Review: Rename DppInstrRunner to DppLanegroupGemm

* [Navi3x] Add fp16/int8 wmma conv forward instances (#746)

* fix wmma gemm int8; add grouped conv int8 example

* Add int8 gemm-bilinear instances

* compile sanity check unknown

* Sanity pass + clang-format

* add int8 conv profiler instances

* solve merge conflict

---------

Co-authored-by: zjing14 <[email protected]>
Co-authored-by: Chao Liu <[email protected]>

* Enable DPP8 GEMM on Navi3 (#892)

* Add codeowners for documentation (#902)

Co-authored-by: samjwu <[email protected]>

* Add new instances and support for small cases in DPP8 GEMM (#896)

* clean up the workspace after every stage (#909)

* Refactor f8_t, add bf8_t (#792)

* Refactor f8_t to add bf8_t

* Add check_err impl for f8_t

* Update fp8 test

* Format

* Revert the fix

* Update vector_type implementation

* Add bf8 test

* Add bf8, use BitInt types

* Add bf8 conversion methods

* Update type_convert for fp8/bf8

* Add check_err fp8/bf8 support

* Add subnorm fp8 tests

* Add subnorm bf8 tests

* Fix conversion

* Add bf8 cmake bindings

* Add macros to enable build with disabled fp8/bf8

* Remove is_native method

* Update flag combination for mixed precision instances

* Add more flag checks

* Add another flag to a client example

* Add type traits, decouple f8/bf8 casting

* Clean up

* Decouple fp8 and bf8 flags

* Remove more redundant flags

* Remove leftover comments

* Add a switch to build DL kernels and build them with staging compiler. (#907)

* enable building DL kernels with the daily staging compiler

* move the DL_KERNELS flag to another function

* fixed fp8 issues (#894)

* fixed fp8 init; and reference gemm

* Update host_tensor_generator.hpp

* fixed convert

* fixed reference gemm

* fixed comments

* fixed comments

* fixed ci

* fixed computeType

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add grouped conv bwd weight dl instances and new layout (#897)

* Add grouped conv bwd weight dl instances and new layout

* Add M and N padding

* Remove todo comment

* Enable grouped conv fwd dl k,c=1 generic instance

* Comment fixes

* [Cmake] Set cmake default build type Release and path to /opt/rocm (#914)

* change the cmake update method (#918)

* Add  fp16/fp8 support into Grouped gemm FixedNK (#874)

* move all arguments into device

* add b2c_tile_map

* add examples

* add SetDeviceKernelArgs

* dedicated fixed_nk solution

* init client api

* add grouped_gemm_bias example

* add a instance

* add instances

* formatting

* fixed cmake

* Update EnableCompilerWarnings.cmake

* Update cmake-ck-dev.sh

* clean; fixed comments

* fixed comment

* add instances for fp32 output

* add instances for fp32 output

* add fp32 out client example

* fixed CI

* init commit for kbatch

* add splitk gridwise

* format

* fixed

* clean deviceop

* clean code

* finish splitk

* fixed instances

* change m_loops to tile_loops

* add setkbatch

* clean code

* add splitK+bias

* add instances

* opt mk_nk instances

* clean examples

* fixed CI

* remove zero

* finished non-zero

* clean

* clean code

* optimized global_barrier

* fixed ci

* fixed CI

* instance and client

* removed AddBias

* format

* fixed CI

* fixed CI

* move 20_grouped_gemm to 21_grouped_gemm

* clean

* formatting

* clean

* clean

* fixed computeType

---------

Co-authored-by: Jing Zhang <[email protected]>

* Stylistic improvements for grouped convolution code

Remove unnecessary ignoring

Update test/grouped_convnd_bwd_weight/test_grouped_convnd_bwd_weight.cpp

* Add native conversions fp8<->fp32 (#908)

* Add native conversions

* Add bf8 conversions

* Fix DL GEMM instances with too large vector size (#901)

* Fix vector lengths of DL GEMM instances with padding
* Add checks for correctness of vector lenghts in DL GEMM

* fix the ckprofiler package build in a loop (#926)

* update to rocm5.7 by default (#925)

* update to rocm5.7 by default

* fix jenkinsfile syntax

* fix the building of the amd-stg-open compiler (#927)

* Refactoring cmake files to build data types separately. (#932)

* refactor cmake files for the tests

* refactor cmake files for examples

* fix cmake for gemm example

* fix the cmake file for all examples

* add splitting by data types in gemm_splitk instance header

* rename test to reflect only dl instances are used

* clean up CI workspace, update cmake for instances

* change the jenkinsfile syntax

* build all instances except DL on gfx11

* move workspace cleanup after stages

* clean up workspace after every stage

* isolate data types in grouped_conv_fwd header

* isolate dl instances for grouped_conv2d_fwd

* fix syntax

* fix cmake and batchnorm instances

* fix typo

* fix reduction instances

* fix grouped_conv headers

* fix syntax

* replace parsing logic for instances, replace bfp16 with bf16

* fix the client examples build

* clean up DTYPES from instances cmake files

* update the parsing logic in cmake files

* make an exception for reduction kernels

* update few remaining cmake files to handle DTYPES

* fix syntax

* fix cmake conflicts

* replace f8 with fp8 test name

* resolve conflicts for dpp instances

* Update naming (#937)

* Add 3d grouped conv fwd wmma instances (#935)

* Add 3d grouped conv fwd wmma instances

* Refactor fwd conv tests

* Split wmma instances for each specialization

* Minor stylistic fixes

* Resolve some data type issues and cmake policy. (#940)

* split the types in gemm_bilinear instances, add condition to cmake policy

* fix syntax

* split the data types in batchnorm examples

* fix the batchnorm_bwd test

* fix types in the batchnorm_bwd test

* split ckProfiler gfx9 package into gfx90 and gfx94 (#946)

* Add fp8 gemm instances (#920)

* Add fp8 gemm instances

* Update instance naming

* Fixed Gemmv2r3 kpad (#938)

* added kpad support into v2r3

* add generic instances

* fixed comments

* fixed mnk padding

* Update device_batched_gemm_xdl.hpp

* fixed kpad

---------

Co-authored-by: Jing Zhang <[email protected]>

* Use lower case for ckprofiler package. (#948)

* split ckProfiler gfx9 package into gfx90 and gfx94

* use lower case for package names

* Add multiple A/B support (#906)

* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add column to image kernel (#930)

* Add column to image kernel

* Minor fixes for dtypes and client examples

* Disable tests for disabled dtypes

* Disable add instances functions for disabled data types

* Minor stylistic fixes

* Revert "Disable add instances functions for disabled data types"

This reverts commit 728b869.

* Instances reduction

* Add comments in device_column_to_image_impl

* Update changelog and Copyrights

* Improve changelog

* Handle type conversions to a const datatype (#944)

* Handle type conversions to a const datatype

* Review: Handle X being const data type as well

* Review: Remove typo

* Fix gemm_splitk test, add hip_check_error after kernel calls in kernel_launch. (#951)

* Added error check after kernel launch (#919)

Co-authored-by: Xiaodong Wang <[email protected]>
Co-authored-by: Xiaodong Wang <[email protected]>

* remove M=0 test cases for test_gemm_splitk

---------

Co-authored-by: Xiaodong Wang <[email protected]>
Co-authored-by: Xiaodong Wang <[email protected]>

* Add grouped convolution changes to changelog (#952)

* Add grouped convolution changes to changelog

* Fix 0.2.0 ck release rocm version

* Suggested CHANGELOG.md edits

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

---------

Co-authored-by: Lisa <[email protected]>

* Add grouped conv bwd data wmma (#950)

* Add grouped conv bwd data wmma

* Fix copyrights

* Add instances with smaller NPerBlock

* Update interface test

* Minor stylistic fixes

* Minor stylistic fixes

* Add support for mixed precision in contraction scale and bilinear (#936)

* Extract common functionality to separate files

* Reference contraction: Remove incorrect consts from type_converts

* Reference contraction: Add missing type_convert for dst value

* Reference contraction: Fix incorrect order of B matrix dimensions

* Add support for mixed precision in contraction scale and bilinear

* Move using statements from instances to a common file

* Move using statements from examples to a common file

* Fix the order of B matrix dimensions across examples and profiler

* Fix the computation of error threshold

* Make ComputeDataType an optional argument

* Include possible DataType -> ComputeDataType casting error in the threshold

* Remove commented code

* add gfx942 target to the daily ckprofiler package (#955)

* Contraction multi abd (#957)

* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

* init commit for contraction_multi_ABD

* add examples

* add examples of multiA and broadcast

* update example

* fixed comments

* Update cmake-ck-dev.sh

* Update cmake-ck-dev.sh

* Add comments into the example

---------

Co-authored-by: Jing Zhang <[email protected]>

* get rid of gfx900/906, set rocm5.7 as default (#958)

* Add fp8 @ bf8 gemm support and example (#933)

* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Uppdate naming to f8->fp8

* Update naming

* Format

* add generic instances (#947)

Co-authored-by: Jing Zhang <[email protected]>

* Fixed contraction issues (#960)

* add missing ComputeType

* fixed

* Update cmake-ck-dev.sh

---------

Co-authored-by: Jing Zhang <[email protected]>

* changed test for grouped_gemm to be random (#959)

Co-authored-by: Jing Zhang <[email protected]>

* 3d grouped conv fwd with input/output fp16 and comp fp8 (#931)

* add f8 comp instance

* fixed

* fixed comments

* rename

* fixed dtype

* format

* fixed CI

* fixed ci

* add missing ComputeType

* fixed cit

* fixed

* Update cmake-ck-dev.sh

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add conv bwd weight fp16 comp bf8 fp8 op, instances and example (#945)

* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Uppdate naming to f8->fp8

* Update naming

* Format

* Update naming (#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

---------

Co-authored-by: Jing Zhang <[email protected]>

* Grouped conv bwd data with fp16 input and bf8fp8 comp (#962)

* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format

* Update naming (#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

* added instance of f16_bf8f8, and client example

* fixed mfma selector

* format

---------

Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>

* remove example 60 (#963)

Co-authored-by: Jing Zhang <[email protected]>

* Revert "Add support for mixed precision in contraction scale and bilinear" (#967)

* Revert "Add support for mixed precision in contraction scale and bilinear (#936)"

This reverts commit f074850.

* revert commits #957 and #960

* Replace CMake `return` from later CMake (#970)

* Fixed f8_gemm NaN (#975)

* workaround nan problem by changing output to fp16

* enable f8/bf8 gemm tests on MI200

* workaround f16 to f8 conversion

---------

Co-authored-by: Jing Zhang <[email protected]>

* Fix MNKPadding in gridwise_gemm_xdlops_v2r3 (#981)

* Grouped Gemm with looping over the tiles. (#788)

* Introduce LocalBlockToCTileMap.

* Change the signature of CalculateBottomIndex(), which no longer accepts
any argument. The B2C map, already passed as an argument to the kernel's
Run function, computes the block's local id outside the kernel at the
__global__ entry point. The LocalB2C map stores the local block ID as a
member.

* Use LocalBlockToCTile map in device ops.

* First draft of tile loop work distribution.

* Fix typo.

* Simplify kernel arguments.

Calculate descriptors & B2C maps on the device.

* Use looping kernel.

* Fix B2C constructor.

* Fix Navi21 errors.

* Calculate tile start/end in device kernel.

* Change Run API to accept user provided workspace buffer.

* Add new line at EOF.

* Move Gemm KernelArguments to device op interface.

* Remove unused code.

* Update API.

* Launch grid size which is min of occupancy vs tile count

* Get back to use constant memory for gemm descriptors.

* Remove unused code.

* Add default virtual method implementation.

* Update comments to conform with doxygen style.

* Fix doc style and unused parameters.

* Add thread cluster lengths to kernel name.

* Remove old splitk impl and replace it with tile looping one.

* Modify instances.

* set KPerBlock to 64
* maximize vector load size wherever possible.

* Fix instances cluster lengths.

* Change comment style.

* Use 128b store where possible in instances.

* Update test cases, since KPerBlock has doubled.

* Update output stream operator for Sequence.

* Add pipeline version to GroupedGEMM device op type string.

* Fix pipeline version type logging.

* Fix input tensors type after merge.

* Fix compiler error.

* Fix output stream operator for Pipeline version.

* Store using 128b.

* Set of instances with kpb 32/64

* Limit number of instances

* Remove commented out instances.

* Fix function name.

* Limit the number of instances.

Add pipeline version to the regular instances

* Change thr cluster layout for reading B tensor.

* disabled failed instances

---------

Co-authored-by: Adam Osewski <[email protected]>
Co-authored-by: zjing14 <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>
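
The tile-looping scheme described above ("Launch grid size which is min of occupancy vs tile count") can be modeled with a short sketch: every output tile of every GEMM in the group is flattened into one list, and block `b` loops over tiles `b, b + grid_size, ...`. This is a minimal Python illustration with hypothetical names, not CK's actual C++ device code:

```python
def assign_tiles(grid_size, tile_counts):
    """Distribute every tile of every GEMM in the group over a fixed grid:
    block b processes tiles b, b + grid_size, b + 2 * grid_size, ...
    so the grid can be sized to occupancy instead of total tile count."""
    # Flatten (gemm_idx, tile_idx) pairs in launch order.
    tiles = [(g, t) for g, n in enumerate(tile_counts) for t in range(n)]
    work = {b: [] for b in range(grid_size)}
    for i, tile in enumerate(tiles):
        work[i % grid_size].append(tile)
    return work

# Two GEMMs with 3 and 5 output tiles, distributed over a 4-block grid.
work = assign_tiles(4, [3, 5])
```

With more tiles than blocks, each block simply iterates over its strided subset, which is why the kernel Run function loops over a tile start/end range instead of handling exactly one tile.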

* Revert "Grouped Gemm with looping over the tiles. (#788)" (#982)

This reverts commit a4f72a3.

* simplified buffer_load/store (#971)

* simplified buffer_load/store

* add bfp8/fp8

* fixed

* fixed buffer_load

* fixed buffer_store

---------

Co-authored-by: Jing Zhang <[email protected]>

* Bump gitpython from 3.1.31 to 3.1.35 in /docs/sphinx (#898)

Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.35.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](gitpython-developers/GitPython@3.1.31...3.1.35)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* add vector_type support into thread_copy_v3r1 (#969)

* add vector_type support into thread_copy_v3r1

* remove unnecessary type_convert

* fixed datatype

* fixed dataType

* changed API with is_packx_invocable

* changed example

* add missing cmake file

* fixed ci

* fixed cmake

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add splitk gemm fp16 @ fp16 with fp8 compute instances (#983)

* Add ComputeType

* Update for compatibility

* Add instances

* Update profiler api

* Add hipTensor build and test to CK CI. (#990)

* add a hipTensor test to CI

* use jenkins git plugin

* change hipTensor folder location in CI

* change the git method for hipTensor

* run tests using ctest

* check the hipTensor contents

* only build hipTensor on MI100/200

* pull hipTensor as zip archive

* fix jenkins syntax

* add path to the CK installation

* combine build commands into one shell

* change jenkins syntax for CK installer path

* try different syntax

* allow unzip overwrite

* fix jenkins file syntax

* remove any old versions of hipTensor before building

* add option to select hipTensor branch for testing

* workaround with float (#992)

Co-authored-by: Jing Zhang <[email protected]>

* Add grouped conv bwd weight wmma (#985)

* Add grouped conv bwd weight wmma

* Update README, changelog, profiler

* Minor fixes

* Fix grouped conv bwd wei dl kernel

* Minor fixes

* Minor stylistic fixes

* added ab_elementwise_op support into splitK Gemm (#956)

* add ab_elementwise

* fixed ci

* fixed a merge issue

* fixed pr comments

* fixed a conflict

* remove 61_example

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add contraction_multi_abd (#972)

* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

* init commit for contraction_multi_ABD

* add examples

* add examples of multiA and broadcast

* update example

* fixed comments

* Update cmake-ck-dev.sh

* Update cmake-ck-dev.sh

* Add comments into the example

* Update CMakeLists.txt

---------

Co-authored-by: Jing Zhang <[email protected]>

* Clean DTYPES conditions in CMake (#974)

* Add a condition to build fp8 instances

* simplified buffer_load/store

* add bfp8/fp8

* fixed

* remove all f8/bf8 condition include folder

* fixed cmake conditions

* fixed DTYPES=fp16/bfp16

* fix

* fixed buffer_load

* fixed buffer_store

* fix

* clean example cmake files

* fixed ci

* fixed ci

---------

Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>

* fixed math-ci error; suspend a warning (#996)

Co-authored-by: Jing Zhang <[email protected]>

* Layernorm and groupnorm support to save mean and inverse std in forward (#929)

* save mean and inverse std in normalization

* Save mean and inverse std in splitK

* Vector save mean and inv std

* Modify instance for save mean and std

* simplify the layernorm example

* Save mean and std in groupnorm example

* Save mean and inv std in ckProfiler and test

* Remove compute data type from base class

* Save mean and inv std in client example

* Add changelog

* clang format

* Fix compile error

* Refine naming

* Avoid error in bf16

* revert changelog
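
As a reference for what the forward kernels above now save, here is a minimal Python sketch of layernorm over the last axis that also returns the mean and inverse standard deviation reused by the backward pass. It is illustrative only (the library kernels are C++/HIP and vectorized); the function name and list-based math are assumptions for readability:

```python
def layernorm_fwd(x, gamma, beta, eps=1e-5):
    """Normalize x, scale/shift with gamma/beta, and also return the
    mean and inverse std so backward need not recompute them."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    inv_std = 1.0 / (var + eps) ** 0.5
    y = [(v - mean) * inv_std * g + b for v, g, b in zip(x, gamma, beta)]
    return y, mean, inv_std

y, mean, inv_std = layernorm_fwd([1.0, 2.0, 3.0], [1.0] * 3, [0.0] * 3)
```

Saving `mean` and `inv_std` trades a small amount of memory for skipping a full reduction pass in the backward kernels.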

* Change 1d,2d,... to 1D,2D,... (#997)

* Avoid force setting ENABLE_PIPELINE_V2_OPT to OFF (#961)

* Avoid force setting ENABLE_PIPELINE_V2_OPT to OFF

* Remove compilation option variable MAX_ILP_OPTS

* Extend available elementwise operations with conv examples (#995)

* Extend available elementwise operations with conv examples

* Fixes

* Remove not needed convert

* Update CMakeFile and dir name

* Misc fixes  (#994)

* reinterpret_cast to const char* in dumpBufferToFile to be compatible with both const and non-const input pointers

* Add seed input to GeneratorTensor_4 for normal_distribution generator

* Add GetTypeString() for DeviceElementwiseImpl

* Add HIP_CHECK_ERROR macro

* Fix the DL kernel issues on Navi3x. (#998)

* apply the patch for dl kernels on gfx11

* build DL kernels on navi32 CI

* Fix bf8 conversion issues (#1003)

* Fix the conversion

* Add bf8 functionality

* Enable example on MI200 as well
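
For context on the bf8 conversion fixes above, a minimal Python decoder for the IEEE-style e5m2 ("bf8") layout — 1 sign, 5 exponent, 2 mantissa bits, exponent bias 15. Note this assumes the IEEE-style variant; the gfx94x "fnuz" encodings differ slightly (no infinities, a single NaN), so treat this as a sketch rather than CK's exact conversion:

```python
def decode_e5m2(byte):
    """Decode an 8-bit e5m2 ('bf8') value into a Python float."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 2) & 0x1F
    mant = byte & 0x3
    if exp == 0:                 # subnormal: no implicit leading 1
        return sign * (mant / 4) * 2.0 ** (1 - 15)
    if exp == 0x1F:              # inf/NaN band in the IEEE-style layout
        return sign * float("inf") if mant == 0 else float("nan")
    return sign * (1 + mant / 4) * 2.0 ** (exp - 15)
```

For example, `0x3C` (exponent field 15, zero mantissa) decodes to 1.0, and the largest finite value, `0x7B`, decodes to 57344.0.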

* Fix cmake dtype check (#989)

* Fix instances dtype check

* Fix source dtypes selector for examples and tests

* Sync with new cmakefile changes

* Remove not needed ifdefs

* Remove not needed ifdefs

* Enabled padding for regular gemm (#1004)

* add mnk padding for fp8

* add padding for row_col layout

* added padding for fp32

---------

Co-authored-by: Jing Zhang <[email protected]>

* Bump sphinxcontrib-bibtex from 2.5.0 to 2.6.1 in /docs/sphinx (#871)

Bumps [sphinxcontrib-bibtex](https://github.com/mcmtroffaes/sphinxcontrib-bibtex) from 2.5.0 to 2.6.1.
- [Changelog](https://github.com/mcmtroffaes/sphinxcontrib-bibtex/blob/develop/CHANGELOG.rst)
- [Commits](mcmtroffaes/sphinxcontrib-bibtex@2.5.0...2.6.1)

---
updated-dependencies:
- dependency-name: sphinxcontrib-bibtex
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sam Wu <[email protected]>

* Fix the fp8 gemm for large tensors on MI300. (#1011)

* Fix the fp8 conversion

* Try clipping value before conversion

* Fix return

* Simplify with a const

* reduce the gemm input tensor values to reduce round-off error

* replace if-else with lambda

* fix syntax

---------

Co-authored-by: Rostyslav Geyyer <[email protected]>
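
The large-tensor fix above clips values into the representable fp8 range before conversion so that overflow saturates instead of producing inf/NaN. A sketch of that saturation step — the 448 maximum assumes the IEEE-style e4m3 format; the fnuz variant used on some architectures tops out at 240, so the constant here is an assumption:

```python
E4M3_MAX = 448.0  # max finite e4m3 value (IEEE-style); fnuz variant: 240.0

def clamp_for_fp8(x, fp8_max=E4M3_MAX):
    """Saturate x into [-fp8_max, fp8_max] so large intermediate values
    convert to +/-fp8_max rather than overflowing to inf/NaN."""
    return max(-fp8_max, min(fp8_max, x))
```

In-range values pass through untouched, so the clamp only changes behavior for inputs that fp8 cannot represent anyway.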

* Enable sccache in the default docker and CI. (#1009)

* replace ccache with sccache, pin package versions

* put ccache back temporarily to avoid breaking other CI jobs

* add sccache_wrapper.sh script

* fix the package version syntax

* fix the pymysql package issue

* run sccache_wrapper before build if ccache server found

* set the paths before calling the sccache_wrapper

* use /tmp instead of /usr/local for cache

* try using sccache --start-server instead of wrapper

* try using redis server with sccache

* define SCCACHE_REDIS

* add redis and ping packages, and redis port

* use the new sccache redis server

* do not use sccache with staging compiler

* fix the condition syntax

* add stunnel to redis

* add tunnel verification

* separate caches for different architectures

* fix syntax for the cache tag

* use double brackets for conditions

* add bash line to the script

* add a switch for sccache and only use it in build stage

* run check_host function when enabling sccache

* fix the invocation tags for sccache

* fix groovy syntax

* set the invocation tag in groovy

* disable sccache in clang-format stage

* try another syntax for invocation tags

* use local sccache server if can't connect to redis

* fix script syntax

* update README

* refresh readme

* readme updates

* remove the timing and verification caveat from readme

---------

Co-authored-by: Lisa Delaney <[email protected]>

* Add support for groups in Img2Col/Col2Img (#1007)

* Add support for groups in Img2Col/Col2Img

* Fix interface test

* Fix interface test G to N

* Improve performance

* Change gemm layout to 3d

* Fixes

* Enable gfx941 & gfx942 support for DeviceGemmXdl<> device op (#1017)

* Enable gfx942 support for DeviceGemmXdl<> device op

* Enable gfx941 support for DeviceGemmXdl<> device op

* Disable the SLP vectorizer to prevent unnecessary wait (#1008)

* Disable the SLP vectorizer to prevent unnecessary wait

* Add comment to the reason of adding flag

* Fix wording

* handle the exception when cannot connect to redis server (#1019)

* Add ScaleAddScaleAddRelu post op for conv fwd (#1006)

* Add ScaleAddScaleAddRelu post op for conv fwd

* Fixes

* Fix instance file name

* Minor fix

* Bump rocm-docs-core from 0.24.0 to 0.26.0 in /docs/sphinx (#987)

Bumps [rocm-docs-core](https://github.com/RadeonOpenCompute/rocm-docs-core) from 0.24.0 to 0.26.0.
- [Release notes](https://github.com/RadeonOpenCompute/rocm-docs-core/releases)
- [Changelog](https://github.com/RadeonOpenCompute/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](ROCm/rocm-docs-core@v0.24.0...v0.26.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Add support for mixed precision in contraction scale and bilinear (#973)

* Add support for mixed precision in contraction scale and bilinear (#936)

* Extract common functionality to separate files

* Reference contraction: Remove incorrect consts from type_converts

* Reference contraction: Add missing type_convert for dst value

* Reference contraction: Fix incorrect order of B matrix dimensions

* Add support for mixed precision in contraction scale and bilinear

* Move using statements from instances to a common file

* Move using statements from examples to a common file

* Fix the order of B matrix dimensions across examples and profiler

* Fix the computation of error threshold

* Make ComputeDataType an optional argument

* Include possible DataType -> ComputeDataType casting error in the threshold

* Remove commented code

* Make the ComputeDataType an optional argument in instance

---------

Co-authored-by: Illia Silin <[email protected]>

* Add missing ComputeDatatype in contraction_multi_ABD_xdl_fp16 (#1024)

* switch the hipTensor testing from mainline to develop branch (#1025)

* Add compute type check for convolution instances (#1015)

* add compute type check for fp16 in forward convolution instances

* Add compute type check for default compute types

---------

Co-authored-by: Bartlomiej Kocot <[email protected]>

* Add Gemm instances for performance improvement (#1018)

* improve kpad

* more tuning parameters

* f16_f8_fp16

* cut test time

* add f16_f8_fp16

* add f16_f8_f16

* testing instances for skinny cases

* format

* clean

* add fp16_f8_fp16

* clang-format

* add grouped gemm instances

* fixed profile grouped_gemm

* clean

* clean

* clean

* clean

* clean

* add missing instance func

* fixed inferface

---------

Co-authored-by: Jing Zhang <[email protected]>
Co-authored-by: root <[email protected]>

* Support fp64 contraction on gfx94x. (#1029)

* enable contraction fp64 on gfx94*

* fix the logic

* Layernorm4d (#1022)

* Rename folder

* Add layernorm 4d fwd example

* Rename original layernorm example

* Add layernorm 4d f16  test

* Add layernorm4d_fwd client example

* Support layernorm4D in ckProfiler

* Rename groupnorm to groupnorm fwd in example

* Rename layernorm and group fwd in test

* Rename normalization to normalization_fwd (instances)

* Add fwd to DeviceNormalization

* Rename external api header

* Rename folder, because we can also add bwd in this folder

* Add fwd in layernorm and groupnorm (profiler)

* Fix compile error

---------

Co-authored-by: Po Yen Chen <[email protected]>

* Transpose 3d (#984)

* added working example for 5D input using 1D kernel

* example with 5D input tensor and 2d kernel - not working: issues with arguments

* added updated version of 3d device op - changed descriptors/dims

* added example file to check kernel

* fixed descriptor and isSupportedArgument stride problem

* added and modified kernel for 3d - updated tids/loop

* adding some more 5d example files

* fixed some issues

* changes made for testing

* working version: fixed error in stride for A, still a bit inefficient

* cleaned up formatting/comments

* updating formatting

* more formatting fixes

* fixing cmake, adding back gpu targets in cmake script

* adding client example

* added instances for client example

* fixed errors in client example

* implemented client ex with device_elementwise.hpp and device_elementwise_3d_impl.hpp

* removed extra files

* minor formatting and naming fixes

* adding test files and profiler

* fixing minor error

* minor fix

* removed unnecessary comments, renamed files

* updated instance list for client example, added different layout example

* removing instances

* fixed error in instance generation

* remove comments

* update profiler and client example tensor layouts

* fixed errors in test/profiler

* updated vector dim access to enable vector load

* updated test/profiler files

* updated example with 1d kernel

* updating profiler

* renamed files

---------

Co-authored-by: Jing Zhang <[email protected]>

* add linker script to QA builds (#1030)

* Backward of gamma and beta for layernorm and groupnorm (#1013)

* Add layernorm backward reference code

* Add groupnorm backward reference code

* Add example

* clang format

* Fix bug in reference layernorm and groupnorm

* Fix naming

* Refine naming

* Add device op for normalization bwd gamma and beta

* Refine template parameter

* Add bwd gamma & beta of kernel

* 1. Add groupnorm example
2. Refine layernorm naming

* Narrow down the static check for performance

* Refine variable name

* Support multi AB for grouped conv fwd xdl (#1027)

* Support multi AB for grouped conv fwd xdl

* Add instances

* Add client example

* Add example

* Add interface test

* Minor fixes

Minor fixes

Minor fixes

* Comment fixes

* Fixes

* Reference fix

* Test xdl fixes

* Improve multi_ab interface test

* add more instances for bfp16 gemm (#1036)

* add more instances for bfp16

* reduce the gemm input values to prevent round-off errors

---------

Co-authored-by: Jing Zhang <[email protected]>
Co-authored-by: illsilin <[email protected]>

* Hip tensor permute (#1002)

* adding files for F32 example

* adding functioning implementation with scalar multiplication and unary operator support

* added fp 16 type check in unary square

* updating scalar multiplication as an operator

* functioning version with scalar operator

* changing strides for col major

* updated column major implementation

* working column major implementation

* cleaned up comments, rearranged/renamed files

* Add conv bwd weight client example (#1005)

* Add conv bwd weight client example

* Update instance selector

* Fake the conversion

* Bring the conversion back

* Introduce multiABD api and deprecate multiD (#1035)

* Introduce multiABD api and deprecate multiD

* Replace multiD with multiABD

* Mark structures as deprecated

* Change doxygen deprecated to note to avoid warnings

* Fix check for conv Fwd Filter1x1Pad0 (#1040)

* Fix check for conv Fwd Filter1x1Pad0

* Fix check for conv Fwd Filter1x1Pad0

* Log CDEBlockTransferScalarPerVector_NPerBlock in conv fwd multiD xdl (#1042)

* Log CDEBlockTransferScalarPerVector_NPerBlock in conv_fwd_multi_d_xdl implementation

* Log CDEBlockTransferScalarPerVector_NPerBlock in conv fwd multiD xdl

* Bump rocm-docs-core from 0.26.0 to 0.27.0 in /docs/sphinx (#1023)

Bumps [rocm-docs-core](https://github.com/RadeonOpenCompute/rocm-docs-core) from 0.26.0 to 0.27.0.
- [Release notes](https://github.com/RadeonOpenCompute/rocm-docs-core/releases)
- [Changelog](https://github.com/RadeonOpenCompute/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](ROCm/rocm-docs-core@v0.26.0...v0.27.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* [Hotfix] Remove unsed profile_transpose.cpp (#1046)

* Improve 4k gemm perf (#1047)

* improve 4k gemm perf

* add f8 instances

* format

---------

Co-authored-by: Jing Zhang <[email protected]>

* Add basic support for direct loads from global to LDS (#999)

* Add basic support for direct loads from global to LDS

* Clean the code and comments

* Add support for fp16

* Add comments

* Add check for thread cluster lengths

* Align non-direct-load fp16 example

* Small fixes

* Extend IsSupported to check for supported GPU gens

* Build examples only on the supported HW

* Do not throw when instance not supported in 04 example

* Review: Apply review suggestions

* Review: small fix

* Review: small fix

* Fix cluster length arrange order in fp16 GEMM example (#1055)

* Add missing check for K padding in XDL GEMM (#1056)

* Switch default f8 conversion to stochastic rounding (#1048)

* Switch default f8 conversion to stochastic rounding

* Refactor f8-related type_converts

* Add an element-wise op

* Split the static library into several files. (#1044)

* split the static library into several files

* update lib paths and fix client example

* do not use device_mha_operations for client examples

* use appropriate libs to link to client examples

* remove the gpu/transpose path from the list

* try fixing client examples 3,4,9

* add necessary libs for client examples

* fix the layernorm client example

* fix the client examples 23 and 24

* fix typo

* add interface library and refresh clang format

* recover default niter (#1064)

* Disable transpose device op for MI300 (#1050)

* added working example for 5D input using 1D kernel

* example with 5D input tensor and 2d kernel - not working: issues with arguments

* added updated version of 3d device op - changed descriptors/dims

* added example file to check kernel

* fixed descriptor and isSupportedArgument stride problem

* added and modified kernel for 3d - updated tids/loop

* adding some more 5d example files

* fixed some issues

* changes made for testing

* working version: fixed error in stride for A, still a bit inefficient

* cleaned up formatting/comments

* updating formatting

* more formatting fixes

* fixing cmake, adding back gpu targets in cmake script

* adding client example

* added instances for client example

* fixed errors in client example

* implemented client ex with device_elementwise.hpp and device_elementwise_3d_impl.hpp

* removed extra files

* minor formatting and naming fixes

* adding test files and profiler

* fixing minor error

* minor fix

* removed unnecessary comments, renamed files

* updated instance list for client example, added different layout example

* removing instances

* fixed error in instance generation

* remove comments

* update profiler and client example tensor layouts

* fixed errors in test/profiler

* updated vector dim access to enable vector load

* updated test/profiler files

* updated example with 1d kernel

* updating profiler

* renamed files

* disabled device op for MI300

* skip elementwise_permute_2d on gfx94x

* Update CMakeLists.txt

* fixing CMake - disabling some GPU targets

---------

Co-authored-by: Jing Zhang <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>
Co-authored-by: zjing14 <[email protected]>

* Introduce wrapper for layout (#1054)

* Introduce wrapper for layout

* Extend functionality

* Fix for getLength

* Comment fixes

* Add comments and remove not needed getters

* update cmake

* fix bug

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Illia Silin <[email protected]>
Co-authored-by: Bartlomiej Wroblewski <[email protected]>
Co-authored-by: zjing14 <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>
Co-authored-by: Haocong WANG <[email protected]>
Co-authored-by: Sam Wu <[email protected]>
Co-authored-by: samjwu <[email protected]>
Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Bartłomiej Kocot <[email protected]>
Co-authored-by: Jun Liu <[email protected]>
Co-authored-by: Xiaodong Wang <[email protected]>
Co-authored-by: Xiaodong Wang <[email protected]>
Co-authored-by: Lisa <[email protected]>
Co-authored-by: Rostyslav Geyyer <[email protected]>
Co-authored-by: Lauren Wrubleski <[email protected]>
Co-authored-by: Adam Osewski <[email protected]>
Co-authored-by: Adam Osewski <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: rocking <[email protected]>
Co-authored-by: Po Yen Chen <[email protected]>
Co-authored-by: Qianfeng <[email protected]>
Co-authored-by: Lisa Delaney <[email protected]>
Co-authored-by: Daming Feng <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: arai713 <[email protected]>
Co-authored-by: illsilin <[email protected]>
Co-authored-by: Jing Zhang <[email protected]>
@illsilin illsilin deleted the lwpck-911 branch December 8, 2023 16:00