[MODULE] Enable OpenCL and CUDA Modules #53
Merged
+476 −335
Conversation
icemelon approved these changes on Feb 26, 2017
lgtm
tqchen pushed a commit to tqchen/tvm that referenced this pull request on May 26, 2018
tqchen pushed a commit to tqchen/tvm that referenced this pull request on Jul 6, 2018
sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request on Aug 8, 2018
jroesch pushed a commit to jroesch/tvm that referenced this pull request on Aug 29, 2018
* First pass at unifying type IDs and type quantifiers
* Factor out shared TypeVar case
* Clean up indices in quantifier case
* Ensure TypeUnifier -> operator is not const as well
* Add 'unify' convenience method on type unifier
* Replace calls to visitor with the convenience method
* Make the unifier and unionfind ordinary nodes, not value nodes
* TypeIds should only unify with TypeIds, no need for id_map
* Create type subst visitor
* Easy enough to just make subst a method on unifier, reduces complexity
* Added back eq_map to printer for unifier
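The unifier described in this commit is essentially a union-find over type IDs. Below is a minimal, hypothetical C sketch of that idea (not the actual TVM code): each type variable is an integer ID, `find` returns the representative of its equivalence class with path compression, and `unify` merges two classes.

```c
#include <assert.h>

#define MAX_TYPE_VARS 128

/* parent[i] == i means type variable i is its own representative. */
static int parent[MAX_TYPE_VARS];

static void init_type_vars(int n) {
    for (int i = 0; i < n; ++i) parent[i] = i;
}

/* Find the representative of a type variable, with path compression. */
static int find(int id) {
    if (parent[id] != id) parent[id] = find(parent[id]);
    return parent[id];
}

/* Unify two type variables: afterwards both classes share one representative. */
static void unify(int a, int b) {
    int ra = find(a), rb = find(b);
    if (ra != rb) parent[ra] = rb;
}

int main(void) {
    init_type_vars(4);
    unify(0, 1);                 /* t0 ~ t1 */
    unify(1, 2);                 /* t1 ~ t2, so t0 ~ t2 transitively */
    assert(find(0) == find(2));  /* same equivalence class */
    assert(find(0) != find(3));  /* t3 stays independent */
    return 0;
}
```

With this structure, substitution can map every type ID to its representative before comparing or printing types, which is roughly the role of the subst-on-unifier change listed above.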
zxy844288792 pushed a commit to zxy844288792/tvm that referenced this pull request on Nov 13, 2019
* [relay][vm] Separate VM runtime with executable (apache#4100)
* [relay][vm] Separate VM runtime with executable
* Address comments
* move ctx back to vm
* make only vm related fields and methods protected
* integrate serialization/deserialization to executable
* create stream
* [Relay][Frontend][TF] Add tensor array ops (apache#3798)
* [Relay][Frontend][TF] Add tensor array ops
* rename
* delete test
* Move utility function
* Refactor
* fix tensor array ops
* fix test
* fix rebase
* Fix serializer bug
* Improve tf convert name lookup to use prelude api
* Fix lint
* Fix test
* Fix typo (apache#4144)
* [CI] Pin NNPack pthreadtools version (apache#4152)
* [QNN][TFLite] Parsing QNN Add op. Adding MobilenetV2. (apache#4142)
* Add lift_if_then_else pass (apache#3865)
* Add LiftIfThenElse pass
* Add more comments
* Rename and refactor
* Add description for internal data structure
* Rename a test
* Minor change
* Address comments
* Improve update_for
* [CI] Update cpu docker (apache#4153)
* [Refactor] Rename Datatype to ADT (apache#4156) We think it will reduce the confusion with the meaning. https://discuss.tvm.ai/t/discuss-consider-rename-vm-datatype/4339
* [Runtime] Enable option to use OpenMP thread pool (apache#4089)
* [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol. (apache#4161)
* [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol. This PR removes the original node system and makes Node a subclass of Object. This is a major refactor towards a better unified runtime object system. List of changes in the refactor:
  - We now hide the data_ field; use Downcast explicitly to get a sub-class object.
  - Removed the node system FFI in python.
  - Removed the node C API; instead use PackedFunc for list and get attrs.
  - Change relay::Op::set_attr_type_key(attr_key_name) to relay::Op::set_attr_type<AttrType>().
  - This change was necessary because of the new Object registration mechanism.
  - Subsequent changes to the op registrations.
  - The change revealed a few previous problems that are now fixed.
  - Patched up a few missing node type registrations.
  - Now we will raise an error if we register an object type that is not registered.
  - The original node.h and container.h are kept in the same location.
  - Calling convention: kObjectHandle now equals the old kNodeHandle; kNodeHandle is removed.
  - IRFunctor now dispatches on ObjectRef.
  - Update to the new type checking API: is_type and derived_from are replaced by IsInstance.
  - Removed the .hash member function; instead use C++ convention hasher functors.
* Address review comments
* [CI] Move golang tests to the end (apache#4164)
* Add support for quantized multiply to Relay (apache#4141) This patch adds a multiply operator for quantized tensors. The details of the quantized multiplication are outlined in the code. This builds on pull request 3927 and includes the changes Animesh mentions in the comments on that request. Change-Id: I555715b53d0266a91d5c03dc3dfe8fc31e7ce4e1
* Fix misspelling (apache#4166) FIX "After connecting he usb" with "After connecting the usb"
* [Relay][Pass] Count MAC for BatchMatMul (apache#4157)
* count MAC for BatchMatMul
* update doc
* [Relay][QNN] Add unit test for int8 (apache#4159)
* [bugfix][codegen] fix casting bug in llvm codegen
* update example
* retrigger ci
* check llvm version
* [relay][vm] Reuse allocated device memory (apache#4170)
* add missing gradient check to gradient pass (apache#4169)
* merge extract_from_program and extract_from_multiple_program (apache#4173)
* [TOPI] Added support for Mali Bifrost target (apache#4047)
* [Relay][Frontend][TF] Fix Size operator (apache#4175)
* [Relay][Frontend][TF] Fix Size operator
* Uncomment tests
* [Pass] Remove dead code (apache#4177)
* [rpc] use callback func to do send & recv (apache#4147)
* [rpc] use callback func to do send & recv; don't get fd from sock as it is deprecated in java
* fix java build
* fix min/max macro define in windows
* keep the old rpc setup for py
* add doc for CallbackChannel
* Add support and testing for tf.assert (as no-op) and tf.no_op to TF Relay frontend. (apache#4172)
* [DOCS] Add TensorFlow frontend docs (apache#4154)
* Start to update TF frontend docs
* Add rst
* Remove markdown
* Update wording
* Resolve comments
* Revert "[Relay][QNN] Add unit test for int8 (apache#4159)" (apache#4192) This reverts commit 6f9d028.
* [cmake][ANTLR] Support setting path to ANTLR jar (apache#4176)
* Support setting path to ANTLR jar
* Update comment
* Split adaptive_pool2d_avg into sum and div (apache#4186)
* [Documentation] Fix example code in comment of tvm.build_module.build() (apache#4195)
* Fix example code in comment of tvm.build_module.build()
* Update build_module.py
* [relay] use time_evaluator for measurement (apache#4191)
* Add parser support for SUM tflite operator (apache#4182)
* [Relay] Fix memory leak in the interpreter (apache#4155)
* save lint
* address reviewer comment
* [TOPI] Tunable Template for Conv2D HWCN on CUDA (apache#4168)
* support conv2d HWCN in AutoTVM and Relay
* fix lint
* fix comments and unit tests
* TensorCore Support using Intrinsic (apache#4136)
* add tensor core support
* avoid memory bank conflict
* fix thread sync & better performance
* better performance
* add schedule test for conv2d
* extend into BatchMatMul
* support config fragment shape and layout using intrinsic
* add TensorCore tutorial
* add int support and fix lint
* address comment
* add 32*16*8 TensorCore test
* fix wmma include logic
* [NODE][REFACTOR] Refactor reflection system in node. (apache#4189)
* [NODE][REFACTOR] Refactor reflection system in node.
  - Removed the old Node; Node is now just an alias of runtime::Object.
  - Introduce ReflectionVTable, a new columnar dispatcher to support reflection. This allows us to remove the vtable from most node objects.
  - The VisitAttrs are registered via TVM_REGISTER_NODE_TYPE; they are no longer virtual.
  - Consolidated serialization and reflection features into node.
* Explicit type qualification when calling destructor.
* Fix SPIRV, more comments
* hotfix the ci (apache#4199)
* [TOPI][x86] Legalize - Support int8xint8 convolution to use VNNI instructions. (apache#4196)
* [Relay] crossentropy_with_logits and its gradient (apache#4075)
* save
* lint
* [hotfix] missing include headers (apache#4204)
* [Relay][Training] Add checkpoint annotation for checkpointing memory optimization (apache#4146)
* add checkpoint annotation for checkpointing memory optimization
* add alpha-equivalence checkpoint test and fix gradient type issue
* fix build issues
* ignore checkpoint annotation when checking missing gradients
* refactor, fix checkpoint compute for tuple and add tests
* [Relay][Params] Add APIs for storing and retrieving parameters from individual functions. (apache#4194)
* Add support for attaching params
* Fix types
* Fix test
* [Relay][Frontend][ONNX] Add support for op Where (apache#4184)
* Add support for op Where
* Update impl version
* [VTA][Chisel] TSIM VTA Source Refactor (apache#4163)
* app init push
* fix on readme
* change name, add bit serial explanation
* rm serialLoadMM, change doc
* syntax change for readme
* add parallel test functionality
* fix readme
* add python doc
* syntax
* init commit
* fix empty line
* fix typo
* [RUNTIME] Separate runtime related contrib into runtime/contrib (apache#4207)
* Fix type var docs (apache#4208)
* [Relay] Setting Legalize opt_level to 1. (apache#4198)
* [TOPI] Fix flaky testcase for check round (apache#4211)
* [Relay][Op] Enhance Upsample Operator to support float scales (apache#4206)
* add scale2 for upsample
* update unit test for upsampling
* support latest upsample op for multiple frontend
* fix lint
* fix lint
* fix lint
* fix lint
* update scale description and rebase
* [Relay][Quantize] Use fixed point multiplications (apache#4160)
* Update have_int8 condition to run on compute capability 7.x devices (apache#4214)
* Optimizing autotvm task extraction speed (apache#4138)
* Optimize task extraction speed
* correct pylint errors
* Delete unused function
* remove unnecessary argument
* resolve code review comments
* correct cpp lint errors
* remove one more graph_json return value
* fix test bugs
* [Relay] Add Python type functor and tests (apache#4209)
* Add Python type functor and tests
* Lint roller
* Fix typo in packed_func.h (apache#4219)
* Improve the lowering of Qnn Dense (apache#4213)
* [QNN] Improving Dense lowering.
* Moving get_shape method to util; finalizing the test cases and the code structure for optimized dense computation.
* Fixing cpplint.
* Addressing review comments.
* Renaming the variables correctly.
* Renaming the variables correctly.
* [ARITH] Fix the rule y < x && x <= y (apache#4220)
* [PYTHON] Add __init__ to the generated grammar so that it can be installed properly (apache#4223)
* [Relay][Frontend][ONNX] New Operators and Opsets to Support BERT (apache#4197)
* Added slice v10
* Added constantofshape operation and small refactor.
* Finished one_hot implementation.
* Reshape working across all bert layers.
* Fixed constantofshape and removed code duplication.
* onnx model fully ingested.
* Working on improving onnx tests.
* Changed onnx testing to use onnxruntime instead of caffe2, also formatted.
* Add arbitrary output nodes to onnx frontend.
* Added v6 tiling for bert squad 8 support.
* Small syntax fixes
* Reduced code duplication in split opset versions.
* Added batch matmul test
* Added unstack split testing.
* Added onehot test, needs a little cleanup probably.
* Replaced deprecated constant fill with constantofshape and updated tests accordingly.
* Added tests for new opset version of slice and tile.
* lint clean up
* Lint fixes
* Changed onnx dependency
* Went back to caffe2 runtime for CI integration.
* Rebase and small typo/syntax changes.
* Added hard casting of onehot attributes to int.
* [Relay][Topi][TensorFlow][ONNX][Lang] Add support for Any op (apache#4205)
* Add support for Any op
* Support ONNX frontend
* Add doc
* Add to relay docs
* Dummy change to retrigger CI
* Update dmlc_tvm_commit_id.txt
* Merge from upstream
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this pull request on Feb 26, 2022
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this pull request on Mar 3, 2022
vinx13 pushed a commit to vinx13/tvm that referenced this pull request on Mar 9, 2022
rebased
[TIR][Schedule] fix reorder/buffer_flatten & finish CPU demo (apache#59)
[CPU DEMO] Update cpu gemm demo and fix bug (apache#58)
* [TIR][Schedule] introduce parallel and fix bugs for cpu demo
* [TIR][Schedule] update cpu demo
* [TIR][Schedule] fix lint
* [TIR][Schedule] fix
rebased
[TIR][Schedule] introduce reduction block and CPU demo (apache#53)
* [TIR] reduction : split_reduction
* [TIR] reduction : split_reduction
* [TIR] reduction : fuse_reduction
* [TIR] reduction : cpu demo
* [TIR] reduction : fix
* [TIR] reduction : pattern detect remains
* [TIR] reduction : pattern detect remains
* [TIR] reduction : pattern match done
* [TIR] reduction : fix lint
* [TIR] reduction : fix
* [TIR] reduction : fix
* [TIR] reduction : fix
* [TIR] reduction : fix
* [TIR] reduction : rebased
* [TIR] reduction : rebased
[TIR][Schedule] introduce cache_read cache_write (apache#54)
* [TIR][Schedule] introduce cache_read cache_write
* [TIR][Schedule] add more comments
* [TIR][Schedule] fix problem and add comments
* [TIR][Schedule] address comments
[TIR] schedule: introduce vectorize, unroll, loop validation (apache#47)
* [TIR] vectorize : basically complete
* [TIR] vectorize&unroll : update comments&unroll
* [TIR] vectorize&unroll : rebased
* [TIR] vectorize, unroll, cpu_demo: done
* [TIR] vectorize, unroll, cpu_demo: simplify
* [TIR] vectorize, unroll, cpu_demo: fix
* [TIR] reduction : rebased
* [TIR] reduction : fix
[TIR][Schedule] fix sref and scopes problem during replace and compute_at (apache#50)
* [TIR][Schedule] fix sref and scopes problem during replace and compute_at
* [TIR][Schedule] fix
* [TIR][Schedule] fix
[TIR][Refactor] move function to ScheduleNode
[TIR] Schedule: introduce primitive compute_at (apache#36)
* [TIR] Schedule: introduce primitive compute_at
* [TIR] Schedule: address comments
* [TIR] Schedule: address comments
* [TIR] Schedule: address comments
* [TIR] Schedule: add check to compute_at
* [TIR] Schedule: address comments
* [TIR] Schedule: address comments
[TIR] Schedule: introduce primitive reorder (apache#37)
* [Schedule] debug
* [TIR] Schedule: reorder, loop type detect remains
* [TIR] reorder complete
* [TIR] reorder complete
* [TIR] fix
* [TIR] reorder : rebased complete
* [TIR] reorder : fix container.h
* [TIR] reorder : fix
* [TIR] reorder : fix
* [TIR] reorder : fix
* [TIR] reorder : simplify
* [TIR] reorder : simplify
* [TIR] reorder : simplify
* [TIR] reorder : fix
* [TIR] reorder : fix
* [TIR] reorder : rebased
* [TIR] reorder : rebased
rebase
[TIR] Schedule: introduce BlockRealize and Block SRef reuse (apache#39)
* [TIR] BlockRealize: schedule refactor
* [TIR] BlockRealize: debug
* [TIR] BlockRealize finish
* [TIR] BlockRealize finish
* [TIR] BlockRealize fix
* [TIR] BlockRealize update test
* [TIR] BlockRealize: add loop var reuse
* [TIR] BlockRealize: add loop var reuse
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
* [TIR] BlockRealize: fix
[TIR] compare for module (apache#38)
* [TIR] compare for module
* [TIR] fix
* [TIR] fix
* [TIR] fix
* [TIR] fix
* [TIR] fix
* [TIR] fix
[Hybrid] Module init
[Hybrid] Module print
[Hybrid] Module print with meta
[Hybrid] adjust
[Hybrid] finished but without lint and comment check
[Hybrid] fix lint
[Hybrid] comments
[Hybrid] fix script decoration API
[Hybrid] using IRModule
[Hybrid] fix
[Hybrid] adjust API
[Hybrid] fix
[Hybrid] fix
[Hybrid] fix
[Hybrid] fix symbol table, adjust API, introduce meta_mutator and resolve import issue
[Hybrid] fix lint
[TIR] introduce pass BufferFlatten (apache#32)
* [TIR] introduce pass BufferFlatten
* [Tir] add comments & remove old TeLower
* [TIR] split GatherRegion and BufferFlatten to two Visitor/Mutator
* [TIR] address comments: Only consider stmt scope
* [TIR] BufferFlatten: address comments
* [TIR] BufferFlatten: fold BlockFlattener into BufferFlattener
* [TIR] BufferFlatten: add asserts
* [TIR] BufferFlatten: use Equal in testcase
* [TIR] Equal Pass: Enhanced the pass
* [TIR] Equal Pass: add comments
[Hybrid] refactor using Doc, introduce annotation, enhance parser (apache#28)
* [Hybrid] refactor printer, enhance parser
* [Hybrid] refactor
* [Hybrid] fix
* [Hybrid] fix
* [Hybrid] fix namespace issue
* [Hybrid] compare using Equal
[TIR] rebased
[TE] fix replace again and add primitive fuse and split (apache#27)
* [TE] add: schedule primitive fuse
* [TE] add: schedule primitive split
* [TE] address comments: add IRSubstitueInScope and other minor fix
* [TE] address comments: Enhance Equal api and fix split by nparts
* [TE] address comments
[Hybrid] introduce printer (apache#25)
* [Hybrid] substitute Block with SeqStmt, change block() syntax
* [Hybrid] add printer, type declare intrin
* [Hybrid] refactor
* [Hybrid] meta
* [Hybrid] refactor
* [Hybrid] macro
[TE] fix replace (apache#23)
* [TE] fix replace
* [TE] fix replace: add more tests
* [TE] fix replace: add more tests
[TE] rebased
[Hybrid] python syntax parser (apache#20)
* [Hybrid] python syntax parser
* [Hybrid] add a testcase
* [Hybrid] improve comments and fix bugs
* [Hybrid] improve comments, refactor __internal_assert, add new testcases
* [Hybrid] improve error report message, refactor intrin
* [Hybrid] separate ScopeEmitter from parser
* [Hybrid] refactor type check
* [Hybrid] refactor intrin
* [Hybrid] refactor intrin, allow register external functions with argument type checking, add a testcase
* [Hybrid] address comments, fix a bug in te/ir.h
* [Hybrid] remove type check
* [Hybrid] python syntax parser
* [Hybrid] add a testcase
* [Hybrid] improve comments and fix bugs
* [Hybrid] improve comments, refactor __internal_assert, add new testcases
* [Hybrid] improve error report message, refactor intrin
* [Hybrid] separate ScopeEmitter from parser
* [Hybrid] refactor type check
* [Hybrid] refactor intrin
* [Hybrid] refactor intrin, allow register external functions with argument type checking, add a testcase
* [Hybrid] address comments, fix a bug in te/ir.h
* [Hybrid] remove type check
* [Hybrid] refactor intrin, scope_handler, special_stmt
* [Hybrid] address comments
* [Hybrid] clean code, improve error reporting & testcase
* [Hybrid] clean code
* [Hybrid] clean code
[IR] introduce dependency graph and write map
[TE] refactor and clean codebase
[TE] refactor IR
[TE] introduce schedule, dependency graph and support fuse and split (apache#17)
* fix lint
* introduce dependency graph
* enable create schedule
* support get axes
* fix lint
* revert Set
* add schedule primitive fuse
* address comment
* support split
[IR] Introduce SeqStmt
add TeLower pass and enable to run Te IR (apache#15)
* add function data structure; add TeLower pass to transform Te to current IR; enable to run Te IR
* address comments
* unify terminology
TensorIR data structure init (apache#14)
* init te data structure
* finish printer and enhanced ir_builder
* address the comments
Co-authored-by: Bohan Hou <[email protected]>
jinhongyii pushed a commit to jinhongyii/tvm that referenced this pull request on Jun 20, 2022
cyx-6 pushed a commit to cyx-6/tvm that referenced this pull request on Jun 27, 2022
junrushao added a commit to cyx-6/tvm that referenced this pull request on Jul 4, 2022
cyx-6 pushed a commit to cyx-6/tvm that referenced this pull request on Jul 13, 2022
Hzfengsy pushed a commit to Hzfengsy/tvm that referenced this pull request on Jul 30, 2022
Hzfengsy pushed a commit to Hzfengsy/tvm that referenced this pull request on Jul 30, 2022
areusch pushed a commit to areusch/tvm that referenced this pull request on Sep 20, 2022
gigiblender pushed a commit to gigiblender/tvm that referenced this pull request on Nov 3, 2022
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this pull request on Nov 20, 2022
yelite pushed a commit to yelite/tvm that referenced this pull request on Feb 17, 2023
mikeseven pushed a commit to mikeseven/tvm that referenced this pull request on Sep 27, 2023
[SWMLA-880]: Upload to artifactory. Approved-by: Joey Chou
masahi pushed a commit to masahi/tvm that referenced this pull request on Mar 8, 2024
masahi pushed a commit to masahi/tvm that referenced this pull request on Mar 13, 2024
elvin-n pushed a commit to Deelvin/tvm that referenced this pull request on Mar 19, 2024
vinx13 added a commit to vinx13/tvm that referenced this pull request on Mar 19, 2024
krishnaraj36 added a commit to krishnaraj36/tvm_mainline that referenced this pull request on Aug 9, 2024
Fixed the OpenCL codegen for a few operators:
1. Atomic add for float: OpenCL does not support a float atomic add, so a workaround for this operation using atomic_cmpxchg() was enabled.
2. fmodf: OpenCL only supports fmod for all floating-point types.
3. nearbyint: OpenCL does not have this function, so it was replaced with the round function.
---------
Co-authored-by: Siva <[email protected]> Co-authored-by: B, Siva Rama Krishna Reddy <[email protected]> Co-authored-by: krishnaraj36 <[email protected]>
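For reference, the float atomic-add workaround mentioned in item 1 is conventionally written as a compare-and-swap loop in OpenCL C. The sketch below is illustrative only (the helper name atomic_add_float is hypothetical, not the PR's generated code) and assumes 32-bit global integer atomics are available on the device:

```c
// OpenCL C sketch: emulate atomic add on float by retrying atomic_cmpxchg on
// the value's 32-bit pattern until no other work-item modified the location
// between the read and the write.
inline void atomic_add_float(volatile __global float *addr, float val) {
  union { unsigned int u32; float f32; } expected, desired;
  do {
    expected.f32 = *addr;                // snapshot the current value
    desired.f32  = expected.f32 + val;   // compute the updated value
  } while (atomic_cmpxchg((volatile __global unsigned int *)addr,
                          expected.u32, desired.u32) != expected.u32);
}

// Example use inside a kernel: every work-item accumulates into out[0].
__kernel void sum_all(__global const float *in, volatile __global float *out) {
  atomic_add_float(out, in[get_global_id(0)]);
}
```

The loop retries until atomic_cmpxchg confirms the location was unchanged by another work-item, so the addition is effectively atomic at the cost of possible retries under contention.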
LeiWang1999 added a commit to LeiWang1999/tvm that referenced this pull request on Nov 8, 2024
* improve e4m3 decoding.
* append fp16xint1
* Update submodule commit reference
* chore: Update shared memory scope for float32 output dtype
* BUGFIX: UINT8/INT8 Decoding
* feat: Add rasterization options for roller module
* Refactor tensorcore_legalization method to optimize tensor core usage
* feat: Add function to collect variables from expression, improve for splitk
* chore: Update typing import in __init__.py
* chore: Refactor CPU execution of operators
* Refactor matmul implementation for splitk layout
* Refactor matmul implementation for splitk layout
* Refactor matmul implementation for splitk layout
* chore: Update version to 0.0.1.dev8
* chore: Enable debug output in bitblas.set_debug_level()
* Refactor Linear module matmul implementation for splitk layout
* Refactor matmul implementation for splitk layout
* Refactor CUDA kernel launch string for dynamic symbolic set
* Bump version to v0.0.1.dev9
* Refactor CUDA kernel launch string for dynamic symbolic set
* Bump version to v0.0.1.dev10
* Refactor CUDA kernel launch string for dynamic symbolic set
---------
Co-authored-by: LeiWang199 <leiwang199>