optimize quantize #1762

Merged: askhade merged 87 commits into askhade/quantization_and_caliberation from askhade/optimize_quantize on Sep 5, 2019
Conversation
* Mention OrtCreateSessionFromArray in C API doc
* c api changes after review (1)
* updates...
* fixes
* Reorder include
…snet34 analysis (#1578) * A few performance improvements:
- Make the iteration in NonZero more efficient by using a raw pointer and simplifying the increment logic; add another unit test to check that the new logic works with a 3-dimensional tensor. Gains about 2% for ssd_mobilenet.
- Avoid floating-point operations on each iteration in Concat: about 0.5% for ssd_mobilenet and ssd_resnet34.
- Put the common case first in ExecutionFrame::AllocateAsPerAllocationPlan to avoid an unnecessary call to IsSparseTensor: about 0.05% for ssd_mobilenet.
- Minor tweak to put some ctors in the TensorShape header so they can be inlined more easily.
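The NonZero change above amounts to walking the tensor data with a raw pointer instead of recomputing a multi-dimensional offset per element. A minimal standalone sketch of that idea (hypothetical function, not the actual ORT kernel):

```
#include <cstdint>
#include <vector>

// Walk a 2D tensor once with a raw pointer, carrying the coordinates
// alongside instead of deriving them from an index on every element.
void CountNonZero2D(const float* data, int64_t rows, int64_t cols,
                    std::vector<int64_t>& row_idx,
                    std::vector<int64_t>& col_idx) {
  const float* p = data;  // raw pointer avoids repeated offset math
  for (int64_t r = 0; r < rows; ++r) {
    for (int64_t c = 0; c < cols; ++c, ++p) {
      if (*p != 0.0f) {
        row_idx.push_back(r);
        col_idx.push_back(c);
      }
    }
  }
}
```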
Fix race condition issue in RNN/LSTM/GRU. Description: The filter_desc and rnn_desc can be modified in Compute(), which may run on multiple threads; this causes a race condition. Fix: create temporary cudnn descriptors per call, and cache cudnn_dropout_desc_, which won't change.
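The fix pattern described here is to stop mutating shared descriptor members inside Compute(). A generic before/after sketch, with a placeholder struct standing in for the cudnn descriptors:

```
// Before: the descriptor is a member that Compute() mutates, so two threads
// running Compute() concurrently race on desc_.
struct Desc { int seq_len = 0; };

struct RnnKernelRacy {
  Desc desc_;
  void Compute(int seq_len) {
    desc_.seq_len = seq_len;  // racy write to shared state
    // ... launch cudnn work configured by desc_ ...
  }
};

// After: each call builds its own temporary descriptor; nothing shared is
// mutated, so concurrent Compute() calls are safe.
struct RnnKernelFixed {
  void Compute(int seq_len) const {
    Desc desc;  // per-call descriptor, no sharing
    desc.seq_len = seq_len;
    // ... launch cudnn work configured by desc ...
  }
};
```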
* remove memory copy between CUDA and TRT
* add info to RegisterExecutionProvider input
* use new IDeviceAllocator for trt allocator
* remove SetDefaultInputsMemoryType from TRT EP
* remove onnx-tensorrt 5.0
* add submodule onnx-tensorrt branch 5.1
* remove redundancy
* Update transformer_memcpy.cc
* Update tensorrt_execution_provider.cc
* switch to TensorRT 5.1.5.0
* update python binding
* disable failed test case on TensorRT
* Update activation_op_test.cc
* upgrade to TensorRT container 19.06
* update according to feedback
* add comments
* remove tensorrt allocator and use cuda(gpu) allocator
* update onnx-tensorrt submodule
* change ci build cuda directory name
* For the majority of nodes we do not need a fence check; we only need one for CPU<->GPU memory-sync nodes. But we currently pay the fence-check cost for every single node and every single input and output. This change minimizes the fence check so it is only done when necessary.
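A rough sketch of the approach, with hypothetical names: decide once, up front, whether a node can touch a fenced resource, so the hot execution loop only pays for the fence walk when a CPU<->GPU sync is actually possible:

```
#include <vector>

struct Node {
  bool needs_fence = false;  // decided once at session initialization
};

void Execute(const std::vector<Node>& nodes) {
  for (const auto& node : nodes) {
    if (node.needs_fence) {
      // Rare path: walk inputs/outputs and wait on fences before running.
    }
    // Common path: run the kernel without touching any fence logic.
  }
}
```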
* Update Dockerfile.openvino
* Update Dockerfile.cuda
* Update Dockerfile.cuda
* Update Dockerfile.openvino
* Update Dockerfile.cuda
* added ThirdParty notice file to base image.
* corrected license file name
* Implement new LabelEncoder in opset 2 in ML domain
* Fix compilation error
* Fix tests
* Include ONNX's fix
* Formatting and addressing a comment
* Address a minor comment
* put all gemmlowp common code in one place * fix gpu build failures * minor update
* Update nGraph to 0.21 and adjust the EP
* Share the graph initializers between custom ops
* Update nGraph to 0.22 and exclude Gather entirely
* Enable building on Windows with nGraph v0.21.1-rc.0
* Disable the unsigned input Shrink op tests for nGraph until the next update
* Line-shortening code refactor
* Fix for the master branch merge artifact
* MKLDNN patches adjustment for Windows
* Exclude MatMulInteger for non-const zero points
* Exclude ConvInteger for non-const zero points
* Enable full Cast op support
* Use the v0.22.1 tag
* Skip ConvTranspose_InvalidKernelShape test for ngraph provider
* Create sub-graph ModelProto from fused_node
* Include io_win32.h only when building on Windows * Looks like include order matters
* Mention OrtCreateSessionFromArray in C API doc
* Fix perf test executable due to removal of certain C APIs
* fix linux build
* Avoid duplication
* Fix mem leak
* Minor perf improvements.
- Cache the vector sizes in IExecutionFrame and NodeIndexInfo to avoid calls to size(): 2 instructions instead of 10.
- Remove an unnecessary check in IExecutionFrame; add a check to the ctor so we guarantee it's unnecessary.
- Reserve memory for the vectors in BroadcastIterator: saves reallocs if more than one value is added, though it is rare with the mlperf models for multiple values to be added, so the benefit is limited.
- Slight tweak to the Broadcaster ctor code to make it more readable.
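For illustration, the first and third items boil down to two standard patterns; these helpers are hypothetical, not the actual IExecutionFrame/BroadcastIterator code:

```
#include <cstdint>
#include <vector>

// Pattern 1: hoist size() out of the loop so it is read once.
int64_t SumPositive(const std::vector<int64_t>& values) {
  int64_t total = 0;
  const size_t n = values.size();  // cached once instead of per iteration
  for (size_t i = 0; i < n; ++i) {
    if (values[i] > 0) total += values[i];
  }
  return total;
}

// Pattern 2: reserve capacity up front to avoid reallocations while
// appending a bounded number of elements.
std::vector<int64_t> CollectPositive(const std::vector<int64_t>& values) {
  std::vector<int64_t> out;
  out.reserve(values.size());
  for (int64_t v : values) {
    if (v > 0) out.push_back(v);
  }
  return out;
}
```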
* Model serialization
* Removed duplicate symbol
* Minor update
* Review comments
* add tests
* Model serialization
* Removed duplicate symbol
* Minor update
* Merged PR 1106437: Model Serialization in onnxruntime
* Review comments
* Merged PR 1107226: Review comments
* add tests
* Fixed merge conflict
* Correct python tests
* InferenceSession Refeed Test
* Replace use of widechar const literal-L
* Fixed failing tests
* Updated comment
* Removed unnecessary session options
* Spell check on comments
* Do not serialize when level 3 optimization specified
* Updated error logs
* Changed log severity to WARN
…(#1599) * Fix log message truncation and add a unit test. On Windows, vsnprintf_s returns -1 when truncating, so we need to differentiate that from a real error.
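A hedged sketch of the distinction being made; the helper name and the truncation heuristic below are assumptions, not the actual ORT fix:

```
#include <cstdarg>
#include <cstdio>
#include <cstring>

// Hypothetical helper: vsnprintf_s with _TRUNCATE returns -1 both when the
// output was truncated and on a real error, so the caller must tell the two
// apart. One heuristic: after truncation the buffer is full and
// null-terminated, whereas a real error leaves it empty.
int FormatLogMessage(char* buf, size_t buf_size, const char* fmt, ...) {
  va_list args;
  va_start(args, fmt);
#ifdef _MSC_VER
  int written = vsnprintf_s(buf, buf_size, _TRUNCATE, fmt, args);
  if (written == -1 && std::strlen(buf) == buf_size - 1) {
    written = static_cast<int>(buf_size - 1);  // truncated, not an error
  }
#else
  // POSIX vsnprintf returns the length that *would* have been written, so
  // truncation shows up as written >= buf_size instead of -1.
  int written = std::vsnprintf(buf, buf_size, fmt, args);
#endif
  va_end(args);
  return written;
}
```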
* Remove copy of generator in Multinomial so that different values are generated each time. Add ability to test
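The bug class here is generic C++: sampling through a by-value generator discards the state update, so every call replays the same sequence. A minimal sketch with standard-library types (not the actual Multinomial kernel):

```
#include <random>
#include <vector>

// Buggy: the engine is copied, so its advanced state is lost on return and
// every call to SampleBuggy produces identical output.
std::vector<int> SampleBuggy(std::default_random_engine gen,  // copy!
                             std::discrete_distribution<int>& dist, int n) {
  std::vector<int> out(n);
  for (auto& v : out) v = dist(gen);
  return out;
}

// Fixed: the engine is taken by reference, so subsequent calls continue
// from the updated state and produce different values.
std::vector<int> SampleFixed(std::default_random_engine& gen,
                             std::discrete_distribution<int>& dist, int n) {
  std::vector<int> out(n);
  for (auto& v : out) v = dist(gen);
  return out;
}
```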
* Update the execution provider checking logic.
* Fix the logic for copying input and output.
* update
* update
* update
* update
* update
* update
* fix ngraph failure.
* fix comments
…h other APIs (#1570)
- Updated SessionOptions API to use properties instead of setter/getter methods.
- Added missing APIs.
- Added RunOptions.
…(#1623) * Fix trtlogger segfault. Re-enable SoftPlus unit test for TRT. Add documentation for ORT_TENSORRT* env vars. * Update TensorRT-ExecutionProvider.md
* Mention OrtCreateSessionFromArray in C API doc
* review changes
* use enum for graph optimization level
* Use explicit values for enums
* updates...
* Add friendly enum for graph optimization levels in C, C# and Python APIs.
* Fix linux build
* Fix build breakage due to master merge
* PR comments
- Added a python script for generating markdown doc from the registered op kernels.
- Made some conditional changes in the pybind to expose the necessary python API.
- Added some missing type-constraints in the op kernel registrations.
* More changes * Fix NMS * nits
Added sample featurizer and infrastructure. Make featurizers and unit tests compile and run with GTest. Create definitions for the first featurizer kernel. Add a new operator domain. Create the datetime_transformer kernel and build it. Move OPAQUE type definitions for featurizer kernels out to a separate .cc and register them with the type system. Provide unit tests for the new AutoML DateTimeTransformer kernel. Make the necessary adjustments to the test infrastructure so it runs with the new types.
* update onnx to latest commit
* Disable and/or fix failing tests
* disable not yet implemented tests for opset 11
* disable tests
* fix bug in mkldnn fp16 graph check
- Fix the Windows end-to-end test in NuGet CI.
- Skip the TestModelSerialization, because it is failing on Linux. Must be fixed before the API is released for use. Owner is notified.
* use mlas qgemm for u8u8_s32 gemms * update test
- Make the naming of properties in python SessionOptions and RunOptions consistent with other APIs.
- Remove unnecessary APIs.
* make gemmlowp default for arm * force use_gemmlowp in header for default case * remove unnecessary white space
* Updates
* Remove preview texts
* Update README.md
* Updates
* Update README.md
* Update README.md
* Minor wording update
* Update README.md
* Update doc on CUDA version
* revert update
* Update readme for issue #1558
* Clean up example section
* Cosmetic updates: add an index of build instructions for browsability; update build CUDA version from 9.1 to 10
* Fix broken link
* Update README to reflect upgrade to pip requirement
* Update CuDNN version for Linux Python packages
* Clean up content: updated ordering and add table of contents
* Minor format fixes
* Move Android NNAPI under EP section
* Add link to operator support documentation
* Fix typo
* typo fix
* remove todo section
Avoid the need for @PCGOTREL relocations by annotating MLAS global data shared with assembly modules with __attribute__((visibility("hidden"))).
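A short sketch of the pattern; the macro and symbol names are illustrative, not the actual MLAS identifiers. Hidden visibility tells the compiler the symbol cannot be preempted from outside the shared library, so references can be PC-relative instead of going through the GOT:

```
// Guard the attribute so the declaration still compiles on non-GNU toolchains.
#if defined(__GNUC__)
#define EXAMPLE_HIDDEN_DATA __attribute__((visibility("hidden")))
#else
#define EXAMPLE_HIDDEN_DATA
#endif

// Global table shared with hand-written assembly kernels (name is hypothetical).
EXAMPLE_HIDDEN_DATA extern const float MlasExampleConstants[8];
```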
Fix the aarch64 kernel to build properly with the Android NDK (specifically clang).
…ame allocator device (#1715) As long as these providers use the same allocator device. Description: Currently ORT throws an error when one input is used by different EPs; this change removes that restriction. Motivation and Context: It is now possible to share inputs across EPs, now that allocations are device-based instead of EP-based.
…tom op (#1391) Description: The change adds the necessary quantization support on CPU for mixed int8/uint8, as well as int16, for matrix-multiply operations that output int32. Motivation and Context: Integer operations are critical for a quantized model's performance. The current MatMulInteger implementation on CPU only supports uint8 x uint8, while the spec supports int8 x uint8; having a default CPU implementation that fully supports the spec helps accuracy verification. Besides, some models may need to quantize to int16, but the MatMulInteger op does not support that yet. A custom op, MatMulInteger16, is added to satisfy such models.
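As a reference point for the spec behavior (not the optimized kernel), a naive MatMulInteger-style routine for mixed int8 x uint8 inputs accumulating into int32 might look like this sketch:

```
#include <cstdint>

// Reference integer matmul per the ONNX MatMulInteger semantics: subtract
// the zero points, multiply in widened int32, and accumulate in int32.
// A is M x K (int8), B is K x N (uint8), C is M x N (int32), row-major.
void MatMulInt8U8(const int8_t* A, const uint8_t* B, int32_t* C,
                  int M, int N, int K,
                  int32_t a_zero_point, int32_t b_zero_point) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      int32_t acc = 0;
      for (int k = 0; k < K; ++k) {
        acc += (static_cast<int32_t>(A[m * K + k]) - a_zero_point) *
               (static_cast<int32_t>(B[k * N + n]) - b_zero_point);
      }
      C[m * N + n] = acc;
    }
  }
}
```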
* Use exec form of ENTRYPOINT for docker server

# Issue
The entrypoint currently uses the shell form: this prevents users from passing in any cmdline arguments. Also, passing a model_path in means the server only works if the envvar is set; however, this is not what the error message says!

```
$ docker run -v /home/rakelkar/try/onnxzoo/style:/mnt/models -it mcr.microsoft.com/onnxruntime/server --model_path /mnt/models/model.onnx
Version: local_build
Commit ID: default
model_path must be the location of a valid file
Allowed options:
  -h [ --help ]                 Shows a help message and exits
  --log_level arg (=info)       Logging level. Allowed options (case sensitive): verbose, info, warning, error, fatal
  --model_path arg              Path to ONNX model
  --address arg (=0.0.0.0)      The base HTTP address
  --http_port arg (=8001)       HTTP port to listen to requests
  --num_http_threads arg (=4)   Number of http threads
  --grpc_port arg (=50051)      GRPC port to listen to requests
```

# Fix
1. Remove the env var
2. Use the exec form

* Update readme to use model_path arg
…1679)
* Support bilinear mode with actual 2D inputs in Resize and upsample
* Fix build break
* Fix build break
* Add test
* CUDA changes
* Resolve PR comments
* Resolve comments
…in 0.5 release. (#1694) * Mention OrtCreateSessionFromArray in C API doc * Fix registration of Equal op causing one of the automl models to break in 0.5 release. * updates...
…which cause huge data copy. If the node's inputs are all initializers, we shouldn't fall back the node to CPU. (#1727) Fix an issue where the CUDA EP falls back too many nodes to CPU in some cases, causing huge data copies (#1675). Currently, if a node's inputs are all initializers, the CUDA EP will fall it back to CPU, and it will also fall back some nodes below it, which can cause huge data copies. In the case reported by a user, the model has several Slice ops with inputs from initializers, and a Concat op that concatenates the Slice outputs. The data is huge (16 MB) after the concat, which makes the copy from CPU to GPU quite costly because it is a sync copy. Fix: if a node's inputs are all initializers, we shouldn't fall the node back to CPU.
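A simplified sketch of the placement rule described above, with hypothetical types; the real partitioning logic lives in the CUDA EP's GetCapability:

```
#include <string>
#include <unordered_set>
#include <vector>

struct NodeInfo {
  std::vector<std::string> inputs;  // input tensor names
};

// Old rule: a node whose inputs are all initializers was pushed to CPU,
// which could later require copying a large result back to the GPU with a
// synchronous memcpy. The fix keeps such nodes on the CUDA EP.
bool ShouldFallBackToCpu(const NodeInfo& node,
                         const std::unordered_set<std::string>& initializers,
                         bool apply_fix) {
  bool all_initializers = true;
  for (const auto& input : node.inputs) {
    if (initializers.count(input) == 0) {
      all_initializers = false;
      break;
    }
  }
  // Old behavior: all-initializer nodes fall back to CPU.
  // Fixed behavior: they stay on the GPU EP.
  return all_initializers && !apply_fix;
}
```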
Update the docker file for OpenVINO which is used for AML
Fix typo in NMS code
* moved subgraph_index to MklDnn Execution Provider * code cleanup
* Implement Nuphar execution provider. The Nuphar execution provider is a TVM-based compilation provider. It has shown great speedups for RNN models using Scan. This PR is mainly a preview of the shared codegen library for other TVM-based providers.
* Fix submodules
* Fix TVM submodule
* Update Nuphar to latest and resolve conflicts
* Remove stale files caused by merge -X theirs
* Revert heap buffer change so as not to introduce onnxruntime_framework into onnxruntime_perf_test
* Fix bad merge
* Merge from Nuphar
* Fix warning treated as error; revert some unnecessary changes
* Revert some more test changes
* Some more test reverts or comments to make review easier. New tests could be added later.
* One more revert of unnecessary changes
* More change reverts. Tests could be added back later.
* Mention OrtCreateSessionFromArray in C API doc * Enforce shape validation. * Update broken models
Description: Making the following updates to the quantization script:
Motivation and Context