Merge windowsai (winml layering) into master #2956

Merged 160 commits into master from windowsai on Feb 5, 2020

Conversation

smk2007 (Member) commented Jan 31, 2020

This PR is open for comment on bringing the newly layered Windows ML components into the master branch of ONNX Runtime for Windows.

This is in anticipation of a beta release of these bits over the next couple of months. We will include more documentation on how to use them, how the layers work, and the relationship between WinML, ORT, and DML (which we introduced to master in ORT v1.0).

Some of the things we have done in this PR:
- Added a top-level directory "/winml".
- Contributed all of the Windows inbox code from the Windows.AI.MachineLearning namespace into that directory, making it available under the MIT license.
- Started a layering effort to have the new Windows.AI.MachineLearning.dll consume the onnxruntime.dll C ABI that we introduced in v1.0.
- Added an "adapter" module that gets linked into the core onnxruntime.dll. This adapter is private to the WinML component and provides the ABI functionality required for the layering effort; it is not a new public ABI and is not supported for developers to call.
- Made the WinML ABI fully available for public developers to call.
- You can now include both of these DLLs (WinML + ORT) as redist components in projects that want to use the WinML ABI.
- Added the cmakery for all of this work. There is now a "use_winml" build flag in addition to the "use_dml" build flag.
- Added Google Test unit tests for the newly added WinML ABI; these are under the top-level "winml/test" folder.
- And several other things :)

Enjoy!
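For context on what the newly public surface looks like, below is a minimal sketch of calling the Windows.AI.MachineLearning API from C++/WinRT once the WinML and ORT DLLs are redistributed with an app. It is not taken from this PR's code or tests; the model path, input name, and tensor shape are illustrative assumptions.

```cpp
// Minimal WinML inference sketch (C++/WinRT). Illustrative only: "model.onnx"
// and the input name "data_0" are assumptions, not values from this PR.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.AI.MachineLearning.h>
#include <vector>

using namespace winrt::Windows::AI::MachineLearning;

int main() {
  winrt::init_apartment();

  // Load a model and create a CPU session (LearningModelDeviceKind::DirectX would select GPU/DML).
  LearningModel model = LearningModel::LoadFromFilePath(L"model.onnx");
  LearningModelSession session{model, LearningModelDevice(LearningModelDeviceKind::Cpu)};

  // Create an input tensor and bind it by name.
  std::vector<int64_t> shape{1, 3, 224, 224};
  std::vector<float> data(1 * 3 * 224 * 224, 0.0f);
  TensorFloat input = TensorFloat::CreateFromArray(shape, data);

  LearningModelBinding binding{session};
  binding.Bind(L"data_0", input);

  // Run inference; outputs are available from result.Outputs(), keyed by output name.
  LearningModelEvaluationResult result = session.Evaluate(binding, L"run0");
  return result.Succeeded() ? 0 : 1;
}
```

Per the description above, both DLLs (WinML + ORT) would ship alongside the application as redist components for something like this to run.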

Adrian Tsai and others added 30 commits November 7, 2019 11:51
…g against kernel32.lib etc (#2346)

add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc. and violating our OS layering requirements.

We linked against onecoreuap_apiset.lib in VB, so we will continue doing this, but I am still unsure why we do not link against onecore instead, since that is where we ship. However, since Sheil is the owner of this code, we will wait to discuss with him before changing anything.
* update build instructions to include --build_shared_lib

* fix line breaks
* Task 23998197: add winml_lib_core into onnxruntime.dll

* PR feedback
build break on perf_test
#2382)

this is a big PR. We are going to move it up to layer_dev, which is still an L3, so we are still safe to keep working there in an agile way.

We are going to move this into the L3 so that Ryan can start doing integration testing.

We will pause for a full code review and integration test results prior to going into the L2.

>>>> raw comments from previous commits >>> 

* LearningModelSession is cleaned up to use the adapter, and parts of binding are.
* moved everything into the winml adapter
made it all nano-COM, using WRL to construct objects on the ORT side.
base interfaces for everything for WinML to call (see the adapter sketch after this commit list)
cleaned up a bunch of WinML to use the base interfaces.
* more pieces
* GetData across the ABI.
* renamed some namespaces
cleaned up OrtValue
cleaned up Tensor
cleaned up custom ops.
everything *but* LearningModel should be clean
* make sure it's building.   winml.dll is still a monolith.
everything builds clean.
step !
* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.
* add option to enable winml telemetry

* clean up logs while developing

* clean the log of GUID

* compile onnxruntime_common with winml telemetry

* use option for use_telemetry

* rename option winml_use_telemetry to onnxruntime_use_telemetry

* little change
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU
* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU
* PR feedback.
* couple of fixes and coded GetMutableData()
* fixed 2 more heap corruptions
* Add opset and IR check.
* Add test case for future opsets.

#2371
found a leak in the NVIDIA driver, but skipped it.
all WinMLAPITests pass now
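To make the nano-COM/WRL commits above more concrete (this is the adapter sketch referenced earlier in the commit list), here is a small illustration of the pattern they describe: a raw COM-style base interface that the WinML side can call across the DLL boundary, implemented on the ORT side with WRL and handed out by a plain factory function. The interface, GUID, and names below are invented for illustration and are not the real winml adapter ABI.

```cpp
// Illustration of the nano-COM-over-WRL layering pattern; not the actual adapter headers.
#include <unknwn.h>
#include <wrl/client.h>
#include <wrl/implements.h>
#include <cstdint>
#include <vector>

// Hypothetical base interface the WinML side calls; only raw COM types cross the ABI.
MIDL_INTERFACE("11111111-2222-3333-4444-555555555555")
IExampleOrtTensor : public IUnknown {
  virtual HRESULT STDMETHODCALLTYPE GetData(_Outptr_ void** data,
                                            _Out_ size_t* size_in_bytes) = 0;
};

// ORT-side implementation: nano-COM via WRL, no registration, constructed with Make<>.
class ExampleOrtTensor
    : public Microsoft::WRL::RuntimeClass<
          Microsoft::WRL::RuntimeClassFlags<Microsoft::WRL::ClassicCom>,
          IExampleOrtTensor> {
 public:
  HRESULT STDMETHODCALLTYPE GetData(void** data, size_t* size_in_bytes) override {
    // Hand back a view of ORT-owned memory; object lifetime is managed by COM ref counting.
    *data = buffer_.data();
    *size_in_bytes = buffer_.size();
    return S_OK;
  }

 private:
  std::vector<uint8_t> buffer_ = std::vector<uint8_t>(1024);
};

// Flat factory entry point the WinML side would call to obtain the object.
HRESULT CreateExampleOrtTensor(_COM_Outptr_ IExampleOrtTensor** tensor) {
  *tensor = nullptr;
  auto instance = Microsoft::WRL::Make<ExampleOrtTensor>();
  if (!instance) {
    return E_OUTOFMEMORY;
  }
  *tensor = instance.Detach();
  return S_OK;
}
```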
snnn (Member) commented Jan 31, 2020

Overall it LGTM. There are some tiny things that must get fixed.
You may exclude "mask_rcnn_keras" from the test cases to make your build pass. I'll take care of it later.

@@ -84,6 +84,7 @@ For other system requirements and other dependencies, please see [this section](
|**Build Shared Library**|--build_shared_lib||
|**Build Python wheel**|--build_wheel||
|**Build C# and C packages**|--build_csharp||
|**Build WindowsML**|--use_winml<br>--use_dml<br>--build_shared_lib|WindowsML depends on DirectML and the OnnxRuntime shared library.|
Contributor:

Can we avoid making users type --build_shared_lib and just infer it when --use_winml is specified? It's one less thing for users to remember.
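For reference, and based only on the flags quoted in the table above, a WindowsML build would currently be invoked with something along the lines of `.\build.bat --config RelWithDebInfo --use_dml --use_winml --build_shared_lib`; the suggestion here is for `--use_winml` to imply the trailing `--build_shared_lib` so users cannot forget it.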

cmake/precompiled_header.cmake (resolved)
cmake/wil.cmake (resolved)
cmake/winml_cppwinrt.cmake (resolved)
cmake/winml_sdk_helpers.cmake (resolved)
docs/Privacy.md (outdated, resolved)
include/onnxruntime/core/common/exceptions.h (outdated, resolved)
onnxruntime/core/graph/schema_registry.cc (outdated, resolved)
onnxruntime/core/platform/telemetry.h (outdated, resolved)
onnxruntime/core/session/inference_session.h (outdated, resolved)
pranavsharma (Contributor) commented:

I haven't checked all the files under the winml/ folder. Please make sure license headers and copyright notices are added to all these new files.

winml/adapter/winml_adapter_c_api.h (outdated, resolved)
winml/dll/winml.rc (resolved)
@smk2007 smk2007 requested a review from RyanUnderhill January 31, 2020 17:33
smk2007 and others added 4 commits February 3, 2020 16:36
* CR feedback

* fix weird formatting on privacy readme

* Add 'All rights reserved.' everywhere

* re-add all rights reserved to winml_provider_factory.h

* remove extra space in comment

* remove extra whitespace
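The license-header commits above presumably add the repository's standard header to the new files under winml/, per the review comment earlier; as a sketch, that header is just two comment lines:

```cpp
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
```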
snnn previously approved these changes Feb 4, 2020
martinb35 previously approved these changes Feb 4, 2020
@smk2007 smk2007 dismissed stale reviews from martinb35 and snnn via 6cd3221 February 4, 2020 22:27
@martinb35 martinb35 self-requested a review February 4, 2020 22:29
martinb35 previously approved these changes Feb 4, 2020
@smk2007 smk2007 merged commit c32cedc into master Feb 5, 2020
weixingzhang pushed a commit that referenced this pull request Mar 23, 2020
Merge up to commit 4f4f4bc

There were several very large pull requests in public master:
#2956
#2958
#2961

**BERT-Large, FP16, seq=128:**
Batch = 66
Throughput = 189.049 ex/sec

**BERT-Large, FP16, seq=512:**
Batch = 10
Throughput = 36.6335 ex/sec

**BERT-Large, FP32, seq=128:**
Batch = 33
Throughput = 42.2642 ex/sec

**BERT-Large, FP32, seq=512:**
Batch = 5
Throughput = 9.32792 ex/sec

**BERT-Large LAMB convergence:**
![image.png](https://aiinfra.visualstudio.com/530acbc4-21bc-487d-8cd8-348ff451d2ff/_apis/git/repositories/adc1028e-6f04-44b7-a3cf-cb157be4fb65/pullRequests/5567/attachments/image.png)
`$ python watch_experiment.py --subscription='4aaa645c-5ae2-4ae9-a17a-84b9023bc56a' --resource_group='onnxtraining' --workspace='onnxtraining' --remote_dir='logs/tensorboard/' --local_dir='D:/tensorboard/bert-large/fp16/lamb/seq128/lr3e-3/wr0.2843/master/' --run='BERT-ONNX_1581120364_71872cef'`

**E2E**:  PASSED
https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=117300&view=results
@skottmckay skottmckay deleted the windowsai branch August 20, 2020 05:36