This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
Enable MKL-DNN FullyConnected backward #17318
Merged
Commits (27)
0b33c8c  rongzha1  fix mkldnn fc bwd bug due to data inplace
2d74cc4  rongzha1  enable mkldnn fc bwd
f4e6557  TaoLv     Merge commit 'refs/pull/16890/head' of https://github.com/apache/incu…
db88a23  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
6a98aa4  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
e17f70d  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
9c7596b  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
d015609  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
a103baa  TaoLv     fix cpp tests
caceb9b  TaoLv     try: fix random seed
7bbbc7f  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
1bf97fa  TaoLv     fix cpp test
b374889  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
2393889  TaoLv     loose rtol for fc cpp test
8a01fef  TaoLv     improve error message
979df7a  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
5525f71  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
2200653  TaoLv     limit the max value of test tensors
faa1db8  TaoLv     fix lint
2a3ebb4  TaoLv     limit max value for mkldnn tensors
34549f2  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
93eb151  TaoLv     remove fixed random seed
38070d9  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
56d873f  TaoLv     address review comments
43905c3  TaoLv     Revert "address review comments"
edef1c7  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
464204f  TaoLv     Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
File: tests/cpp/include/test_mkldnn.h

@@ -63,24 +63,24 @@ struct TestArrayShapes {
 };
 
 // Init arrays with the default layout.
-inline static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
+inline static void InitDefaultArray(NDArray *arr, bool is_rand = false, int max = 50) {
   const TBlob &blob = arr->data();
   mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
   int size = blob.Size();
 
   for (int i = 0; i < size; i++)
     if (is_rand) {
-      data[i] = (std::rand() % 100) - 50;
+      data[i] = (std::rand() % (max * 2)) - max;
     } else {
-      data[i] = i % 100 - 50;
+      data[i] = i % (max * 2) - max;
     }
 }
 
 
 // Init arrays with the specified layout.
 inline static void InitMKLDNNArray(NDArray *arr, const mkldnn::memory::desc &desc,
-                                   bool is_rand = false) {
-  InitDefaultArray(arr, is_rand);
+                                   bool is_rand = false, int max = 50) {
+  InitDefaultArray(arr, is_rand, max);
   arr->MKLDNNDataReorderAsync(desc);
   arr->WaitToRead();
 }

[Inline review comment on the new else branch, truncated in the page: "Same as above, how about change to something like ..." — see the discussion thread below.]

@@ -330,7 +330,7 @@ inline void PrintVerifyMsg(const NDArrayAttrs &arr1, const NDArrayAttrs &arr2) {
  */
 inline std::vector<NDArrayAttrs> GetTestInputArrays(
     int types = ArrayTypes::All, bool rand = false,
-    std::vector<float> scale = {1}, bool spatial_data_format = false) {
+    std::vector<float> scale = {1}, bool spatial_data_format = false, int max = 50) {
   TestArrayShapes tas = GetTestArrayShapes(spatial_data_format);
   std::vector<mxnet::TShape> shapes = tas.shapes;
   std::vector<mkldnn::memory::desc> mds = tas.mds;

[Inline review exchange on this hunk — Reviewer: "Is the new parameter for better usability?" Author: "See #17318 (comment)."]

@@ -349,14 +349,14 @@ inline std::vector<NDArrayAttrs> GetTestInputArrays(
     // Type 1.
     NDArray arr(shape, Context());
     if (types & ArrayTypes::Normal) {
-      InitDefaultArray(&arr, rand);
+      InitDefaultArray(&arr, rand, max);
       in_arrs.emplace_back(arr, "Normal NDArray");
     }
 
     // Type 4
     arr = NDArray(shape, Context());
     if (types & ArrayTypes::NormalReshaped) {
-      InitDefaultArray(&arr, rand);
+      InitDefaultArray(&arr, rand, max);
       in_arrs.emplace_back(arr.Slice(slice_amount, arr.shape()[0] - slice_amount),
                            "Reshaped Normal NDArray");
     }

@@ -379,19 +379,19 @@ inline std::vector<NDArrayAttrs> GetTestInputArrays(
       if (shape.ndim() == md.data.ndims && IsSameShape(md, shape)
           && types & ArrayTypes::MKLDNN) {
         desc_str = "MKLDNN NDArray";
-        InitMKLDNNArray(&arr, md, rand);
+        InitMKLDNNArray(&arr, md, rand, max);
         in_arrs.emplace_back(arr, desc_str);
       } else if (shape.ndim() == md.data.ndims && !IsSameShape(md, shape)
                  && types & ArrayTypes::MKLDNNDiffShape) {
         desc_str = "MKLDNN NDArray with different shape";
-        InitMKLDNNArray(&arr, md, rand);
+        InitMKLDNNArray(&arr, md, rand, max);
         in_arrs.emplace_back(arr, desc_str);
       } else if (shape.ndim() != md.data.ndims && types & ArrayTypes::MKLDNNDiffDim) {
         std::stringstream ss;
         ss << "MKLDNN NDArray with different dim " <<
            shape.ndim() << "/" << md.data.ndims;
         desc_str = ss.str();
-        InitMKLDNNArray(&arr, md, rand);
+        InitMKLDNNArray(&arr, md, rand, max);
         in_arrs.emplace_back(arr, desc_str);
       }

@@ -401,20 +401,20 @@ inline std::vector<NDArrayAttrs> GetTestInputArrays(
       if (shape.ndim() == md.data.ndims && IsSameShape(md, shape)
          && types & ArrayTypes::MKLDNNReshaped) {
        desc_str = "Reshaped MKLDNN NDArray";
-        InitMKLDNNArray(&arr, md, rand);
+        InitMKLDNNArray(&arr, md, rand, max);
        in_arrs.emplace_back(arr.Slice(slice_amount, arr.shape()[0] - slice_amount), desc_str);
      } else if (shape.ndim() == md.data.ndims && !IsSameShape(md, shape)
                 && types & ArrayTypes::MKLDNNReshapedDiffShape) {
        desc_str = "Reshaped MKLDNN NDArray with different shape";
-        InitMKLDNNArray(&arr, md, rand);
+        InitMKLDNNArray(&arr, md, rand, max);
        in_arrs.emplace_back(arr.Slice(slice_amount, arr.shape()[0] - slice_amount), desc_str);
      } else if (shape.ndim() != md.data.ndims
                 && types & ArrayTypes::MKLDNNReshapedDiffDim) {
        std::stringstream ss;
        ss << "MKLDNN NDArray with different dim " <<
           shape.ndim() << "/" << md.data.ndims;
        desc_str = ss.str();
-        InitMKLDNNArray(&arr, md, rand);
+        InitMKLDNNArray(&arr, md, rand, max);
        in_arrs.emplace_back(arr.Slice(slice_amount, arr.shape()[0] - slice_amount), desc_str);
      }
    }

@@ -445,7 +445,7 @@ inline std::vector<NDArrayAttrs> GetTestInputArrays(
 inline std::vector<NDArrayAttrs> GetTestOutputArrays(
     const mxnet::TShape &shp,
     const std::vector<mkldnn::memory::desc> &mds,
-    std::vector<float>scale = {1}, bool rand = true, int types = ArrayTypes::All) {
+    std::vector<float>scale = {1}, bool rand = true, int types = ArrayTypes::All, int max = 50) {
   mxnet::TShape shape = shp;
 
   for (int dim = 0; dim < scale.size(); dim++)

@@ -458,15 +458,15 @@ inline std::vector<NDArrayAttrs> GetTestOutputArrays(
 
   if (types & ArrayTypes::Normal) {
     in_arrs.emplace_back(arr, "Normal NDArray");
-    InitDefaultArray(&in_arrs.back().arr, rand);
+    InitDefaultArray(&in_arrs.back().arr, rand, max);
   }
 
   mxnet::TShape tmp_shape = shape;
   if (types & ArrayTypes::NormalReshaped) {
     // Type 4.
     tmp_shape[0] = shape[0] * 2;
     NDArray arr0(tmp_shape, Context());
-    InitDefaultArray(&arr0, rand);
+    InitDefaultArray(&arr0, rand, max);
     in_arrs.emplace_back(arr0.Slice(1, shape[0] + 1), "Reshaped NDArray");
   }

@@ -477,7 +477,7 @@ inline std::vector<NDArrayAttrs> GetTestOutputArrays(
     s[0] = shape.Size();
     NDArray arr1(s, Context());
     arr1 = arr1.AsArray(shape, arr1.dtype());
-    InitDefaultArray(&arr1, rand);
+    InitDefaultArray(&arr1, rand, max);
     in_arrs.emplace_back(arr1, "Reused NDArray");
   }

@@ -486,7 +486,7 @@ inline std::vector<NDArrayAttrs> GetTestOutputArrays(
     s[0] = shape.Size() * GetTypeSize(mshadow::default_type_flag);
     NDArray arr2(s, Context(), true, mshadow::kUint8);
     arr2 = arr2.AsArray(shape, mshadow::default_type_flag);
-    InitDefaultArray(&arr2, rand);
+    InitDefaultArray(&arr2, rand, max);
     in_arrs.emplace_back(arr2, "Reused NDArray with diff data type");
   }

@@ -496,7 +496,7 @@ inline std::vector<NDArrayAttrs> GetTestOutputArrays(
     NDArray arr3(s, Context(), true, mshadow::kUint8);
     tmp_shape[0] = shape[0] * 2;
     arr3 = arr3.AsArray(tmp_shape, mshadow::default_type_flag);
-    InitDefaultArray(&arr3, rand);
+    InitDefaultArray(&arr3, rand, max);
     in_arrs.emplace_back(arr3.Slice(1, shape[0] + 1), "Reused+Reshaped NDArray");
   }

@@ -523,7 +523,7 @@ inline std::vector<NDArrayAttrs> GetTestOutputArrays(
     if ((types & ArrayTypes::MKLDNN && shape.ndim() == md.data.ndims) ||
         (types & ArrayTypes::MKLDNNDiffDim && shape.ndim() != md.data.ndims)) {
       in_arrs.emplace_back(arr, desc_str);
-      InitMKLDNNArray(&in_arrs.back().arr, md, rand);
+      InitMKLDNNArray(&in_arrs.back().arr, md, rand, max);
     }
 
     // Type 8, 9.

@@ -532,7 +532,7 @@ inline std::vector<NDArrayAttrs> GetTestOutputArrays(
     s[0] = shape.Size();
     NDArray arr = NDArray(s, Context());
     arr = arr.AsArray(shape, arr.dtype());
-    InitMKLDNNArray(&arr, md, rand);
+    InitMKLDNNArray(&arr, md, rand, max);
     desc_str = "Reused MKLDNN NDArray";
     if (shape.ndim() != md.data.ndims) {
       std::stringstream ss;
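For context, here is a minimal standalone sketch of the generation scheme the diff parameterizes (`GenValues` is a hypothetical stand-in for the loop inside `InitDefaultArray`, not MXNet code): the default `max = 50` keeps existing tests generating integers in [-50, 50), while a small `max` such as 1 restricts values to -1 and 0, which is the point debated in the review thread below.

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>

// Mirrors the generation scheme from InitDefaultArray in the diff:
// integers drawn from [-max, max).
static std::vector<int> GenValues(int size, int max) {
  std::vector<int> data(size);
  for (int i = 0; i < size; i++)
    data[i] = (std::rand() % (max * 2)) - max;
  return data;
}

int main() {
  // Default behavior, unchanged for existing tests: values in [-50, 50).
  for (int v : GenValues(8, 50)) std::cout << v << ' ';
  std::cout << '\n';
  // A small max, as wanted for the FullyConnected test: only -1 and 0.
  for (int v : GenValues(8, 1)) std::cout << v << ' ';
  std::cout << '\n';
  return 0;
}
```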
Conversations

Reviewer (ciyongch): How about changing it to `data[i] = std::rand() * 1.0 / RAND_MAX - 0.5;`? As `max = 1` will only generate two values: -1.0 and 0.0.
Author (TaoLv): Because I don't want to affect other test cases, which still use the default `max = 50` to generate integers in [-50, 50). But for the FullyConnectedOp I want to generate relatively small numbers. With the given code, the elements will be -1 and 0. Any suggestion?
Reviewer (ciyongch): I have no idea why the range was set to [-50, 50) previously, and I can't figure out any specific reason to use this range for the tests (any upper-bound test?). It would be great if you have any background on it. But anyway, tensors with only two values (-1 and 0, 50% of them 0) might not be a good candidate for the tests.
Author (TaoLv): Okay, I will change it to generate float numbers in [-max, max) rather than integers. Previously I thought sparse data (say, 50% zeros) was also a way to avoid floating-point accumulation error.
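A minimal sketch of what such a float-generating variant could look like (an assumption about the approach; the actual experiment was reverted later in this thread, and this is not the code from that commit):

```cpp
#include <cstdlib>

// Hypothetical float variant of InitDefaultArray's generator:
// uniform floats in [-max, max) instead of integers.
inline float RandUniform(int max) {
  // std::rand() / (RAND_MAX + 1.0) lies in [0, 1),
  // so the scaled and shifted result lies in [-max, max).
  return static_cast<float>(std::rand() / (RAND_MAX + 1.0) * 2 * max - max);
}
```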
Author (TaoLv): @ciyongch Failed to do so. There are test cases doing bit-exact checks, and it seems we cannot pass them with these floating-point numbers.
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-17318/17/pipeline/
https://github.com/apache/incubator-mxnet/blob/master/tests/cpp/include/test_mkldnn.h#L605
Reviewer (ciyongch): With `memcmp` to check the results, the only choice is integer numbers. Is it reasonable to check the results with `AssertEqual` within a small enough threshold like 1e-6, so we can keep the floating-point numbers with their better distribution? Or we can just increase `max` to fill in more distinct values than only -1 and 0. What do you think?
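For illustration, a minimal sketch contrasting the two kinds of check being discussed (the helper names `BitEqual` and `AssertNear` and the flat-buffer interface are assumptions for this sketch, not MXNet's actual test helpers):

```cpp
#include <cmath>
#include <cstring>

// Bit-exact comparison, as memcmp does today: only safe when both
// buffers hold values that were computed without any rounding.
inline bool BitEqual(const float *a, const float *b, size_t n) {
  return std::memcmp(a, b, n * sizeof(float)) == 0;
}

// Tolerance-based comparison in the spirit of the suggested AssertEqual:
// permits small floating-point error, so inputs need not be integers.
inline bool AssertNear(const float *a, const float *b, size_t n,
                       float rtol = 1e-6f, float atol = 1e-6f) {
  for (size_t i = 0; i < n; i++) {
    if (std::fabs(a[i] - b[i]) > atol + rtol * std::fabs(b[i]))
      return false;
  }
  return true;
}
```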
Author (TaoLv): I have reverted the changes for floating-point numbers. Changing `memcmp` to `AssertEqual` is out of the scope of this PR, so I will keep it as is. I was thinking about also including the number 2 in the generated tensor, but found that with the given shapes there is still a chance of error: in the worst case, the intermediate accumulation value will exceed 2^24, so a 1 will be lost when accumulating another 1 onto it.
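The 2^24 threshold mentioned here comes from float32's 24-bit significand: above 2^24, adjacent representable values are more than 1 apart, so adding 1.0f is silently dropped. A quick standalone demonstration:

```cpp
#include <cstdio>

int main() {
  float acc = 16777216.0f;       // 2^24
  printf("%.1f\n", acc + 1.0f);  // prints 16777216.0: the +1 is dropped

  // Accumulating ones stalls once the running sum reaches 2^24.
  float sum = 0.0f;
  for (int i = 0; i < 20000000; i++) sum += 1.0f;
  printf("%.1f\n", sum);         // prints 16777216.0, not 20000000.0
  return 0;
}
```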
Reviewer (ciyongch): Ok, let's keep it as is for now.