Allow batching in AmplitudeEmbedding with TensorFlow #4818
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@            Coverage Diff             @@
##           master    #4818      +/-   ##
==========================================
- Coverage   99.64%   99.64%    -0.01%
==========================================
  Files         381      381
  Lines       34382    34117      -265
==========================================
- Hits        34260    33995      -265
  Misses        122      122
☔ View full report in Codecov by Sentry.
Confirmed to fix the issue posted in the discussion forum: https://discuss.pennylane.ai/t/importerror-cannot-import-name-shape/1383/13?u=isaacdevlugt
🎉
Context:
The padding and normalization step in AmplitudeEmbedding previously used a manual for loop, making it incompatible with TensorFlow's tf.function(jit_compile=True).
Description of the Change:
The preprocessing now processes the whole batch of features simultaneously, using NumPy-like broadcasting instead (a rough sketch of the idea follows below).
(This should usually also be faster.)
In addition, the tests have been extended to cover TensorFlow JITting with padding, and the padding and non-padding tests have been combined into a single test function.
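For illustration only, here is a minimal NumPy sketch of the broadcasting idea; the name pad_and_normalize and its signature are hypothetical and do not reflect the actual code in AmplitudeEmbedding:

```python
import numpy as np

def pad_and_normalize(features, target_len, pad_value=0.0):
    # `features` is a non-ragged batch of shape (batch_size, n_features).
    features = np.asarray(features, dtype=complex)
    batch_size, n_features = features.shape

    # Pad every vector in the batch at once along the feature axis,
    # instead of looping over the batch dimension.
    padding = np.full((batch_size, target_len - n_features), pad_value, dtype=complex)
    padded = np.concatenate([features, padding], axis=1)

    # Per-sample L2 norms, broadcast over the feature axis.
    norms = np.linalg.norm(padded, axis=1, keepdims=True)
    return padded / norms

# Example: a batch of two length-3 vectors padded to length 4 and normalized.
batch = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
print(pad_and_normalize(batch, target_len=4))
```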
Benefits:
Support for TensorFlow, including JIT compilation via tf.function(jit_compile=True) (see the usage sketch below).
A likely speedup from the vectorized preprocessing.
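As a usage sketch (not taken from this PR's test suite), something along these lines should now work; the device, wires, and batch values are just illustrative assumptions:

```python
import pennylane as qml
import tensorflow as tf

dev = qml.device("default.qubit", wires=2)

@tf.function(jit_compile=True)
@qml.qnode(dev, interface="tf")
def circuit(features):
    # Feature vectors shorter than 2**2 are padded with zeros and normalized internally.
    qml.AmplitudeEmbedding(features, wires=[0, 1], pad_with=0.0, normalize=True)
    return qml.expval(qml.PauliZ(0))

# A non-ragged batch of feature vectors.
batch = tf.constant([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], dtype=tf.float64)
print(circuit(batch))
```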
Possible Drawbacks:
Strictly speaking, the previous implementation allowed for different feature vector lengths across the batching dimension, because the padding happened before the input had to be a non-ragged array/tensor.
I think it is fine to require the input to the preprocessing to already be a non-ragged array.
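To make that constraint concrete, a hypothetical caller with per-sample lengths that differ would now need to pad the batch to a common length before passing it in; a minimal sketch, assuming zero-padding is acceptable:

```python
import numpy as np

# Hypothetical ragged input: feature vectors of different lengths.
ragged = [[0.1, 0.2], [0.3, 0.4, 0.5]]

# Pad with zeros to a common length so the batch is a regular (non-ragged) array;
# AmplitudeEmbedding can then handle the remaining padding up to 2**n and normalization.
max_len = max(len(f) for f in ragged)
batch = np.array([f + [0.0] * (max_len - len(f)) for f in ragged])
```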
Related GitHub Issues:
closes #2976