
Allow batching in AmplitudeEmbedding with Tensorflow #4818

Merged: 6 commits merged into master from amplitude-tf-batch on Nov 10, 2023

Conversation

@dwierichs (Contributor) commented on Nov 10, 2023

Context:
The padding and normalization step in AmplitudeEmbedding previously used a manual for loop over the batch dimension, making it incompatible with TensorFlow's tf.function(jit_compile=True).

Description of the Change:
The preprocessing now processes the whole batch of features simultaneously, using NumPy-like broadcasting instead of the loop. (This should usually also be faster.)
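
For illustration, here is a minimal NumPy sketch of the broadcasting idea (the function name, signature, and details are made up for this example and are not the actual PennyLane implementation):

```python
import numpy as np

def pad_and_normalize(features, dim, pad_with=0.0):
    # Hypothetical helper sketching the broadcasting approach, not PennyLane source.
    # `features` is interpreted as a (batch, n_features) array.
    features = np.atleast_2d(features)
    batch, n_features = features.shape

    # Pad every row at once instead of looping over the batch dimension.
    padding = np.full((batch, dim - n_features), pad_with)
    padded = np.concatenate([features, padding], axis=1)

    # Normalize each row to unit norm; `norms` broadcasts over the feature axis.
    norms = np.linalg.norm(padded, axis=1, keepdims=True)
    return padded / norms
```

For example, `pad_and_normalize(np.random.rand(5, 3), dim=4)` returns a `(5, 4)` array whose rows all have unit norm, with no Python loop over the batch.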

In addition, the tests have been extended to cover TensorFlow JIT compilation with padding, and the padding and non-padding tests have been combined into a single test function.

Benefits:
Support for batched inputs in TensorFlow, including JIT compilation.
A (probable) speedup.
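
As a rough usage sketch of what this enables (wire count, feature values, and the exact device/interface options are illustrative assumptions, not taken from this PR):

```python
import pennylane as qml
import tensorflow as tf

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="tf")
def circuit(features):
    # Padding and normalization are now applied to the whole batch via broadcasting.
    qml.AmplitudeEmbedding(features, wires=[0, 1], pad_with=0.0, normalize=True)
    return qml.expval(qml.PauliZ(0))

# A batch of length-3 feature vectors, padded to the 4 amplitudes of 2 qubits.
features = tf.constant([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])

jitted = tf.function(circuit, jit_compile=True)
print(jitted(features))
```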

Possible Drawbacks:
Strictly speaking, the previous implementation allowed for different feature vector lengths across the batch dimension (because the padding happens before we rely on the input being a non-ragged array/tensor).
I think it is fine to constrain the input to non-ragged arrays already at the preprocessing stage.

Related GitHub Issues:
closes #2976


codecov bot commented Nov 10, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (2c9a049) 99.64% vs. head (fe6a6f8) 99.64%.
The report is 1 commit behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4818      +/-   ##
==========================================
- Coverage   99.64%   99.64%   -0.01%     
==========================================
  Files         381      381              
  Lines       34382    34117     -265     
==========================================
- Hits        34260    33995     -265     
  Misses        122      122              
| Files | Coverage Δ |
| --- | --- |
| pennylane/__init__.py | 100.00% <100.00%> (ø) |
| pennylane/operation.py | 97.26% <ø> (-0.02%) ⬇️ |
| pennylane/ops/functions/generator.py | 100.00% <ø> (ø) |
| pennylane/ops/op_math/controlled.py | 99.63% <ø> (ø) |
| pennylane/tape/qscript.py | 100.00% <ø> (ø) |
| pennylane/templates/embeddings/amplitude.py | 100.00% <100.00%> (+2.08%) ⬆️ |
| pennylane/transforms/__init__.py | 100.00% <ø> (ø) |
| pennylane/transforms/batch_transform.py | 96.03% <ø> (ø) |
| pennylane/transforms/core/transform_dispatcher.py | 100.00% <ø> (ø) |
| pennylane/transforms/op_transforms.py | 100.00% <ø> (ø) |

... and 2 more

... and 40 files with indirect coverage changes


@albi3ro (Contributor) commented on Nov 10, 2023

Confirmed that this fixes the issue posted in the discussion forum: https://discuss.pennylane.ai/t/importerror-cannot-import-name-shape/1383/13?u=isaacdevlugt

@timmysilv (Contributor) left a comment:

🎉

@dwierichs dwierichs enabled auto-merge (squash) November 10, 2023 16:39
@dwierichs dwierichs merged commit 677fbe0 into master Nov 10, 2023
32 checks passed
@dwierichs dwierichs deleted the amplitude-tf-batch branch November 10, 2023 17:00
Development

Successfully merging this pull request may close these issues.

Allow parameter broadcasting in AmplitudeEmbedding with TensorFlow
3 participants