93 set up aws workflow for benchmarking #108

Merged
merged 6 commits into main from the 93-set-up-aws-workflow-for-benchmarking branch
Dec 3, 2024

Conversation

jordandsullivan
Collaborator

Created a Dockerfile that runs the benchmark shell script, plus an AWS CodeBuild project linked to our GitHub UCC repo; the build triggers automatically once a week (the interval is adjustable).
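
For illustration, a minimal sketch of what such a Dockerfile might look like. The base image, file layout, and `run_benchmarks.sh` script name are assumptions made for the sketch, not details taken from this PR:

```dockerfile
# Hypothetical sketch only: base image, paths, and script name are assumptions.
FROM python:3.12-slim

WORKDIR /ucc

# Copy the repo into the image and install the package with its dependencies.
COPY . .
RUN pip install --no-cache-dir .

# Run the benchmark shell script when the container starts.
ENTRYPOINT ["bash", "benchmarks/scripts/run_benchmarks.sh"]
```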

jordandsullivan and others added 4 commits November 30, 2024 21:22
* Move and expand test for logical equivalence

Test moved to test_compile.py

* Delete test_circuit_equivalence.py

Test moved to test_compile.py

* Remove unnecessary qubit parameterizations

Remove unnecessary qubit parameterizations in `test_compiled_circuits_equivalent` and `test_compilation_retains_gateset`

* add basic expectation value benchmark (#96)

* add basic expectation value benchmark

* move compile function to `common.py`

* write data out to json

* make data pipeline more functional

* simplify real check

* Add bug label to new bug report issues

* Add feature tag to feature template issues

* 97 reorg files (#98)

* add basic expectation value benchmark

* move compile function to `common.py`

* write data out to json

* make data pipeline more functional

* simplify real check

* Update naming convention for different benchmarks

* Dry up save results into common function

* Fix init import issue

* Move gate_counters into scripts/common.py for simplicity

* Fix notebook errors, remove old data

* Remove duplicate function definitions

* Remove duplicate lines

* Add folder to docstring, call out alternative benchmark name

---------

Co-authored-by: nate stemen <[email protected]>

---------

Co-authored-by: Misty-W <[email protected]>
Co-authored-by: nate stemen <[email protected]>
No longer needed; the build-spec now lives on AWS.
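
For context on what was removed: a CodeBuild buildspec that builds and runs the benchmark container could look roughly like the sketch below. This is hypothetical; the actual build-spec is configured inside AWS CodeBuild and its contents are not shown in this PR:

```yaml
# Hypothetical sketch: the real build-spec lives in AWS CodeBuild, not the repo.
version: 0.2

phases:
  build:
    commands:
      # Build the benchmark image and run it once per build; the weekly
      # cadence comes from the CodeBuild trigger, not from this file.
      - docker build -t ucc-benchmarks .
      - docker run --rm ucc-benchmarks
```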
jordandsullivan linked an issue on Dec 3, 2024 that may be closed by this pull request
jordandsullivan merged commit c20a189 into main on Dec 3, 2024
jordandsullivan added a commit that referenced this pull request on Dec 3, 2024
jordandsullivan added a commit that referenced this pull request on Dec 3, 2024
jordandsullivan deleted the 93-set-up-aws-workflow-for-benchmarking branch on December 6, 2024 at 22:14
Successfully merging this pull request may close these issues.

Set up AWS workflow for benchmarking