CI polish to target 0.0.7 release #58
base: master
Conversation
Is a separate workflow for doc publishing strictly necessary? Readthedocs automatically generates and hosts pages generated with sphinx.
Probably not - though we've already got some published to github pages. Mostly I was hoping to fix some errors in the documentation (like the one in #57) and push them out with a tagged 0.0.7 release - so at least we have a starting point. I haven't used readthedocs - I'm game for whatever is easiest.
Ah, having refreshed my memory, there are 2 separate documentation pages - gh-pages as a landing page / quickstart tutorial, readthedocs for sphinx-generated API docs. That seems like a reasonable distinction to have - whatever simplicity would be gained from having a single host may be wiped out by the hoops you'd have to jump through for sphinx to deal with the landing page.
Just read up on readthedocs and it looks like it handles tags nicely, so as long as we are reasonably committed to semver there shouldn't be a problem. |
figurefirst/svg_to_axes.py
Outdated
@@ -7,6 +7,9 @@
import time
import sys
import copy
import sys
if sys.version_info < (3, 6):
IIRC dicts being ordered is an implementation detail of CPython 3.6, but doesn't become part of the language spec until 3.7. I'm sure the overwhelming use of fifi is from CPython but it's not an expensive switch to make.
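A sketch of what the bumped guard might look like (the helper name and message are illustrative; the actual patch uses a bare module-level check):

```python
import sys


def check_python_version(version_info=sys.version_info):
    """Raise if the interpreter predates guaranteed dict ordering.

    CPython 3.6 keeps dicts insertion-ordered only as an implementation
    detail; the language spec guarantees it from 3.7 onward.
    """
    if version_info < (3, 7):
        raise RuntimeError(
            "figurefirst relies on ordered dicts and requires Python >= 3.7"
        )


check_python_version()  # no-op on any modern interpreter
```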
@@ -0,0 +1,26 @@
name: Publish to PyPI and TestPyPI |
Am I right in thinking that it's purely personal preference whether you split into multiple workflows or have one workflow with multiple jobs, and that there's little benefit either way? I'm pretty new to them.
In this particular case, can you have inter-workflow dependencies? It may be worthwhile to prove that the tests pass before pushing to PyPI.
Excellent point: it is simpler to implement the dependency on the tests by keeping them in the same workflow. As far as I know the on: clause is workflow-specific, so there might be some reasons for having separate workflows; my original thought was to have the build and deploy run on a release event, but I think having all the CI together is cleaner in this case.
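For reference, a hypothetical single-workflow layout where the publish job declares a needs: dependency on the tests (job names, triggers, and steps here are assumptions, not this PR's actual config):

```yaml
# Sketch only: one workflow, two jobs, with deploy gated on test success
# and restricted to tag pushes.
name: CI
on:
  push:
    tags: ["*"]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: pip install tox && tox

  deploy:
    needs: test  # inter-job dependency within a single workflow
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: python setup.py sdist
```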
Looking good!
The unused-import lints from __init__ can be fixed by listing the "exported" items' names like this: __all__ = ["svg_to_axes", "mpl_functions", ...].
I like the usage of the scm version in setup, but you need to be careful about build-time dependencies. setup_requires doesn't actually install that dependency, it just checks whether it's there - I think the preferred way to do it is to add a pyproject.toml which specifies the build system (setuptools) and the build-time dependencies (setuptools and setuptools_scm with hard versions).
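A minimal sketch of such a pyproject.toml under PEP 518 (the version pins are placeholders):

```toml
# Unlike setup_requires, the requires list below is installed into an
# isolated environment before the build runs.
[build-system]
requires = ["setuptools>=42", "wheel", "setuptools_scm>=3.4"]
build-backend = "setuptools.build_meta"
```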
The flake8 config needs to be compatible with black (from a blog post)
[flake8]
ignore = E203, E266, E501, W503, F403, F401
max-line-length = 89
max-complexity = 18
select = B,C,E,F,W,T4,B9
This reverts commit 478c726.
layout.make_mplfigures()
d = np.array([[144, 57], [138, 57], [138, 59], [141, 61], [141, 82], [138, 84], [138, 85], [142, 85], [147, 85], [147, 84], [144, 82], [144, 57], [144, 57], [155, 57], [149, 57], [149, 59], [152, 61], [152, 82], [149, 84], [149, 85], [153, 85], [158, 85], [158, 84], [155, 82], [155, 57], [155, 57], [273, 57], [267, 57], [267, 59], [270, 61], [270, 82], [267, 84], [267, 85], [271, 85], [276, 85], [276, 84], [273, 82], [273, 57], [273, 57], [295, 57], [289, 57], [289, 59], [292, 61], [292, 70], [287, 67], [278, 76], [287, 85], [292, 83], [292, 85], [298, 85], [298, 84], [295, 81], [295, 57], [295, 57], [90, 57], [90, 59], [91, 59], [94, 61], [94, 82], [91, 84], [90, 84], [90, 85], [96, 85], [102, 85], [102, 84], [101, 84], [98, 82], [98, 71], [110, 71], [110, 82], [107, 84], [106, 84], [106, 85], [112, 85], [118, 85], [118, 84], [117, 84], [113, 82], [113, 61], [117, 59], [118, 59], [118, 57], [112, 58], [106, 57], [106, 59], [107, 59], [110, 61], [110, 70], [98, 70], [98, 61], [101, 59], [102, 59], [102, 57], [96, 58], [90, 57], [90, 57], [193, 57], [193, 59], [197, 60], [205, 85], [205, 86], [206, 85], [213, 65], [219, 85], [220, 86], [221, 85], [229, 61], [233, 59], [233, 57], [229, 58], [224, 57], [224, 59], [228, 61], [227, 62], [221, 80], [215, 60], [215, 60], [218, 59], [218, 57], [213, 58], [208, 57], [208, 59], [211, 60], [212, 63], [207, 80], [200, 60], [200, 60], [203, 59], [203, 57], [198, 58], [193, 57], [193, 57], [128, 67], [120, 76], [129, 85], [135, 80], [135, 80], [134, 80], [129, 84], [125, 82], [123, 76], [134, 76], [135, 75], [128, 67], [128, 67], [169, 67], [160, 76], [169, 85], [178, 76], [169, 67], [169, 67], [240, 67], [231, 76], [240, 85], [249, 76], [240, 67], [240, 67], [257, 67], [251, 68], [251, 69], [254, 71], [254, 82], [251, 84], [251, 85], [256, 85], [261, 85], [261, 84], [260, 84], [257, 82], [257, 75], [262, 68], [262, 68], [261, 70], [263, 71], [265, 70], [262, 67], [257, 71], [257, 67], [257, 67], [128, 68], [133, 75], [123, 75], 
[128, 68], [128, 68], [169, 68], [173, 70], [174, 76], [173, 81], [169, 84], [164, 82], [163, 76], [164, 70], [169, 68], [169, 68], [240, 68], [244, 70], [246, 76], [245, 81], [240, 84], [235, 82], [234, 76], [235, 70], [240, 68], [240, 68], [287, 68], [292, 70], [292, 72], [292, 80], [292, 82], [287, 84], [283, 82], [281, 76], [283, 71], [287, 68], [287, 68]])
d = np.array(
Could this long literal be factored out into a CSV?
Maybe, but it's sort of nice to have everything in a single file for an "example". On a side note, it would be great to have an actual test suite rather than using the examples as a smoke test.
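A sketch of the distinction, with a toy parser standing in for any figurefirst function - a real test pins behaviour down rather than just checking the example runs:

```python
# Toy stand-in for a figurefirst-style helper; any real function from the
# library would do.  The point is the shape of the test, not the parser.
def parse_figure_size(svg_width):
    """Strip a 'px' suffix and return the numeric value as a float."""
    return float(svg_width.replace("px", ""))


def test_parse_figure_size():
    # A real test asserts on outputs, instead of only checking that an
    # example script runs to completion without raising.
    assert parse_figure_size("640px") == 640.0
    assert parse_figure_size("3.5") == 3.5


test_parse_figure_size()
```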
run: |
python setup.py sdist
- name: Publish distribution 📦 to Test PyPI
If this runs every time, will it fail a build if the same version is pushed repeatedly?
This should push something like figurefirst-0.0.6.dev{distance}. Of course, given this workflow where we are hitting TestPyPI with every push on any branch, there is a potential collision if two branches get pushed at the same distance.
My original idea was to implement something like this:
https://github.com/scikit-hep/pyhf/tree/master/.github/workflows
Which has a separate build and publish workflow for merged branches. But as we discussed above, this might just add unnecessary complexity to the project given its size.
I've enabled continue-on-error for this step so that failures in the rare cases of these collisions shouldn't keep us from pushing a release. I'm not sure it is actually necessary to set continue-on-error to true, but it does get rid of the annoying email updates, and we can still double-check the console output before merging.
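A rough illustration (not setuptools_scm itself) of why two branches at the same tag distance collide under a "guess next version" style dev scheme:

```python
# Illustration only: a guess-next-dev scheme bumps the patch number of the
# last tag and appends .dev{distance}, where distance is the number of
# commits since that tag.
def scm_dev_version(last_tag, distance):
    major, minor, patch = (int(part) for part in last_tag.split("."))
    if distance == 0:
        return last_tag  # exactly on the tag: a clean release version
    return f"{major}.{minor}.{patch + 1}.dev{distance}"


# Two branches three commits past the 0.0.5 tag both map to 0.0.6.dev3,
# which is why repeated pushes to TestPyPI can fail with a name clash.
```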
#import xml.etree.ElementTree as ET
#ET.register_namespace('figurefirst', 'http://www.figurefirst.com')

sys.path.append("/usr/share/inkscape/extensions") # or another path, as necessary
These could probably be behind if statements to detect the platform.
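A hypothetical helper along those lines - the paths are typical install locations, not guaranteed, and a user-supplied override should still win:

```python
import sys


def inkscape_extensions_dir(platform=sys.platform):
    """Guess the Inkscape extensions directory for the given platform.

    These are common default install locations; none of them is checked
    for existence here.
    """
    if platform.startswith("linux"):
        return "/usr/share/inkscape/extensions"
    if platform == "darwin":
        return "/Applications/Inkscape.app/Contents/Resources/share/inkscape/extensions"
    if platform.startswith("win"):
        return r"C:\Program Files\Inkscape\share\inkscape\extensions"
    raise RuntimeError(f"unrecognised platform: {platform}")
```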
Yeah, I agree - but since this is mostly some CI improvements I would rather handle this in another PR. I've opened issue #60.
Cleans up some of the CI by moving all the tests to tox, enables PyPI deployment on pushing a tag (haven't tested this yet), and moves control of all versioning to vcs.
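A minimal tox.ini along these lines might look like the following (the env list, dependency, and command are assumptions, not the PR's actual config):

```ini
# Sketch: run the test suite against each supported interpreter.
[tox]
envlist = py36, py37, py38

[testenv]
deps = pytest
commands = pytest
```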