
Align style checks with jwst #383

Merged Feb 3, 2025

Commits (61)
8e5cafb
add pre commit configuration and ruff
braingram Jan 29, 2025
5251fa0
ignore all failing checks
braingram Jan 29, 2025
9ac84a3
apply ruff format
braingram Jan 29, 2025
75e4527
update ci
braingram Jan 29, 2025
7332c1a
line length 100
braingram Jan 29, 2025
01f1dfe
fix tox
braingram Jan 29, 2025
34bce78
fix A001
braingram Jan 29, 2025
5e7faae
allow A002
braingram Jan 29, 2025
900612f
allow A004
braingram Jan 29, 2025
d2715fb
remove ARG checks
braingram Jan 29, 2025
c7349fb
allow B003
braingram Jan 29, 2025
61701d4
allow B006
braingram Jan 29, 2025
e159b26
allow B007
braingram Jan 29, 2025
7bf644f
allow B009
braingram Jan 29, 2025
b38a11c
allow B011
braingram Jan 29, 2025
fd63de3
allow B015
braingram Jan 29, 2025
e39144e
allow B018
braingram Jan 29, 2025
0830c14
allow B905
braingram Jan 29, 2025
d0dc8cc
allow C401
braingram Jan 29, 2025
d7761f1
allow C402
braingram Jan 29, 2025
74f5779
allow C405
braingram Jan 29, 2025
16fc206
allow C408
braingram Jan 29, 2025
6aa5ca2
allow C414
braingram Jan 29, 2025
8c82313
allow C416
braingram Jan 29, 2025
25b4b4c
allow C419
braingram Jan 29, 2025
861ef8b
allow E712
braingram Jan 29, 2025
1a16618
allow E713
braingram Jan 29, 2025
6382d3b
allow E721
braingram Jan 29, 2025
6bababe
allow INC001
braingram Jan 29, 2025
8c39257
allow INP001
braingram Jan 29, 2025
fb7aff8
allow N801
braingram Jan 29, 2025
54992ac
allow N803
braingram Jan 29, 2025
93b811d
allow N806
braingram Jan 29, 2025
ff3052b
allow N999
braingram Jan 29, 2025
8d2911b
allow PTH101
braingram Jan 29, 2025
0bca693
allow PTH110
braingram Jan 29, 2025
034dc76
allow PTH119
braingram Jan 29, 2025
4301c6c
allow PTH120
braingram Jan 29, 2025
0b3d276
allow PTH122
braingram Jan 29, 2025
79bc50b
ignore PTH123
braingram Jan 29, 2025
83d34da
move S101 to todo
braingram Jan 29, 2025
a2757ad
ignore SLF001
braingram Jan 29, 2025
a29e70d
allow TRY002
braingram Jan 29, 2025
819c2ba
allow TRY004
braingram Jan 29, 2025
e991e6e
allow UP009
braingram Jan 29, 2025
03919bb
allow UP024
braingram Jan 29, 2025
9ac2169
allow UP028
braingram Jan 29, 2025
c2ef405
allow UP030
braingram Jan 29, 2025
1e667b0
allow UP031
braingram Jan 29, 2025
1842684
allow UP032
braingram Jan 29, 2025
56ae1e0
ignore UP038
braingram Jan 29, 2025
ff30079
allow UP039
braingram Jan 29, 2025
0928005
allow NPY002
braingram Jan 30, 2025
73362c5
allow B904
braingram Jan 30, 2025
9517f97
allow B028
braingram Jan 30, 2025
582fc36
add changelog
braingram Jan 30, 2025
c45b8b0
enable codespell
braingram Jan 31, 2025
645a19c
Update src/stdatamodels/dynamicdq.py
braingram Jan 31, 2025
957eb66
disable B011 for tests
braingram Jan 31, 2025
17acc10
rename to filter_name
braingram Jan 31, 2025
0c91732
fix suggestion formatting
braingram Jan 31, 2025
2 changes: 1 addition & 1 deletion .github/CODEOWNERS
@@ -1,3 +1,3 @@
# automatically requests pull request reviews for files matching the given pattern; the last match takes precendence
# automatically requests pull request reviews for files matching the given pattern; the last match takes precedence

* @spacetelescope/stdatamodels-maintainers
12 changes: 7 additions & 5 deletions .github/workflows/ci.yml
@@ -44,11 +44,13 @@ jobs:
crds_path: ${{ steps.crds_path.outputs.path }}
crds_server: ${{ steps.crds_server.outputs.url }}
check:
uses: OpenAstronomy/github-actions-workflows/.github/workflows/tox.yml@8c0fde6f7e926df6ed7057255d29afa9c1ad5320 # v1.16.0
with:
envs: |
- linux: check-style
- linux: check-security
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@42375524e23c412d93fb67b49958b491fce71c38 # v5.4.0
with:
python-version: '3.12'
- uses: pre-commit/action@2c7b3805fd2a0fd8c1884dcaebf91fc102a13ecd # v3.0.1
test:
uses: OpenAstronomy/github-actions-workflows/.github/workflows/tox.yml@8c0fde6f7e926df6ed7057255d29afa9c1ad5320 # v1.16.0
needs: [ data ]
35 changes: 35 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,35 @@
exclude: ".*\\.asdf$"

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: check-added-large-files
- id: check-ast
- id: check-case-conflict
- id: check-yaml
args: ["--unsafe"]
- id: check-toml
- id: check-merge-conflict
- id: check-symlinks
- id: debug-statements
- id: detect-private-key
# - id: end-of-file-fixer
# - id: trailing-whitespace
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: 'v0.9.2'
hooks:
- id: ruff
args: ["--fix"]
- id: ruff-format
# - repo: https://github.com/numpy/numpydoc
# rev: v1.8.0
# hooks:
# - id: numpydoc-validation
- repo: https://github.com/codespell-project/codespell
rev: v2.4.0
hooks:
- id: codespell
args: ["--write-changes"]
additional_dependencies:
- tomli
70 changes: 70 additions & 0 deletions .ruff.toml
@@ -0,0 +1,70 @@
extend = "pyproject.toml"

exclude = [
".git",
"__pycache__",
"docs",
".eggs",
"build",
"dist",
".tox",
".eggs",
]
line-length = 100
target-version = "py310"

[format]
quote-style = "double"
indent-style = "space"
docstring-code-format = true

[lint]
select = [
"F", # Pyflakes (part of default flake8)
"E", # pycodestyle (part of default flake8)
"W", # pycodestyle (part of default flake8)
#"D", # docstrings, see also numpydoc pre-commit action
"N", # pep8-naming (naming conventions)
"A", # flake8-builtins (prevent shadowing of builtins)
#"ARG", # flake8-unused-arguments (prevent unused arguments)
"B", # flake8-bugbear (miscellaneous best practices to avoid bugs)
"C4", # flake8-comprehensions (best practices for comprehensions)
"ICN", # flake8-import-conventions (enforce import conventions)
"INP", # flake8-no-pep420 (prevent use of PEP420, i.e. implicit name spaces)
"ISC", # flake8-implicit-str-concat (conventions for concatenating long strings)
"LOG", # flake8-logging
"NPY", # numpy-specific rules
"PGH", # pygrep-hooks (ensure appropriate usage of noqa and type-ignore)
"PTH", # flake8-use-pathlib (enforce using Pathlib instead of os)
"S", # flake8-bandit (security checks)
"SLF", # flake8-self (prevent using private class members outside class)
"SLOT", # flake8-slots (require __slots__ for immutable classes)
"T20", # flake8-print (prevent print statements in code)
"TRY", # tryceratops (best practices for try/except blocks)
"UP", # pyupgrade (simplified syntax allowed by newer Python versions)
"YTT", # flake8-2020 (prevent some specific gotchas from sys.version)
]
ignore = [
"D100", # missing docstring in public module
"E741", # ambiguous variable name (O/0, l/I, etc.)
"UP008", # use super() instead of super(class, self). no harm being explicit
"UP015", # unnecessary open(file, "r"). no harm being explicit
"TRY003", # prevents custom exception messages not defined in exception itself.
"ISC001", # single line implicit string concatenation. formatter recommends ignoring this.
"PTH123", # use Path.open instead of open
"UP038", # isinstance with | instead of ,
# longer term fix
"S101", # asserts are used in many non-test places
"SLF001", # private member access, this is overly restrictive
]

[lint.pydocstyle]
convention = "numpy"

[lint.flake8-annotations]
ignore-fully-untyped = true # Turn of annotation checking for fully untyped code

[lint.per-file-ignores]
"**/test_*.py" = ["S101", "SLF001", "B011"]
"tests/**" = ["INP001"]
"src/stdatamodels/jwst/datamodels/darkMIRI.py" = ["N999"]
1 change: 1 addition & 0 deletions changes/383.misc.rst
@@ -0,0 +1 @@
Apply style checks to code to match jwst.
2 changes: 1 addition & 1 deletion docs/source/asdf_in_fits.rst
@@ -78,4 +78,4 @@ Finally providing the tree and hdulist to

When read back with :func:`stdatamodels.asdf_in_fits.open` the data for
``sci`` and ``dq`` will be read from the HDUList instead of from the
ASDF data embeded in the HDUList.
ASDF data embedded in the HDUList.
2 changes: 1 addition & 1 deletion docs/source/jwst/kwtool/keyword_dictionary.rst
@@ -100,7 +100,7 @@ keyword definitions in the keyword dictionary:

- fits_keyword: the FITS keyword name
- fits_hdu: the FITS hdu name
- title: the title of the keyword defintion
- title: the title of the keyword definition
- type: one of "float", "integer", "string", "boolean"
- enum (optional): list of valid values

4 changes: 4 additions & 0 deletions pyproject.toml
@@ -120,6 +120,10 @@ select = [
"E722",
]

[tool.codespell]
skip = "*.fits, *.asdf, ./build, ./docs/_build, CHANGES.rst, *.schema.yaml"
ignore-words-list = "indx, delt, Shepard, fo"

[tool.pytest.ini_options]
minversion = "4.6"
doctest_plus = true
2 changes: 1 addition & 1 deletion src/stdatamodels/__init__.py
@@ -2,7 +2,7 @@
from . import _version


__all__ = ['DataModel', '__version__']
__all__ = ["DataModel", "__version__"]


__version__ = _version.version
14 changes: 6 additions & 8 deletions src/stdatamodels/asdf_in_fits.py
@@ -4,10 +4,7 @@
from . import fits_support


__all__ = [
'write',
'open'
]
__all__ = ["write", "open"]


def write(filename, tree, hdulist=None, **kwargs):
@@ -30,7 +27,7 @@ def write(filename, tree, hdulist=None, **kwargs):
hdulist.writeto(filename, **kwargs)


def open(filename_or_hdu, **kwargs):
def open(filename_or_hdu, **kwargs): # noqa: A001
"""Read ASDF data embedded in a fits file

Parameters
@@ -46,14 +43,14 @@ def open(filename_or_hdu, **kwargs):
Returns
-------
af : :obj:`asdf.AsdfFile`
:obj:`asdf.AsdfFile` created from ASDF data embeded in the opened
:obj:`asdf.AsdfFile` created from ASDF data embedded in the opened
fits file.
"""

is_hdu = isinstance(filename_or_hdu, fits.HDUList)
hdulist = filename_or_hdu if is_hdu else fits.open(filename_or_hdu)
if 'ignore_missing_extensions' not in kwargs:
kwargs['ignore_missing_extensions'] = False
if "ignore_missing_extensions" not in kwargs:
kwargs["ignore_missing_extensions"] = False
af = fits_support.from_fits_asdf(hdulist, **kwargs)

if is_hdu:
@@ -65,6 +62,7 @@ def wrap_close(af, hdulist):
def close():
asdf.AsdfFile.close(af)
hdulist.close()

return close

af.close = wrap_close(af, hdulist)
5 changes: 2 additions & 3 deletions src/stdatamodels/basic_utils.py
@@ -33,12 +33,11 @@ def multiple_replace(string, rep_dict):
If the replacements where chained, the result would have been
'lamb lamb'

>>> multiple_replace('button mutton', {'but': 'mut', 'mutton': 'lamb'})
>>> multiple_replace("button mutton", {"but": "mut", "mutton": "lamb"})
'mutton lamb'

"""
pattern = re.compile(
"|".join([re.escape(k) for k in sorted(rep_dict, key=len, reverse=True)]),
flags=re.DOTALL
"|".join([re.escape(k) for k in sorted(rep_dict, key=len, reverse=True)]), flags=re.DOTALL
)
return pattern.sub(lambda x: rep_dict[x.group(0)], string)
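The reformatted `multiple_replace` above builds one alternation from all keys, sorted longest-first, so overlapping keys cannot chain through each other. A self-contained sketch of the same technique:

```python
import re


def multiple_replace(string, rep_dict):
    # Sort keys longest-first so "mutton" beats its prefix "but" during
    # matching, and perform every substitution in a single pass (no chaining).
    pattern = re.compile(
        "|".join(re.escape(k) for k in sorted(rep_dict, key=len, reverse=True)),
        flags=re.DOTALL,
    )
    return pattern.sub(lambda m: rep_dict[m.group(0)], string)


print(multiple_replace("button mutton", {"but": "mut", "mutton": "lamb"}))  # mutton lamb
```

If the replacements were chained instead, "button" → "mutton" → "lamb" would yield "lamb lamb", which is exactly what the single-pass regex avoids.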
38 changes: 17 additions & 21 deletions src/stdatamodels/dqflags.py
@@ -10,6 +10,7 @@
which provides 32 bits. Bits of an integer are most easily referred to using
the formula `2**bit_number` where `bit_number` is the 0-index bit of interest.
"""

from astropy.nddata.bitmask import interpret_bit_flags as ap_interpret_bit_flags
from stdatamodels.basic_utils import multiple_replace

@@ -46,10 +47,7 @@ def interpret_bit_flags(bit_flags, flip_bits=None, mnemonic_map=None):
raise TypeError("`mnemonic_map` is a required argument")
bit_flags_dm = bit_flags
if isinstance(bit_flags, str):
dm_flags = {
key: str(val)
for key, val in mnemonic_map.items()
}
dm_flags = {key: str(val) for key, val in mnemonic_map.items()}
bit_flags_dm = multiple_replace(bit_flags, dm_flags)

return ap_interpret_bit_flags(bit_flags_dm, flip_bits=flip_bits)
@@ -74,24 +72,26 @@ def dqflags_to_mnemonics(dqflags, mnemonic_map):

Examples
--------
>>> pixel = {'GOOD': 0, # No bits set, all is good
... 'DO_NOT_USE': 2**0, # Bad pixel. Do not use
... 'SATURATED': 2**1, # Pixel saturated during exposure
... 'JUMP_DET': 2**2, # Jump detected during exposure
... }

>>> group = {'GOOD': pixel['GOOD'],
... 'DO_NOT_USE': pixel['DO_NOT_USE'],
... 'SATURATED': pixel['SATURATED'],
... }
>>> pixel = {
... "GOOD": 0, # No bits set, all is good
... "DO_NOT_USE": 2**0, # Bad pixel. Do not use
... "SATURATED": 2**1, # Pixel saturated during exposure
... "JUMP_DET": 2**2, # Jump detected during exposure
... }

>>> group = {
... "GOOD": pixel["GOOD"],
... "DO_NOT_USE": pixel["DO_NOT_USE"],
... "SATURATED": pixel["SATURATED"],
... }

>>> dqflags_to_mnemonics(1, pixel)
{'DO_NOT_USE'}

>>> dqflags_to_mnemonics(7, pixel) #doctest: +SKIP
>>> dqflags_to_mnemonics(7, pixel) # doctest: +SKIP
{'JUMP_DET', 'DO_NOT_USE', 'SATURATED'}

>>> dqflags_to_mnemonics(7, pixel) == {'JUMP_DET', 'DO_NOT_USE', 'SATURATED'}
>>> dqflags_to_mnemonics(7, pixel) == {"JUMP_DET", "DO_NOT_USE", "SATURATED"}
True

>>> dqflags_to_mnemonics(1, mnemonic_map=pixel)
@@ -100,9 +100,5 @@ def dqflags_to_mnemonics(dqflags, mnemonic_map):
>>> dqflags_to_mnemonics(1, mnemonic_map=group)
{'DO_NOT_USE'}
"""
mnemonics = {
mnemonic
for mnemonic, value in mnemonic_map.items()
if (dqflags & value)
}
mnemonics = {mnemonic for mnemonic, value in mnemonic_map.items() if (dqflags & value)}
return mnemonics
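The reformatted set comprehension at the end of `dqflags_to_mnemonics` is the whole algorithm: a mnemonic is reported whenever its bit value overlaps the integer flag word. A standalone sketch using the same example map as the docstring:

```python
def dqflags_to_mnemonics(dqflags, mnemonic_map):
    # Bitwise AND picks out every named flag whose bit is set. Note that a
    # zero-valued entry like GOOD can never be reported, since 0 & x == 0.
    return {name for name, value in mnemonic_map.items() if dqflags & value}


pixel = {"GOOD": 0, "DO_NOT_USE": 2**0, "SATURATED": 2**1, "JUMP_DET": 2**2}
print(dqflags_to_mnemonics(7, pixel))  # DO_NOT_USE, SATURATED and JUMP_DET, in set order
```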
21 changes: 13 additions & 8 deletions src/stdatamodels/dynamicdq.py
@@ -1,6 +1,7 @@
import numpy as np

import logging

log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
log.addHandler(logging.NullHandler())
@@ -32,23 +33,27 @@ def dynamic_mask(input_model, mnemonic_map, inv=False):

dq_table = input_model.dq_def
# Get the DQ array and the flag definitions
if (dq_table is not None and
not np.isscalar(dq_table) and
len(dq_table.shape) and
len(dq_table)):
if (
(dq_table is not None)
and (not np.isscalar(dq_table))
and (len(dq_table.shape))
and (len(dq_table))
):
#
# Make an empty mask
dqmask = np.zeros(input_model.dq.shape, dtype=input_model.dq.dtype)
for record in dq_table:
bitplane = record['VALUE']
dqname = record['NAME'].strip()
bitplane = record["VALUE"]
dqname = record["NAME"].strip()

# Check that a flag in the 'dq_def' is a valid DQ flag.
try:
standard_bitvalue = mnemonic_map[dqname]
except KeyError:
log.warning('Keyword %s does not correspond to an existing '
'DQ mnemonic, so will be ignored' % (dqname))
log.warning(
f"Keyword {dqname} does not correspond to an "
"existing DQ mnemonic, so will be ignored"
)
continue

if not inv:
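The restructured condition in `dynamic_mask` guards a simple translation loop: each `dq_def` record maps a file-local bitplane to the standard bit value from `mnemonic_map`, skipping (with a warning, in the real code) any name the map doesn't know. A sketch of that translation with hypothetical flag tables, not real JWST values:

```python
import numpy as np

# Hypothetical dq_def-style records: file-local (NAME, VALUE) pairs.
dq_def = [("DO_NOT_USE", 1), ("SATURATED", 4), ("NOT_IN_MAP", 8)]
# Standard bit values; NOT_IN_MAP is absent and will be skipped.
mnemonic_map = {"DO_NOT_USE": 1, "SATURATED": 2}

dq = np.array([0, 1, 4, 5], dtype=np.uint32)
dqmask = np.zeros_like(dq)
for name, bitplane in dq_def:
    standard = mnemonic_map.get(name)
    if standard is None:
        continue  # unknown mnemonic: ignore this record
    # Pixels with the file-local bit set receive the standard bit instead.
    dqmask[np.bitwise_and(dq, bitplane) != 0] |= standard

print(dqmask)  # [0 1 2 3]
```

The interesting case is `SATURATED`: the file stores it as bit 4 but the standard map says bit 2, so the mask rewrites it.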
22 changes: 11 additions & 11 deletions src/stdatamodels/filetype.py
@@ -15,24 +15,24 @@ def check(init):
file_type: a string with the file type ("asdf", "asn", or "fits")

"""
supported = ('asdf', 'fits', 'json')
supported = ("asdf", "fits", "json")

if isinstance(init, str):
path, ext = os.path.splitext(init)
ext = ext.strip('.')
path, ext = os.path.splitext(init) # noqa: PTH122
ext = ext.strip(".")

if not ext:
raise ValueError(f'Input file path does not have an extension: {init}')
raise ValueError(f"Input file path does not have an extension: {init}")

if ext not in supported: # Could be the file is zipped; try splitting again
path, ext = os.path.splitext(path)
ext = ext.strip('.')
path, ext = os.path.splitext(path) # noqa: PTH122
ext = ext.strip(".")

if ext not in supported:
raise ValueError(f'Unrecognized file type for: {init}')
raise ValueError(f"Unrecognized file type for: {init}")

if ext == 'json': # Assume json input is an association
return 'asn'
if ext == "json": # Assume json input is an association
return "asn"

return ext

@@ -43,10 +43,10 @@ def check(init):
if not magic or len(magic) < 5:
raise ValueError(f"Cannot get file type of {str(init)}")

if magic == b'#ASDF':
if magic == b"#ASDF":
return "asdf"

if magic == b'SIMPL':
if magic == b"SIMPL":
return "fits"

return "asn"
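The byte-string comparisons at the bottom of `check` are the whole content-sniffing fallback: when the extension is unhelpful, the first five bytes decide. A standalone sketch (the function name is ours, not the module's):

```python
def sniff(magic: bytes) -> str:
    # ASDF files begin with "#ASDF" and FITS files with "SIMPLE", so five
    # bytes separate the supported formats; anything else is assumed to be
    # a JSON association.
    if len(magic) < 5:
        raise ValueError("need at least 5 bytes to identify the file")
    if magic[:5] == b"#ASDF":
        return "asdf"
    if magic[:5] == b"SIMPL":
        return "fits"
    return "asn"


print(sniff(b"SIMPLE  ="))  # fits
```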