Simplify AMOR/reflectometry workflow #107

Merged · 38 commits · Jan 16, 2025

Commits
0703e10
experiment simplifying the amor workflow
jokasimr Nov 28, 2024
3776e33
use transform_coords
jokasimr Nov 28, 2024
320ecd2
rename back to theta
jokasimr Nov 29, 2024
47dd79f
fix: re-introduce footprint correction
jokasimr Nov 29, 2024
74d6b66
docs: clarify meaning of argument
jokasimr Nov 29, 2024
0e17ad5
rename
jokasimr Nov 29, 2024
5f9c989
update notebook
jokasimr Nov 29, 2024
840dbab
fix: move resolution aggregation to normalization step
jokasimr Dec 2, 2024
a414155
fix: get resolution from ioq
jokasimr Dec 2, 2024
8fafbe3
fix: set list of corrections manually
jokasimr Dec 2, 2024
a613608
fix tests
jokasimr Dec 2, 2024
5015491
fix: default beam divergence limit
jokasimr Dec 2, 2024
9afa893
docs: update after refactor
jokasimr Dec 2, 2024
48d6313
fix: add kappad
jokasimr Dec 3, 2024
57217ab
docs: add docstrings and explanations
jokasimr Dec 6, 2024
acfcc05
Update src/ess/amor/load.py
jokasimr Dec 12, 2024
514df9b
fix: move natural incident constant to load function
jokasimr Dec 13, 2024
375b979
fix: split coords, masking and correction into separate functions
jokasimr Dec 16, 2024
459b459
Update src/ess/amor/conversions.py
jokasimr Dec 13, 2024
d8319fa
fix
jokasimr Dec 16, 2024
28ac9f7
add comment to explain why first value is used
jokasimr Dec 16, 2024
1d81812
fix: move correction to common
jokasimr Dec 16, 2024
afd5ce9
refactor: extract common parts to ess/reflectometry
jokasimr Dec 16, 2024
c0bef22
fix: move L1,L2 definitions to add_coords provider
jokasimr Dec 19, 2024
c0ce521
fix: no practical difference but easier to understand
jokasimr Dec 19, 2024
6eed68b
Update src/ess/amor/normalization.py
jokasimr Dec 19, 2024
59b5a0a
fix: name
jokasimr Dec 19, 2024
efe8434
fix: remove typos and commented lines
jokasimr Dec 19, 2024
c65c39e
Update src/ess/amor/load.py
jokasimr Dec 20, 2024
9b28032
fix: remove unit conversion
jokasimr Jan 16, 2025
c43c854
fix
jokasimr Jan 16, 2025
33d832b
Merge branch 'simplify-amor' of github.com:scipp/essreflectometry int…
jokasimr Jan 16, 2025
a9b25fd
fix: add defensive unit conversions
jokasimr Jan 16, 2025
a206792
add coordinate transformation graph provider
jokasimr Jan 16, 2025
f399b6a
type annotations
jokasimr Jan 16, 2025
3abbe62
fix: copy = false
jokasimr Jan 16, 2025
b76dadd
refactor: rename coordinate transformation graph
jokasimr Jan 16, 2025
b9eae74
refactor: rename coord transformation domain type
jokasimr Jan 16, 2025
70 changes: 24 additions & 46 deletions docs/user-guide/amor/amor-reduction.ipynb
@@ -44,23 +44,7 @@
"## Create and configure the workflow\n",
"\n",
"We begin by creating the Amor workflow object which is a skeleton for reducing Amor data,\n",
"with pre-configured steps."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"workflow = amor.AmorWorkflow()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then need to set the missing parameters which are specific to each experiment:"
"with pre-configured steps, and then set the missing parameters which are specific to each experiment:"
]
},
{
@@ -69,20 +53,22 @@
"metadata": {},
"outputs": [],
"source": [
"workflow = amor.AmorWorkflow()\n",
"workflow[SampleSize[SampleRun]] = sc.scalar(10.0, unit='mm')\n",
"workflow[SampleSize[ReferenceRun]] = sc.scalar(10.0, unit='mm')\n",
"\n",
"workflow[ChopperPhase[ReferenceRun]] = sc.scalar(-7.5, unit='deg')\n",
"workflow[ChopperPhase[SampleRun]] = sc.scalar(-7.5, unit='deg')\n",
"\n",
"workflow[QBins] = sc.geomspace(dim='Q', start=0.005, stop=0.3, num=391, unit='1/angstrom')\n",
"workflow[WavelengthBins] = sc.geomspace('wavelength', 2.8, 12, 301, unit='angstrom')\n",
"workflow[WavelengthBins] = sc.geomspace('wavelength', 2.8, 12.5, 2001, unit='angstrom')\n",
"\n",
"# The YIndexLimits and ZIndexLimits define ranges on the detector where\n",
"# data is considered to be valid signal.\n",
"# They represent the lower and upper boundaries of a range of pixel indices.\n",
"workflow[YIndexLimits] = sc.scalar(11, unit=None), sc.scalar(41, unit=None)\n",
"workflow[ZIndexLimits] = sc.scalar(80, unit=None), sc.scalar(370, unit=None)"
"workflow[YIndexLimits] = sc.scalar(11), sc.scalar(41)\n",
"workflow[ZIndexLimits] = sc.scalar(80), sc.scalar(370)\n",
"workflow[BeamDivergenceLimits] = sc.scalar(-0.75, unit='deg'), sc.scalar(0.75, unit='deg')"
]
},
{
@@ -116,9 +102,9 @@
"# The sample rotation value in the file is slightly off, so we set it manually\n",
"workflow[SampleRotation[ReferenceRun]] = sc.scalar(0.65, unit='deg')\n",
"\n",
"reference_result = workflow.compute(IdealReferenceIntensity)\n",
"reference_result = workflow.compute(ReducedReference)\n",
"# Set the result back onto the pipeline to cache it\n",
"workflow[IdealReferenceIntensity] = reference_result"
"workflow[ReducedReference] = reference_result"
]
},
{
@@ -198,6 +184,7 @@
" },\n",
"}\n",
"\n",
"\n",
"reflectivity = {}\n",
"for run_number, params in runs.items():\n",
" workflow[Filename[SampleRun]] = params[Filename[SampleRun]]\n",
@@ -282,11 +269,20 @@
"for run_number, params in runs.items():\n",
" workflow[Filename[SampleRun]] = params[Filename[SampleRun]]\n",
" workflow[SampleRotation[SampleRun]] = params[SampleRotation[SampleRun]]\n",
" diagnostics[run_number] = workflow.compute((ReflectivityData, ThetaBins[SampleRun]))\n",
" diagnostics[run_number] = workflow.compute((ReflectivityOverZW, ThetaBins[SampleRun]))\n",
"\n",
"# Scale the results using the scale factors computed earlier\n",
"for run_number, scale_factor in zip(reflectivity.keys(), scale_factors, strict=True):\n",
" diagnostics[run_number][ReflectivityData] *= scale_factor"
" diagnostics[run_number][ReflectivityOverZW] *= scale_factor"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"diagnostics['608'][ReflectivityOverZW].hist().flatten(('blade', 'wire'), to='z').plot(norm='log')"
]
},
{
@@ -306,7 +302,7 @@
"from ess.amor.figures import wavelength_theta_figure\n",
"\n",
"wavelength_theta_figure(\n",
" [result[ReflectivityData] for result in diagnostics.values()],\n",
" [result[ReflectivityOverZW] for result in diagnostics.values()],\n",
" theta_bins=[result[ThetaBins[SampleRun]] for result in diagnostics.values()],\n",
" q_edges_to_display=(sc.scalar(0.018, unit='1/angstrom'), sc.scalar(0.113, unit='1/angstrom'))\n",
")"
@@ -336,7 +332,7 @@
"from ess.amor.figures import q_theta_figure\n",
"\n",
"q_theta_figure(\n",
" [res[ReflectivityData] for res in diagnostics.values()],\n",
" [res[ReflectivityOverZW] for res in diagnostics.values()],\n",
" theta_bins=[res[ThetaBins[SampleRun]] for res in diagnostics.values()],\n",
" q_bins=workflow.compute(QBins)\n",
")"
@@ -364,11 +360,11 @@
"workflow[Filename[SampleRun]] = runs['608'][Filename[SampleRun]]\n",
"workflow[SampleRotation[SampleRun]] = runs['608'][SampleRotation[SampleRun]]\n",
"wavelength_z_figure(\n",
" workflow.compute(FootprintCorrectedData[SampleRun]),\n",
" workflow.compute(Sample),\n",
" wavelength_bins=workflow.compute(WavelengthBins),\n",
" grid=False\n",
") + wavelength_z_figure(\n",
" reference_result,\n",
" workflow.compute(Reference),\n",
" grid=False\n",
")"
]
@@ -455,24 +451,6 @@
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To support tracking provenance, we also list the corrections that were done by the workflow and store them in the dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iofq_dataset.info.reduction.corrections = orso.find_corrections(\n",
" workflow.get(orso.OrsoIofQDataset)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
11 changes: 4 additions & 7 deletions docs/user-guide/amor/compare-to-eos.ipynb
@@ -85,25 +85,22 @@
"workflow[SampleSize[ReferenceRun]] = sc.scalar(10.0, unit='mm')\n",
"workflow[ChopperPhase[SampleRun]] = sc.scalar(-7.5, unit='deg')\n",
"\n",
"# In Jochens workflow:\n",
"# * divergence mask: [-0.7, 0.7]\n",
"# * note, divergence relative to beam center\n",
"workflow[WavelengthBins] = sc.geomspace('wavelength', 2.8, 12.5, 301, unit='angstrom')\n",
"workflow[QBins] = sc.geomspace(\n",
" dim='Q', start=0.00505, stop=2.93164766e-01, num=391, unit='1/angstrom'\n",
")\n",
"workflow[YIndexLimits] = sc.scalar(11, unit=None), sc.scalar(41, unit=None)\n",
"workflow[ZIndexLimits] = sc.scalar(80, unit=None), sc.scalar(370, unit=None)\n",
"workflow[YIndexLimits] = sc.scalar(11), sc.scalar(41)\n",
"workflow[ZIndexLimits] = sc.scalar(80), sc.scalar(370)\n",
"\n",
"# Chopper phase value in the file is wrong, so we set it manually\n",
"workflow[ChopperPhase[ReferenceRun]] = sc.scalar(-7.5, unit='deg')\n",
"# The sample rotation value in the file is slightly off, so we set it manually\n",
"workflow[SampleRotation[ReferenceRun]] = sc.scalar(0.65, unit='deg')\n",
"workflow[Filename[ReferenceRun]] = amor.data.amor_reference_run()\n",
"\n",
"reference_result = workflow.compute(IdealReferenceIntensity)\n",
"reference_result = workflow.compute(ReducedReference)\n",
"# Set the result back onto the pipeline to cache it\n",
"workflow[IdealReferenceIntensity] = reference_result"
"workflow[ReducedReference] = reference_result"
]
},
{
24 changes: 17 additions & 7 deletions src/ess/amor/__init__.py
@@ -5,23 +5,30 @@
import sciline
import scipp as sc


from ..reflectometry import providers as reflectometry_providers
from ..reflectometry import supermirror
from ..reflectometry.types import (
BeamSize,
DetectorSpatialResolution,
Gravity,
NeXusDetectorName,
RunType,
SamplePosition,
BeamDivergenceLimits,
)
from . import conversions, load, orso, resolution, utils, figures
from . import (
conversions,
load,
orso,
resolution,
utils,
figures,
normalization,
workflow,
)
from .instrument_view import instrument_view
from .types import (
AngularResolution,
Chopper1Position,
Chopper2Position,
ChopperFrequency,
ChopperPhase,
QThetaFigure,
@@ -42,10 +49,11 @@
*reflectometry_providers,
*load.providers,
*conversions.providers,
*resolution.providers,
*normalization.providers,
*utils.providers,
*figures.providers,
*orso.providers,
*workflow.providers,
)
"""
List of providers for setting up a Sciline pipeline.
@@ -61,12 +69,14 @@ def default_parameters() -> dict:
supermirror.Alpha: sc.scalar(0.25 / 0.088, unit=sc.units.angstrom),
BeamSize[RunType]: 2.0 * sc.units.mm,
DetectorSpatialResolution[RunType]: 0.0025 * sc.units.m,
Gravity: sc.vector(value=[0, -1, 0]) * sc.constants.g,
SamplePosition[RunType]: sc.vector([0, 0, 0], unit="m"),
NeXusDetectorName[RunType]: "detector",
ChopperPhase[RunType]: sc.scalar(7.0, unit="deg"),
ChopperFrequency[RunType]: sc.scalar(8.333, unit="Hz"),
BeamDivergenceLimits: (sc.scalar(-0.7, unit='deg'), sc.scalar(0.7, unit='deg')),
BeamDivergenceLimits: (
sc.scalar(-0.75, unit='deg'),
sc.scalar(0.75, unit='deg'),
),
Review comment — Member:

This is not very user friendly. Ideally, the code that uses these should convert to the unit it needs.

Reply — jokasimr (Contributor, Dec 12, 2024):

On the other hand, it's messy and somewhat error-prone to have unit conversion code all over the package (forgetting a unit conversion somewhere might lead to UnitErrors, and having extraneous unit conversions in every provider obscures the intention of the code, which is also bad user experience) 🤔

It's easiest to have unit conversion code on the boundaries of the application and only work with a set of predetermined units in the "internal" code.

One way to achieve that would be to create a separate provider just for converting the unit of the input to the common unit. But doing that for every input would clutter the workflow graph and make it considerably less readable.

Reply — Member:

> It's easiest to have unit conversion code on the boundaries of the application and only work with a set of predetermined units in the "internal" code.

I disagree. It is often very difficult to understand which units are required by any given piece of code. And we don't encode them in our interfaces.

> forgetting a unit conversion somewhere might lead to UnitErrors

Yes, the same as using an incorrect unit in a parameter. Except that the latter UnitErrors are user errors, and it is typically very difficult to figure out which unit was wrong, because those errors can happen after many processing steps.

> having extraneous unit conversions in every provider obscures the intention of the code, which is also bad user experience

True. That is a tradeoff. But we accepted that tradeoff in ScippNeutron in favour of reducing the number of user-facing UnitErrors. In fact, we took great care to make coord transform kernels work with any units and do conversions in an optimal way.

I don't expect the same amount of care in all code. But I do think that users should not have to know which units are required, especially given that the requirements are not documented anywhere.

}
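As an aside for readers of this thread: below is a minimal sketch of the "separate provider for unit conversion" option jokasimr mentions, converting a user-supplied parameter once at the boundary of the pipeline. The names `RawBeamDivergenceLimits`, `NormalizedBeamDivergenceLimits`, and `normalized_limits` are hypothetical, invented for illustration; this is not code from this PR (the PR itself went with defensive unit conversions, see commit a9b25fd).

```python
from typing import NewType

import sciline
import scipp as sc

# Hypothetical domain types: the raw user input and its normalized form.
RawBeamDivergenceLimits = NewType('RawBeamDivergenceLimits', tuple)
NormalizedBeamDivergenceLimits = NewType('NormalizedBeamDivergenceLimits', tuple)


def normalized_limits(
    raw: RawBeamDivergenceLimits,
) -> NormalizedBeamDivergenceLimits:
    # Convert once at the boundary: a non-angle input fails here with a
    # clear UnitError instead of deep inside a downstream provider.
    low, high = raw
    return NormalizedBeamDivergenceLimits(
        (low.to(unit='rad'), high.to(unit='rad'))
    )


pipeline = sciline.Pipeline(
    [normalized_limits],
    params={
        RawBeamDivergenceLimits: (
            sc.scalar(-0.75, unit='deg'),
            sc.scalar(0.75, unit='deg'),
        )
    },
)
low, high = pipeline.compute(NormalizedBeamDivergenceLimits)
```

The tradeoff discussed above is visible even in this toy: one extra provider per parameter keeps UnitErrors at the boundary, but doing this for every input would grow the workflow graph accordingly.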

