Combine runs #99
Conversation
Force-pushed e83f950 to c6a012f (compare)
src/ess/reflectometry/workflow.py
Outdated
```python
def _set_first_if_lost(coord, da, das):
    if coord not in da.coords:
        da.coords[coord] = das[0].coords[coord]


def _concatenate_event_lists(*das):
    da = sc.reduce(das).bins.concat()
    _set_first_if_lost('position', da, das)
    _set_first_if_lost('sample_rotation', da, das)
    _set_first_if_lost('detector_rotation', da, das)
    return da
```
Suggested change (replacing the two helpers above with a single function):

```python
def _concatenate_event_lists(*das):
    return sc.reduce(das).bins.concat().assign_coords(das[0].coords)
```
But shouldn't we check that they are at least close?
> But shouldn't we check that they are at least close?
No I don't think we can do that in any reasonable way. What is "close" depends on the context.
In this code we should just expect that what we're given is reasonable, and then we can add a validation layer at a higher level that takes the context into account.
I'll use this suggestion 👍
with the minor modification that I'll only propagate the coords that are explicitly listed,
to avoid propagating something that actually should be dropped.
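The modification described above could look roughly like the following. This is a hedged, pure-Python sketch (plain dicts stand in for scipp coords, and the helper name `propagate_coords` is hypothetical): only an explicit whitelist of coords is copied from the first input, so coords that should be dropped on concatenation are not accidentally kept.

```python
# Coords that are explicitly allowed to survive concatenation.
# The three names come from the helper shown earlier in this thread.
COORDS_TO_PROPAGATE = ('position', 'sample_rotation', 'detector_rotation')


def propagate_coords(first_coords):
    """Return only the whitelisted coords from the first data array's coords.

    Hypothetical illustration: dicts stand in for scipp coord mappings.
    """
    return {k: v for k, v in first_coords.items() if k in COORDS_TO_PROPAGATE}


coords = {'position': 1.0, 'sample_rotation': 0.5, 'transient_flag': True}
flat_coords = propagate_coords(coords)
# 'transient_flag' is dropped; 'position' and 'sample_rotation' are kept.
```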
src/ess/reflectometry/workflow.py
Outdated
```python
wf[OrsoSampleFilenames] = mapped[OrsoSampleFilenames].reduce(
    index=axis_name, func=lambda *x: list(chain(*x))
)
```
Can you add a comment explaining the nested list handling here? It is not really obvious.
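For context, the flattening that the reduce `func` performs can be sketched in isolation. Each mapped branch yields a list of filenames, so reducing over branches hands the lambda several lists, which `chain` flattens into one (the filenames below are made up for illustration):

```python
from itertools import chain

# Each mapped branch produces its own list of filenames.
branches = [['run1.nxs'], ['run2.nxs', 'run3.nxs']]

# The reduce func receives the branch results as *args and flattens them,
# mirroring `lambda *x: list(chain(*x))` in the snippet above.
flatten = lambda *x: list(chain(*x))
flat = flatten(*branches)
```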
```python
result2 = pipeline.compute(ReflectivityOverQ).hist()
assert_allclose(
    2 * sc.values(result.data), sc.values(result2.data), rtol=sc.scalar(1e-6)
)
```
Very weird that we need this factor here. Can you explain / comment / link? This needs to change once reflectivity is normalized correctly.
It's not weird; as you alluded to, the reflectivity is not normalized by the length of the run (or, more precisely, by the total intensity over the period of the run).
There's an issue for this here: #78
Therefore there is currently generally no way for the workflow to distinguish the following two situations:
- There is a large number of events because the sample reflectivity is high.
- There is a large number of events because the run was longer than the reference run.
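A minimal sketch of that ambiguity (illustrative numbers only): without normalization by total intensity, combining two identical runs simply doubles the counts in every bin, which is exactly the factor 2 appearing in the `assert_allclose` comparison earlier in this thread.

```python
# Per-Q event counts of a single run (made-up values).
run_counts = [10, 20, 30]

# Concatenating two identical runs doubles every bin; the workflow cannot
# tell this apart from a sample with twice the reflectivity.
combined = [a + b for a, b in zip(run_counts, run_counts)]
doubled = [2 * c for c in run_counts]
```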
> this needs to change once reflectivity is normalized correctly
Yes that is correct.
Isn't the name "reflectivity" misleading / wrong then?
I don't think so. It's just that the data is not congruent with the model for how the data was produced.
Force-pushed 2c3f999 to 147b433 (compare)
Force-pushed 147b433 to a70a7cc (compare)
Fixes #58
Draft because:
- needs tests
- usage example will be added to docs <- will be done later