
Commit

Merge pull request #463 from scipp/no-attrs-in-tutorials
Use no attrs in tutorials
jl-wynen authored Oct 24, 2023
2 parents 13fe74f + 27972df commit f460751
Showing 6 changed files with 109 additions and 90 deletions.
3 changes: 2 additions & 1 deletion docs/reference/free-functions.rst
@@ -9,7 +9,9 @@ Mantid Compatibility
.. autosummary::
:toctree: ../generated/functions

array_from_mantid
from_mantid
load_with_mantid
to_mantid
fit

@@ -45,5 +47,4 @@ Loading Nexus files
.. autosummary::
:toctree: ../generated/functions

load
load_nexus
127 changes: 59 additions & 68 deletions docs/tutorials/1_exploring-data.ipynb
@@ -9,15 +9,14 @@
"# Exploring data\n",
"\n",
"When working with a dataset, the first step is usually to understand what data and metadata it contains.\n",
"In this chapter we explore how scipp supports this.\n",
"In this chapter we explore how Scipp supports this.\n",
"\n",
"This tutorial contains exercises, but solutions are included directly.\n",
"We encourage you to download this notebook and run through it step by step before looking at the solutions.\n",
"We recommend using a recent version of *JupyterLab*:\n",
"The solutions are included as hidden cells and shown only on demand.\n",
"\n",
"First, in addition to importing `scipp`, we import `scippneutron` since this is required for loading Nexus files.\n",
"We also enable the interactive backend (`widget`) for plots."
"First, in addition to importing `scipp`, we import `scippneutron` for neutron-science routines."
]
},
{
@@ -51,7 +50,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In practice, we would use [scippneutron.load](../generated/functions/scippneutron.load.rst#scippneutron.load) or [scippneutron.load_nexus](../generated/functions/scippneutron.load_nexus.rst#scippneutron.load_nexus) to load the data from a NeXus file, but the tutorial data comes bundled with scippneutron to make it easily available.\n",
"In practice, we would use [scippneutron.load_with_mantid](../generated/functions/scippneutron.load_with_mantid.rst) or [scippneutron.load_nexus](../generated/functions/scippneutron.load_nexus.rst) to load the data from a NeXus file, but the tutorial data comes bundled with ScippNeutron to make it easily available.\n",
"See [Tutorial and Test Data](../developer/getting-started.rst#tutorial-and-test-data) for a way to customize where the data is stored.\n",
"\n",
"Note that the exercises in the following are fictional and do not represent the actual SANS data reduction workflow."
@@ -78,6 +77,16 @@
"data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"detector = data['detector']\n",
"detector"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -93,7 +102,7 @@
"source": [
"## Step 2: Plot the data\n",
"\n",
"Scipp objects (variables, data arrays, or datasets) can be plotted using the `plot()` method.\n",
"Scipp objects (variables, data arrays, datasets, or data groups) can be plotted using the `plot()` method.\n",
"Alternatively `sc.plot(obj)` can be used, e.g., when `obj` is a Python `dict` of scipp data arrays.\n",
"Since this is neutron-scattering data, we can also use the \"instrument view\", provided by `scn.instrument_view(obj)` (assuming `scippneutron` was imported as `scn`).\n",
"\n",
@@ -115,7 +124,7 @@
},
"outputs": [],
"source": [
"data.sum('spectrum').plot()"
"detector.sum('spectrum').plot()"
]
},
{
@@ -128,36 +137,18 @@
},
"outputs": [],
"source": [
"scn.instrument_view(data.sum('tof'), norm='log')"
"scn.instrument_view(detector.sum('tof'), norm='log')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 3: Exploring meta data\n",
"## Step 3: Exploring metadata\n",
"\n",
"Above we saw that many attributes are scalar variables with `dtype=DataArray`.\n",
"The single value in a scalar variable is accessed using the `value` property.\n",
"Compare:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data.attrs['proton_charge_by_period']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data.attrs['proton_charge_by_period'].value"
"Above we saw that the input data group contains a number of metadata items in addition to the main 'detector'.\n",
"Some items are simple strings while others are data arrays or variables.\n",
"These have various meanings and we now want to explore them."
]
},
{
@@ -168,13 +159,13 @@
"source": [
"### Exercise\n",
"\n",
"1. Find some attributes of `data` with `dtype=DataArray` and plot their `value`.\n",
" Also try `sc.table(attr.value)` to show a table representation (where `attr` is an attribute of your choice).\n",
"1. Find some data array items of `data` and plot them.\n",
" Also try `sc.table(item)` to show a table representation (where `item` is an item of your choice).\n",
"2. Find and plot a monitor.\n",
"3. Try to normalize `data` to monitor 1.\n",
"3. Try to normalize `detector` to monitor 1.\n",
" Why does this fail?\n",
"4. Plot all the monitors on the same plot.\n",
" Note that `sc.plot()` can be used with a Python `dict` for this purpose: `sc.plot({'a':something, 'b':else})`.\n",
" Note that `sc.plot()` can be used with a data group.\n",
"5. Convert all the monitors from `'tof'` to `'wavelength'` using, e.g.,\n",
" ```python\n",
" wavelength_graph_monitor = {\n",
@@ -199,26 +190,25 @@
},
"outputs": [],
"source": [
"sc.table(data.attrs['DCMagField2'].value)"
"sc.table(data['DCMagField2'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": [
"solution"
"solution",
"raises-exception"
]
},
"outputs": [],
"source": [
"try:\n",
" data / data.attrs['monitor1'].value\n",
"except sc.DatasetError:\n",
" print(\n",
" \"\"\"Data and monitor are in unit TOF, but pixels and monitors\n",
"are at different position, so data is not comparable.\"\"\"\n",
" )"
"detector / data['monitors']['monitor1']"
]
},
{
Expand Down Expand Up @@ -254,7 +244,7 @@
},
"outputs": [],
"source": [
"mon1 = data.attrs['monitor1'].value\n",
"mon1 = data['monitors']['monitor1']\n",
"mon1.transform_coords('wavelength', graph=wavelength_graph_monitor)"
]
},
@@ -268,8 +258,7 @@
},
"outputs": [],
"source": [
"monitors = {f'monitor{i}': data.attrs[f'monitor{i}'].value for i in [1, 2, 3, 4, 5]}\n",
"sc.plot(monitors, norm='log')"
"sc.plot(data['monitors'], norm='log')"
]
},
{
@@ -283,7 +272,7 @@
"outputs": [],
"source": [
"converted_monitors = {\n",
" f'monitor{i}': data.attrs[f'monitor{i}'].value.transform_coords(\n",
" f'monitor{i}': data['monitors'][f'monitor{i}'].transform_coords(\n",
" 'wavelength', graph=wavelength_graph_monitor\n",
" )\n",
" for i in [1, 2, 3, 4, 5]\n",
@@ -301,12 +290,12 @@
"\n",
"### Exercise\n",
"\n",
"Consider the following (hypothetical) problems with the metadata stored in `data`:\n",
"Consider the following (hypothetical) problems with the metadata stored in `detector`:\n",
"\n",
"1. The `sample_position` coord (`data.coords['sample_position']`) is wrong, shift the sample by `delta = sc.vector(value=np.array([0.01,0.01,0.04]), unit='m')`.\n",
"1. The `sample_position` coord (`detector.coords['sample_position']`) is wrong, shift the sample by `delta = sc.vector(value=np.array([0.01,0.01,0.04]), unit='m')`.\n",
"2. Because of a glitch in the timing system the time-of-flight has an offset of $2.3~\\mu s$.\n",
" Fix the corresponding coordinate.\n",
"3. Use the HTML view of `data` to verify that you applied the corrections/calibrations there, rather than in a copy.\n",
"3. Use the HTML view of `detector` to verify that you applied the corrections/calibrations there, rather than in a copy.\n",
"\n",
"### Solution"
]
@@ -321,9 +310,9 @@
},
"outputs": [],
"source": [
"data.coords['sample_position'] += sc.vector(value=[0.01, 0.01, 0.04], unit='m')\n",
"data.coords['tof'] += 2.3 * sc.Unit('us') # note how we forgot to fix the monitor's TOF\n",
"data"
"detector.coords['sample_position'] += sc.vector(value=[0.01, 0.01, 0.04], unit='m')\n",
"detector.coords['tof'] += 2.3 * sc.Unit('us') # note how we forgot to fix the monitor's TOF\n",
"detector"
]
},
{
Expand All @@ -341,16 +330,18 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": [
"solution"
"solution",
"raises-exception"
]
},
"outputs": [],
"source": [
"try:\n",
" data.coords['tof'] += 2.3\n",
"except sc.UnitError as e:\n",
" print(e)"
"detector.coords['tof'] += 2.3"
]
},
{
Expand Down Expand Up @@ -388,7 +379,7 @@
},
"outputs": [],
"source": [
"data.coords['sample_position'].fields.z += 0.001 * sc.Unit('m')"
"detector.coords['sample_position'].fields.z += 0.001 * sc.Unit('m')"
]
},
{
Expand All @@ -410,7 +401,7 @@
},
"outputs": [],
"source": [
"counts = data.sum('tof')"
"counts = detector.sum('tof')"
]
},
{
Expand All @@ -423,7 +414,7 @@
"\n",
"1. Create a plot of `counts` and also try the instrument view.\n",
"2. How many counts are there in total, in all spectra combined?\n",
"3. Plot a single spectrum of `data` as a 1-D plot using the slicing syntax `array[dim_name, integer_index]` to access the spectrum.\n",
"3. Plot a single spectrum of `detector` as a 1-D plot using the slicing syntax `array[dim_name, integer_index]` to access the spectrum.\n",
"\n",
"### Solution"
]
@@ -465,8 +456,8 @@
},
"outputs": [],
"source": [
"# counts.sum('spectrum') # would be another solution\n",
"data.sum().value"
"# detector.sum('spectrum') # would be another solution\n",
"detector.sum().value"
]
},
{
Expand All @@ -479,7 +470,7 @@
},
"outputs": [],
"source": [
"data['spectrum', 10000].plot()"
"detector['spectrum', 10000].plot()"
]
},
{
Expand All @@ -501,7 +492,7 @@
},
"outputs": [],
"source": [
"z = data.coords['position'].fields.z\n",
"z = detector.coords['position'].fields.z\n",
"near = z.min()\n",
"far = z.max()\n",
"layer = ((z - near) * 400).astype('int32')\n",
@@ -518,8 +509,8 @@
"### Exercise\n",
"\n",
"- Change the magic parameter `400` in the cell above until pixels fall cleanly into layers, either 4 layers of tubes or 12 layers of straws.\n",
"- Store `layer` as a new coord in `data`.\n",
"- Use `data.groupby(group='layer').sum('spectrum')` to group spectra into layers.\n",
"- Store `layer` as a new coord in `detector`.\n",
"- Use `detector.groupby(group='layer').sum('spectrum')` to group spectra into layers.\n",
"- Inspect and understand the HTML view of the result.\n",
"- Plot the result.\n",
" There are two options:\n",
@@ -547,8 +538,8 @@
"# - set magic factor to, e.g., 40 to group by tube layer\n",
"layer = ((z - near) * 150).astype(sc.DType.int32)\n",
"layer.unit = ''\n",
"data.coords['layer'] = layer\n",
"grouped = data.groupby(group='layer').sum('spectrum')\n",
"detector.coords['layer'] = layer\n",
"grouped = detector.groupby(group='layer').sum('spectrum')\n",
"pp.slicer(grouped)"
]
},
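The last exercise in the notebook above derives an integer layer index from each pixel's z position and then sums the spectra per layer. A minimal pure-Python sketch of that grouping step, using plain lists in place of scipp variables (the z values, counts, and the factor 150 are hypothetical stand-ins):

```python
# z position of each pixel in metres (hypothetical values)
z = [0.000, 0.001, 0.002, 0.010, 0.011, 0.012]
near = min(z)

# Same recipe as the notebook: scale the offset from the nearest pixel
# and truncate to an integer layer index, like .astype(sc.DType.int32).
factor = 150
layer = [int((zi - near) * factor) for zi in z]

# Group per-pixel counts by layer and sum each group, mirroring
# detector.groupby(group='layer').sum('spectrum').
counts = [5, 7, 3, 2, 4, 6]
grouped = {}
for idx, c in zip(layer, counts):
    grouped[idx] = grouped.get(idx, 0) + c
```

Tuning `factor` until every pixel falls cleanly into one of the expected layers is exactly the "magic parameter" exercise in the notebook.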
12 changes: 11 additions & 1 deletion docs/tutorials/2_working-with-masks.ipynb
@@ -45,7 +45,17 @@
"metadata": {},
"outputs": [],
"source": [
"data = scn.data.tutorial_dense_data()\n",
"dg = scn.data.tutorial_dense_data()\n",
"dg"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = dg['detector'] # the actual measured counts\n",
"counts = data.sum('tof') # used later\n",
"data"
]
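The change above reflects the PR's core migration: the tutorial loader now returns a data group in which the measured counts sit under a `'detector'` key alongside the metadata, instead of a data array carrying metadata in `attrs`. A plain-dict sketch of the new access pattern, with hypothetical values standing in for the real `sc.DataGroup` contents:

```python
# Plain-dict stand-in for the data group returned by
# scn.data.tutorial_dense_data(); all values here are hypothetical.
dg = {
    'detector': {'tof': [1.0, 2.0, 3.0], 'counts': [4, 5, 6]},
    'monitors': {'monitor1': [10, 20, 30]},
    'run_title': 'hypothetical run',
}

# The measured counts are one entry among the metadata, not the whole object.
data = dg['detector']
total = sum(data['counts'])        # rough analogue of data.sum('tof')
mon1 = dg['monitors']['monitor1']  # replaces data.attrs['monitor1'].value
```

Compared with the old `attrs` scheme, nothing needs unwrapping via `.value`: metadata items are ordinary entries reached by key, which is why the notebooks now index into `data['monitors']` directly.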