1st Data Package: 01 HC Layers #8
yes
PostGIS import can be automated with scripts, transformations can be done in PostgreSQL/PostGIS PL/pgSQL procedures, and GeoServer publication can be scripted by means of the GeoServer REST API.
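As a rough illustration of the publication step, here is a stdlib-only sketch against the GeoServer REST API. The endpoint, workspace, datastore and table names are placeholders, and the request is only constructed, not sent:

```python
import json
import urllib.request

GEOSERVER = "https://example.org/geoserver"  # hypothetical GeoServer instance
WORKSPACE = "clarity"                        # hypothetical workspace name

def build_publish_request(store: str, table: str) -> urllib.request.Request:
    """Build (but do not send) the GeoServer REST request that publishes
    the PostGIS table `table` from the datastore `store` as a new layer."""
    url = f"{GEOSERVER}/rest/workspaces/{WORKSPACE}/datastores/{store}/featuretypes"
    payload = json.dumps({"featureType": {"name": table, "title": table}}).encode()
    req = urllib.request.Request(url, data=payload, method="POST")
    req.add_header("Content-Type", "application/json")
    return req

req = build_publish_request("hazard_db", "heat_wave_duration")
print(req.full_url)
```

In a real pipeline the request would be sent with `urllib.request.urlopen` (or `curl`) using the instance's admin credentials; one such call per transformed table would publish the whole package.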
Answer in agreement with @luis-meteogrid
The CSIS needs not only the direct Hazard indexes but also the Local Effects in the Hazard index layers. As agreed in the Plenary Meeting, Meteogrid is responsible for this calculation.
We intend to do the calculations in the original grid and, as a last step, to transform everything to the agreed-upon grid. ZAMG's original grid is in EPSG:4326 rather than the EPSG:3035 expected by AIT. We can re-project and fit the results to the expected resolution.
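As a small illustration of the "fit to the expected resolution" step, here is a stdlib-only sketch that snaps a bounding box outward to a regular target grid (assumed 500 m cells anchored at the CRS origin; both assumptions are placeholders). The actual re-projection from EPSG:4326 to EPSG:3035 would of course be done with tools such as gdalwarp or PostGIS `ST_Transform`, not by hand:

```python
import math

def snap_bbox_to_grid(bbox, origin=(0.0, 0.0), res=500.0):
    """Expand a bounding box (minx, miny, maxx, maxy), given in the target
    CRS (e.g. EPSG:3035 metres), outward so that its edges align with a
    regular grid of cell size `res` anchored at `origin`."""
    ox, oy = origin
    minx, miny, maxx, maxy = bbox

    def snap_down(v, o):
        return o + math.floor((v - o) / res) * res

    def snap_up(v, o):
        return o + math.ceil((v - o) / res) * res

    return (snap_down(minx, ox), snap_down(miny, oy),
            snap_up(maxx, ox), snap_up(maxy, oy))

# Example extent in EPSG:3035-style metres (illustrative coordinates only)
print(snap_bbox_to_grid((4321017.0, 2874005.0, 4322980.0, 2875900.0)))
```

Snapping outward guarantees the re-gridded raster fully covers the original extent, so no edge cells are lost during the interpolation.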
We were planning on using our instance of GeoServer located in the server in which we are going to perform the calculations for the Local Effects. Please, be aware that the Local Effects might need to be re-calculated on-the-fly if the user modifies the distribution of Urban Elements.
This might be done at the visualization level or be attached to the layer if someone is able to provide the thresholds beforehand. We have a tool, running as a Django app, that already does this.
From our POV, the main purpose is calculation of Local Effects as well as visualization. So, yes.
A transformation is necessary; the original data have neither the correct projection nor the correct grid and resolution.
The main goal is to automate most of the process; we'll have to see whether it is 100% achievable. For the Local Effects calculations we are using our own framework to achieve this automation, for the calculations as well as the re-projection and interpolation.
We can take charge of the first steps, up until the Local Effects generation. AIT seems to be responsible from that step onward. If there's any problem with visualization we can agree on some sort of common solution.
We understood that report generation was going to fall on Drupal's side. As for the other two, it will depend on what they entail. We already have some tools for map visualization; if they can be reused for this, we could try to provide some sort of communication between them, which would also depend on the ease of communication with Drupal and/or EMIKAT or any other tools. Just to be clear, we are not going to develop on Drupal, but we are willing to help with other tools to access data (tables), layers or maps.
Ok, Thanks! In this issue, I care mainly about the original hazard Data, Local Effects will be addressed / discussed separately. So AIT & ATOS are not directly involved in the "plain" HC task. The demo Hazard Layers on ATOS server will not be used in CSIS or for any HC-LE or RA/IA calculation. Agreed @bernhardsk @humerh & @negroscuro ? In summary, the responsibilities are
Open Issues:
Hello, I have uploaded 2 zip files here: Sorry if it is not the new way to upload files, but I didn’t have time to look into the CKAN/ATOS ftp server thingy. These files are in the native netcdf EURO-CORDEX rotated grid projection. @luis-meteogrid you said it was no problem for you to convert it into a useable projection. All of this data is to do with the climate index “Tx75p” which is a measure for heat wave duration. In detail, the index is: The file HazardMap_Tx75p.zip contains 9 layers for Tx75p for:
The file also contains a text file describing the rule I used to define the low/medium/high hazard levels. The raw index values of Tx75p are in the file Index_Tx75p.zip. These values are the actual index values and would/could be used when implementing the method created by PLINIVS. I hope this helps!
We've also prepared an updated example for the tabular visualisation of hazards: In particular, we need to make sure that there is a clear distinction between the reference period (baseline) and the future scenarios, mainly because the respective hazard scales are defined differently. A pull-down menu could be provided to let the user switch between different future periods, as suggested by Denis.
Hm, the 'original' indexes range from 5.433333 to 41.63334 (days). IMHO we don't need the additional hazard maps with 'normalised' indexes (1 to 3, low, medium, high), but just the thresholds (e.g. x = 8 and y = 14) to perform the normalisation just in time when we render the maps and the tables.

@clarity-h2020/data-processing-team With the help of a styled layer descriptor it would be possible to map the original indexes (5.4 - 41.6) to 1, 2, 3 (green, yellow, red) once the absolute values for x and y are known, right? On the other hand, I think we should show the original indexes in the map and the normalised indexes just in the table. Reason: the user can recognise slight differences between adjacent grid cells, especially when taking local effects into account (500x500m grid). If we just show values ranging from 1-3 in the map, probably no difference between HC and HC-LE would be recognisable.

@clarity-h2020/mathematical-models-implementation-team: for impact calculation you would need the original indexes anyway? So at the moment I don't see a need to store them twice in the database / geoserver.

Map comparison: original vs normalised.

@ghilbrae Could you please transform the netCDF files (sftp://5.79.69.49/clarityftp/europe/hazard_indices/heat/heat-wave-duration/Index_Tx75p/) to the normalised grid and make them available in your geoserver instance?
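The just-in-time normalisation at render time could look like this minimal sketch; x = 8 and y = 14 are the example thresholds from above, not agreed project values:

```python
def classify(value, x=8.0, y=14.0):
    """Map an original index value (e.g. heat-wave duration in days) onto
    the normalised 1/2/3 (low/medium/high) scale. The thresholds x and y
    are the illustrative examples from the discussion, not fixed values."""
    if value < x:
        return 1  # low  (green)
    if value < y:
        return 2  # medium (yellow)
    return 3      # high (red)

# Classify the range endpoints and one mid value of the Tx75p example
print([classify(v) for v in (5.43, 10.0, 41.63)])
```

The same three-way rule is what an SLD colour-map would express declaratively, so the original values can stay in the map while tables show the normalised class.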
Yes, that is true. It would also save some work on my side. :-) @clarity-h2020/data-processing-team One point to add is that the thresholds I used weren't fixed values, but rather values relative to the baseline (1971-2000). So, if at one place (e.g. Vienna) the baseline maximum heatwave length was 10 days and at another place (Naples) the baseline maximum heatwave length was 15 days, then a "high" hazard level for those places in the future would be >20 days and >30 days, respectively (i.e. an increase >100% as I had defined it). This gets around the problem of latitudinal variations in the climate. However, in keeping with @p-a-s-c-a-l 's idea, in this case one would just need to define the appropriate percentage values for low/medium/high and then have the software calculate e.g. The percentage values one uses to define low/medium/high would be different according to the index (e.g. for the flood hazard, the 5-day precipitation index would have 10% and 20% as better choices for the different hazard levels). At the moment my choice of these percentage values is not really scientifically based :-( but rather based on producing a map which is not completely everywhere "low" or everywhere "high".
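A sketch of this baseline-relative variant; the 50%/100% cut-offs below are illustrative placeholders that would be replaced per index, as noted above:

```python
def classify_relative(baseline, future, medium_pct=50.0, high_pct=100.0):
    """Classify a future index value relative to its local baseline value.
    The percentage cut-offs (here a >50% increase is "medium", >100% is
    "high") are placeholders; each index would get its own values."""
    increase_pct = (future - baseline) / baseline * 100.0
    if increase_pct > high_pct:
        return "high"
    if increase_pct > medium_pct:
        return "medium"
    return "low"

# Vienna: baseline 10 days; Naples: baseline 15 days; both reach 21 days
print(classify_relative(10, 21), classify_relative(15, 21))
```

Because the thresholds are relative, the same future value of 21 days classifies differently in the two cities, which is exactly what sidesteps the latitudinal-variation problem.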
Thanks for the clarification; yes, defining the percentage relative to the baseline makes more sense.
@p-a-s-c-a-l Good question! To date I have just focused on the hazard scale for the future periods. My personal opinion is that we omit this classification for the baseline period and just show an absolute value. My reason for this is the following: what happens if a user sees that their hazard level for the current climate is "high", but life is going on without problem, buildings are not crumbling down, infrastructure is functioning, and everything is good? Then, if the hazard level for the future period (e.g. 2041-2070) is also "high", how does the user react? I would guess they would be of the opinion that they don't need to do anything, as the current hazard level is "high" and nothing bad is happening.
Ok, agreed! @luis-meteogrid |
@p-a-s-c-a-l
One thing to note is that this will only work with indices defined with non-constant threshold values! For example, using our Tx75p example which uses a percentile value as a threshold is OK, as this value varies according to location. Here is a plot showing the absolute values of Tx75p. Just by eyeballing it, the colours deep-orange/red would be “high”, white-yellow “medium” and blues “low” hazard level. However, using the index "number of summer days" will not work, since a summer day is defined as a day with the maximum temperature > 25°C (i.e. a fixed value) - in this case southern (northern) Europe will always register a high (low) hazard level.
I want to refer to my comment at: If the hazard data are provided in the way shown in the example "clarity:consecutive_max_heat" on the GeoServer "http://5.79.69.33:8080/geoserver", we are very lucky. This example also demonstrates how to extract only the cell data of the project area. I hope that Atos and Meteogrid can provide all Hazard data layers in this way.
@humerh Please don't use the ATOS instance (5.79.69.33) anymore for HC & EE (we might still use it for "background layers" that are input to HC-LE); please use the METEOGRID instance instead: https://clarity.meteogrid.com/geoserver/ @negroscuro Please remove the HC layers to avoid further confusion. @luis-meteogrid Please provide the HC and EE layers on your geoserver instance. Thanks!
Example of access via WCS (GetCapabilities):
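A minimal sketch of assembling such a WCS request URL with the stdlib; the `/ows` endpoint path and the coverage id below are assumptions for illustration, not confirmed layer names:

```python
from urllib.parse import urlencode

# Host from the METEOGRID instance above; the /ows path is an assumption.
GEOSERVER_WCS = "https://clarity.meteogrid.com/geoserver/ows"

def wcs_url(request="GetCapabilities", **extra):
    """Assemble a WCS 2.0 request URL; only the query-string assembly is
    shown here, since the available coverage ids depend on what is
    actually published on the instance."""
    params = {"service": "WCS", "version": "2.0.1", "request": request}
    params.update(extra)
    return f"{GEOSERVER_WCS}?{urlencode(params)}"

print(wcs_url())
print(wcs_url("DescribeCoverage", coverageId="clarity__consecutive_max_heat"))
```

The same helper extends to GetCoverage calls with `subset` parameters, which is how only the cell data of the project area would be extracted.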
The solution then is to create new legends with the following criteria:
1. More intervals for the set of low values.
2. New colours associated to each interval.
3. Lower legend maxima, e.g. 75% of the data range.
This may apply to several "type legends" which in turn apply to a set of layers. However, each layer cannot have a specific legend, since it would be necessary to create 300 legends. If this is a possible solution, I will proceed with it as follows
Not sure what the story with 300 legends is, but it's IMO a good idea to keep the scale for a group of similar layers the same. Otherwise they can't be visually compared.
@DenoBeno is right, and it is something I should have mentioned earlier. When comparing the scenarios one needs to use the same scale/legend. As a suggestion, one could fix the colour scale to that of the RCP45 2041-2070 scenario. This is approximately the middle of the possibilities, whereby the results from RCP26 and earlier time periods will appear lower, while those from RCP85 and later time periods will appear higher.
We are in the process of correcting the mask of all HI layers. This can alter some of the values that are presented, so I suggest fixing the legends later.
I have opened a new issue for data color coding/map scale. Please see #53.
HC Heat Layers are available for the 1st data package.
We have to add Pluvial Flood HC resources. See clarity-h2020/local-effects#9 (comment)
A draft template resource for pluvial flood that uses variables is available here. Map Visualisation isn't 100% working yet, as we had to introduce a lot of accidental complexity into the codebase due to incoherent variable meanings.
Including this one, there are now three issues regarding the Pluvial Floods input layers (#54 and clarity-h2020/local-effects#9), so I'm closing this one.
Create Layers for Hazard Indices calculated by ZAMG and make them available in CSIS.
This encompasses the following steps (to be confirmed / updated by @clarity-h2020/data-processing-team):
Questions:
Additionally, Map Visualisation, Table Visualisation and Report Generation have to be implemented.