---
title: "Climate Futures Toolbox"
output:
  html_document:
    df_print: paged
---
# Welcome to the Climate Futures Toolbox
This vignette provides a walk-through of a common use case of the cft package:
understanding climate futures for a region of interest.
We'll use Yellowstone National Park, located in Wyoming, USA, as a case study.
### What you'll learn
This vignette will show you how to:
- Access climate data for a spatial region of interest
- Produce a `data.frame` containing climate data
- Visualize historical and future data
- Generate and analyze new climate variables
### What you'll need
To get the most out of this vignette, we assume you have:
- At least 500 MB of disk space
- Some familiarity with ggplot2
- Some familiarity with dplyr (e.g., `filter()`, `group_by()`, and `summarize()`)
## About the data
Global Climate Models (GCMs) provide estimates of historical and future
climate conditions.
The complexity of the climate system has led to a large number of GCMs, and it is
common practice to examine outputs from many different models, treating each as
one plausible future.
Most GCMs are spatially coarse (often 1 degree), but downscaling provides
finer-scale estimates. The cft package uses one downscaled climate dataset called MACA
(Multivariate Adaptive Constructed Analogs) Version 2
([details here](http://www.climatologylab.org/maca.html)).
### Acquiring and subsetting data within National Park Service boundaries
This package was originally written with the National Park Service in mind, so
it has the option to use the name of any park (or monument, preserve, etc.) within
the NPS. Use the `cftdata()` function to specify a range of years, a set of models,
a set of parameters, and a set of representative concentration pathways to return.
Leaving these arguments empty will result in a download of all available data
for that location.
# Loading the cft package from GitHub
```{r install cft, warning=FALSE, message=FALSE, eval=FALSE}
library(devtools)
install_github("earthlab/cft")
```
## Attach cft and check the list of available functions
```{r}
library(cft)
ls(pos="package:cft")
```
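The examples below also lean on a handful of tidyverse and spatial packages. cft attaches most of these as dependencies, but if any of the functions used later (e.g., `opq()`, `hyper_filter()`, `st_rasterize()`) cannot be found, attaching their packages explicitly is a safe fallback. A minimal sketch, assuming these packages are installed:
```{r supporting packages, warning=FALSE, message=FALSE, eval=FALSE}
library(dplyr)     # filter(), group_by(), summarize()
library(tibble)    # add_column()
library(ggplot2)   # plotting
library(tidync)    # hyper_filter(), hyper_tibble()
library(sf)        # st_bbox(), st_centroid(), st_intersection()
library(stars)     # st_rasterize(), st_extract()
library(osmdata)   # getbb(), opq(), add_osm_feature(), osmdata_sf()
library(lubridate) # year()
```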
## Look at the documentation for those functions
```{r}
?available_data
```
# Use read-only mode to find available data without initiating a full download.
```{r available data, cache=TRUE}
start_time <- Sys.time()
inputs <- cft::available_data()
end_time <- Sys.time()
end_time - start_time
levels(as.factor(inputs$variable_names$Variable))
levels(as.factor(inputs$variable_names$`Variable abbreviation`))
levels(as.factor(inputs$variable_names$Scenario))
levels(as.factor(inputs$variable_names$`Scenario abbreviation`))
levels(as.factor(inputs$variable_names$Model))
levels(as.factor(inputs$variable_names$`Model abbreviation`))
```
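`available_data()` also returns an `available_times` table that maps each integer time index to a calendar date. A quick look at that table (a sketch; the column layout may differ across cft versions) is useful when choosing the `time_min` and `time_max` bounds used in the time-filtering step below:
```{r inspect available times, eval=FALSE}
# Pairs of integer time index and calendar date; consult these when
# picking time bounds for a request.
head(inputs$available_times)
tail(inputs$available_times)
```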
# Filter the results from available_data() to specify which data to actually download.
Downloads from the API are limited to 500 MB per request. A request for a large area of interest combined with a long time series will fail with a cryptic runtime error informing you that your request was too large.
The error may look like this:
"CURL Error: Transferred a partial file
Error in Rsx_nc4_get_vara_int: NetCDF: DAP failure
Var: pr_CCSM4_r6i1p1_rcp85 Ndims: 3 Start: 0,444,511 Count: 34333,2,3
Error in ncvar_get_inner(ncid2use, varid2use, nc$var[[li]]$missval, addOffset, :
C function Rsx_nc4_get_var_int returned error"
The solution is to subset your requests so that each one fits within the limits of the API. You can achieve this by balancing the size of your requested spatial extent against the length of your requested time series. For a small national park it is possible to pull the entire time series from the API, but larger parks will require you to request a shorter time window or to stitch multiple time windows together.
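Before submitting a request, it can help to estimate its size. The helper below is a hypothetical back-of-the-envelope sketch, not part of cft: it assumes the 1/24-degree (~4 km) MACA grid and 8 bytes per value, so treat the result only as an order-of-magnitude guide for staying under the 500 MB limit.
```{r estimate request size, eval=FALSE}
# Hypothetical helper: rough request size in MB.
# Assumes the 1/24-degree MACA grid and 8-byte numeric values.
estimate_request_mb <- function(lat_range, lon_range, n_days, n_vars) {
  n_cells <- (diff(lat_range) * 24) * (diff(lon_range) * 24)
  n_cells * n_days * n_vars * 8 / 1e6
}
# Example: a Yellowstone-sized box, the full daily series, two variables.
estimate_request_mb(lat_range = c(44.1, 45.1),
                    lon_range = c(-111.2, -109.8),
                    n_days    = 34333,
                    n_vars    = 2)
```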
## Filter variable names
```{r filter variables, cache=TRUE}
input_variables <- inputs$variable_names %>%
  filter(Variable %in% c("Maximum Relative Humidity",
                         "Maximum Temperature",
                         "Minimum Relative Humidity",
                         "Minimum Temperature",
                         "Northward Wind",
                         "Precipitation")) %>%
  filter(Scenario %in% c("RCP 4.5", "RCP 8.5")) %>%
  filter(Model %in% c(
    "Beijing Climate Center - Climate System Model 1.1",
    "Beijing Normal University - Earth System Model",
    "Canadian Earth System Model 2",
    "Centre National de Recherches Météorologiques - Climate Model 5",
    "Commonwealth Scientific and Industrial Research Organisation - Mk3.6.0",
    "Community Climate System Model 4",
    "Geophysical Fluid Dynamics Laboratory - Earth System Model 2 Generalized Ocean Layer Dynamics",
    "Geophysical Fluid Dynamics Laboratory - Earth System Model 2 Modular Ocean",
    "Hadley Global Environment Model 2 - Climate Chemistry 365 (day) ",
    "Hadley Global Environment Model 2 - Earth System 365 (day)",
    "Institut Pierre Simon Laplace (IPSL) - Climate Model 5A - Low Resolution",
    "Institut Pierre Simon Laplace (IPSL) - Climate Model 5A - Medium Resolution",
    "Institut Pierre Simon Laplace (IPSL) - Climate Model 5B - Low Resolution",
    "Institute of Numerical Mathematics Climate Model 4",
    "Meteorological Research Institute - Coupled Global Climate Model 3",
    "Model for Interdisciplinary Research On Climate - Earth System Model",
    "Model for Interdisciplinary Research On Climate - Earth System Model - Chemistry",
    "Model for Interdisciplinary Research On Climate 5",
    "Norwegian Earth System Model 1 - Medium Resolution")) %>%
  pull("Available variable")
input_variables
```
# Establish area of interest (AOI) by bounding box
```{r bounding box, cache=TRUE}
bb <- getbb("hot springs")  # a small area of interest, so the full series fits in one request
my_boundary <- opq(bb) %>%
  add_osm_feature(key = "boundary", value = "national_park") %>%
  osmdata_sf()
my_boundary
```
```{r pulled bounding box, cache=TRUE}
boundaries <- my_boundary$osm_multipolygons
pulled_bb <- st_bbox(boundaries)
pulled_bb
```
```{r plot of area of interest, cache=TRUE, warning=FALSE, fig.height=8}
basemap <- ggplot(data = boundaries) +
  geom_sf(fill = "cornflowerblue") +
  geom_sf_text(aes(label = name))
basemap
```
# Download full time series from a single point
```{r}
center_point <- st_centroid(boundaries) %>% st_bbox()
start_time <- Sys.time()
Pulled_data <- inputs$src %>%
  hyper_filter(lat = lat <= c(center_point[4]+0.05) & lat >= c(center_point[2]-0.05)) %>%
  hyper_filter(lon = lon <= c(center_point[3]+0.05) & lon >= c(center_point[1]-0.05)) %>%
  hyper_tibble(select_var = input_variables) %>%
  st_as_sf(coords = c("lon", "lat"), crs = 4326, agr = "constant")
end_time <- Sys.time()
end_time - start_time
head(Pulled_data)
tail(Pulled_data)
```
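The `time` column arrives as an integer index rather than a calendar date. Assuming the MACA convention of days since 1900-01-01 (verify against `inputs$available_times` before relying on it), the index converts to dates like this (a sketch):
```{r convert time index, eval=FALSE}
# Assumes time is measured in days since 1900-01-01; check
# inputs$available_times to confirm before using the resulting dates.
Pulled_data <- Pulled_data %>%
  mutate(date = as.Date(time, origin = "1900-01-01"))
range(Pulled_data$date)
```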
```{r}
ggplot() +
  geom_sf(data = boundaries, fill = "cornflowerblue") +
  geom_sf(data = st_centroid(boundaries), color = "red", size = 0.5) +
  coord_sf(crs = 4326)
```
## Filter time
```{r filter time, cache=TRUE}
# Time-index bounds for the requested window; see inputs$available_times
# for the index-to-date mapping.
time_min <- 38716
time_max <- 73048
input_times <- inputs$available_times %>%
  add_column(index = 0) %>%
  add_column(first_half = 0) %>%
  add_column(second_half = 0)
# Flag the time steps that fall inside the requested window.
input_times[which(inputs$available_times[,1] >= time_min & inputs$available_times[,1] <= time_max ),3] <- 1
# Split the table at its midpoint so the download can be made as two
# smaller requests.
med <- median(row_number(input_times[,3]))
input_times[which(as.numeric(row.names(input_times)) <= med),4] <- 1
input_times[which(as.numeric(row.names(input_times)) > med),5] <- 1
head(input_times)
tail(input_times)
```
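A quick sanity check (a sketch) confirms that the two halves partition the table with no overlap before they are used to split the download:
```{r check time split, eval=FALSE}
# Every row should land in exactly one half, and no row in both.
sum(input_times$first_half) + sum(input_times$second_half) == nrow(input_times)
any(input_times$first_half == 1 & input_times$second_half == 1)
```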
## Pull and stitch
```{r stitch_pull, cache=TRUE}
start_time <- Sys.time()
Pulled_data_sub1 <- inputs$src %>%
  hyper_filter(lat = lat <= c(center_point[4]+0.05) & lat >= c(center_point[2]-0.05)) %>%
  hyper_filter(lon = lon <= c(center_point[3]+0.05) & lon >= c(center_point[1]-0.05)) %>%
  hyper_filter(time = input_times[,4] == 1) %>%
  hyper_tibble(select_var = input_variables)
Pulled_data_sub2 <- inputs$src %>%
  hyper_filter(lat = lat <= c(center_point[4]+0.05) & lat >= c(center_point[2]-0.05)) %>%
  hyper_filter(lon = lon <= c(center_point[3]+0.05) & lon >= c(center_point[1]-0.05)) %>%
  hyper_filter(time = input_times[,5] == 1) %>%
  hyper_tibble(select_var = input_variables)
end_time <- Sys.time()
end_time - start_time
tail(Pulled_data_sub1)
tail(Pulled_data_sub2)
```
# Download data by AOI, filtered times, and filtered variable list
```{r pulled data, cache=TRUE}
start_time <- Sys.time()
Pulled_data <- inputs$src %>%
  hyper_filter(lat = lat <= c(pulled_bb[4]+0.05) & lat >= c(pulled_bb[2]-0.05)) %>%
  hyper_filter(lon = lon <= c(pulled_bb[3]+0.05) & lon >= c(pulled_bb[1]-0.05)) %>%
  hyper_tibble(select_var = input_variables) %>%
  st_as_sf(coords = c("lon", "lat"), crs = 4326, agr = "constant")
end_time <- Sys.time()
end_time - start_time
head(Pulled_data)
tail(Pulled_data)
```
### Time extent plot
Check this plot to make sure you downloaded your entire time series.
```{r}
plot(Pulled_data$time, Pulled_data$`pr_HadGEM2-ES365_r1i1p1_rcp85`)
```
### Spatial extent plot
Check here to make sure you downloaded the proper spatial extent.
```{r check pulled data, cache=TRUE, fig.height=8}
check_filter <- Pulled_data %>% filter(time == min(Pulled_data$time))
ggplot() +
  geom_sf(data = boundaries, fill = "cornflowerblue") +
  geom_sf(data = check_filter, color = "red", size = 0.5) +
  coord_sf(crs = 4326)
```
## If you encounter an error suggesting that you are pulling too much data, you will need to stitch a few requests together to keep each call below the 500 MB limit.
```{r bounding box manual, cache=TRUE}
bb <- getbb("yellowstone")
bb_manual <- bb
bb_manual[1,1] <- -111.15594815937659
bb_manual[1,2] <- -109.8305463801207
bb_manual[2,1] <- 44.12354048271325
bb_manual[2,2] <- 45.11911641599412
my_boundary <- opq(bb_manual) %>%
add_osm_feature(key = "boundary", value = "national_park") %>%
osmdata_sf()
my_boundary
```
```{r pulled bounding box manual, cache=TRUE}
boundaries <- my_boundary$osm_multipolygons
pulled_bb <- st_bbox(boundaries)
pulled_bb
```
```{r stitch_pull aoi, cache=TRUE}
start_time <- Sys.time()
Pulled_data_sub1 <- inputs$src %>%
  hyper_filter(lat = lat <= c(pulled_bb[4]+0.05) & lat >= c(pulled_bb[2]-0.05)) %>%
  hyper_filter(lon = lon <= c(pulled_bb[3]+0.05) & lon >= c(pulled_bb[1]-0.05)) %>%
  hyper_filter(time = input_times[,4] == 1) %>%
  hyper_tibble(select_var = input_variables)
Pulled_data_sub2 <- inputs$src %>%
  hyper_filter(lat = lat <= c(pulled_bb[4]+0.05) & lat >= c(pulled_bb[2]-0.05)) %>%
  hyper_filter(lon = lon <= c(pulled_bb[3]+0.05) & lon >= c(pulled_bb[1]-0.05)) %>%
  hyper_filter(time = input_times[,5] == 1) %>%
  hyper_tibble(select_var = input_variables)
# Conversion to sf (st_as_sf) is deferred until after the two halves are
# combined below.
end_time <- Sys.time()
end_time - start_time
tail(Pulled_data_sub1)
tail(Pulled_data_sub2)
```
```{r rbind multi results}
start_time <- Sys.time()
Pulled_data_stitch <- bind_rows(Pulled_data_sub1, Pulled_data_sub2)
end_time <- Sys.time()
end_time - start_time
```
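A brief check (a sketch) that the stitched object contains both halves and spans the expected time range:
```{r check stitched rows, eval=FALSE}
# Row counts should add up, and the time range should cover both halves.
nrow(Pulled_data_stitch) == nrow(Pulled_data_sub1) + nrow(Pulled_data_sub2)
range(Pulled_data_stitch$time)
```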
### Time extent plot
Check this plot to make sure you downloaded your entire time series.
```{r, cache=TRUE}
plot(Pulled_data_stitch$time, Pulled_data_stitch$`pr_HadGEM2-ES365_r1i1p1_rcp85`)
```
### Spatial extent plot
Check here to make sure you downloaded the proper spatial extent.
```{r check stitched data, cache=TRUE, fig.height=8}
check_filter <- Pulled_data_stitch %>% filter(time == min(Pulled_data_stitch$time))
ggplot() +
  geom_sf(data = boundaries, fill = "cornflowerblue") +
  geom_sf(data = check_filter, color = "red", size = 0.5) +
  coord_sf(crs = 4326)
```
# Melt downloaded points into a raster before aggregation
```{r rasterize with stars, cache=TRUE, fig.height=7}
rast <- st_rasterize(Pulled_data)
plot(rast)
```
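`st_rasterize()` rasterizes every attribute column at once, which can be slow and hard to read. For inspection it is often easier to rasterize a single variable at a single time step; a sketch (the column name assumes the HadGEM2-ES365 precipitation variable pulled earlier):
```{r rasterize single layer, eval=FALSE}
# One variable at one time step: a smaller, more legible raster.
one_layer <- Pulled_data %>%
  filter(time == min(time)) %>%
  select(`pr_HadGEM2-ES365_r1i1p1_rcp85`)
plot(st_rasterize(one_layer))
```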
# GridMET data
```{r GridMet view}
param_meta$gridmet  # parameter metadata from the climateR package
```
```{r GridMet, eval=FALSE,fig.height=8}
subed_times <- input_times %>% filter(index == 1)
GM <- getGridMET(st_as_sfc(st_bbox(boundaries)), "tmax",
                 startDate = "1997-04-06", endDate = "1999-12-30")
SM_stars <- GM$gridmet_tmax %>% brick() %>% st_as_stars()
ggplot() +
  geom_sf(data = SM_stars, fill = "cornflowerblue") +
  geom_sf(data = check_filter, color = "red", size = 0.5) +
  coord_sf(crs = 4326)
```
```{r, eval=FALSE}
st_get_dimension_values(rast)
#combo <- st_extract(rast, SM_stars) %>% st_as_sf()
combo <- c(SM_stars, rast, along_crs = TRUE, along = c(1, 2))
class(combo)
class(SM_stars)
plot(combo$X2000.01.01)
```
```{r, eval=FALSE}
ggplot(data = combo) +
  geom_sf(aes(fill = X2000.01.01)) +
  scale_fill_continuous(low = "thistle2", high = "darkred",
                        guide = "colorbar", na.value = "white") +
  coord_sf(crs = 4326)
```
# Aggregate downloaded data to different spatial objects
## Aggregate to polygon (faster method)
```{r aggregate to polygon, cache=TRUE}
extracted <- st_extract(rast, boundaries$geometry) %>% st_as_sf()
names(extracted)[1] <- "nn"  # shorten the long variable name for plotting
ggplot(data = extracted) +
  geom_sf(aes(fill = nn)) +
  scale_fill_continuous(low = "thistle2", high = "darkred",
                        guide = "colorbar", na.value = "white") +
  coord_sf(crs = 4326)
```
```{r, eval=FALSE}
# NB: src_slc is a tidync source object that is not created in this vignette.
cube <- src_slc %>% hyper_tbl_cube(select_var = c("pr_HadGEM2-ES365_r1i1p1_rcp85"))
cube
```
## Clip raster with polygon (slower method)
```{r cut out raster with polygon, cache=TRUE}
small_pulled <- Pulled_data %>%
  filter(time == 72049)
intersection <- st_intersection(x = small_pulled, y = boundaries$geometry)
names(intersection)[1:2] <- c("Precipitation", "b")
```
```{r plot of raster mask, cache=TRUE}
library(ggthemes)
ggplot() +
  geom_sf(data = intersection, aes(color = Precipitation)) +
  scale_color_continuous(low = "thistle2", high = "darkred",
                         guide = "colorbar", na.value = "white") +
  geom_sf(data = boundaries, fill = NA, color = "white") +
  theme_tufte() +
  labs(title = "YELLOWSTONE NATIONAL PARK", subtitle = "Projected precipitation")
```
## Aggregate to River segment
```{r pull river data, fig.height=10, cache=TRUE}
river <- opq(bb_manual) %>%
  add_osm_feature(key = "waterway", value = "river") %>%
  osmdata_sf()
river
```
```{r aggregate data to river lines, cache=TRUE}
river_sub <- st_buffer(river$osm_lines, 2200)
extracted_river <- st_extract(rast, river_sub$geometry) %>% st_as_sf()
head(extracted_river)
```
```{r plot river aggregation, fig.height=8, cache=TRUE}
ggplot(data = extracted_river) +
  geom_sf(aes(fill = pr_HadGEM2.ES365_r1i1p1_rcp85), size = 0) +
  coord_sf(crs = 4326, xlim = c(pulled_bb[1], pulled_bb[3]),
           ylim = c(pulled_bb[2], pulled_bb[4]),
           expand = FALSE) +
  scale_fill_continuous(low = "thistle2", high = "darkred",
                        guide = "colorbar", na.value = "white") +
  labs(title = "Rivers of Yellowstone",
       subtitle = "Projected precipitation",
       caption = "Data Source: Climate Futures...") +
  theme_tufte()
```
## Aggregate to road segment
```{r pull road data, fig.height=8, cache=TRUE}
roads <- opq(bb_manual) %>%
  add_osm_feature(key = "highway", value = "primary") %>%
  add_osm_feature(key = "highway", value = "secondary") %>%
  osmdata_sf()
roads_sub <- st_buffer(roads$osm_lines, 2200)
extracted_roads <- st_extract(rast, roads_sub$geometry) %>% st_as_sf()
extracted_roads
```
```{r plot road aggregation, fig.height=8, cache=TRUE}
ggplot(data = extracted_roads) +
  geom_sf(aes(fill = pr_HadGEM2.ES365_r1i1p1_rcp85), size = 0) +
  coord_sf(crs = 4326) +
  scale_fill_continuous(low = "thistle2", high = "darkred",
                        guide = "colorbar", na.value = "white") +
  labs(title = "Roads of Yellowstone",
       subtitle = "Projected precipitation",
       caption = "Data Source: Climate Futures...") +
  theme_tufte()
```
### Computing new daily climate variables
Now that we have all of the climate parameters for our study region, we can
compute functions of those variables.
For example, it is common to compute the midpoint of the maximum and minimum
daily temperature, which we can do using the `mutate` function:
```{r temp-midpoint, eval=FALSE}
df <- df %>%
  mutate(tasmid = (tasmax + tasmin) / 2)
```
Now we have a new column called `tasmid` that is the midpoint of the maximum
and minimum daily temperature!
Wind speed provides another example of a derived parameter that can be computed
for each day.
By default, we have two wind-related parameters: the eastward wind component
(called `uas`) and the northward wind component (called `vas`), both in units of
meters per second (you can get this information from `cft::argument_reference`).
Wind speed can be computed from `vas` and `uas` using the Pythagorean theorem:
$\text{Wind speed} = \sqrt{v_{as}^2 + u_{as}^2}.$
In code:
```{r wind-speed, eval=FALSE}
df <- df %>%
  mutate(wind_speed = sqrt(vas^2 + uas^2))
```
### Computing new climate variable summaries
Sometimes you may want new climate variables that summarize daily data.
For example, you may want to compute:
- Last Day of Frost (i.e., last day in spring when min. air temp. < 0 C)
- First Day of Frost (i.e., first day in fall when min. air temp. < 0 C)
- Number of days above or below some threshold (e.g., days with max. air temperature over 40 C, or days with > 1mm of precipitation)
- Growing season length (# days with air temperature > 0 C)
All of these quantities summarize daily data and require an aggregation interval, which in many cases will be one year.
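For instance, the frost-date summaries in the list above follow the same group-and-summarize pattern used throughout this section. A sketch, assuming a `tasmin` column in Kelvin, a `date` column, and day 182 as a rough midsummer split between spring and fall:
```{r frost-dates, eval=FALSE}
library(lubridate) # yday()
frost_dates <- df %>%
  mutate(year = year(date), doy = yday(date)) %>%
  group_by(rcp, model, year, ensemble) %>%
  summarize(
    last_spring_frost = max(doy[tasmin < 273.15 & doy < 182]),
    first_fall_frost  = min(doy[tasmin < 273.15 & doy >= 182])
  ) %>%
  ungroup()
# Note: frost-free groups produce -Inf/Inf with a warning; filter as needed.
```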
As a fuller example, we will compute the growing season length for our study region across all models and emissions scenarios.
To do this, we first need to define a new column for year, which we will use as a grouping variable:
```{r get-year, eval=FALSE}
library(lubridate) # for year()
df <- df %>%
  mutate(year = year(date))
```
Now we want to compute the growing season length for each combination of year, model, and emissions scenario.
```{r grow-season, eval=FALSE}
growing_seasons <- df %>%
  group_by(rcp, model, year, ensemble) %>%
  summarize(season_length = sum(tasmid > 273.15)) %>%
  ungroup()
```
Notice that we used our derived temperature midpoint column `tasmid` and computed the total (`sum()`) number of days in each group where the temperature midpoint was greater than 0 C (273.15 K, the units of the temperature data).
```{r glimpse-grow-season, eval=FALSE}
growing_seasons
```
Let's visualize the growing season over time for each model and emission scenario:
```{r plot-grow-season, fig.height = 5, fig.width = 6, eval=FALSE}
growing_seasons %>%
  ggplot(aes(year, season_length, color = rcp, group = model)) +
  geom_line(alpha = .3) +
  facet_wrap(~rcp, ncol = 1) +
  xlab("Year") +
  ylab("Growing season length (days)") +
  scale_color_manual(values = c("dodgerblue", "red")) +
  theme(legend.position = "none")
```
## Comparing climate in two time periods
Use the tibble returned by `cft_df()` as input to `compare_periods()` to
compare climate between a reference period and a target period. You may specify
the function used to aggregate your chosen variable, as well as which months of
the year to include in the calculation.
```{r comps, eval=FALSE}
comps <- compare_periods(df,
                         var1 = "pr",
                         var2 = "tasmax",
                         agg_fun = "mean",
                         target_period = c(2025, 2030),
                         reference_period = c(2020, 2024),
                         months1 = 5:8,
                         months2 = 5:8,
                         scenarios = c("rcp45", "rcp85"))
```
This provides a data frame that can be used to compare the values in the target
and reference period.
```{r glimpse-comps, eval=FALSE}
glimpse(comps)
```
One useful plot shows the difference in the two variables between reference and
target periods:
```{r plot-comps, fig.height = 6, fig.width = 9, eval=FALSE}
library(tidyr)   # pivot_wider()
library(ggrepel) # geom_text_repel()
title <- paste0("Change from reference period (", comps$reference_period[1],
                ") to target period (", comps$target_period[1], ")")
comps %>%
  dplyr::select(parameter, rcp, model, reference_period, target_period, difference) %>%
  pivot_wider(names_from = parameter, values_from = difference) %>%
  ungroup() %>%
  mutate(rcp = ifelse(rcp == "rcp45", "RCP 4.5", "RCP 8.5")) %>%
  ggplot(aes(pr, tasmax, color = rcp)) +
  ggtitle(title) +
  geom_point() +
  geom_hline(yintercept = 0, alpha = .2) +
  geom_vline(xintercept = 0, alpha = .2) +
  geom_text_repel(aes(label = model), segment.size = .3, size = 3) +
  xlab("Difference in mean daily precipitation (mm)") +
  ylab("Difference in mean daily max. temperature (C)") +
  scale_color_manual(values = c("dodgerblue", "red"),
                     "Greenhouse gas\ntrajectory")
```
So nearly all model runs indicate warming, but the amount of warming varies by
model and emissions scenario.
The models disagree on the direction of precipitation change: some project
increases and others decreases.
## Why write the cft package?
The amount of data generated by downscaled GCMs can be quite large
(e.g., daily data at a few kilometers of spatial resolution).
The Climate Futures Toolbox was developed to help users access and work with
smaller subsets.
Data are acquired from the [Northwest Knowledge Server of the University of
Idaho](http://thredds.northwestknowledge.net:8080/thredds/reacch_climate_CMIP5_macav2_catalog2.html).