provider/aws: ElastiCache (Redis) snapshot window and retention limits #3707

Merged
Changes from 4 commits
45 changes: 45 additions & 0 deletions builtin/providers/aws/resource_aws_elasticache_cluster.go
@@ -138,6 +138,24 @@ func resourceAwsElasticacheCluster() *schema.Resource {
},
},

"snapshot_window": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
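A hypothetical sketch, not part of this PR: `snapshot_window` is accepted above as a free-form string, but a `ValidateFunc` could check the `HH:MM-HH:MM` 24-hour UTC shape that ElastiCache expects (e.g. `05:00-09:00`). The function below mirrors the Terraform validator signature as standalone Go; the name `validateSnapshotWindow` is illustrative.

```go
package main

import (
	"fmt"
	"regexp"
)

// snapshotWindowRe matches a 24-hour UTC range such as "05:00-09:00".
var snapshotWindowRe = regexp.MustCompile(`^([01][0-9]|2[0-3]):[0-5][0-9]-([01][0-9]|2[0-3]):[0-5][0-9]$`)

// validateSnapshotWindow is a hypothetical validator (not in the PR) using
// the same (v interface{}, k string) signature as schema.Schema.ValidateFunc.
func validateSnapshotWindow(v interface{}, k string) (ws []string, es []error) {
	value := v.(string)
	if !snapshotWindowRe.MatchString(value) {
		es = append(es, fmt.Errorf(
			"%s must be a 24-hour UTC range like \"05:00-09:00\", got %q", k, value))
	}
	return
}

func main() {
	_, es := validateSnapshotWindow("05:00-09:00", "snapshot_window")
	fmt.Println(len(es)) // well-formed window: no errors
	_, es = validateSnapshotWindow("5am-9am", "snapshot_window")
	fmt.Println(len(es)) // malformed window: one error
}
```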

"snapshot_retention_limit": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
value := v.(int)
if value > 35 {
es = append(es, fmt.Errorf(
"snapshot retention limit cannot be more than 35 days"))
}
return
},
},
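The `ValidateFunc` above can be exercised in isolation. The standalone sketch below reproduces its 35-day cap; the negative-value check is an addition for illustration and is not in the PR (AWS treats 0 as "backups disabled").

```go
package main

import "fmt"

// validateSnapshotRetentionLimit is a standalone version of the ValidateFunc
// above. The PR only rejects values above AWS's 35-day cap; the negative
// check here is an illustrative extra.
func validateSnapshotRetentionLimit(v interface{}, k string) (ws []string, es []error) {
	value := v.(int)
	if value < 0 {
		es = append(es, fmt.Errorf("%s cannot be negative", k))
	}
	if value > 35 {
		es = append(es, fmt.Errorf("%s cannot be more than 35 days", k))
	}
	return
}

func main() {
	for _, limit := range []int{-1, 0, 35, 36} {
		_, es := validateSnapshotRetentionLimit(limit, "snapshot_retention_limit")
		fmt.Printf("%d days -> %d error(s)\n", limit, len(es))
	}
}
```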

"tags": tagsSchema(),

// apply_immediately is used to determine when the update modifications
@@ -187,6 +205,16 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{
req.CacheParameterGroupName = aws.String(v.(string))
}

if !strings.Contains(d.Get("node_type").(string), "cache.t2") {
Contributor

I don't believe this is the approach we should take. We should document that snapshotting does not work with cache.t2's, and we should bubble up the error if a user uses t2's anyway.

Applying these params only in this block (especially without any message) would mislead the user: without an error, anything in their config that doesn't fail during apply is written to state, so from Terraform's view the settings are applied. But in reality, they are not.

Contributor Author

So you think I should just log that snapshotting isn't available if the user specifies t2 and then let the values be set? Or should I specifically check for snapshotting in conjunction with t2 (using a check like the one above) and throw an error?

if v, ok := d.GetOk("snapshot_retention_limit"); ok {
req.SnapshotRetentionLimit = aws.Int64(int64(v.(int)))
}

if v, ok := d.GetOk("snapshot_window"); ok {
req.SnapshotWindow = aws.String(v.(string))
}
}
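A hypothetical sketch of the reviewer's suggestion above: instead of silently skipping the snapshot parameters for `cache.t2` node types, fail fast so Terraform state never records settings that ElastiCache did not apply. `checkSnapshotSupport` is an illustrative stand-in, not provider code.

```go
package main

import (
	"fmt"
	"strings"
)

// checkSnapshotSupport returns an error when snapshot settings are combined
// with a cache.t2 node type, which does not support Redis backup/restore.
// Hypothetical helper for illustration only.
func checkSnapshotSupport(nodeType string, snapshotWindowSet, retentionLimitSet bool) error {
	if strings.HasPrefix(nodeType, "cache.t2") && (snapshotWindowSet || retentionLimitSet) {
		return fmt.Errorf(
			"snapshot_window and snapshot_retention_limit are not supported for node type %q", nodeType)
	}
	return nil
}

func main() {
	fmt.Println(checkSnapshotSupport("cache.t2.micro", true, false))  // error: t2 with snapshot settings
	fmt.Println(checkSnapshotSupport("cache.m1.small", true, true))   // nil: snapshots supported
}
```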

if v, ok := d.GetOk("maintenance_window"); ok {
req.PreferredMaintenanceWindow = aws.String(v.(string))
}
@@ -261,6 +289,12 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{})
d.Set("security_group_ids", c.SecurityGroups)
d.Set("parameter_group_name", c.CacheParameterGroup)
d.Set("maintenance_window", c.PreferredMaintenanceWindow)
if c.SnapshotWindow != nil {
Contributor

d.Set can handle nils, so this guard isn't necessary. It would only be necessary if you wanted to log their value.
d.Set("snapshot_window", c.SnapshotWindow)
}
if c.SnapshotRetentionLimit != nil {
d.Set("snapshot_retention_limit", c.SnapshotRetentionLimit)
}
if c.NotificationConfiguration != nil {
if *c.NotificationConfiguration.TopicStatus == "active" {
d.Set("notification_topic_arn", c.NotificationConfiguration.TopicArn)
@@ -343,6 +377,17 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{
req.EngineVersion = aws.String(d.Get("engine_version").(string))
requestUpdate = true
}
if !strings.Contains(d.Get("node_type").(string), "cache.t2") {
if d.HasChange("snapshot_window") {
req.SnapshotWindow = aws.String(d.Get("snapshot_window").(string))
requestUpdate = true
}

if d.HasChange("snapshot_retention_limit") {
req.SnapshotRetentionLimit = aws.Int64(int64(d.Get("snapshot_retention_limit").(int)))
requestUpdate = true
}
}

if d.HasChange("num_cache_nodes") {
req.NumCacheNodes = aws.Int64(int64(d.Get("num_cache_nodes").(int)))
133 changes: 133 additions & 0 deletions builtin/providers/aws/resource_aws_elasticache_cluster_test.go
@@ -33,6 +33,70 @@ func TestAccAWSElasticacheCluster_basic(t *testing.T) {
})
}

func TestAccAWSElasticacheCluster_snapshots(t *testing.T) {
var ec elasticache.CacheCluster

ri := genRandInt()
config := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshots, ri)

resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSElasticacheClusterDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: config,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"),
testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_window", "05:00-09:00"),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_retention_limit", "3"),
),
},
},
})
}

func TestAccAWSElasticacheCluster_snapshotsWithUpdates(t *testing.T) {
var ec elasticache.CacheCluster

ri := genRandInt()
preConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshots, ri)
postConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshotsUpdated, ri)

resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSElasticacheClusterDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: preConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"),
testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_window", "05:00-09:00"),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_retention_limit", "3"),
),
},

resource.TestStep{
Config: postConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"),
testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_window", "07:00-09:00"),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_retention_limit", "7"),
),
},
},
})
}

func TestAccAWSElasticacheCluster_vpc(t *testing.T) {
var csg elasticache.CacheSubnetGroup
var ec elasticache.CacheCluster
@@ -152,6 +216,75 @@ resource "aws_elasticache_cluster" "bar" {
}
`, genRandInt(), genRandInt(), genRandInt())

var testAccAWSElasticacheClusterConfig_snapshots = `
provider "aws" {
region = "us-east-1"
}
resource "aws_security_group" "bar" {
name = "tf-test-security-group"
description = "tf-test-security-group-descr"
ingress {
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_elasticache_security_group" "bar" {
name = "tf-test-security-group"
description = "tf-test-security-group-descr"
security_group_names = ["${aws_security_group.bar.name}"]
}

resource "aws_elasticache_cluster" "bar" {
cluster_id = "tf-test-%03d"
engine = "redis"
node_type = "cache.m1.small"
num_cache_nodes = 1
port = 6379
parameter_group_name = "default.redis2.8"
security_group_names = ["${aws_elasticache_security_group.bar.name}"]
snapshot_window = "05:00-09:00"
snapshot_retention_limit = 3
}
`

var testAccAWSElasticacheClusterConfig_snapshotsUpdated = `
provider "aws" {
region = "us-east-1"
}
resource "aws_security_group" "bar" {
name = "tf-test-security-group"
description = "tf-test-security-group-descr"
ingress {
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_elasticache_security_group" "bar" {
name = "tf-test-security-group"
description = "tf-test-security-group-descr"
security_group_names = ["${aws_security_group.bar.name}"]
}

resource "aws_elasticache_cluster" "bar" {
cluster_id = "tf-test-%03d"
engine = "redis"
node_type = "cache.m1.small"
num_cache_nodes = 1
port = 6379
parameter_group_name = "default.redis2.8"
security_group_names = ["${aws_elasticache_security_group.bar.name}"]
snapshot_window = "07:00-09:00"
snapshot_retention_limit = 7
apply_immediately = true
}
`

var testAccAWSElasticacheClusterInVPCConfig = fmt.Sprintf(`
resource "aws_vpc" "foo" {
cidr_block = "192.168.0.0/16"
@@ -73,6 +73,15 @@ names to associate with this cache cluster
Amazon Resource Name (ARN) of a Redis RDB snapshot file stored in Amazon S3.
Example: `arn:aws:s3:::my_bucket/snapshot1.rdb`

* `snapshot_window` - (Optional) The daily time range (in UTC) during which ElastiCache will
begin taking a daily snapshot of your cache cluster. Can only be used for the Redis engine. Example: `05:00-09:00`

* `snapshot_retention_limit` - (Optional) The number of days for which ElastiCache will
retain automatic cache cluster snapshots before deleting them. For example, if you set
SnapshotRetentionLimit to 5, then a snapshot that was taken today will be retained for 5 days
before being deleted. If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.
Can only be used for the Redis engine.

Contributor

Another gem: snapshotting is not enabled for t2 instance types, like the ones I've been testing with (explaining much 😆)

Redis backup/restore is not supported for t2 instances.

* `notification_topic_arn` – (Optional) An Amazon Resource Name (ARN) of an
SNS topic to send ElastiCache notifications to. Example:
`arn:aws:sns:us-east-1:012345678999:my_sns_topic`