From eb646a3cf0e74b5931f9250ea5bacd60d7934448 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Thu, 27 Jun 2024 10:40:33 -0700 Subject: [PATCH] chore(schema): update (#3621) Co-authored-by: github-actions Co-authored-by: Aayush thapa <84202325+aaythapa@users.noreply.github.com> --- samtranslator/schema/schema.json | 86 +++++---- schema_source/cloudformation-docs.json | 213 +++++++++++++++++------ schema_source/cloudformation.schema.json | 86 +++++---- 3 files changed, 242 insertions(+), 143 deletions(-) diff --git a/samtranslator/schema/schema.json b/samtranslator/schema/schema.json index 5fd9505c2..16c7fa5fb 100644 --- a/samtranslator/schema/schema.json +++ b/samtranslator/schema/schema.json @@ -39475,12 +39475,12 @@ "type": "array" }, "CloudWatchLogsLogGroupArn": { - "markdownDescription": "Specifies a log group name using an Amazon Resource Name (ARN), a unique identifier that represents the log group to which CloudTrail logs are delivered. You must use a log group that exists in your account.\n\nNot required unless you specify `CloudWatchLogsRoleArn` .", + "markdownDescription": "Specifies a log group name using an Amazon Resource Name (ARN), a unique identifier that represents the log group to which CloudTrail logs are delivered. You must use a log group that exists in your account.\n\nTo enable CloudWatch Logs delivery, you must provide values for `CloudWatchLogsLogGroupArn` and `CloudWatchLogsRoleArn` .\n\n> If you previously enabled CloudWatch Logs delivery and want to disable CloudWatch Logs delivery, you must set the values of the `CloudWatchLogsRoleArn` and `CloudWatchLogsLogGroupArn` fields to `\"\"` .", "title": "CloudWatchLogsLogGroupArn", "type": "string" }, "CloudWatchLogsRoleArn": { - "markdownDescription": "Specifies the role for the CloudWatch Logs endpoint to assume to write to a user's log group. You must use a role that exists in your account.", + "markdownDescription": "Specifies the role for the CloudWatch Logs endpoint to assume to write to a user's log group. You must use a role that exists in your account.\n\nTo enable CloudWatch Logs delivery, you must provide values for `CloudWatchLogsLogGroupArn` and `CloudWatchLogsRoleArn` .\n\n> If you previously enabled CloudWatch Logs delivery and want to disable CloudWatch Logs delivery, you must set the values of the `CloudWatchLogsRoleArn` and `CloudWatchLogsLogGroupArn` fields to `\"\"` .", "title": "CloudWatchLogsRoleArn", "type": "string" }, @@ -42030,7 +42030,7 @@ "type": "string" }, "Type": { - "markdownDescription": "The type of webhook filter. There are nine webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. 
Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only.", + "markdownDescription": "The type of webhook filter. There are ten webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , `REPOSITORY_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket push and pull request events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket push and pull request events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression `pattern` .\n\n> Works with GitHub global or organization webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only.", "title": "Type", "type": "string" } @@ -46501,7 +46501,7 @@ "type": "string" }, "DefaultRedirectURI": { - "markdownDescription": "The default redirect URI. Must be in the `CallbackURLs` list.\n\nA redirect URI must:\n\n- Be an absolute URI.\n- Be registered with the authorization server.\n- Not include a fragment component.\n\nSee [OAuth 2.0 - Redirection Endpoint](https://docs.aws.amazon.com/https://tools.ietf.org/html/rfc6749#section-3.1.2) .\n\nAmazon Cognito requires HTTPS over HTTP except for http://localhost for testing purposes only.\n\nApp callback URLs such as myapp://example are also supported.", + "markdownDescription": "The default redirect URI. In app clients with one assigned IdP, replaces `redirect_uri` in authentication requests. Must be in the `CallbackURLs` list.\n\nA redirect URI must:\n\n- Be an absolute URI.\n- Be registered with the authorization server.\n- Not include a fragment component.\n\nFor more information, see [Default redirect URI](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-client-apps.html#cognito-user-pools-app-idp-settings-about) .\n\nAmazon Cognito requires HTTPS over HTTP except for http://localhost for testing purposes only.\n\nApp callback URLs such as myapp://example are also supported.", "title": "DefaultRedirectURI", "type": "string" }, @@ -69962,7 +69962,7 @@ "type": "array" }, "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": { - "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes.
When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", "title": "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice", "type": "number" }, @@ -72375,7 +72375,7 @@ "type": "string" }, "PreserveClientIp": { - "markdownDescription": "Indicates whether your client's IP address is preserved as the source. The value is `true` or `false` .\n\n- If `true` , your client's IP address is used when you connect to a resource.\n- If `false` , the elastic network interface IP address is used when you connect to a resource.\n\nDefault: `true`", + "markdownDescription": "Indicates whether the client IP address is preserved as the source. The following are the possible values.\n\n- `true` - Use the client IP address as the source.\n- `false` - Use the network interface IP address as the source.\n\nDefault: `false`", "title": "PreserveClientIp", "type": "boolean" }, @@ -73063,7 +73063,7 @@ "type": "array" }, "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": { - "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. 
If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", "title": "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice", "type": "number" }, @@ -73326,7 +73326,7 @@ "type": "array" }, "UserData": { - "markdownDescription": "The user data to make available to the instance. You must provide base64-encoded text. User data is limited to 16 KB. For more information, see [Run commands on your Linux instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) (Linux) or [Work with instance user data](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instancedata-add-user-data.html) (Windows) in the *Amazon EC2 User Guide* .\n\nIf you are creating the launch template for use with AWS Batch , the user data must be provided in the [MIME multi-part archive format](https://docs.aws.amazon.com/https://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive) . 
For more information, see [Amazon EC2 user data in launch templates](https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html) in the *AWS Batch User Guide* .", + "markdownDescription": "The user data to make available to the instance. You must provide base64-encoded text. User data is limited to 16 KB. For more information, see [Run commands on your Amazon EC2 instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) in the *Amazon EC2 User Guide* .\n\nIf you are creating the launch template for use with AWS Batch , the user data must be provided in the [MIME multi-part archive format](https://docs.aws.amazon.com/https://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive) . For more information, see [Amazon EC2 user data in launch templates](https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html) in the *AWS Batch User Guide* .", "title": "UserData", "type": "string" } @@ -77583,7 +77583,7 @@ "type": "array" }, "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": { - "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. 
When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", "title": "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice", "type": "number" }, @@ -77951,7 +77951,7 @@ "type": "string" }, "IamFleetRole": { - "markdownDescription": "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that grants the Spot Fleet the permission to request, launch, terminate, and tag instances on your behalf. For more information, see [Spot Fleet Prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) in the *Amazon EC2 User Guide for Linux Instances* . Spot Fleet can terminate Spot Instances on your behalf when you cancel its Spot Fleet request or when the Spot Fleet request expires, if you set `TerminateInstancesWithExpiration` .", + "markdownDescription": "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that grants the Spot Fleet the permission to request, launch, terminate, and tag instances on your behalf. For more information, see [Spot Fleet Prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) in the *Amazon EC2 User Guide* . Spot Fleet can terminate Spot Instances on your behalf when you cancel its Spot Fleet request or when the Spot Fleet request expires, if you set `TerminateInstancesWithExpiration` .", "title": "IamFleetRole", "type": "string" }, @@ -84316,7 +84316,7 @@ "title": "EphemeralStorage" }, "ExecutionRoleArn": { - "markdownDescription": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. The task execution IAM role is required depending on the requirements of your task. For more information, see [Amazon ECS task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) in the *Amazon Elastic Container Service Developer Guide* .", + "markdownDescription": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. 
For information about the required IAM roles for Amazon ECS, see [IAM roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-ecs-iam-role-overview.html) in the *Amazon Elastic Container Service Developer Guide* .", "title": "ExecutionRoleArn", "type": "string" }, @@ -84388,7 +84388,7 @@ "type": "array" }, "TaskRoleArn": { - "markdownDescription": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task permission to call AWS APIs on your behalf. For more information, see [Amazon ECS Task Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nIAM roles for tasks on Windows require that the `-EnableTaskIAMRole` option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration code to use the feature. For more information, see [Windows IAM roles for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows_task_IAM_roles.html) in the *Amazon Elastic Container Service Developer Guide* .", + "markdownDescription": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task permission to call AWS APIs on your behalf. For information about the required IAM roles for Amazon ECS, see [IAM roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-ecs-iam-role-overview.html) in the *Amazon Elastic Container Service Developer Guide* .", "title": "TaskRoleArn", "type": "string" }, @@ -91473,7 +91473,7 @@ "title": "CacheUsageLimits" }, "DailySnapshotTime": { - "markdownDescription": "The daily time that a cache snapshot will be created. Default is NULL, i.e. snapshots will not be created at a specific time on a daily basis. Available for Redis only.", + "markdownDescription": "The daily time that a cache snapshot will be created. Default is NULL, i.e. snapshots will not be created at a specific time on a daily basis. Available for Redis and Serverless Memcached only.", "title": "DailySnapshotTime", "type": "string" }, @@ -91534,7 +91534,7 @@ "type": "array" }, "SnapshotRetentionLimit": { - "markdownDescription": "The current setting for the number of serverless cache snapshots the system will retain. Available for Redis only.", + "markdownDescription": "The current setting for the number of serverless cache snapshots the system will retain. Available for Redis and Serverless Memcached only.", "title": "SnapshotRetentionLimit", "type": "number" }, @@ -91815,7 +91815,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "The list of tags.", "title": "Tags", "type": "array" }, @@ -91924,7 +91924,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "The list of tags.", "title": "Tags", "type": "array" }, @@ -105398,7 +105398,7 @@ "type": "object" }, "ConnectionType": { - "markdownDescription": "The type of the connection.
Currently, these types are supported:\n\n- `JDBC` - Designates a connection to a database through Java Database Connectivity (JDBC).\n\n`JDBC` Connections use the following ConnectionParameters.\n\n- Required: All of ( `HOST` , `PORT` , `JDBC_ENGINE` ) or `JDBC_CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- Optional: `JDBC_ENFORCE_SSL` , `CUSTOM_JDBC_CERT` , `CUSTOM_JDBC_CERT_STRING` , `SKIP_CUSTOM_JDBC_CERT_VALIDATION` . These parameters are used to configure SSL with JDBC.\n- `KAFKA` - Designates a connection to an Apache Kafka streaming platform.\n\n`KAFKA` Connections use the following ConnectionParameters.\n\n- Required: `KAFKA_BOOTSTRAP_SERVERS` .\n- Optional: `KAFKA_SSL_ENABLED` , `KAFKA_CUSTOM_CERT` , `KAFKA_SKIP_CUSTOM_CERT_VALIDATION` . These parameters are used to configure SSL with `KAFKA` .\n- Optional: `KAFKA_CLIENT_KEYSTORE` , `KAFKA_CLIENT_KEYSTORE_PASSWORD` , `KAFKA_CLIENT_KEY_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD` . These parameters are used to configure TLS client configuration with SSL in `KAFKA` .\n- Optional: `KAFKA_SASL_MECHANISM` . Can be specified as `SCRAM-SHA-512` , `GSSAPI` , or `AWS_MSK_IAM` .\n- Optional: `KAFKA_SASL_SCRAM_USERNAME` , `KAFKA_SASL_SCRAM_PASSWORD` , `ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD` . These parameters are used to configure SASL/SCRAM-SHA-512 authentication with `KAFKA` .\n- Optional: `KAFKA_SASL_GSSAPI_KEYTAB` , `KAFKA_SASL_GSSAPI_KRB5_CONF` , `KAFKA_SASL_GSSAPI_SERVICE` , `KAFKA_SASL_GSSAPI_PRINCIPAL` . These parameters are used to configure SASL/GSSAPI authentication with `KAFKA` .\n- `MONGODB` - Designates a connection to a MongoDB document database.\n\n`MONGODB` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `NETWORK` - Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).\n\n`NETWORK` Connections do not require ConnectionParameters. Instead, provide a PhysicalConnectionRequirements.\n- `MARKETPLACE` - Uses configuration settings contained in a connector purchased from AWS Marketplace to read from and write to data stores that are not natively supported by AWS Glue .\n\n`MARKETPLACE` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTOR_TYPE` , `CONNECTOR_URL` , `CONNECTOR_CLASS_NAME` , `CONNECTION_URL` .\n- Required for `JDBC` `CONNECTOR_TYPE` connections: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `CUSTOM` - Uses configuration settings contained in a custom connector to read from and write to data stores that are not natively supported by AWS Glue .\n\n`SFTP` is not supported.\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue , consult [AWS Glue connection properties](https://docs.aws.amazon.com/glue/latest/dg/connection-defining.html) .\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue Studio, consult [Using connectors and connections](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html) .", + "markdownDescription": "The type of the connection. 
Currently, these types are supported:\n\n- `JDBC` - Designates a connection to a database through Java Database Connectivity (JDBC).\n\n`JDBC` Connections use the following ConnectionParameters.\n\n- Required: All of ( `HOST` , `PORT` , `JDBC_ENGINE` ) or `JDBC_CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- Optional: `JDBC_ENFORCE_SSL` , `CUSTOM_JDBC_CERT` , `CUSTOM_JDBC_CERT_STRING` , `SKIP_CUSTOM_JDBC_CERT_VALIDATION` . These parameters are used to configure SSL with JDBC.\n- `KAFKA` - Designates a connection to an Apache Kafka streaming platform.\n\n`KAFKA` Connections use the following ConnectionParameters.\n\n- Required: `KAFKA_BOOTSTRAP_SERVERS` .\n- Optional: `KAFKA_SSL_ENABLED` , `KAFKA_CUSTOM_CERT` , `KAFKA_SKIP_CUSTOM_CERT_VALIDATION` . These parameters are used to configure SSL with `KAFKA` .\n- Optional: `KAFKA_CLIENT_KEYSTORE` , `KAFKA_CLIENT_KEYSTORE_PASSWORD` , `KAFKA_CLIENT_KEY_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD` . These parameters are used to configure TLS client configuration with SSL in `KAFKA` .\n- Optional: `KAFKA_SASL_MECHANISM` . Can be specified as `SCRAM-SHA-512` , `GSSAPI` , or `AWS_MSK_IAM` .\n- Optional: `KAFKA_SASL_SCRAM_USERNAME` , `KAFKA_SASL_SCRAM_PASSWORD` , `ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD` . These parameters are used to configure SASL/SCRAM-SHA-512 authentication with `KAFKA` .\n- Optional: `KAFKA_SASL_GSSAPI_KEYTAB` , `KAFKA_SASL_GSSAPI_KRB5_CONF` , `KAFKA_SASL_GSSAPI_SERVICE` , `KAFKA_SASL_GSSAPI_PRINCIPAL` . These parameters are used to configure SASL/GSSAPI authentication with `KAFKA` .\n- `MONGODB` - Designates a connection to a MongoDB document database.\n\n`MONGODB` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `SALESFORCE` - Designates a connection to Salesforce using OAuth authentication.\n\n- Requires the `AuthenticationConfiguration` member to be configured.\n- `NETWORK` - Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).\n\n`NETWORK` Connections do not require ConnectionParameters. Instead, provide a PhysicalConnectionRequirements.\n- `MARKETPLACE` - Uses configuration settings contained in a connector purchased from AWS Marketplace to read from and write to data stores that are not natively supported by AWS Glue .\n\n`MARKETPLACE` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTOR_TYPE` , `CONNECTOR_URL` , `CONNECTOR_CLASS_NAME` , `CONNECTION_URL` .\n- Required for `JDBC` `CONNECTOR_TYPE` connections: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `CUSTOM` - Uses configuration settings contained in a custom connector to read from and write to data stores that are not natively supported by AWS Glue .\n\n`SFTP` is not supported.\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue , consult [AWS Glue connection properties](https://docs.aws.amazon.com/glue/latest/dg/connection-defining.html) .\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue Studio, consult [Using connectors and connections](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html) .", "title": "ConnectionType", "type": "string" }, @@ -105416,13 +105416,13 @@ "type": "array" }, "Name": { - "markdownDescription": "The name of the connection.
Connection will not function as expected without a name.", + "markdownDescription": "The name of the connection.", "title": "Name", "type": "string" }, "PhysicalConnectionRequirements": { "$ref": "#/definitions/AWS::Glue::Connection.PhysicalConnectionRequirements", - "markdownDescription": "A map of physical connection requirements, such as virtual private cloud (VPC) and `SecurityGroup` , that are needed to successfully make this connection.", + "markdownDescription": "The physical connection requirements, such as virtual private cloud (VPC) and `SecurityGroup` , that are needed to successfully make this connection.", "title": "PhysicalConnectionRequirements" } }, @@ -105435,7 +105435,7 @@ "additionalProperties": false, "properties": { "AvailabilityZone": { - "markdownDescription": "The connection's Availability Zone. This field is redundant because the specified subnet implies the Availability Zone to be used. Currently the field must be populated, but it will be deprecated in the future.", + "markdownDescription": "The connection's Availability Zone.", "title": "AvailabilityZone", "type": "string" }, @@ -108766,7 +108766,7 @@ "items": { "type": "string" }, - "markdownDescription": "Specifies whether this workspace uses SAML 2.0, AWS IAM Identity Center , or both to authenticate users for using the Grafana console within a workspace. For more information, see [User authentication in Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/authentication-in-AMG.html) .", + "markdownDescription": "Specifies whether this workspace uses SAML 2.0, AWS IAM Identity Center , or both to authenticate users for using the Grafana console within a workspace. For more information, see [User authentication in Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/authentication-in-AMG.html) .\n\n*Allowed Values* : `AWS_SSO | SAML`", "title": "AuthenticationProviders", "type": "array" }, @@ -108807,7 +108807,7 @@ "items": { "type": "string" }, - "markdownDescription": "The AWS notification channels that Amazon Managed Grafana can automatically create IAM roles and permissions for, to allow Amazon Managed Grafana to use these channels.", + "markdownDescription": "The AWS notification channels that Amazon Managed Grafana can automatically create IAM roles and permissions for, to allow Amazon Managed Grafana to use these channels.\n\n*AllowedValues* : `SNS`", "title": "NotificationDestinations", "type": "array" }, @@ -113148,7 +113148,7 @@ "type": "array" }, "Name": { - "markdownDescription": "Name of the feature.", + "markdownDescription": "Name of the feature. 
For a list of allowed values, see [DetectorFeatureConfiguration](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_DetectorFeatureConfiguration.html#guardduty-Type-DetectorFeatureConfiguration-name) in the *GuardDuty API Reference* .", "title": "Name", "type": "string" }, @@ -113232,12 +113232,12 @@ "additionalProperties": false, "properties": { "Key": { - "markdownDescription": "The tag value.", + "markdownDescription": "The tag key.", "title": "Key", "type": "string" }, "Value": { - "markdownDescription": "The tag key.", + "markdownDescription": "The tag value.", "title": "Value", "type": "string" } @@ -113431,7 +113431,7 @@ "properties": { "Criterion": { "additionalProperties": false, - "markdownDescription": "Represents a map of finding properties that match specified conditions and values when querying findings.\n\nFor information about JSON criterion mapping to their console equivalent, see [Finding criteria](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_filter-findings.html#filter_criteria) . The following are the available criterion:\n\n- accountId\n- id\n- region\n- severity\n\nTo filter on the basis of severity, API and CFN use the following input list for the condition:\n\n- *Low* : `[\"1\", \"2\", \"3\"]`\n- *Medium* : `[\"4\", \"5\", \"6\"]`\n- *High* : `[\"7\", \"8\", \"9\"]`\n\nFor more information, see [Severity levels for GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html#guardduty_findings-severity) .\n- type\n- updatedAt\n\nType: ISO 8601 string format: YYYY-MM-DDTHH:MM:SS.SSSZ or YYYY-MM-DDTHH:MM:SSZ depending on whether the value contains milliseconds.\n- resource.accessKeyDetails.accessKeyId\n- resource.accessKeyDetails.principalId\n- resource.accessKeyDetails.userName\n- resource.accessKeyDetails.userType\n- resource.instanceDetails.iamInstanceProfile.id\n- resource.instanceDetails.imageId\n- resource.instanceDetails.instanceId\n- resource.instanceDetails.tags.key\n- resource.instanceDetails.tags.value\n- resource.instanceDetails.networkInterfaces.ipv6Addresses\n- resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress\n- resource.instanceDetails.networkInterfaces.publicDnsName\n- resource.instanceDetails.networkInterfaces.publicIp\n- resource.instanceDetails.networkInterfaces.securityGroups.groupId\n- resource.instanceDetails.networkInterfaces.securityGroups.groupName\n- resource.instanceDetails.networkInterfaces.subnetId\n- resource.instanceDetails.networkInterfaces.vpcId\n- resource.instanceDetails.outpostArn\n- resource.resourceType\n- resource.s3BucketDetails.publicAccess.effectivePermissions\n- resource.s3BucketDetails.name\n- resource.s3BucketDetails.tags.key\n- resource.s3BucketDetails.tags.value\n- resource.s3BucketDetails.type\n- service.action.actionType\n- service.action.awsApiCallAction.api\n- service.action.awsApiCallAction.callerType\n- service.action.awsApiCallAction.errorCode\n- service.action.awsApiCallAction.remoteIpDetails.city.cityName\n- service.action.awsApiCallAction.remoteIpDetails.country.countryName\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.awsApiCallAction.remoteIpDetails.organization.asn\n- service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg\n- service.action.awsApiCallAction.serviceName\n- service.action.dnsRequestAction.domain\n- service.action.networkConnectionAction.blocked\n- service.action.networkConnectionAction.connectionDirection\n- 
service.action.networkConnectionAction.localPortDetails.port\n- service.action.networkConnectionAction.protocol\n- service.action.networkConnectionAction.remoteIpDetails.city.cityName\n- service.action.networkConnectionAction.remoteIpDetails.country.countryName\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV4\n- service.action.networkConnectionAction.remoteIpDetails.organization.asn\n- service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg\n- service.action.networkConnectionAction.remotePortDetails.port\n- service.action.awsApiCallAction.remoteAccountDetails.affiliated\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.kubernetesApiCallAction.requestUri\n- service.action.networkConnectionAction.localIpDetails.ipAddressV4\n- service.action.networkConnectionAction.protocol\n- service.action.awsApiCallAction.serviceName\n- service.action.awsApiCallAction.remoteAccountDetails.accountId\n- service.additionalInfo.threatListName\n- service.resourceRole\n- resource.eksClusterDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.namespace\n- resource.kubernetesDetails.kubernetesUserDetails.username\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.image\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.imagePrefix\n- service.ebsVolumeScanDetails.scanId\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.name\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.severity\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.filePaths.hash\n- resource.ecsClusterDetails.name\n- resource.ecsClusterDetails.taskDetails.containers.image\n- resource.ecsClusterDetails.taskDetails.definitionArn\n- resource.containerDetails.image\n- resource.rdsDbInstanceDetails.dbInstanceIdentifier\n- resource.rdsDbInstanceDetails.dbClusterIdentifier\n- resource.rdsDbInstanceDetails.engine\n- resource.rdsDbUserDetails.user\n- resource.rdsDbInstanceDetails.tags.key\n- resource.rdsDbInstanceDetails.tags.value\n- service.runtimeDetails.process.executableSha256\n- service.runtimeDetails.process.name\n- service.runtimeDetails.process.name\n- resource.lambdaDetails.functionName\n- resource.lambdaDetails.functionArn\n- resource.lambdaDetails.tags.key\n- resource.lambdaDetails.tags.value", + "markdownDescription": "Represents a map of finding properties that match specified conditions and values when querying findings.\n\nFor information about JSON criterion mapping to their console equivalent, see [Finding criteria](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_filter-findings.html#filter_criteria) . 
The following are the available criterion:\n\n- accountId\n- id\n- region\n- severity\n\nTo filter on the basis of severity, the API and AWS CLI use the following input list for the `FindingCriteria` condition:\n\n- *Low* : `[\"1\", \"2\", \"3\"]`\n- *Medium* : `[\"4\", \"5\", \"6\"]`\n- *High* : `[\"7\", \"8\", \"9\"]`\n\nFor more information, see [Severity levels for GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html#guardduty_findings-severity) in the *Amazon GuardDuty User Guide* .\n- type\n- updatedAt\n\nType: ISO 8601 string format: `YYYY-MM-DDTHH:MM:SS.SSSZ` or `YYYY-MM-DDTHH:MM:SSZ` depending on whether the value contains milliseconds.\n- resource.accessKeyDetails.accessKeyId\n- resource.accessKeyDetails.principalId\n- resource.accessKeyDetails.userName\n- resource.accessKeyDetails.userType\n- resource.instanceDetails.iamInstanceProfile.id\n- resource.instanceDetails.imageId\n- resource.instanceDetails.instanceId\n- resource.instanceDetails.tags.key\n- resource.instanceDetails.tags.value\n- resource.instanceDetails.networkInterfaces.ipv6Addresses\n- resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress\n- resource.instanceDetails.networkInterfaces.publicDnsName\n- resource.instanceDetails.networkInterfaces.publicIp\n- resource.instanceDetails.networkInterfaces.securityGroups.groupId\n- resource.instanceDetails.networkInterfaces.securityGroups.groupName\n- resource.instanceDetails.networkInterfaces.subnetId\n- resource.instanceDetails.networkInterfaces.vpcId\n- resource.instanceDetails.outpostArn\n- resource.resourceType\n- resource.s3BucketDetails.publicAccess.effectivePermissions\n- resource.s3BucketDetails.name\n- resource.s3BucketDetails.tags.key\n- resource.s3BucketDetails.tags.value\n- resource.s3BucketDetails.type\n- service.action.actionType\n- service.action.awsApiCallAction.api\n- service.action.awsApiCallAction.callerType\n- service.action.awsApiCallAction.errorCode\n- service.action.awsApiCallAction.remoteIpDetails.city.cityName\n- service.action.awsApiCallAction.remoteIpDetails.country.countryName\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV6\n- service.action.awsApiCallAction.remoteIpDetails.organization.asn\n- service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg\n- service.action.awsApiCallAction.serviceName\n- service.action.dnsRequestAction.domain\n- service.action.dnsRequestAction.domainWithSuffix\n- service.action.networkConnectionAction.blocked\n- service.action.networkConnectionAction.connectionDirection\n- service.action.networkConnectionAction.localPortDetails.port\n- service.action.networkConnectionAction.protocol\n- service.action.networkConnectionAction.remoteIpDetails.city.cityName\n- service.action.networkConnectionAction.remoteIpDetails.country.countryName\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV4\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV6\n- service.action.networkConnectionAction.remoteIpDetails.organization.asn\n- service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg\n- service.action.networkConnectionAction.remotePortDetails.port\n- service.action.awsApiCallAction.remoteAccountDetails.affiliated\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV6\n- service.action.kubernetesApiCallAction.namespace\n- 
service.action.kubernetesApiCallAction.remoteIpDetails.organization.asn\n- service.action.kubernetesApiCallAction.requestUri\n- service.action.kubernetesApiCallAction.statusCode\n- service.action.networkConnectionAction.localIpDetails.ipAddressV4\n- service.action.networkConnectionAction.localIpDetails.ipAddressV6\n- service.action.networkConnectionAction.protocol\n- service.action.awsApiCallAction.serviceName\n- service.action.awsApiCallAction.remoteAccountDetails.accountId\n- service.additionalInfo.threatListName\n- service.resourceRole\n- resource.eksClusterDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.namespace\n- resource.kubernetesDetails.kubernetesUserDetails.username\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.image\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.imagePrefix\n- service.ebsVolumeScanDetails.scanId\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.name\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.severity\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.filePaths.hash\n- service.malwareScanDetails.threats.name\n- resource.ecsClusterDetails.name\n- resource.ecsClusterDetails.taskDetails.containers.image\n- resource.ecsClusterDetails.taskDetails.definitionArn\n- resource.containerDetails.image\n- resource.rdsDbInstanceDetails.dbInstanceIdentifier\n- resource.rdsDbInstanceDetails.dbClusterIdentifier\n- resource.rdsDbInstanceDetails.engine\n- resource.rdsDbUserDetails.user\n- resource.rdsDbInstanceDetails.tags.key\n- resource.rdsDbInstanceDetails.tags.value\n- service.runtimeDetails.process.executableSha256\n- service.runtimeDetails.process.name\n- service.runtimeDetails.process.name\n- resource.lambdaDetails.functionName\n- resource.lambdaDetails.functionArn\n- resource.lambdaDetails.tags.key\n- resource.lambdaDetails.tags.value", "patternProperties": { "^[a-zA-Z0-9]+$": { "$ref": "#/definitions/AWS::GuardDuty::Filter.Condition" @@ -113447,12 +113447,12 @@ "additionalProperties": false, "properties": { "Key": { - "markdownDescription": "", + "markdownDescription": "The tag key.", "title": "Key", "type": "string" }, "Value": { - "markdownDescription": "", + "markdownDescription": "The tag value.", "title": "Value", "type": "string" } @@ -113563,12 +113563,12 @@ "additionalProperties": false, "properties": { "Key": { - "markdownDescription": "", + "markdownDescription": "The tag key.", "title": "Key", "type": "string" }, "Value": { - "markdownDescription": "", + "markdownDescription": "The tag value.", "title": "Value", "type": "string" } @@ -113620,7 +113620,7 @@ "type": "string" }, "InvitationId": { - "markdownDescription": "The ID of the invitation that is sent to the account designated as a member account. You can find the invitation ID by using the ListInvitation action of the GuardDuty API.", + "markdownDescription": "The ID of the invitation that is sent to the account designated as a member account. 
You can find the invitation ID by running the [ListInvitations](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_ListInvitations.html) operation, which is described in the *GuardDuty API Reference* .", "title": "InvitationId", "type": "string" }, @@ -113849,12 +113849,12 @@ "additionalProperties": false, "properties": { "Key": { - "markdownDescription": "", + "markdownDescription": "The tag key.", "title": "Key", "type": "string" }, "Value": { - "markdownDescription": "", + "markdownDescription": "The tag value.", "title": "Value", "type": "string" } }, @@ -133739,12 +133739,12 @@ "type": "object" }, "KeySpec": { - "markdownDescription": "Specifies the type of KMS key to create. The default value, `SYMMETRIC_DEFAULT` , creates a KMS key with a 256-bit symmetric key for encryption and decryption. In China Regions, `SYMMETRIC_DEFAULT` creates a 128-bit symmetric key that uses SM4 encryption. You can't change the `KeySpec` value after the KMS key is created. For help choosing a key spec for your KMS key, see [Choosing a KMS key type](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose.html) in the *AWS Key Management Service Developer Guide* .\n\nThe `KeySpec` property determines the type of key material in the KMS key and the algorithms that the KMS key supports. To further restrict the algorithms that can be used with the KMS key, use a condition key in its key policy or IAM policy. For more information, see [AWS KMS condition keys](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms) in the *AWS Key Management Service Developer Guide* .\n\n> If you change the value of the `KeySpec` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. > [AWS services that are integrated with AWS KMS](https://docs.aws.amazon.com/kms/features/#AWS_Service_Integration) use symmetric encryption KMS keys to protect your data. These services do not support encryption with asymmetric KMS keys. For help determining whether a KMS key is asymmetric, see [Identifying asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide* . \n\nAWS KMS supports the following key specs for KMS keys:\n\n- Symmetric encryption key (default)\n\n- `SYMMETRIC_DEFAULT` (AES-256-GCM)\n- HMAC keys (symmetric)\n\n- `HMAC_224`\n- `HMAC_256`\n- `HMAC_384`\n- `HMAC_512`\n- Asymmetric RSA key pairs\n\n- `RSA_2048`\n- `RSA_3072`\n- `RSA_4096`\n- Asymmetric NIST-recommended elliptic curve key pairs\n\n- `ECC_NIST_P256` (secp256r1)\n- `ECC_NIST_P384` (secp384r1)\n- `ECC_NIST_P521` (secp521r1)\n- Other asymmetric elliptic curve key pairs\n\n- `ECC_SECG_P256K1` (secp256k1), commonly used for cryptocurrencies.\n- SM2 key pairs (China Regions only)\n\n- `SM2`", + "markdownDescription": "Specifies the type of KMS key to create. The default value, `SYMMETRIC_DEFAULT` , creates a KMS key with a 256-bit symmetric key for encryption and decryption. In China Regions, `SYMMETRIC_DEFAULT` creates a 128-bit symmetric key that uses SM4 encryption. You can't change the `KeySpec` value after the KMS key is created.
For help choosing a key spec for your KMS key, see [Choosing a KMS key type](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose.html) in the *AWS Key Management Service Developer Guide* .\n\nThe `KeySpec` property determines the type of key material in the KMS key and the algorithms that the KMS key supports. To further restrict the algorithms that can be used with the KMS key, use a condition key in its key policy or IAM policy. For more information, see [AWS KMS condition keys](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms) in the *AWS Key Management Service Developer Guide* .\n\n> If you change the value of the `KeySpec` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. > [AWS services that are integrated with AWS KMS](https://docs.aws.amazon.com/kms/features/#AWS_Service_Integration) use symmetric encryption KMS keys to protect your data. These services do not support encryption with asymmetric KMS keys. For help determining whether a KMS key is asymmetric, see [Identifying asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide* . \n\nAWS KMS supports the following key specs for KMS keys:\n\n- Symmetric encryption key (default)\n\n- `SYMMETRIC_DEFAULT` (AES-256-GCM)\n- HMAC keys (symmetric)\n\n- `HMAC_224`\n- `HMAC_256`\n- `HMAC_384`\n- `HMAC_512`\n- Asymmetric RSA key pairs (encryption and decryption *or* signing and verification)\n\n- `RSA_2048`\n- `RSA_3072`\n- `RSA_4096`\n- Asymmetric NIST-recommended elliptic curve key pairs (signing and verification *or* deriving shared secrets)\n\n- `ECC_NIST_P256` (secp256r1)\n- `ECC_NIST_P384` (secp384r1)\n- `ECC_NIST_P521` (secp521r1)\n- Other asymmetric elliptic curve key pairs (signing and verification)\n\n- `ECC_SECG_P256K1` (secp256k1), commonly used for cryptocurrencies.\n- SM2 key pairs (encryption and decryption *or* signing and verification *or* deriving shared secrets)\n\n- `SM2` (China Regions only)", "title": "KeySpec", "type": "string" }, "KeyUsage": { - "markdownDescription": "Determines the [cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations) for which you can use the KMS key. The default value is `ENCRYPT_DECRYPT` . This property is required for asymmetric KMS keys and HMAC KMS keys. You can't change the `KeyUsage` value after the KMS key is created.\n\n> If you change the value of the `KeyUsage` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. 
\n\nSelect only one valid value.\n\n- For symmetric encryption KMS keys, omit the property or specify `ENCRYPT_DECRYPT` .\n- For asymmetric KMS keys with RSA key material, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For asymmetric KMS keys with ECC key material, specify `SIGN_VERIFY` .\n- For asymmetric KMS keys with SM2 (China Regions only) key material, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For HMAC KMS keys, specify `GENERATE_VERIFY_MAC` .", + "markdownDescription": "Determines the [cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations) for which you can use the KMS key. The default value is `ENCRYPT_DECRYPT` . This property is required for asymmetric KMS keys and HMAC KMS keys. You can't change the `KeyUsage` value after the KMS key is created.\n\n> If you change the value of the `KeyUsage` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. \n\nSelect only one valid value.\n\n- For symmetric encryption KMS keys, omit the parameter or specify `ENCRYPT_DECRYPT` .\n- For HMAC KMS keys (symmetric), specify `GENERATE_VERIFY_MAC` .\n- For asymmetric KMS keys with RSA key pairs, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For asymmetric KMS keys with NIST-recommended elliptic curve key pairs, specify `SIGN_VERIFY` or `KEY_AGREEMENT` .\n- For asymmetric KMS keys with `ECC_SECG_P256K1` key pairs specify `SIGN_VERIFY` .\n- For asymmetric KMS keys with SM2 key pairs (China Regions only), specify `ENCRYPT_DECRYPT` , `SIGN_VERIFY` , or `KEY_AGREEMENT` .", "title": "KeyUsage", "type": "string" }, @@ -227410,7 +227410,7 @@ "type": "object" }, "NodeType": { - "markdownDescription": "The node type to be provisioned for the cluster. For information about node types, go to [Working with Clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#how-many-nodes) in the *Amazon Redshift Cluster Management Guide* .\n\nValid Values: `ds2.xlarge` | `ds2.8xlarge` | `dc1.large` | `dc1.8xlarge` | `dc2.large` | `dc2.8xlarge` | `ra3.xlplus` | `ra3.4xlarge` | `ra3.16xlarge`", + "markdownDescription": "The node type to be provisioned for the cluster. For information about node types, go to [Working with Clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#how-many-nodes) in the *Amazon Redshift Cluster Management Guide* .\n\nValid Values: `dc2.large` | `dc2.8xlarge` | `ra3.xlplus` | `ra3.4xlarge` | `ra3.16xlarge`", "title": "NodeType", "type": "string" }, @@ -227425,7 +227425,7 @@ "type": "string" }, "Port": { - "markdownDescription": "The port number on which the cluster accepts incoming connections.\n\nThe cluster is accessible only via the JDBC and ODBC connection strings. Part of the connection string requires the port on which the cluster will listen for incoming connections.\n\nDefault: `5439`\n\nValid Values:\n\n- For clusters with ra3 nodes - Select a port within the ranges `5431-5455` or `8191-8215` . 
(If you have an existing cluster with ra3 nodes, it isn't required that you change the port to these ranges.)\n- For clusters with ds2 or dc2 nodes - Select a port within the range `1150-65535` .", +        "markdownDescription": "The port number on which the cluster accepts incoming connections.\n\nThe cluster is accessible only via the JDBC and ODBC connection strings. Part of the connection string requires the port on which the cluster will listen for incoming connections.\n\nDefault: `5439`\n\nValid Values:\n\n- For clusters with ra3 nodes - Select a port within the ranges `5431-5455` or `8191-8215` . (If you have an existing cluster with ra3 nodes, it isn't required that you change the port to these ranges.)\n- For clusters with dc2 nodes - Select a port within the range `1150-65535` .", "title": "Port", "type": "number" }, @@ -228340,7 +228340,7 @@ }, "TargetAction": { "$ref": "#/definitions/AWS::Redshift::ScheduledAction.ScheduledActionType", -        "markdownDescription": "A JSON format string of the Amazon Redshift API operation with input parameters.\n\n\" `{\\\"ResizeCluster\\\":{\\\"NodeType\\\":\\\"ds2.8xlarge\\\",\\\"ClusterIdentifier\\\":\\\"my-test-cluster\\\",\\\"NumberOfNodes\\\":3}}` \".", +        "markdownDescription": "A JSON format string of the Amazon Redshift API operation with input parameters.\n\n\" `{\\\"ResizeCluster\\\":{\\\"NodeType\\\":\\\"ra3.4xlarge\\\",\\\"ClusterIdentifier\\\":\\\"my-test-cluster\\\",\\\"NumberOfNodes\\\":3}}` \".", "title": "TargetAction" } }, @@ -236349,7 +236349,7 @@ "additionalProperties": false, "properties": { "PartitionDateSource": { -        "markdownDescription": "Specifies the partition date source for the partitioned prefix. PartitionDateSource can be EventTime or DeliveryTime.", +        "markdownDescription": "Specifies the partition date source for the partitioned prefix. `PartitionDateSource` can be `EventTime` or `DeliveryTime` .\n\nFor `DeliveryTime` , the time in the log file names corresponds to the delivery time for the log files.\n\nFor `EventTime` , the logs delivered are for a specific day only. The year, month, and day correspond to the day on which the event occurred, and the hour, minutes, and seconds are set to 00 in the key.", "title": "PartitionDateSource", "type": "string" } @@ -240995,7 +240995,7 @@ "type": "object" }, "RedrivePolicy": { -        "markdownDescription": "The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:\n\n- `deadLetterTargetArn` : The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of `maxReceiveCount` is exceeded.\n- `maxReceiveCount` : The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the `ReceiveCount` for a message exceeds the `maxReceiveCount` for a queue, Amazon SQS moves the message to the dead-letter-queue.\n\n> The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue. \n\n*JSON*\n\n`{ \"deadLetterTargetArn\" : *String* , \"maxReceiveCount\" : *Integer* }`\n\n*YAML*\n\n`deadLetterTargetArn : *String*`\n\n`maxReceiveCount : *Integer*`", +        "markdownDescription": "The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object.
The parameters are as follows:\n\n- `deadLetterTargetArn` : The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of `maxReceiveCount` is exceeded.\n- `maxReceiveCount` : The number of times a message is received by a consumer of the source queue before being moved to the dead-letter queue. When the `ReceiveCount` for a message exceeds the `maxReceiveCount` for a queue, Amazon SQS moves the message to the dead-letter queue.\n\n> The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue. \n\n*JSON*\n\n`{ \"deadLetterTargetArn\" : *String* , \"maxReceiveCount\" : *Integer* }`\n\n*YAML*\n\n`deadLetterTargetArn : *String*`\n\n`maxReceiveCount : *Integer*`", "title": "RedrivePolicy", "type": "object" }, @@ -254527,18 +254527,12 @@ "additionalProperties": false, "properties": { "BlockPublicPolicy": { -        "markdownDescription": "Specifies whether to block resource-based policies that allow broad access to the secret. By default, Secrets Manager blocks policies that allow broad access, for example those that use a wildcard for the principal.", -        "title": "BlockPublicPolicy", "type": "boolean" }, "ResourcePolicy": { -        "markdownDescription": "A JSON-formatted string for an AWS resource-based policy. For example policies, see [Permissions policy examples](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html) .", -        "title": "ResourcePolicy", "type": "object" }, "SecretId": { -        "markdownDescription": "The ARN or name of the secret to attach the resource-based policy.\n\nFor an ARN, we recommend that you specify a complete ARN rather than a partial ARN.", -        "title": "SecretId", "type": "string" } }, diff --git a/schema_source/cloudformation-docs.json b/schema_source/cloudformation-docs.json index 5ed262dac..4731847ec 100644 --- a/schema_source/cloudformation-docs.json +++ b/schema_source/cloudformation-docs.json @@ -3451,6 +3451,70 @@ "LogGroupName": "The CloudWatch log group name to be associated with the monitored log.", "PatternSet": "The log pattern set." }, + "AWS::ApplicationSignals::ServiceLevelObjective": { + "Description": "An optional description for this SLO.", + "Goal": "This structure contains the attributes that determine the goal of an SLO. This includes the time period for evaluation and the attainment threshold.", + "Name": "A name for this SLO.", + "Sli": "A structure containing information about the performance metric that this SLO monitors.", + "Tags": "A list of key-value pairs to associate with the SLO. You can associate as many as 50 tags with an SLO. To be able to associate tags with the SLO when you create the SLO, you must have the cloudwatch:TagResource permission.\n\nTags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values." }, + "AWS::ApplicationSignals::ServiceLevelObjective CalendarInterval": { + "Duration": "Specifies the duration of each calendar interval. For example, if `Duration` is `1` and `DurationUnit` is `MONTH` , each interval is one month, aligned with the calendar.", + "DurationUnit": "Specifies the calendar interval unit.", + "StartTime": "The date and time when you want the first interval to start. Be sure to choose a time that configures the intervals the way that you want.
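As a quick illustration of the `RedrivePolicy` shape described above, the fragment below wires a hypothetical source queue to a dead-letter queue; both queues are standard, matching the rule that a standard queue's DLQ must also be standard.

```json
{
  "DeadLetterQueue": {
    "Type": "AWS::SQS::Queue"
  },
  "SourceQueue": {
    "Type": "AWS::SQS::Queue",
    "Properties": {
      "RedrivePolicy": {
        "deadLetterTargetArn": { "Fn::GetAtt": ["DeadLetterQueue", "Arn"] },
        "maxReceiveCount": 5
      }
    }
  }
}
```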
For example, if you want weekly intervals starting on Mondays at 6 a.m., be sure to specify a start time that is a Monday at 6 a.m.\n\nWhen used in a raw HTTP Query API, it is formatted as epoch time in seconds. For example: `1698778057`\n\nAs soon as one calendar interval ends, another automatically begins." }, + "AWS::ApplicationSignals::ServiceLevelObjective Dimension": { + "Name": "The name of the dimension. Dimension names must contain only ASCII characters, must include at least one non-whitespace character, and cannot start with a colon ( `:` ). ASCII control characters are not supported as part of dimension names.", + "Value": "The value of the dimension. Dimension values must contain only ASCII characters and must include at least one non-whitespace character. ASCII control characters are not supported as part of dimension values." }, + "AWS::ApplicationSignals::ServiceLevelObjective Goal": { + "AttainmentGoal": "The threshold that determines if the goal is being met. An *attainment goal* is the ratio of good periods that meet the threshold requirements to the total periods within the interval. For example, an attainment goal of 99.9% means that within your interval, you are targeting 99.9% of the periods to be in healthy state.\n\nIf you omit this parameter, 99 is used to represent 99% as the attainment goal.", + "Interval": "The time period used to evaluate the SLO. It can be either a calendar interval or rolling interval.\n\nIf you omit this parameter, a rolling interval of 7 days is used.", + "WarningThreshold": "The percentage of remaining budget over total budget that you want to get warnings for. If you omit this parameter, the default of 50.0 is used." }, + "AWS::ApplicationSignals::ServiceLevelObjective Interval": { + "CalendarInterval": "If the interval is a calendar interval, this structure contains the interval specifications.", + "RollingInterval": "If the interval is a rolling interval, this structure contains the interval specifications." }, + "AWS::ApplicationSignals::ServiceLevelObjective Metric": { + "Dimensions": "An array of one or more dimensions to use to define the metric that you want to use. For more information, see [Dimensions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Dimension) .", + "MetricName": "The name of the metric to use.", + "Namespace": "The namespace of the metric. For more information, see [Namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Namespace) ." }, + "AWS::ApplicationSignals::ServiceLevelObjective MetricDataQuery": { + "AccountId": "The ID of the account where this metric is located. If you are performing this operation in a monitoring account, use this to specify which source account to retrieve this metric from.", + "Expression": "This field can contain a metric math expression to be performed on the other metrics that you are retrieving within this `MetricDataQueries` structure.\n\nA math expression can use the `Id` of the other metrics or queries to refer to those metrics, and can also use the `Id` of other expressions to use the result of those expressions.
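To show how `Goal` , `Interval` , and `CalendarInterval` compose, here is a hedged sketch of a monthly 99.9% attainment goal; the numbers are illustrative, and the `StartTime` reuses the epoch-seconds example from the description above.

```json
{
  "Goal": {
    "AttainmentGoal": 99.9,
    "WarningThreshold": 60.0,
    "Interval": {
      "CalendarInterval": {
        "StartTime": 1698778057,
        "Duration": 1,
        "DurationUnit": "MONTH"
      }
    }
  }
}
```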
For more information about metric math expressions, see [Metric Math Syntax and Functions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html#metric-math-syntax) in the *Amazon CloudWatch User Guide* .\n\nWithin each `MetricDataQuery` object, you must specify either `Expression` or `MetricStat` but not both.", + "Id": "A short name used to tie this object to the results in the response. This `Id` must be unique within a `MetricDataQueries` array. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the metric math expression. The valid characters are letters, numbers, and underscore. The first character must be a lowercase letter.", + "MetricStat": "A metric to be used directly for the SLO, or to be used in the math expression that will be used for the SLO.\n\nWithin one `MetricDataQuery` object, you must specify either `Expression` or `MetricStat` but not both.", + "ReturnData": "Use this only if you are using a metric math expression for the SLO. Specify `true` for `ReturnData` for only the one expression result to use as the alarm. For all other metrics and expressions in the same `CreateServiceLevelObjective` operation, specify `ReturnData` as `false` ." + }, + "AWS::ApplicationSignals::ServiceLevelObjective MetricStat": { + "Metric": "The metric to use as the service level indicator, including the metric name, namespace, and dimensions.", + "Period": "The granularity, in seconds, to be used for the metric. For metrics with regular resolution, a period can be as short as one minute (60 seconds) and must be a multiple of 60. For high-resolution metrics that are collected at intervals of less than one minute, the period can be 1, 5, 10, 30, 60, or any multiple of 60. High-resolution metrics are those metrics stored by a `PutMetricData` call that includes a `StorageResolution` of 1 second.", + "Stat": "The statistic to use for comparison to the threshold. It can be any CloudWatch statistic or extended statistic. For more information about statistics, see [CloudWatch statistics definitions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Statistics-definitions.html) .", + "Unit": "If you omit `Unit` then all data that was collected with any unit is returned, along with the corresponding units that were specified when the data was reported to CloudWatch. If you specify a unit, the operation returns only data that was collected with that unit specified. If you specify a unit that does not match the data collected, the results of the operation are null. CloudWatch does not perform unit conversions." + }, + "AWS::ApplicationSignals::ServiceLevelObjective RollingInterval": { + "Duration": "Specifies the duration of each rolling interval. For example, if `Duration` is `7` and `DurationUnit` is `DAY` , each rolling interval is seven days.", + "DurationUnit": "Specifies the rolling interval unit." + }, + "AWS::ApplicationSignals::ServiceLevelObjective Sli": { + "ComparisonOperator": "The arithmetic operation to use when comparing the specified metric to the threshold.", + "MetricThreshold": "The value that the SLI metric is compared to.", + "SliMetric": "Use this structure to specify the metric to be used for the SLO." + }, + "AWS::ApplicationSignals::ServiceLevelObjective SliMetric": { + "KeyAttributes": "If this SLO is related to a metric collected by Application Signals, you must use this field to specify which service the SLO metric is related to. 
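Putting the `MetricDataQuery` rules above together, the sketch below pairs one `MetricStat` query with one math expression; exactly one entry sets `ReturnData` to `true` , and the namespace, metric, dimension, and expression values are hypothetical.

```json
{
  "MetricDataQueries": [
    {
      "Id": "latency",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/ApplicationELB",
          "MetricName": "TargetResponseTime",
          "Dimensions": [{ "Name": "LoadBalancer", "Value": "app/my-alb/50dc6c495c0c9188" }]
        },
        "Period": 60,
        "Stat": "p99"
      },
      "ReturnData": false
    },
    {
      "Id": "latencyMs",
      "Expression": "latency * 1000",
      "ReturnData": true
    }
  ]
}
```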
To do so, you must specify at least the `Type` , `Name` , and `Environment` attributes.\n\nThis is a string-to-string map. It can include the following fields.\n\n- `Type` designates the type of object this is.\n- `ResourceType` specifies the type of the resource. This field is used only when the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Name` specifies the name of the object. This is used only if the value of the `Type` field is `Service` , `RemoteService` , or `AWS::Service` .\n- `Identifier` identifies the resource objects of this resource. This is used only if the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Environment` specifies the location where this object is hosted, or what it belongs to.", + "MetricDataQueries": "If this SLO monitors a CloudWatch metric or the result of a CloudWatch metric math expression, use this structure to specify that metric or expression.", + "MetricType": "If the SLO is to monitor either the `LATENCY` or `AVAILABILITY` metric that Application Signals collects, use this field to specify which of those metrics is used.", + "OperationName": "If the SLO is to monitor a specific operation of the service, use this field to specify the name of that operation.", + "PeriodSeconds": "The number of seconds to use as the period for SLO evaluation. Your application's performance is compared to the SLI during each period. For each period, the application is determined to have either achieved or not achieved the necessary performance.", + "Statistic": "The statistic to use for comparison to the threshold. It can be any CloudWatch statistic or extended statistic. For more information about statistics, see [CloudWatch statistics definitions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Statistics-definitions.html) ." + }, + "AWS::ApplicationSignals::ServiceLevelObjective Tag": { + "Key": "A string that you can use to assign a value. The combination of tag keys and values can help you organize and categorize your resources.", + "Value": "The value for the specified tag key." + }, "AWS::Athena::CapacityReservation": { "CapacityAssignmentConfiguration": "Assigns Athena workgroups (and hence their queries) to capacity reservations. A capacity reservation can have only one capacity assignment configuration, but the capacity assignment configuration can be made up of multiple individual assignments. Each assignment specifies how Athena queries can consume capacity from the capacity reservation that their workgroup is mapped to.", "Name": "The name of the capacity reservation.", @@ -4491,7 +4555,6 @@ "AWS::Batch::JobDefinition NodeRangeProperty": { "Container": "The container details for the node range.", "EcsProperties": "This is an object that represents the properties of the node range for a multi-node parallel job.", - "EksProperties": "", "InstanceTypes": "The instance types of the underlying host infrastructure of a multi-node parallel job.\n\n> This parameter isn't applicable to jobs that are running on Fargate resources.\n> \n> In addition, this list object is currently limited to one element.", "TargetNodes": "The range of nodes, using node index values. A range of `0:3` indicates nodes with index values of `0` through `3` . If the starting range value is omitted ( `:n` ), then `0` is used to start the range. If the ending range value is omitted ( `n:` ), then the highest possible node index is used to end the range. Your accumulative node ranges must account for all nodes ( `0:n` ). 
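Pulling the `Sli` pieces above together, a latency-based indicator for an Application Signals service might look like the following sketch; the service name, environment, operation, statistic, and threshold are hypothetical.

```json
{
  "Sli": {
    "SliMetric": {
      "KeyAttributes": {
        "Type": "Service",
        "Name": "checkout-service",
        "Environment": "eks:prod-cluster"
      },
      "MetricType": "LATENCY",
      "OperationName": "GET /cart",
      "PeriodSeconds": 300,
      "Statistic": "p99"
    },
    "MetricThreshold": 250,
    "ComparisonOperator": "LessThanOrEqualTo"
  }
}
```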
You can nest node ranges (for example, `0:10` and `4:5` ). In this case, the `4:5` range properties override the `0:10` properties." }, @@ -4609,6 +4672,7 @@ "CustomerEncryptionKeyArn": "The Amazon Resource Name (ARN) of the AWS KMS key that encrypts the agent.", "Description": "The description of the agent.", "FoundationModel": "The foundation model used for orchestration by the agent.", + "GuardrailConfiguration": "Details about the guardrail associated with the agent.", "IdleSessionTTLInSeconds": "The number of seconds for which Amazon Bedrock keeps information about a user's conversation with the agent.\n\nA user interaction remains active for the amount of time specified. If no conversation occurs during this time, the session expires and Amazon Bedrock deletes any data provided before the timeout.", "Instruction": "Instructions that tell the agent what it should do and how it should interact with users.", "KnowledgeBases": "The knowledge bases associated with the agent.", @@ -4648,6 +4712,10 @@ "AWS::Bedrock::Agent FunctionSchema": { "Functions": "A list of functions that each define an action in the action group." }, + "AWS::Bedrock::Agent GuardrailConfiguration": { + "GuardrailIdentifier": "The identifier for the guardrail.", + "GuardrailVersion": "The version of the guardrail." + }, "AWS::Bedrock::Agent InferenceConfiguration": { "MaximumLength": "The maximum number of tokens allowed in the generated response.", "StopSequences": "A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response.", @@ -6144,8 +6212,8 @@ }, "AWS::CloudTrail::Trail": { "AdvancedEventSelectors": "Specifies the settings for advanced event selectors. You can add advanced event selectors, and conditions for your advanced event selectors, up to a maximum of 500 values for all conditions and selectors on a trail. You can use either `AdvancedEventSelectors` or `EventSelectors` , but not both. If you apply `AdvancedEventSelectors` to a trail, any existing `EventSelectors` are overwritten. For more information about advanced event selectors, see [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .", - "CloudWatchLogsLogGroupArn": "Specifies a log group name using an Amazon Resource Name (ARN), a unique identifier that represents the log group to which CloudTrail logs are delivered. You must use a log group that exists in your account.\n\nNot required unless you specify `CloudWatchLogsRoleArn` .", - "CloudWatchLogsRoleArn": "Specifies the role for the CloudWatch Logs endpoint to assume to write to a user's log group. You must use a role that exists in your account.", + "CloudWatchLogsLogGroupArn": "Specifies a log group name using an Amazon Resource Name (ARN), a unique identifier that represents the log group to which CloudTrail logs are delivered. You must use a log group that exists in your account.\n\nTo enable CloudWatch Logs delivery, you must provide values for `CloudWatchLogsLogGroupArn` and `CloudWatchLogsRoleArn` .\n\n> If you previously enabled CloudWatch Logs delivery and want to disable CloudWatch Logs delivery, you must set the values of the `CloudWatchLogsRoleArn` and `CloudWatchLogsLogGroupArn` fields to `\"\"` .", + "CloudWatchLogsRoleArn": "Specifies the role for the CloudWatch Logs endpoint to assume to write to a user's log group. 
You must use a role that exists in your account.\n\nTo enable CloudWatch Logs delivery, you must provide values for `CloudWatchLogsLogGroupArn` and `CloudWatchLogsRoleArn` .\n\n> If you previously enabled CloudWatch Logs delivery and want to disable CloudWatch Logs delivery, you must set the values of the `CloudWatchLogsRoleArn` and `CloudWatchLogsLogGroupArn` fields to `\"\"` .", "EnableLogFileValidation": "Specifies whether log file validation is enabled. The default is false.\n\n> When you disable log file integrity validation, the chain of digest files is broken after one hour. CloudTrail does not create digest files for log files that were delivered during a period in which log file integrity validation was disabled. For example, if you enable log file integrity validation at noon on January 1, disable it at noon on January 2, and re-enable it at noon on January 10, digest files will not be created for the log files delivered from noon on January 2 to noon on January 10. The same applies whenever you stop CloudTrail logging or delete a trail.", "EventSelectors": "Use event selectors to further specify the management and data event settings for your trail. By default, trails created without specific event selectors will be configured to log all read and write management events, and no data events. When an event occurs in your account, CloudTrail evaluates the event selector for all trails. For each trail, if the event matches any event selector, the trail processes and logs the event. If the event doesn't match any event selector, the trail doesn't log the event.\n\nYou can configure up to five event selectors for a trail.\n\nYou cannot apply both event selectors and advanced event selectors to a trail.", "IncludeGlobalServiceEvents": "Specifies whether the trail is publishing events from global services such as IAM to the log files.", @@ -6558,7 +6626,7 @@ "AWS::CodeBuild::Project WebhookFilter": { "ExcludeMatchedPattern": "Used to indicate that the `pattern` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the `pattern` triggers a build. If false, then a webhook event that matches the `pattern` triggers a build.", "Pattern": "For a `WebHookFilter` that uses `EVENT` type, a comma-separated string that specifies one or more events. For example, the webhook filter `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` allows all push, pull request created, and pull request updated events to trigger a build.\n\nFor a `WebHookFilter` that uses any of the other filter types, a regular expression pattern. For example, a `WebHookFilter` that uses `HEAD_REF` for its `type` and the pattern `^refs/heads/` triggers a build when the head reference is a branch with a reference name `refs/heads/branch-name` .", - "Type": "The type of webhook filter. There are nine webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. 
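A minimal sketch of the paired settings described above, assuming a log group, delivery role, and bucket defined elsewhere in the template (all names here are hypothetical):

```json
{
  "AuditTrail": {
    "Type": "AWS::CloudTrail::Trail",
    "Properties": {
      "IsLogging": true,
      "S3BucketName": { "Ref": "TrailBucket" },
      "CloudWatchLogsLogGroupArn": { "Fn::GetAtt": ["TrailLogGroup", "Arn"] },
      "CloudWatchLogsRoleArn": { "Fn::GetAtt": ["TrailDeliveryRole", "Arn"] }
    }
  }
}
```

Setting both ARN properties back to `""` in a later update is what disables CloudWatch Logs delivery, per the note above.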
For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only." +    "Type": "The type of webhook filter. There are ten webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , `REPOSITORY_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` .
For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression pattern.\n\n> Works with GitHub global or organization webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only." }, "AWS::CodeBuild::ReportGroup": { "DeleteReports": "When deleting a report group, specifies if reports within the report group should be deleted.\n\n- **true** - Deletes any reports that belong to the report group before deleting the report group.\n- **false** - You must delete any reports in the report group. This is the default value. If you delete a report group that contains one or more reports, an exception is thrown.", @@ -7258,7 +7326,7 @@ "AuthSessionValidity": "Amazon Cognito creates a session token for each API request in an authentication flow. `AuthSessionValidity` is the duration, in minutes, of that session token. Your user pool native user must respond to each authentication challenge before the session expires.", "CallbackURLs": "A list of allowed redirect (callback) URLs for the IdPs.\n\nA redirect URI must:\n\n- Be an absolute URI.\n- Be registered with the authorization server.\n- Not include a fragment component.\n\nSee [OAuth 2.0 - Redirection Endpoint](https://docs.aws.amazon.com/https://tools.ietf.org/html/rfc6749#section-3.1.2) .\n\nAmazon Cognito requires HTTPS over HTTP except for http://localhost for testing purposes only.\n\nApp callback URLs such as myapp://example are also supported.", "ClientName": "The client name for the user pool client you would like to create.", - "DefaultRedirectURI": "The default redirect URI. 
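To show how these filter types combine, here is a sketch of a `Triggers` block whose single filter group builds only pushes to a main branch; the branch pattern is a hypothetical example.

```json
{
  "Triggers": {
    "Webhook": true,
    "FilterGroups": [
      [
        { "Type": "EVENT", "Pattern": "PUSH" },
        { "Type": "HEAD_REF", "Pattern": "^refs/heads/main$" }
      ]
    ]
  }
}
```

Filters within one group must all match for a build to trigger, while separate groups are evaluated independently, so additional groups can cover other event combinations.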
Must be in the `CallbackURLs` list.\n\nA redirect URI must:\n\n- Be an absolute URI.\n- Be registered with the authorization server.\n- Not include a fragment component.\n\nSee [OAuth 2.0 - Redirection Endpoint](https://docs.aws.amazon.com/https://tools.ietf.org/html/rfc6749#section-3.1.2) .\n\nAmazon Cognito requires HTTPS over HTTP except for http://localhost for testing purposes only.\n\nApp callback URLs such as myapp://example are also supported.", + "DefaultRedirectURI": "The default redirect URI. In app clients with one assigned IdP, replaces `redirect_uri` in authentication requests. Must be in the `CallbackURLs` list.\n\nA redirect URI must:\n\n- Be an absolute URI.\n- Be registered with the authorization server.\n- Not include a fragment component.\n\nFor more information, see [Default redirect URI](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-client-apps.html#cognito-user-pools-app-idp-settings-about) .\n\nAmazon Cognito requires HTTPS over HTTP except for http://localhost for testing purposes only.\n\nApp callback URLs such as myapp://example are also supported.", "EnablePropagateAdditionalUserContextData": "Activates the propagation of additional user context data. For more information about propagation of user context data, see [Adding advanced security to a user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html) . If you don\u2019t include this parameter, you can't send device fingerprint information, including source IP address, to Amazon Cognito advanced security. You can only activate `EnablePropagateAdditionalUserContextData` in an app client that has a client secret.", "EnableTokenRevocation": "Activates or deactivates token revocation. For more information about revoking tokens, see [RevokeToken](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_RevokeToken.html) .\n\nIf you don't include this parameter, token revocation is automatically activated for the new user pool client.", "ExplicitAuthFlows": "The authentication flows that you want your user pool client to support. For each app client in your user pool, you can sign in your users with any combination of one or more flows, including with a user name and Secure Remote Password (SRP), a user name and password, or a custom authentication process that you define with Lambda functions.\n\n> If you don't specify a value for `ExplicitAuthFlows` , your user client supports `ALLOW_REFRESH_TOKEN_AUTH` , `ALLOW_USER_SRP_AUTH` , and `ALLOW_CUSTOM_AUTH` . \n\nValid values include:\n\n- `ALLOW_ADMIN_USER_PASSWORD_AUTH` : Enable admin based user password authentication flow `ADMIN_USER_PASSWORD_AUTH` . This setting replaces the `ADMIN_NO_SRP_AUTH` setting. With this authentication flow, your app passes a user name and password to Amazon Cognito in the request, instead of using the Secure Remote Password (SRP) protocol to securely transmit the password.\n- `ALLOW_CUSTOM_AUTH` : Enable Lambda trigger based authentication.\n- `ALLOW_USER_PASSWORD_AUTH` : Enable user password-based authentication. In this flow, Amazon Cognito receives the password in the request instead of using the SRP protocol to verify passwords.\n- `ALLOW_USER_SRP_AUTH` : Enable SRP-based authentication.\n- `ALLOW_REFRESH_TOKEN_AUTH` : Enable authflow to refresh tokens.\n\nIn some environments, you will see the values `ADMIN_NO_SRP_AUTH` , `CUSTOM_AUTH_FLOW_ONLY` , or `USER_PASSWORD_AUTH` . 
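As a sketch of the callback settings described above, the fragment below defines a hypothetical app client whose `DefaultRedirectURI` is one of its registered `CallbackURLs` :

```json
{
  "WebAppClient": {
    "Type": "AWS::Cognito::UserPoolClient",
    "Properties": {
      "UserPoolId": { "Ref": "UserPool" },
      "ClientName": "web-client",
      "CallbackURLs": ["https://app.example.com/callback", "myapp://example"],
      "DefaultRedirectURI": "https://app.example.com/callback"
    }
  }
}
```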
You can't assign these legacy `ExplicitAuthFlows` values to user pool clients at the same time as values that begin with `ALLOW_` ,\nlike `ALLOW_USER_SRP_AUTH` .", @@ -10991,7 +11059,7 @@ "InstanceGenerations": "Indicates whether current or previous generation instance types are included. The current generation instance types are recommended for use. Current generation instance types are typically the latest two to three generations in each instance family. For more information, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the *Amazon EC2 User Guide* .\n\nFor current generation instance types, specify `current` .\n\nFor previous generation instance types, specify `previous` .\n\nDefault: Current and previous generation instance types", "LocalStorage": "Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, [Amazon EC2 instance store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html) in the *Amazon EC2 User Guide* .\n\n- To include instance types with instance store volumes, specify `included` .\n- To require only instance types with instance store volumes, specify `required` .\n- To exclude instance types with instance store volumes, specify `excluded` .\n\nDefault: `included`", "LocalStorageTypes": "The type of local storage that is required.\n\n- For instance types with hard disk drive (HDD) storage, specify `hdd` .\n- For instance types with solid state drive (SSD) storage, specify `ssd` .\n\nDefault: `hdd` and `ssd`", - "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. 
If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", "MemoryGiBPerVCpu": "The minimum and maximum amount of memory per vCPU, in GiB.\n\nDefault: No minimum or maximum limits", "MemoryMiB": "The minimum and maximum amount of memory, in MiB.", "NetworkBandwidthGbps": "The minimum and maximum amount of baseline network bandwidth, in gigabits per second (Gbps). For more information, see [Amazon EC2 instance network bandwidth](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html) in the *Amazon EC2 User Guide* .\n\nDefault: No minimum or maximum limits", @@ -11353,7 +11421,7 @@ }, "AWS::EC2::InstanceConnectEndpoint": { "ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.", - "PreserveClientIp": "Indicates whether your client's IP address is preserved as the source. The value is `true` or `false` .\n\n- If `true` , your client's IP address is used when you connect to a resource.\n- If `false` , the elastic network interface IP address is used when you connect to a resource.\n\nDefault: `true`", + "PreserveClientIp": "Indicates whether the client IP address is preserved as the source. The following are the possible values.\n\n- `true` - Use the client IP address as the source.\n- `false` - Use the network interface IP address as the source.\n\nDefault: `false`", "SecurityGroupIds": "One or more security groups to associate with the endpoint. If you don't specify a security group, the default security group for your VPC will be associated with the endpoint.", "SubnetId": "The ID of the subnet in which to create the EC2 Instance Connect Endpoint.", "Tags": "The tags to apply to the EC2 Instance Connect Endpoint during creation." @@ -11474,7 +11542,7 @@ "InstanceGenerations": "Indicates whether current or previous generation instance types are included. The current generation instance types are recommended for use. Current generation instance types are typically the latest two to three generations in each instance family. 
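A hedged sketch of the price-protection threshold in an `InstanceRequirements` block; the vCPU, memory, and threshold values are illustrative, and the percentage is interpreted against per-vCPU price only when the fleet's `TargetCapacityUnitType` is set to `vcpu` as described above.

```json
{
  "InstanceRequirements": {
    "VCpuCount": { "Min": 2, "Max": 8 },
    "MemoryMiB": { "Min": 4096 },
    "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": 150
  }
}
```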
For more information, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the *Amazon EC2 User Guide* .\n\nFor current generation instance types, specify `current` .\n\nFor previous generation instance types, specify `previous` .\n\nDefault: Current and previous generation instance types", "LocalStorage": "Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, [Amazon EC2 instance store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html) in the *Amazon EC2 User Guide* .\n\n- To include instance types with instance store volumes, specify `included` .\n- To require only instance types with instance store volumes, specify `required` .\n- To exclude instance types with instance store volumes, specify `excluded` .\n\nDefault: `included`", "LocalStorageTypes": "The type of local storage that is required.\n\n- For instance types with hard disk drive (HDD) storage, specify `hdd` .\n- For instance types with solid state drive (SSD) storage, specify `ssd` .\n\nDefault: `hdd` and `ssd`", - "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. 
When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", "MemoryGiBPerVCpu": "The minimum and maximum amount of memory per vCPU, in GiB.\n\nDefault: No minimum or maximum limits", "MemoryMiB": "The minimum and maximum amount of memory, in MiB.", "NetworkBandwidthGbps": "The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps).\n\nDefault: No minimum or maximum limits", @@ -11525,7 +11593,7 @@ "SecurityGroupIds": "The IDs of the security groups. You can specify the IDs of existing security groups and references to resources created by the stack template.\n\nIf you specify a network interface, you must specify any security groups as part of the network interface instead.", "SecurityGroups": "The names of the security groups. For a nondefault VPC, you must use security group IDs instead.\n\nIf you specify a network interface, you must specify any security groups as part of the network interface instead of using this parameter.", "TagSpecifications": "The tags to apply to the resources that are created during instance launch.\n\nTo tag a resource after it has been created, see [CreateTags](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateTags.html) .\n\nTo tag the launch template itself, use [TagSpecifications](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-launchtemplate.html#cfn-ec2-launchtemplate-tagspecifications) .", - "UserData": "The user data to make available to the instance. You must provide base64-encoded text. User data is limited to 16 KB. For more information, see [Run commands on your Linux instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) (Linux) or [Work with instance user data](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instancedata-add-user-data.html) (Windows) in the *Amazon EC2 User Guide* .\n\nIf you are creating the launch template for use with AWS Batch , the user data must be provided in the [MIME multi-part archive format](https://docs.aws.amazon.com/https://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive) . For more information, see [Amazon EC2 user data in launch templates](https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html) in the *AWS Batch User Guide* ." + "UserData": "The user data to make available to the instance. You must provide base64-encoded text. User data is limited to 16 KB. 
For more information, see [Run commands on your Amazon EC2 instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) in the *Amazon EC2 User Guide* .\n\nIf you are creating the launch template for use with AWS Batch , the user data must be provided in the [MIME multi-part archive format](https://docs.aws.amazon.com/https://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive) . For more information, see [Amazon EC2 user data in launch templates](https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html) in the *AWS Batch User Guide* ." }, "AWS::EC2::LaunchTemplate LaunchTemplateElasticInferenceAccelerator": { "Count": "The number of elastic inference accelerators to attach to the instance.\n\nDefault: 1", @@ -12186,7 +12254,7 @@ "InstanceGenerations": "Indicates whether current or previous generation instance types are included. The current generation instance types are recommended for use. Current generation instance types are typically the latest two to three generations in each instance family. For more information, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the *Amazon EC2 User Guide* .\n\nFor current generation instance types, specify `current` .\n\nFor previous generation instance types, specify `previous` .\n\nDefault: Current and previous generation instance types", "LocalStorage": "Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, [Amazon EC2 instance store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html) in the *Amazon EC2 User Guide* .\n\n- To include instance types with instance store volumes, specify `included` .\n- To require only instance types with instance store volumes, specify `required` .\n- To exclude instance types with instance store volumes, specify `excluded` .\n\nDefault: `included`", "LocalStorageTypes": "The type of local storage that is required.\n\n- For instance types with hard disk drive (HDD) storage, specify `hdd` .\n- For instance types with solid state drive (SSD) storage, specify `ssd` .\n\nDefault: `hdd` and `ssd`", - "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. 
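Since `UserData` must be base64-encoded, templates typically wrap an inline script with `Fn::Base64` rather than encoding it by hand; the logical ID and script below are placeholders.

```json
{
  "WebLaunchTemplate": {
    "Type": "AWS::EC2::LaunchTemplate",
    "Properties": {
      "LaunchTemplateData": {
        "UserData": {
          "Fn::Base64": "#!/bin/bash\nyum update -y\n"
        }
      }
    }
  }
}
```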
To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", "MemoryGiBPerVCpu": "The minimum and maximum amount of memory per vCPU, in GiB.\n\nDefault: No minimum or maximum limits", "MemoryMiB": "The minimum and maximum amount of memory, in MiB.", "NetworkBandwidthGbps": "The minimum and maximum amount of baseline network bandwidth, in gigabits per second (Gbps). For more information, see [Amazon EC2 instance network bandwidth](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html) in the *Amazon EC2 User Guide* .\n\nDefault: No minimum or maximum limits", @@ -12265,7 +12333,7 @@ "AllocationStrategy": "The strategy that determines how to allocate the target Spot Instance capacity across the Spot Instance pools specified by the Spot Fleet launch configuration. For more information, see [Allocation strategies for Spot Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-allocation-strategy.html) in the *Amazon EC2 User Guide* .\n\n- **priceCapacityOptimized (recommended)** - Spot Fleet identifies the pools with the highest capacity availability for the number of instances that are launching. This means that we will request Spot Instances from the pools that we believe have the lowest chance of interruption in the near term. Spot Fleet then requests Spot Instances from the lowest priced of these pools.\n- **capacityOptimized** - Spot Fleet identifies the pools with the highest capacity availability for the number of instances that are launching. This means that we will request Spot Instances from the pools that we believe have the lowest chance of interruption in the near term. To give certain instance types a higher chance of launching first, use `capacityOptimizedPrioritized` . Set a priority for each instance type by using the `Priority` parameter for `LaunchTemplateOverrides` . 
You can assign the same priority to different `LaunchTemplateOverrides` . EC2 implements the priorities on a best-effort basis, but optimizes for capacity first. `capacityOptimizedPrioritized` is supported only if your Spot Fleet uses a launch template. Note that if the `OnDemandAllocationStrategy` is set to `prioritized` , the same priority is applied when fulfilling On-Demand capacity.\n- **diversified** - Spot Fleet requests instances from all of the Spot Instance pools that you specify.\n- **lowestPrice (not recommended)** - > We don't recommend the `lowestPrice` allocation strategy because it has the highest risk of interruption for your Spot Instances. \n\nSpot Fleet requests instances from the lowest priced Spot Instance pool that has available capacity. If the lowest priced pool doesn't have available capacity, the Spot Instances come from the next lowest priced pool that has available capacity. If a pool runs out of capacity before fulfilling your desired capacity, Spot Fleet will continue to fulfill your request by drawing from the next lowest priced pool. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. Because this strategy only considers instance price and not capacity availability, it might lead to high interruption rates.\n\nDefault: `lowestPrice`", "Context": "Reserved.", "ExcessCapacityTerminationPolicy": "Indicates whether running Spot Instances should be terminated if you decrease the target capacity of the Spot Fleet request below the current size of the Spot Fleet.\n\nSupported only for fleets of type `maintain` .", - "IamFleetRole": "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that grants the Spot Fleet the permission to request, launch, terminate, and tag instances on your behalf. For more information, see [Spot Fleet Prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) in the *Amazon EC2 User Guide for Linux Instances* . Spot Fleet can terminate Spot Instances on your behalf when you cancel its Spot Fleet request or when the Spot Fleet request expires, if you set `TerminateInstancesWithExpiration` .", + "IamFleetRole": "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that grants the Spot Fleet the permission to request, launch, terminate, and tag instances on your behalf. For more information, see [Spot Fleet Prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) in the *Amazon EC2 User Guide* . Spot Fleet can terminate Spot Instances on your behalf when you cancel its Spot Fleet request or when the Spot Fleet request expires, if you set `TerminateInstancesWithExpiration` .", "InstanceInterruptionBehavior": "The behavior when a Spot Instance is interrupted. The default is `terminate` .", "InstancePoolsToUseCount": "The number of Spot pools across which to allocate your target Spot capacity. Valid only when Spot *AllocationStrategy* is set to `lowest-price` . Spot Fleet selects the cheapest Spot pools and evenly allocates your target Spot capacity across the number of Spot pools that you specify.\n\nNote that Spot Fleet attempts to draw Spot Instances from the number of pools that you specify on a best effort basis. If a pool runs out of Spot capacity before fulfilling your target capacity, Spot Fleet will continue to fulfill your request by drawing from the next cheapest pool. 
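The fragment below sketches the recommended `priceCapacityOptimized` strategy in a `SpotFleetRequestConfigData` shape; the role ARN, capacity, and launch template reference are placeholders.

```json
{
  "SpotFleetRequestConfigData": {
    "AllocationStrategy": "priceCapacityOptimized",
    "TargetCapacity": 4,
    "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
    "LaunchTemplateConfigs": [
      {
        "LaunchTemplateSpecification": {
          "LaunchTemplateId": { "Ref": "WebLaunchTemplate" },
          "Version": "1"
        }
      }
    ]
  }
}
```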
To ensure that your target capacity is met, you might receive Spot Instances from more than the number of pools that you specified. Similarly, if most of the pools have no Spot capacity, you might receive your full target capacity from fewer than the number of pools that you specified.",
    "LaunchSpecifications": "The launch specifications for the Spot Fleet request. If you specify `LaunchSpecifications` , you can't specify `LaunchTemplateConfigs` .",
@@ -13105,7 +13173,7 @@
    "ContainerDefinitions": "A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see [Amazon ECS Task Definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_defintions.html) in the *Amazon Elastic Container Service Developer Guide* .",
    "Cpu": "The number of `cpu` units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the `memory` parameter.\n\nThe CPU units cannot be less than 1 vCPU when you use Windows containers on Fargate.\n\n- 256 (.25 vCPU) - Available `memory` values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)\n- 512 (.5 vCPU) - Available `memory` values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)\n- 1024 (1 vCPU) - Available `memory` values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)\n- 2048 (2 vCPU) - Available `memory` values: 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)\n- 4096 (4 vCPU) - Available `memory` values: 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)\n- 8192 (8 vCPU) - Available `memory` values: 16 GB and 60 GB in 4 GB increments\n\nThis option requires Linux platform `1.4.0` or later.\n- 16384 (16vCPU) - Available `memory` values: 32GB and 120 GB in 8 GB increments\n\nThis option requires Linux platform `1.4.0` or later.",
    "EphemeralStorage": "The ephemeral storage settings to use for tasks run with the task definition.",
-    "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. The task execution IAM role is required depending on the requirements of your task. For more information, see [Amazon ECS task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) in the *Amazon Elastic Container Service Developer Guide* .",
+    "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. For information about the required IAM roles for Amazon ECS, see [IAM roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-ecs-iam-role-overview.html) in the *Amazon Elastic Container Service Developer Guide* .",
    "Family": "The name of a family that this task definition is registered to. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.\n\nA family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. 
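The `Cpu` and `Memory` pairings listed above translate directly into a Fargate task definition. A minimal sketch using the smallest valid pairing; the family name, execution role, and container image are hypothetical:

```yaml
# Hypothetical Fargate task definition using one of the valid Cpu/Memory
# pairings documented above (256 CPU units with 512 MiB of memory).
Resources:
  ExampleTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: example-web-app
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: "256"       # .25 vCPU
      Memory: "512"    # 0.5 GB, valid for Cpu 256
      ExecutionRoleArn: arn:aws:iam::111122223333:role/ecsTaskExecutionRole
      ContainerDefinitions:
        - Name: web
          Image: public.ecr.aws/nginx/nginx:latest
          Essential: true
```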
Amazon ECS gives sequential revision numbers to each task definition that you add.\n\n> To use revision numbers when you update a task definition, specify this property. If you don't specify a value, AWS CloudFormation generates a new task definition each time that you update it.", "InferenceAccelerators": "The Elastic Inference accelerators to use for the containers in the task.", "IpcMode": "The IPC resource namespace to use for the containers in the task. The valid values are `host` , `task` , or `none` . If `host` is specified, then all containers within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If `task` is specified, all containers within the specified task share the same IPC resources. If `none` is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. For more information, see [IPC settings](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#ipc-settings---ipc) in the *Docker run reference* .\n\nIf the `host` IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace expose. For more information, see [Docker security](https://docs.aws.amazon.com/https://docs.docker.com/engine/security/security/) .\n\nIf you are setting namespaced kernel parameters using `systemControls` for the containers in the task, the following will apply to your IPC resource namespace. For more information, see [System Controls](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n- For tasks that use the `host` IPC mode, IPC namespace related `systemControls` are not supported.\n- For tasks that use the `task` IPC mode, IPC namespace related `systemControls` will apply to all containers within a task.\n\n> This parameter is not supported for Windows containers or tasks run on AWS Fargate .", @@ -13117,7 +13185,7 @@ "RequiresCompatibilities": "The task launch types the task definition was validated against. The valid values are `EC2` , `FARGATE` , and `EXTERNAL` . For more information, see [Amazon ECS launch types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html) in the *Amazon Elastic Container Service Developer Guide* .", "RuntimePlatform": "The operating system that your tasks definitions run on. A platform family is specified only for tasks using the Fargate launch type.", "Tags": "The metadata that you apply to the task definition to help you categorize and organize them. Each tag consists of a key and an optional value. You define both of them.\n\nThe following basic restrictions apply to tags:\n\n- Maximum number of tags per resource - 50\n- For each resource, each tag key must be unique, and each tag key can have only one value.\n- Maximum key length - 128 Unicode characters in UTF-8\n- Maximum value length - 256 Unicode characters in UTF-8\n- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . 
_ : / @.\n- Tag keys and values are case-sensitive.\n- Do not use `aws:` , `AWS:` , or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.",
-    "TaskRoleArn": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task permission to call AWS APIs on your behalf. For more information, see [Amazon ECS Task Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nIAM roles for tasks on Windows require that the `-EnableTaskIAMRole` option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration code to use the feature. For more information, see [Windows IAM roles for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows_task_IAM_roles.html) in the *Amazon Elastic Container Service Developer Guide* .",
+    "TaskRoleArn": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task permission to call AWS APIs on your behalf. For information about the required IAM roles for Amazon ECS, see [IAM roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-ecs-iam-role-overview.html) in the *Amazon Elastic Container Service Developer Guide* .",
    "Volumes": "The list of data volume definitions for the task. For more information, see [Using data volumes in tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> The `host` and `sourcePath` parameters aren't supported for tasks run on AWS Fargate ."
  },
  "AWS::ECS::TaskDefinition AuthorizationConfig": {
@@ -14109,6 +14177,7 @@
  "AWS::EMRServerless::Application WorkerConfiguration": {
    "Cpu": "",
    "Disk": "",
+    "DiskType": "",
    "Memory": ""
  },
  "AWS::EMRServerless::Application WorkerTypeSpecificationInput": {
@@ -14278,7 +14347,7 @@
  },
  "AWS::ElastiCache::ServerlessCache": {
    "CacheUsageLimits": "The cache usage limit for the serverless cache.",
-    "DailySnapshotTime": "The daily time that a cache snapshot will be created. Default is NULL, i.e. snapshots will not be created at a specific time on a daily basis. Available for Redis only.",
+    "DailySnapshotTime": "The daily time that a cache snapshot will be created. Default is NULL, i.e. snapshots will not be created at a specific time on a daily basis. Available for Redis and Serverless Memcached only.",
    "Description": "A description of the serverless cache.",
    "Endpoint": "Represents the information required for client programs to connect to a cache node. This value is read-only.",
    "Engine": "The engine the serverless cache is compatible with.",
@@ -14289,7 +14358,7 @@
    "SecurityGroupIds": "The IDs of the EC2 security groups associated with the serverless cache.",
    "ServerlessCacheName": "The unique identifier of the serverless cache.",
    "SnapshotArnsToRestore": "The ARN of the snapshot from which to restore data into the new cache.",
-    "SnapshotRetentionLimit": "The current setting for the number of serverless cache snapshots the system will retain. Available for Redis only.",
+    "SnapshotRetentionLimit": "The current setting for the number of serverless cache snapshots the system will retain. 
Available for Redis and Serverless Memcached only.", "SubnetIds": "If no subnet IDs are given and your VPC is in us-west-1, then ElastiCache will select 2 default subnets across AZs in your VPC. For all other Regions, if no subnet IDs are given then ElastiCache will select 3 default subnets across AZs in your default VPC.", "Tags": "A list of tags to be added to this resource.", "UserGroupId": "The identifier of the user group associated with the serverless cache. Available for Redis only. Default is NULL." @@ -14331,7 +14400,7 @@ "Engine": "The current supported value is redis.", "NoPasswordRequired": "Indicates a password is not required for this user.", "Passwords": "Passwords used for this user. You can create up to two passwords for each user.", - "Tags": "", + "Tags": "The list of tags.", "UserId": "The ID of the user.", "UserName": "The username of the user." }, @@ -14345,7 +14414,7 @@ }, "AWS::ElastiCache::UserGroup": { "Engine": "The current supported value is redis.", - "Tags": "", + "Tags": "The list of tags.", "UserGroupId": "The ID of the user group.", "UserIds": "The list of user IDs that belong to the user group. A user named `default` must be included." }, @@ -16447,14 +16516,14 @@ }, "AWS::Glue::Connection ConnectionInput": { "ConnectionProperties": "These key-value pairs define parameters for the connection.", - "ConnectionType": "The type of the connection. Currently, these types are supported:\n\n- `JDBC` - Designates a connection to a database through Java Database Connectivity (JDBC).\n\n`JDBC` Connections use the following ConnectionParameters.\n\n- Required: All of ( `HOST` , `PORT` , `JDBC_ENGINE` ) or `JDBC_CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- Optional: `JDBC_ENFORCE_SSL` , `CUSTOM_JDBC_CERT` , `CUSTOM_JDBC_CERT_STRING` , `SKIP_CUSTOM_JDBC_CERT_VALIDATION` . These parameters are used to configure SSL with JDBC.\n- `KAFKA` - Designates a connection to an Apache Kafka streaming platform.\n\n`KAFKA` Connections use the following ConnectionParameters.\n\n- Required: `KAFKA_BOOTSTRAP_SERVERS` .\n- Optional: `KAFKA_SSL_ENABLED` , `KAFKA_CUSTOM_CERT` , `KAFKA_SKIP_CUSTOM_CERT_VALIDATION` . These parameters are used to configure SSL with `KAFKA` .\n- Optional: `KAFKA_CLIENT_KEYSTORE` , `KAFKA_CLIENT_KEYSTORE_PASSWORD` , `KAFKA_CLIENT_KEY_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD` . These parameters are used to configure TLS client configuration with SSL in `KAFKA` .\n- Optional: `KAFKA_SASL_MECHANISM` . Can be specified as `SCRAM-SHA-512` , `GSSAPI` , or `AWS_MSK_IAM` .\n- Optional: `KAFKA_SASL_SCRAM_USERNAME` , `KAFKA_SASL_SCRAM_PASSWORD` , `ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD` . These parameters are used to configure SASL/SCRAM-SHA-512 authentication with `KAFKA` .\n- Optional: `KAFKA_SASL_GSSAPI_KEYTAB` , `KAFKA_SASL_GSSAPI_KRB5_CONF` , `KAFKA_SASL_GSSAPI_SERVICE` , `KAFKA_SASL_GSSAPI_PRINCIPAL` . These parameters are used to configure SASL/GSSAPI authentication with `KAFKA` .\n- `MONGODB` - Designates a connection to a MongoDB document database.\n\n`MONGODB` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `NETWORK` - Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).\n\n`NETWORK` Connections do not require ConnectionParameters. 
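The serverless cache snapshot settings described above ( `DailySnapshotTime` and `SnapshotRetentionLimit` ) combine as in the following minimal sketch; the cache name and values are illustrative, and the HH:MM time format is an assumption:

```yaml
# Hypothetical serverless cache with daily snapshots, combining the
# DailySnapshotTime and SnapshotRetentionLimit properties described above.
Resources:
  ExampleServerlessCache:
    Type: AWS::ElastiCache::ServerlessCache
    Properties:
      ServerlessCacheName: example-cache
      Engine: redis
      DailySnapshotTime: "04:00"     # assumed HH:MM (UTC) format
      SnapshotRetentionLimit: 7      # keep one week of snapshots
```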
Instead, provide a PhysicalConnectionRequirements.\n- `MARKETPLACE` - Uses configuration settings contained in a connector purchased from AWS Marketplace to read from and write to data stores that are not natively supported by AWS Glue .\n\n`MARKETPLACE` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTOR_TYPE` , `CONNECTOR_URL` , `CONNECTOR_CLASS_NAME` , `CONNECTION_URL` .\n- Required for `JDBC` `CONNECTOR_TYPE` connections: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `CUSTOM` - Uses configuration settings contained in a custom connector to read from and write to data stores that are not natively supported by AWS Glue .\n\n`SFTP` is not supported.\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue , consult [AWS Glue connection properties](https://docs.aws.amazon.com/glue/latest/dg/connection-defining.html) .\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue Studio, consult [Using connectors and connections](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html) .",
+    "ConnectionType": "The type of the connection. Currently, these types are supported:\n\n- `JDBC` - Designates a connection to a database through Java Database Connectivity (JDBC).\n\n`JDBC` Connections use the following ConnectionParameters.\n\n- Required: All of ( `HOST` , `PORT` , `JDBC_ENGINE` ) or `JDBC_CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- Optional: `JDBC_ENFORCE_SSL` , `CUSTOM_JDBC_CERT` , `CUSTOM_JDBC_CERT_STRING` , `SKIP_CUSTOM_JDBC_CERT_VALIDATION` . These parameters are used to configure SSL with JDBC.\n- `KAFKA` - Designates a connection to an Apache Kafka streaming platform.\n\n`KAFKA` Connections use the following ConnectionParameters.\n\n- Required: `KAFKA_BOOTSTRAP_SERVERS` .\n- Optional: `KAFKA_SSL_ENABLED` , `KAFKA_CUSTOM_CERT` , `KAFKA_SKIP_CUSTOM_CERT_VALIDATION` . These parameters are used to configure SSL with `KAFKA` .\n- Optional: `KAFKA_CLIENT_KEYSTORE` , `KAFKA_CLIENT_KEYSTORE_PASSWORD` , `KAFKA_CLIENT_KEY_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD` . These parameters are used to configure TLS client configuration with SSL in `KAFKA` .\n- Optional: `KAFKA_SASL_MECHANISM` . Can be specified as `SCRAM-SHA-512` , `GSSAPI` , or `AWS_MSK_IAM` .\n- Optional: `KAFKA_SASL_SCRAM_USERNAME` , `KAFKA_SASL_SCRAM_PASSWORD` , `ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD` . These parameters are used to configure SASL/SCRAM-SHA-512 authentication with `KAFKA` .\n- Optional: `KAFKA_SASL_GSSAPI_KEYTAB` , `KAFKA_SASL_GSSAPI_KRB5_CONF` , `KAFKA_SASL_GSSAPI_SERVICE` , `KAFKA_SASL_GSSAPI_PRINCIPAL` . These parameters are used to configure SASL/GSSAPI authentication with `KAFKA` .\n- `MONGODB` - Designates a connection to a MongoDB document database.\n\n`MONGODB` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `SALESFORCE` - Designates a connection to Salesforce using OAuth authentication.\n\n- Requires the `AuthenticationConfiguration` member to be configured.\n- `NETWORK` - Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).\n\n`NETWORK` Connections do not require ConnectionParameters. 
Instead, provide a PhysicalConnectionRequirements.\n- `MARKETPLACE` - Uses configuration settings contained in a connector purchased from AWS Marketplace to read from and write to data stores that are not natively supported by AWS Glue .\n\n`MARKETPLACE` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTOR_TYPE` , `CONNECTOR_URL` , `CONNECTOR_CLASS_NAME` , `CONNECTION_URL` .\n- Required for `JDBC` `CONNECTOR_TYPE` connections: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `CUSTOM` - Uses configuration settings contained in a custom connector to read from and write to data stores that are not natively supported by AWS Glue .\n\n`SFTP` is not supported.\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue , consult [AWS Glue connection properties](https://docs.aws.amazon.com/glue/latest/dg/connection-defining.html) .\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue Studio, consult [Using connectors and connections](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html) .",
    "Description": "The description of the connection.",
    "MatchCriteria": "A list of criteria that can be used in selecting this connection.",
-    "Name": "The name of the connection. Connection will not function as expected without a name.",
-    "PhysicalConnectionRequirements": "A map of physical connection requirements, such as virtual private cloud (VPC) and `SecurityGroup` , that are needed to successfully make this connection."
+    "Name": "The name of the connection.",
+    "PhysicalConnectionRequirements": "The physical connection requirements, such as virtual private cloud (VPC) and `SecurityGroup` , that are needed to successfully make this connection."
  },
  "AWS::Glue::Connection PhysicalConnectionRequirements": {
-    "AvailabilityZone": "The connection's Availability Zone. This field is redundant because the specified subnet implies the Availability Zone to be used. Currently the field must be populated, but it will be deprecated in the future.",
+    "AvailabilityZone": "The connection's Availability Zone.",
    "SecurityGroupIdList": "The security group ID list used by the connection.",
    "SubnetId": "The subnet ID used by the connection."
  },
@@ -16628,7 +16697,7 @@
    "ExecutionProperty": "The maximum number of concurrent runs that are allowed for this job.",
    "GlueVersion": "Glue version determines the versions of Apache Spark and Python that AWS Glue supports. The Python version indicates the version supported for jobs of type Spark.\n\nFor more information about the available AWS Glue versions and corresponding Spark and Python versions, see [Glue version](https://docs.aws.amazon.com/glue/latest/dg/add-job.html) in the developer guide.\n\nJobs that are created without specifying a Glue version default to Glue 0.9.",
    "LogUri": "This field is reserved for future use.",
-    "MaintenanceWindow": "",
+    "MaintenanceWindow": "This field specifies a day of the week and hour for a maintenance window for streaming jobs. AWS Glue periodically performs maintenance activities. During these maintenance windows, AWS Glue will need to restart your streaming jobs.\n\nAWS Glue will restart the job within 3 hours of the specified maintenance window. For instance, if you set up the maintenance window for Monday at 10:00AM GMT, your jobs will be restarted between 10:00AM GMT and 1:00PM GMT.",
    "MaxCapacity": "The number of AWS Glue data processing units (DPUs) that can be allocated when this job runs. 
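A minimal sketch of a `JDBC` connection that satisfies the required `ConnectionType` parameters listed above ( `JDBC_CONNECTION_URL` plus `SECRET_ID` ); the endpoint, secret name, subnet, and security group are hypothetical:

```yaml
# Hypothetical JDBC connection following the ConnectionType rules above:
# JDBC_CONNECTION_URL plus SECRET_ID satisfy the two "Required" bullets.
Resources:
  ExampleJdbcConnection:
    Type: AWS::Glue::Connection
    Properties:
      CatalogId: !Ref AWS::AccountId
      ConnectionInput:
        Name: example-jdbc-connection
        ConnectionType: JDBC
        ConnectionProperties:
          JDBC_CONNECTION_URL: jdbc:mysql://example-host:3306/exampledb
          SECRET_ID: example/glue/jdbc-credentials
        PhysicalConnectionRequirements:
          SubnetId: subnet-0123456789abcdef0
          SecurityGroupIdList:
            - sg-0123456789abcdef0
```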
A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory.\n\nDo not set `Max Capacity` if using `WorkerType` and `NumberOfWorkers` .\n\nThe value that can be allocated for `MaxCapacity` depends on whether you are running a Python shell job or an Apache Spark ETL job:\n\n- When you specify a Python shell job ( `JobCommand.Name` =\"pythonshell\"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.\n- When you specify an Apache Spark ETL job ( `JobCommand.Name` =\"glueetl\"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.", "MaxRetries": "The maximum number of times to retry this job after a JobRun fails.", "Name": "The name you assign to this job definition.", @@ -16953,14 +17022,14 @@ }, "AWS::Grafana::Workspace": { "AccountAccessType": "Specifies whether the workspace can access AWS resources in this AWS account only, or whether it can also access AWS resources in other accounts in the same organization. If this is `ORGANIZATION` , the `OrganizationalUnits` parameter specifies which organizational units the workspace can access.", - "AuthenticationProviders": "Specifies whether this workspace uses SAML 2.0, AWS IAM Identity Center , or both to authenticate users for using the Grafana console within a workspace. For more information, see [User authentication in Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/authentication-in-AMG.html) .", + "AuthenticationProviders": "Specifies whether this workspace uses SAML 2.0, AWS IAM Identity Center , or both to authenticate users for using the Grafana console within a workspace. For more information, see [User authentication in Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/authentication-in-AMG.html) .\n\n*Allowed Values* : `AWS_SSO | SAML`", "ClientToken": "A unique, case-sensitive, user-provided identifier to ensure the idempotency of the request.", "DataSources": "Specifies the AWS data sources that have been configured to have IAM roles and permissions created to allow Amazon Managed Grafana to read data from these sources.\n\nThis list is only used when the workspace was created through the AWS console, and the `permissionType` is `SERVICE_MANAGED` .", "Description": "The user-defined description of the workspace.", "GrafanaVersion": "Specifies the version of Grafana to support in the workspace. 
Defaults to the latest version on create (for example, 9.4), or the current version of the workspace on update.\n\nCan only be used to upgrade (for example, from 8.4 to 9.4), not downgrade (for example, from 9.4 to 8.4).\n\nTo know what versions are available to upgrade to for a specific workspace, see the [ListVersions](https://docs.aws.amazon.com/grafana/latest/APIReference/API_ListVersions.html) operation.", "Name": "The name of the workspace.", "NetworkAccessControl": "The configuration settings for network access to your workspace.", - "NotificationDestinations": "The AWS notification channels that Amazon Managed Grafana can automatically create IAM roles and permissions for, to allow Amazon Managed Grafana to use these channels.", + "NotificationDestinations": "The AWS notification channels that Amazon Managed Grafana can automatically create IAM roles and permissions for, to allow Amazon Managed Grafana to use these channels.\n\n*AllowedValues* : `SNS`", "OrganizationRoleName": "The name of the IAM role that is used to access resources through Organizations.", "OrganizationalUnits": "Specifies the organizational units that this workspace is allowed to use data sources from, if this workspace is in an account that is part of an organization.", "PermissionType": "If this is `SERVICE_MANAGED` , and the workplace was created through the Amazon Managed Grafana console, then Amazon Managed Grafana automatically creates the IAM roles and provisions the permissions that the workspace needs to use AWS data sources and notification channels.\n\nIf this is `CUSTOMER_MANAGED` , you must manage those roles and permissions yourself.\n\nIf you are working with a workspace in a member account of an organization and that account is not a delegated administrator account, and you want the workspace to access data sources in other AWS accounts in the organization, this parameter must be set to `CUSTOMER_MANAGED` .\n\nFor more information about converting between customer and service managed, see [Managing permissions for data sources and notification channels](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-datasource-and-notification.html) . For more information about the roles and permissions that must be managed for customer managed workspaces, see [Amazon Managed Grafana permissions and policies for AWS data sources and notification channels](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-manage-permissions.html)", @@ -17623,7 +17692,7 @@ }, "AWS::GuardDuty::Detector CFNFeatureConfiguration": { "AdditionalConfiguration": "Information about the additional configuration of a feature in your account.", - "Name": "Name of the feature.", + "Name": "Name of the feature. For a list of allowed values, see [DetectorFeatureConfiguration](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_DetectorFeatureConfiguration.html#guardduty-Type-DetectorFeatureConfiguration-name) in the *GuardDuty API Reference* .", "Status": "Status of the feature configuration." }, "AWS::GuardDuty::Detector CFNKubernetesAuditLogsConfiguration": { @@ -17642,8 +17711,8 @@ "EbsVolumes": "Describes the configuration for scanning EBS volumes as data source." }, "AWS::GuardDuty::Detector TagItem": { - "Key": "The tag value.", - "Value": "The tag key." + "Key": "The tag key.", + "Value": "The tag value." 
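A minimal sketch of a Grafana workspace that uses the allowed values documented above ( `AWS_SSO` authentication, the `SNS` notification channel, and service-managed permissions); the workspace name is hypothetical:

```yaml
# Hypothetical workspace using the Allowed Values documented above.
Resources:
  ExampleWorkspace:
    Type: AWS::Grafana::Workspace
    Properties:
      Name: example-workspace
      AccountAccessType: CURRENT_ACCOUNT
      AuthenticationProviders: [AWS_SSO]
      PermissionType: SERVICE_MANAGED
      NotificationDestinations: [SNS]
```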
}, "AWS::GuardDuty::Filter": { "Action": "Specifies the action that is to be applied to the findings that match the filter.", @@ -17669,11 +17738,11 @@ "NotEquals": "Represents a *not equal* ** condition to be applied to a single field when querying for findings." }, "AWS::GuardDuty::Filter FindingCriteria": { - "Criterion": "Represents a map of finding properties that match specified conditions and values when querying findings.\n\nFor information about JSON criterion mapping to their console equivalent, see [Finding criteria](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_filter-findings.html#filter_criteria) . The following are the available criterion:\n\n- accountId\n- id\n- region\n- severity\n\nTo filter on the basis of severity, API and CFN use the following input list for the condition:\n\n- *Low* : `[\"1\", \"2\", \"3\"]`\n- *Medium* : `[\"4\", \"5\", \"6\"]`\n- *High* : `[\"7\", \"8\", \"9\"]`\n\nFor more information, see [Severity levels for GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html#guardduty_findings-severity) .\n- type\n- updatedAt\n\nType: ISO 8601 string format: YYYY-MM-DDTHH:MM:SS.SSSZ or YYYY-MM-DDTHH:MM:SSZ depending on whether the value contains milliseconds.\n- resource.accessKeyDetails.accessKeyId\n- resource.accessKeyDetails.principalId\n- resource.accessKeyDetails.userName\n- resource.accessKeyDetails.userType\n- resource.instanceDetails.iamInstanceProfile.id\n- resource.instanceDetails.imageId\n- resource.instanceDetails.instanceId\n- resource.instanceDetails.tags.key\n- resource.instanceDetails.tags.value\n- resource.instanceDetails.networkInterfaces.ipv6Addresses\n- resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress\n- resource.instanceDetails.networkInterfaces.publicDnsName\n- resource.instanceDetails.networkInterfaces.publicIp\n- resource.instanceDetails.networkInterfaces.securityGroups.groupId\n- resource.instanceDetails.networkInterfaces.securityGroups.groupName\n- resource.instanceDetails.networkInterfaces.subnetId\n- resource.instanceDetails.networkInterfaces.vpcId\n- resource.instanceDetails.outpostArn\n- resource.resourceType\n- resource.s3BucketDetails.publicAccess.effectivePermissions\n- resource.s3BucketDetails.name\n- resource.s3BucketDetails.tags.key\n- resource.s3BucketDetails.tags.value\n- resource.s3BucketDetails.type\n- service.action.actionType\n- service.action.awsApiCallAction.api\n- service.action.awsApiCallAction.callerType\n- service.action.awsApiCallAction.errorCode\n- service.action.awsApiCallAction.remoteIpDetails.city.cityName\n- service.action.awsApiCallAction.remoteIpDetails.country.countryName\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.awsApiCallAction.remoteIpDetails.organization.asn\n- service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg\n- service.action.awsApiCallAction.serviceName\n- service.action.dnsRequestAction.domain\n- service.action.networkConnectionAction.blocked\n- service.action.networkConnectionAction.connectionDirection\n- service.action.networkConnectionAction.localPortDetails.port\n- service.action.networkConnectionAction.protocol\n- service.action.networkConnectionAction.remoteIpDetails.city.cityName\n- service.action.networkConnectionAction.remoteIpDetails.country.countryName\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV4\n- service.action.networkConnectionAction.remoteIpDetails.organization.asn\n- 
service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg\n- service.action.networkConnectionAction.remotePortDetails.port\n- service.action.awsApiCallAction.remoteAccountDetails.affiliated\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.kubernetesApiCallAction.requestUri\n- service.action.networkConnectionAction.localIpDetails.ipAddressV4\n- service.action.networkConnectionAction.protocol\n- service.action.awsApiCallAction.serviceName\n- service.action.awsApiCallAction.remoteAccountDetails.accountId\n- service.additionalInfo.threatListName\n- service.resourceRole\n- resource.eksClusterDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.namespace\n- resource.kubernetesDetails.kubernetesUserDetails.username\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.image\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.imagePrefix\n- service.ebsVolumeScanDetails.scanId\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.name\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.severity\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.filePaths.hash\n- resource.ecsClusterDetails.name\n- resource.ecsClusterDetails.taskDetails.containers.image\n- resource.ecsClusterDetails.taskDetails.definitionArn\n- resource.containerDetails.image\n- resource.rdsDbInstanceDetails.dbInstanceIdentifier\n- resource.rdsDbInstanceDetails.dbClusterIdentifier\n- resource.rdsDbInstanceDetails.engine\n- resource.rdsDbUserDetails.user\n- resource.rdsDbInstanceDetails.tags.key\n- resource.rdsDbInstanceDetails.tags.value\n- service.runtimeDetails.process.executableSha256\n- service.runtimeDetails.process.name\n- service.runtimeDetails.process.name\n- resource.lambdaDetails.functionName\n- resource.lambdaDetails.functionArn\n- resource.lambdaDetails.tags.key\n- resource.lambdaDetails.tags.value" + "Criterion": "Represents a map of finding properties that match specified conditions and values when querying findings.\n\nFor information about JSON criterion mapping to their console equivalent, see [Finding criteria](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_filter-findings.html#filter_criteria) . 
The following are the available criteria:\n\n- accountId\n- id\n- region\n- severity\n\nTo filter on the basis of severity, the API and AWS CLI use the following input list for the `FindingCriteria` condition:\n\n- *Low* : `[\"1\", \"2\", \"3\"]`\n- *Medium* : `[\"4\", \"5\", \"6\"]`\n- *High* : `[\"7\", \"8\", \"9\"]`\n\nFor more information, see [Severity levels for GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html#guardduty_findings-severity) in the *Amazon GuardDuty User Guide* .\n- type\n- updatedAt\n\nType: ISO 8601 string format: `YYYY-MM-DDTHH:MM:SS.SSSZ` or `YYYY-MM-DDTHH:MM:SSZ` depending on whether the value contains milliseconds.\n- resource.accessKeyDetails.accessKeyId\n- resource.accessKeyDetails.principalId\n- resource.accessKeyDetails.userName\n- resource.accessKeyDetails.userType\n- resource.instanceDetails.iamInstanceProfile.id\n- resource.instanceDetails.imageId\n- resource.instanceDetails.instanceId\n- resource.instanceDetails.tags.key\n- resource.instanceDetails.tags.value\n- resource.instanceDetails.networkInterfaces.ipv6Addresses\n- resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress\n- resource.instanceDetails.networkInterfaces.publicDnsName\n- resource.instanceDetails.networkInterfaces.publicIp\n- resource.instanceDetails.networkInterfaces.securityGroups.groupId\n- resource.instanceDetails.networkInterfaces.securityGroups.groupName\n- resource.instanceDetails.networkInterfaces.subnetId\n- resource.instanceDetails.networkInterfaces.vpcId\n- resource.instanceDetails.outpostArn\n- resource.resourceType\n- resource.s3BucketDetails.publicAccess.effectivePermissions\n- resource.s3BucketDetails.name\n- resource.s3BucketDetails.tags.key\n- resource.s3BucketDetails.tags.value\n- resource.s3BucketDetails.type\n- service.action.actionType\n- service.action.awsApiCallAction.api\n- service.action.awsApiCallAction.callerType\n- service.action.awsApiCallAction.errorCode\n- service.action.awsApiCallAction.remoteIpDetails.city.cityName\n- service.action.awsApiCallAction.remoteIpDetails.country.countryName\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV6\n- service.action.awsApiCallAction.remoteIpDetails.organization.asn\n- service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg\n- service.action.awsApiCallAction.serviceName\n- service.action.dnsRequestAction.domain\n- service.action.dnsRequestAction.domainWithSuffix\n- service.action.networkConnectionAction.blocked\n- service.action.networkConnectionAction.connectionDirection\n- service.action.networkConnectionAction.localPortDetails.port\n- service.action.networkConnectionAction.protocol\n- service.action.networkConnectionAction.remoteIpDetails.city.cityName\n- service.action.networkConnectionAction.remoteIpDetails.country.countryName\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV4\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV6\n- service.action.networkConnectionAction.remoteIpDetails.organization.asn\n- service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg\n- service.action.networkConnectionAction.remotePortDetails.port\n- service.action.awsApiCallAction.remoteAccountDetails.affiliated\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV6\n- service.action.kubernetesApiCallAction.namespace\n- 
service.action.kubernetesApiCallAction.remoteIpDetails.organization.asn\n- service.action.kubernetesApiCallAction.requestUri\n- service.action.kubernetesApiCallAction.statusCode\n- service.action.networkConnectionAction.localIpDetails.ipAddressV4\n- service.action.networkConnectionAction.localIpDetails.ipAddressV6\n- service.action.networkConnectionAction.protocol\n- service.action.awsApiCallAction.serviceName\n- service.action.awsApiCallAction.remoteAccountDetails.accountId\n- service.additionalInfo.threatListName\n- service.resourceRole\n- resource.eksClusterDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.namespace\n- resource.kubernetesDetails.kubernetesUserDetails.username\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.image\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.imagePrefix\n- service.ebsVolumeScanDetails.scanId\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.name\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.severity\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.filePaths.hash\n- service.malwareScanDetails.threats.name\n- resource.ecsClusterDetails.name\n- resource.ecsClusterDetails.taskDetails.containers.image\n- resource.ecsClusterDetails.taskDetails.definitionArn\n- resource.containerDetails.image\n- resource.rdsDbInstanceDetails.dbInstanceIdentifier\n- resource.rdsDbInstanceDetails.dbClusterIdentifier\n- resource.rdsDbInstanceDetails.engine\n- resource.rdsDbUserDetails.user\n- resource.rdsDbInstanceDetails.tags.key\n- resource.rdsDbInstanceDetails.tags.value\n- service.runtimeDetails.process.executableSha256\n- service.runtimeDetails.process.name\n- service.runtimeDetails.process.name\n- resource.lambdaDetails.functionName\n- resource.lambdaDetails.functionArn\n- resource.lambdaDetails.tags.key\n- resource.lambdaDetails.tags.value" }, "AWS::GuardDuty::Filter TagItem": { - "Key": "", - "Value": "" + "Key": "The tag key.", + "Value": "The tag value." }, "AWS::GuardDuty::IPSet": { "Activate": "Indicates whether or not GuardDuty uses the `IPSet` .", @@ -17684,12 +17753,39 @@ "Tags": "The tags to be added to a new IP set resource. Each tag consists of a key and an optional value, both of which you define.\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) ." }, "AWS::GuardDuty::IPSet TagItem": { - "Key": "", - "Value": "" + "Key": "The tag key.", + "Value": "The tag value." + }, + "AWS::GuardDuty::MalwareProtectionPlan": { + "Actions": "Specifies the action that is to be applied to the Malware Protection plan resource.", + "ProtectedResource": "Information about the protected resource. Presently, `S3Bucket` is the only supported protected resource.", + "Role": "IAM role that includes the permissions required to scan and (optionally) add tags to the associated protected resource.", + "Tags": "The tags to be added to the created Malware Protection plan resource. Each tag consists of a key and an optional value, both of which you need to specify." + }, + "AWS::GuardDuty::MalwareProtectionPlan CFNActions": { + "Tagging": "Contains information about tagging status of the Malware Protection plan resource." + }, + "AWS::GuardDuty::MalwareProtectionPlan CFNProtectedResource": { + "S3Bucket": "Information about the protected S3 bucket resource." 
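A minimal sketch of a filter that uses the severity input list documented above to archive high-severity findings; the detector ID is a hypothetical placeholder:

```yaml
# Hypothetical filter matching high-severity findings, using the severity
# input list documented above (High = "7", "8", "9").
Resources:
  ExampleHighSeverityFilter:
    Type: AWS::GuardDuty::Filter
    Properties:
      DetectorId: 12abc34d567e8fa901bc2d34e56789f0   # placeholder detector ID
      Name: high-severity-findings
      Action: ARCHIVE
      Rank: 1
      Description: Matches findings with severity 7, 8, or 9.
      FindingCriteria:
        Criterion:
          severity:
            Eq: ["7", "8", "9"]
```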
+  },
+  "AWS::GuardDuty::MalwareProtectionPlan CFNStatusReasons": {
+    "Code": "The status code of the Malware Protection plan. For more information, see [Malware Protection plan resource status](https://docs.aws.amazon.com/guardduty/latest/ug/malware-protection-s3-bucket-status-gdu.html) in the *GuardDuty User Guide* .",
+    "Message": "Issue message that specifies the reason. For information about potential troubleshooting steps, see [Troubleshooting Malware Protection for S3 status issues](https://docs.aws.amazon.com/guardduty/latest/ug/troubleshoot-s3-malware-protection-status-errors.html) in the *GuardDuty User Guide* ."
+  },
+  "AWS::GuardDuty::MalwareProtectionPlan CFNTagging": {
+    "Status": "Indicates whether or not you chose GuardDuty to add a predefined tag to the scanned S3 object."
+  },
+  "AWS::GuardDuty::MalwareProtectionPlan S3Bucket": {
+    "BucketName": "Name of the S3 bucket.",
+    "ObjectPrefixes": "Information about the specified object prefixes. An S3 object will be scanned only if it belongs to any of the specified object prefixes."
+  },
+  "AWS::GuardDuty::MalwareProtectionPlan TagItem": {
+    "Key": "The tag key.",
+    "Value": "The tag value."
  },
  "AWS::GuardDuty::Master": {
    "DetectorId": "The unique ID of the detector of the GuardDuty member account.",
-    "InvitationId": "The ID of the invitation that is sent to the account designated as a member account. You can find the invitation ID by using the ListInvitation action of the GuardDuty API.",
+    "InvitationId": "The ID of the invitation that is sent to the account designated as a member account. You can find the invitation ID by running the [ListInvitations](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_ListInvitations.html) API operation, described in the *GuardDuty API Reference* .",
    "MasterId": "The AWS account ID of the account designated as the GuardDuty administrator account."
  },
  "AWS::GuardDuty::Member": {
@@ -17709,8 +17805,8 @@
    "Tags": "The tags to be added to a new threat list resource. Each tag consists of a key and an optional value, both of which you define.\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) ."
  },
  "AWS::GuardDuty::ThreatIntelSet TagItem": {
-    "Key": "",
-    "Value": ""
+    "Key": "The tag key.",
+    "Value": "The tag value."
  },
  "AWS::HealthImaging::Datastore": {
    "DatastoreName": "The data store name.",
@@ -20717,8 +20813,8 @@
    "EnableKeyRotation": "Enables automatic rotation of the key material for the specified KMS key. By default, automatic key rotation is not enabled.\n\nAWS KMS supports automatic rotation only for symmetric encryption KMS keys ( `KeySpec` = `SYMMETRIC_DEFAULT` ). For asymmetric KMS keys, HMAC KMS keys, and KMS keys with Origin `EXTERNAL` , omit the `EnableKeyRotation` property or set it to `false` .\n\nTo enable automatic key rotation of the key material for a multi-Region KMS key, set `EnableKeyRotation` to `true` on the primary key (created by using `AWS::KMS::Key` ). AWS KMS copies the rotation status to all replica keys. For details, see [Rotating multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-manage.html#multi-region-rotate) in the *AWS Key Management Service Developer Guide* .\n\nWhen you enable automatic rotation, AWS KMS automatically creates new key material for the KMS key one year after the enable date and every year thereafter. AWS KMS retains all key material until you delete the KMS key. 
For detailed information about automatic key rotation, see [Rotating KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html) in the *AWS Key Management Service Developer Guide* .", "Enabled": "Specifies whether the KMS key is enabled. Disabled KMS keys cannot be used in cryptographic operations.\n\nWhen `Enabled` is `true` , the *key state* of the KMS key is `Enabled` . When `Enabled` is `false` , the key state of the KMS key is `Disabled` . The default value is `true` .\n\nThe actual key state of the KMS key might be affected by actions taken outside of CloudFormation, such as running the [EnableKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_EnableKey.html) , [DisableKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_DisableKey.html) , or [ScheduleKeyDeletion](https://docs.aws.amazon.com/kms/latest/APIReference/API_ScheduleKeyDeletion.html) operations.\n\nFor information about the key states of a KMS key, see [Key state: Effect on your KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) in the *AWS Key Management Service Developer Guide* .", "KeyPolicy": "The key policy to attach to the KMS key.\n\nIf you provide a key policy, it must meet the following criteria:\n\n- The key policy must allow the caller to make a subsequent [PutKeyPolicy](https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html) request on the KMS key. This reduces the risk that the KMS key becomes unmanageable. For more information, see [Default key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-root-enable-iam) in the *AWS Key Management Service Developer Guide* . (To omit this condition, set `BypassPolicyLockoutSafetyCheck` to true.)\n- Each statement in the key policy must contain one or more principals. The principals in the key policy must exist and be visible to AWS KMS . When you create a new AWS principal (for example, an IAM user or role), you might need to enforce a delay before including the new principal in a key policy because the new principal might not be immediately visible to AWS KMS . For more information, see [Changes that I make are not always immediately visible](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency) in the *AWS Identity and Access Management User Guide* .\n\nIf you do not provide a key policy, AWS KMS attaches a default key policy to the KMS key. For more information, see [Default key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default) in the *AWS Key Management Service Developer Guide* .\n\nA key policy document can include only the following characters:\n\n- Printable ASCII characters\n- Printable characters in the Basic Latin and Latin-1 Supplement character set\n- The tab ( `\\u0009` ), line feed ( `\\u000A` ), and carriage return ( `\\u000D` ) special characters\n\n*Minimum* : `1`\n\n*Maximum* : `32768`", - "KeySpec": "Specifies the type of KMS key to create. The default value, `SYMMETRIC_DEFAULT` , creates a KMS key with a 256-bit symmetric key for encryption and decryption. In China Regions, `SYMMETRIC_DEFAULT` creates a 128-bit symmetric key that uses SM4 encryption. You can't change the `KeySpec` value after the KMS key is created. 
For help choosing a key spec for your KMS key, see [Choosing a KMS key type](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose.html) in the *AWS Key Management Service Developer Guide* .\n\nThe `KeySpec` property determines the type of key material in the KMS key and the algorithms that the KMS key supports. To further restrict the algorithms that can be used with the KMS key, use a condition key in its key policy or IAM policy. For more information, see [AWS KMS condition keys](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms) in the *AWS Key Management Service Developer Guide* .\n\n> If you change the value of the `KeySpec` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. > [AWS services that are integrated with AWS KMS](https://docs.aws.amazon.com/kms/features/#AWS_Service_Integration) use symmetric encryption KMS keys to protect your data. These services do not support encryption with asymmetric KMS keys. For help determining whether a KMS key is asymmetric, see [Identifying asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide* . \n\nAWS KMS supports the following key specs for KMS keys:\n\n- Symmetric encryption key (default)\n\n- `SYMMETRIC_DEFAULT` (AES-256-GCM)\n- HMAC keys (symmetric)\n\n- `HMAC_224`\n- `HMAC_256`\n- `HMAC_384`\n- `HMAC_512`\n- Asymmetric RSA key pairs\n\n- `RSA_2048`\n- `RSA_3072`\n- `RSA_4096`\n- Asymmetric NIST-recommended elliptic curve key pairs\n\n- `ECC_NIST_P256` (secp256r1)\n- `ECC_NIST_P384` (secp384r1)\n- `ECC_NIST_P521` (secp521r1)\n- Other asymmetric elliptic curve key pairs\n\n- `ECC_SECG_P256K1` (secp256k1), commonly used for cryptocurrencies.\n- SM2 key pairs (China Regions only)\n\n- `SM2`", - "KeyUsage": "Determines the [cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations) for which you can use the KMS key. The default value is `ENCRYPT_DECRYPT` . This property is required for asymmetric KMS keys and HMAC KMS keys. You can't change the `KeyUsage` value after the KMS key is created.\n\n> If you change the value of the `KeyUsage` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. \n\nSelect only one valid value.\n\n- For symmetric encryption KMS keys, omit the property or specify `ENCRYPT_DECRYPT` .\n- For asymmetric KMS keys with RSA key material, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For asymmetric KMS keys with ECC key material, specify `SIGN_VERIFY` .\n- For asymmetric KMS keys with SM2 (China Regions only) key material, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For HMAC KMS keys, specify `GENERATE_VERIFY_MAC` .", + "KeySpec": "Specifies the type of KMS key to create. The default value, `SYMMETRIC_DEFAULT` , creates a KMS key with a 256-bit symmetric key for encryption and decryption. 
In China Regions, `SYMMETRIC_DEFAULT` creates a 128-bit symmetric key that uses SM4 encryption. You can't change the `KeySpec` value after the KMS key is created. For help choosing a key spec for your KMS key, see [Choosing a KMS key type](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose.html) in the *AWS Key Management Service Developer Guide* .\n\nThe `KeySpec` property determines the type of key material in the KMS key and the algorithms that the KMS key supports. To further restrict the algorithms that can be used with the KMS key, use a condition key in its key policy or IAM policy. For more information, see [AWS KMS condition keys](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms) in the *AWS Key Management Service Developer Guide* .\n\n> If you change the value of the `KeySpec` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. > [AWS services that are integrated with AWS KMS](https://docs.aws.amazon.com/kms/features/#AWS_Service_Integration) use symmetric encryption KMS keys to protect your data. These services do not support encryption with asymmetric KMS keys. For help determining whether a KMS key is asymmetric, see [Identifying asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide* . \n\nAWS KMS supports the following key specs for KMS keys:\n\n- Symmetric encryption key (default)\n\n- `SYMMETRIC_DEFAULT` (AES-256-GCM)\n- HMAC keys (symmetric)\n\n- `HMAC_224`\n- `HMAC_256`\n- `HMAC_384`\n- `HMAC_512`\n- Asymmetric RSA key pairs (encryption and decryption *or* signing and verification)\n\n- `RSA_2048`\n- `RSA_3072`\n- `RSA_4096`\n- Asymmetric NIST-recommended elliptic curve key pairs (signing and verification *or* deriving shared secrets)\n\n- `ECC_NIST_P256` (secp256r1)\n- `ECC_NIST_P384` (secp384r1)\n- `ECC_NIST_P521` (secp521r1)\n- Other asymmetric elliptic curve key pairs (signing and verification)\n\n- `ECC_SECG_P256K1` (secp256k1), commonly used for cryptocurrencies.\n- SM2 key pairs (encryption and decryption *or* signing and verification *or* deriving shared secrets)\n\n- `SM2` (China Regions only)", + "KeyUsage": "Determines the [cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations) for which you can use the KMS key. The default value is `ENCRYPT_DECRYPT` . This property is required for asymmetric KMS keys and HMAC KMS keys. You can't change the `KeyUsage` value after the KMS key is created.\n\n> If you change the value of the `KeyUsage` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. 
\n\nSelect only one valid value.\n\n- For symmetric encryption KMS keys, omit the parameter or specify `ENCRYPT_DECRYPT` .\n- For HMAC KMS keys (symmetric), specify `GENERATE_VERIFY_MAC` .\n- For asymmetric KMS keys with RSA key pairs, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For asymmetric KMS keys with NIST-recommended elliptic curve key pairs, specify `SIGN_VERIFY` or `KEY_AGREEMENT` .\n- For asymmetric KMS keys with `ECC_SECG_P256K1` key pairs, specify `SIGN_VERIFY` .\n- For asymmetric KMS keys with SM2 key pairs (China Regions only), specify `ENCRYPT_DECRYPT` , `SIGN_VERIFY` , or `KEY_AGREEMENT` .",
    "MultiRegion": "Creates a multi-Region primary key that you can replicate in other AWS Regions . You can't change the `MultiRegion` value after the KMS key is created.\n\nFor a list of AWS Regions in which multi-Region keys are supported, see [Multi-Region keys in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the ** .\n\n> If you change the value of the `MultiRegion` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. \n\nFor a multi-Region key, set this property to `true` . For a single-Region key, omit this property or set it to `false` . The default value is `false` .\n\n*Multi-Region keys* are an AWS KMS feature that lets you create multiple interoperable KMS keys in different AWS Regions . Because these KMS keys have the same key ID, key material, and other metadata, you can use them to encrypt data in one AWS Region and decrypt it in a different AWS Region without making a cross-Region call or exposing the plaintext data. For more information, see [Multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide* .\n\nYou can create a symmetric encryption, HMAC, or asymmetric multi-Region KMS key, and you can create a multi-Region key with imported key material. However, you cannot create a multi-Region key in a custom key store.\n\nTo create a replica of this primary key in a different AWS Region , create an [AWS::KMS::ReplicaKey](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html) resource in a CloudFormation stack in the replica Region. Specify the key ARN of this primary key.",
    "Origin": "The source of the key material for the KMS key. You cannot change the origin after you create the KMS key. The default is `AWS_KMS` , which means that AWS KMS creates the key material.\n\nTo [create a KMS key with no key material](https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys-create-cmk.html) (for imported key material), set this value to `EXTERNAL` . For more information about importing key material into AWS KMS , see [Importing Key Material](https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys.html) in the *AWS Key Management Service Developer Guide* .\n\nYou can ignore `ENABLED` when Origin is `EXTERNAL` . When a KMS key with Origin `EXTERNAL` is created, the key state is `PENDING_IMPORT` and `ENABLED` is `false` . After you import the key material, `ENABLED` is updated to `true` . 
The KMS key can then be used for cryptographic operations.\n\n> AWS CloudFormation doesn't support creating an `Origin` parameter with a value of `AWS_CLOUDHSM` or `EXTERNAL_KEY_STORE` .",
    "PendingWindowInDays": "Specifies the number of days in the waiting period before AWS KMS deletes a KMS key that has been removed from a CloudFormation stack. Enter a value between 7 and 30 days. The default value is 30 days.\n\nWhen you remove a KMS key from a CloudFormation stack, AWS KMS schedules the KMS key for deletion and starts the mandatory waiting period. The `PendingWindowInDays` property determines the length of the waiting period. During the waiting period, the key state of the KMS key is `Pending Deletion` or `Pending Replica Deletion` , which prevents the KMS key from being used in cryptographic operations. When the waiting period expires, AWS KMS permanently deletes the KMS key.\n\nAWS KMS will not delete a [multi-Region primary key](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) that has replica keys. If you remove a multi-Region primary key from a CloudFormation stack, its key state changes to `PendingReplicaDeletion` so it cannot be replicated or used in cryptographic operations. This state can persist indefinitely. When the last of its replica keys is deleted, the key state of the primary key changes to `PendingDeletion` and the waiting period specified by `PendingWindowInDays` begins. When this waiting period expires, AWS KMS deletes the primary key. For details, see [Deleting multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-delete.html) in the *AWS Key Management Service Developer Guide* .\n\nYou cannot use a CloudFormation template to cancel deletion of the KMS key after you remove it from the stack, regardless of the waiting period. If you specify a KMS key in your template, even one with the same name, CloudFormation creates a new KMS key. To cancel deletion of a KMS key, use the AWS KMS console or the [CancelKeyDeletion](https://docs.aws.amazon.com/kms/latest/APIReference/API_CancelKeyDeletion.html) operation.\n\nFor information about the `Pending Deletion` and `Pending Replica Deletion` key states, see [Key state: Effect on your KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) in the *AWS Key Management Service Developer Guide* . 
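A minimal sketch of a symmetric encryption key that combines the `KeySpec` , `KeyUsage` , `EnableKeyRotation` , and `PendingWindowInDays` properties described above; the key policy grants administration to the account root, per the default-policy guidance:

```yaml
# Hypothetical symmetric encryption key tying together the properties above:
# yearly rotation, a 7-day deletion window, and a root-account key policy.
Resources:
  ExampleSymmetricKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Example symmetric key with rotation enabled
      KeySpec: SYMMETRIC_DEFAULT      # default; AES-256-GCM
      KeyUsage: ENCRYPT_DECRYPT       # required pairing for symmetric keys
      EnableKeyRotation: true
      PendingWindowInDays: 7          # minimum waiting period
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowAccountAdministration
            Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action: kms:*
            Resource: "*"
```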
For more information about deleting KMS keys, see the [ScheduleKeyDeletion](https://docs.aws.amazon.com/kms/latest/APIReference/API_ScheduleKeyDeletion.html) operation in the *AWS Key Management Service API Reference* and [Deleting KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html) in the *AWS Key Management Service Developer Guide* .", @@ -21431,6 +21527,7 @@ "AWS::KinesisAnalyticsV2::Application ApplicationConfiguration": { "ApplicationCodeConfiguration": "The code location and type parameters for a Managed Service for Apache Flink application.", "ApplicationSnapshotConfiguration": "Describes whether snapshots are enabled for a Managed Service for Apache Flink application.", + "ApplicationSystemRollbackConfiguration": "", "EnvironmentProperties": "Describes execution properties for a Managed Service for Apache Flink application.", "FlinkApplicationConfiguration": "The creation and update parameters for a Managed Service for Apache Flink application.", "SqlApplicationConfiguration": "The creation and update parameters for a SQL-based Kinesis Data Analytics application.", @@ -21447,6 +21544,9 @@ "AWS::KinesisAnalyticsV2::Application ApplicationSnapshotConfiguration": { "SnapshotsEnabled": "Describes whether snapshots are enabled for a Managed Service for Apache Flink application." }, + "AWS::KinesisAnalyticsV2::Application ApplicationSystemRollbackConfiguration": { + "RollbackEnabled": "" + }, "AWS::KinesisAnalyticsV2::Application CSVMappingParameters": { "RecordColumnDelimiter": "The column delimiter. For example, in a CSV format, a comma (\",\") is the typical column delimiter.", "RecordRowDelimiter": "The row delimiter. For example, in a CSV format, *'\\n'* is the typical row delimiter." }, @@ -21815,7 +21915,8 @@ "RetryOptions": "Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.", "RoleARN": "Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.", "S3BackupMode": "Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).", - "S3Configuration": "Describes the configuration of a destination in Amazon S3." + "S3Configuration": "Describes the configuration of a destination in Amazon S3.", + "SecretsManagerConfiguration": "The configuration that defines how you access secrets for the HTTP endpoint destination." }, "AWS::KinesisFirehose::DeliveryStream HttpEndpointRequestConfiguration": { "CommonAttributes": "Describes the metadata sent to the HTTP endpoint destination.", @@ -21887,6 +21988,7 @@ "S3BackupConfiguration": "The configuration for backup in Amazon S3.", "S3BackupMode": "The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.", "S3Configuration": "The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the `COPY` command to load the data into the Amazon Redshift cluster.
For the Amazon S3 bucket's compression format, don't specify `SNAPPY` or `ZIP` because the Amazon Redshift `COPY` command doesn't support them.", + "SecretsManagerConfiguration": "The configuration that defines how you access secrets for Amazon Redshift.", "Username": "The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have `INSERT` privileges for copying data from the Amazon S3 bucket to the cluster." }, "AWS::KinesisFirehose::DeliveryStream RedshiftRetryOptions": { @@ -21913,6 +22015,11 @@ "TableName": "Specifies the AWS Glue table that contains the column information that constitutes your data schema.\n\n> If the `SchemaConfiguration` request parameter is used as part of invoking the `CreateDeliveryStream` API, then the `TableName` property is required and its value must be specified.", "VersionId": "Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to `LATEST` , Firehose uses the most recent version. This means that any updates to the table are automatically picked up." }, + "AWS::KinesisFirehose::DeliveryStream SecretsManagerConfiguration": { + "Enabled": "Specifies whether you want to use the Secrets Manager feature. When set to `True` , the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When it's set to `False` , Firehose falls back to the credentials in the destination configuration.", + "RoleARN": "Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, Firehose uses the destination-specific role. This parameter is required for Splunk.", + "SecretARN": "The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the delivery stream and role, as Firehose supports cross-account secret access. This parameter is required when *Enabled* is set to `True` ." + }, "AWS::KinesisFirehose::DeliveryStream Serializer": { "OrcSerDe": "A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see [Apache ORC](https://docs.aws.amazon.com/https://orc.apache.org/docs/) .", "ParquetSerDe": "A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see [Apache Parquet](https://docs.aws.amazon.com/https://parquet.apache.org/documentation/latest/) ." }, @@ -21932,6 +22039,7 @@ "S3BackupMode": "Choose an S3 backup mode", "S3Configuration": "", "Schema": "Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views", + "SecretsManagerConfiguration": "The configuration that defines how you access secrets for Snowflake.", "SnowflakeRoleConfiguration": "Optionally configure a Snowflake role. Otherwise, the default user role will be used.", "SnowflakeVpcConfiguration": "The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>.
For more information, see [Amazon PrivateLink & Snowflake](https://docs.aws.amazon.com/https://docs.snowflake.com/en/user-guide/admin-security-privatelink)", "Table": "All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.", @@ -21961,7 +22069,8 @@ "ProcessingConfiguration": "The data processing configuration.", "RetryOptions": "The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.", "S3BackupMode": "Defines how documents should be delivered to Amazon S3. When set to `FailedEventsOnly` , Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to `AllEvents` , Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is `FailedEventsOnly` .\n\nYou can update this backup mode from `FailedEventsOnly` to `AllEvents` . You can't update it from `AllEvents` to `FailedEventsOnly` .", - "S3Configuration": "The configuration for the backup Amazon S3 location." + "S3Configuration": "The configuration for the backup Amazon S3 location.", + "SecretsManagerConfiguration": "The configuration that defines how you access secrets for Splunk." }, "AWS::KinesisFirehose::DeliveryStream SplunkRetryOptions": { "DurationInSeconds": "The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt." }, @@ -26884,7 +26993,8 @@ }, "AWS::OSIS::Pipeline VpcOptions": { "SecurityGroupIds": "A list of security groups associated with the VPC endpoint.", - "SubnetIds": "A list of subnet IDs associated with the VPC endpoint." + "SubnetIds": "A list of subnet IDs associated with the VPC endpoint.", + "VpcEndpointManagement": "Defines whether you or the Amazon OpenSearch Ingestion service create and manage the VPC endpoint configured for the pipeline." }, "AWS::Oam::Link": { "LabelTemplate": "Specify a friendly human-readable name to use to identify this source account when you are viewing data from it in the monitoring account.\n\nYou can include the following variables in your template:\n\n- `$AccountName` is the name of the account\n- `$AccountEmail` is a globally-unique email address, which includes the email domain, such as `mariagarcia@example.com`\n- `$AccountEmailNoDomain` is an email address without the domain name, such as `mariagarcia`", @@ -37523,6 +37633,7 @@ "EnableHttpEndpoint": "Specifies whether to enable the HTTP endpoint for the DB cluster. By default, the HTTP endpoint isn't enabled.\n\nWhen enabled, the HTTP endpoint provides a connectionless web service API (RDS Data API) for running SQL queries on the DB cluster. You can also query your database from inside the RDS console with the RDS query editor.\n\nRDS Data API is supported with the following DB clusters:\n\n- Aurora PostgreSQL Serverless v2 and provisioned\n- Aurora PostgreSQL and Aurora MySQL Serverless v1\n\nFor more information, see [Using RDS Data API](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html) in the *Amazon Aurora User Guide* .\n\nValid for Cluster Type: Aurora DB clusters only", "EnableIAMDatabaseAuthentication": "A value that indicates whether to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts.
By default, mapping is disabled.\n\nFor more information, see [IAM Database Authentication](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html) in the *Amazon Aurora User Guide.*\n\nValid for: Aurora DB clusters only", "Engine": "The name of the database engine to be used for this DB cluster.\n\nValid Values:\n\n- `aurora-mysql`\n- `aurora-postgresql`\n- `mysql`\n- `postgres`\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", + "EngineLifecycleSupport": "The life cycle type for this DB cluster.\n\n> By default, this value is set to `open-source-rds-extended-support` , which enrolls your DB cluster into Amazon RDS Extended Support. At the end of standard support, you can avoid charges for Extended Support by setting the value to `open-source-rds-extended-support-disabled` . In this case, creating the DB cluster will fail if the DB major version is past its end of standard support date. \n\nYou can use this setting to enroll your DB cluster into Amazon RDS Extended Support. With RDS Extended Support, you can run the selected major engine version on your DB cluster past the end of standard support for that engine version. For more information, see the following sections:\n\n- Amazon Aurora (PostgreSQL only) - [Using Amazon RDS Extended Support](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/extended-support.html) in the *Amazon Aurora User Guide*\n- Amazon RDS - [Using Amazon RDS Extended Support](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/extended-support.html) in the *Amazon RDS User Guide*\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters\n\nValid Values: `open-source-rds-extended-support | open-source-rds-extended-support-disabled`\n\nDefault: `open-source-rds-extended-support`", "EngineMode": "The DB engine mode of the DB cluster, either `provisioned` or `serverless` .\n\nThe `serverless` engine mode only applies for Aurora Serverless v1 DB clusters. 
Aurora Serverless v2 DB clusters use the `provisioned` engine mode.\n\nFor information about limitations and requirements for Serverless DB clusters, see the following sections in the *Amazon Aurora User Guide* :\n\n- [Limitations of Aurora Serverless v1](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html#aurora-serverless.limitations)\n- [Requirements for Aurora Serverless v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.requirements.html)\n\nValid for Cluster Type: Aurora DB clusters only", "EngineVersion": "The version number of the database engine to use.\n\nTo list all of the available engine versions for Aurora MySQL version 2 (5.7-compatible) and version 3 (8.0-compatible), use the following command:\n\n`aws rds describe-db-engine-versions --engine aurora-mysql --query \"DBEngineVersions[].EngineVersion\"`\n\nYou can supply either `5.7` or `8.0` to use the default engine version for Aurora MySQL version 2 or version 3, respectively.\n\nTo list all of the available engine versions for Aurora PostgreSQL, use the following command:\n\n`aws rds describe-db-engine-versions --engine aurora-postgresql --query \"DBEngineVersions[].EngineVersion\"`\n\nTo list all of the available engine versions for RDS for MySQL, use the following command:\n\n`aws rds describe-db-engine-versions --engine mysql --query \"DBEngineVersions[].EngineVersion\"`\n\nTo list all of the available engine versions for RDS for PostgreSQL, use the following command:\n\n`aws rds describe-db-engine-versions --engine postgres --query \"DBEngineVersions[].EngineVersion\"`\n\n*Aurora MySQL*\n\nFor information, see [Database engine updates for Amazon Aurora MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.html) in the *Amazon Aurora User Guide* .\n\n*Aurora PostgreSQL*\n\nFor information, see [Amazon Aurora PostgreSQL releases and engine versions](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html) in the *Amazon Aurora User Guide* .\n\n*MySQL*\n\nFor information, see [Amazon RDS for MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.VersionMgmt) in the *Amazon RDS User Guide* .\n\n*PostgreSQL*\n\nFor information, see [Amazon RDS for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts) in the *Amazon RDS User Guide* .\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "GlobalClusterIdentifier": "If you are configuring an Aurora global database cluster and want your Aurora DB cluster to be a secondary member in the global database cluster, specify the global cluster ID of the global database cluster. To define the primary database cluster of the global cluster, use the [AWS::RDS::GlobalCluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-globalcluster.html) resource.\n\nIf you aren't configuring a global database cluster, don't specify this property.\n\n> To remove the DB cluster from a global database cluster, specify an empty value for the `GlobalClusterIdentifier` property. 
\n\nFor information about Aurora global databases, see [Working with Amazon Aurora Global Databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html) in the *Amazon Aurora User Guide* .\n\nValid for: Aurora DB clusters only", @@ -37638,6 +37749,7 @@ "EnablePerformanceInsights": "Specifies whether to enable Performance Insights for the DB instance. For more information, see [Using Amazon Performance Insights](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html) in the *Amazon RDS User Guide* .\n\nThis setting doesn't apply to RDS Custom DB instances.", "Endpoint": "The connection endpoint for the DB instance.\n\n> The endpoint might not be shown for instances with the status of `creating` .", "Engine": "The name of the database engine to use for this DB instance. Not every database engine is available in every AWS Region.\n\nThis property is required when creating a DB instance.\n\n> You can convert an Oracle database from the non-CDB architecture to the container database (CDB) architecture by updating the `Engine` value in your templates from `oracle-ee` to `oracle-ee-cdb` or from `oracle-se2` to `oracle-se2-cdb` . Converting to the CDB architecture requires an interruption. \n\nValid Values:\n\n- `aurora-mysql` (for Aurora MySQL DB instances)\n- `aurora-postgresql` (for Aurora PostgreSQL DB instances)\n- `custom-oracle-ee` (for RDS Custom for Oracle DB instances)\n- `custom-oracle-ee-cdb` (for RDS Custom for Oracle DB instances)\n- `custom-sqlserver-ee` (for RDS Custom for SQL Server DB instances)\n- `custom-sqlserver-se` (for RDS Custom for SQL Server DB instances)\n- `custom-sqlserver-web` (for RDS Custom for SQL Server DB instances)\n- `db2-ae`\n- `db2-se`\n- `mariadb`\n- `mysql`\n- `oracle-ee`\n- `oracle-ee-cdb`\n- `oracle-se2`\n- `oracle-se2-cdb`\n- `postgres`\n- `sqlserver-ee`\n- `sqlserver-se`\n- `sqlserver-ex`\n- `sqlserver-web`", + "EngineLifecycleSupport": "The life cycle type for this DB instance.\n\n> By default, this value is set to `open-source-rds-extended-support` , which enrolls your DB instance into Amazon RDS Extended Support. At the end of standard support, you can avoid charges for Extended Support by setting the value to `open-source-rds-extended-support-disabled` . In this case, creating the DB instance will fail if the DB major version is past its end of standard support date. \n\nThis setting applies only to RDS for MySQL and RDS for PostgreSQL. For Amazon Aurora DB instances, the life cycle type is managed by the DB cluster.\n\nYou can use this setting to enroll your DB instance into Amazon RDS Extended Support. With RDS Extended Support, you can run the selected major engine version on your DB instance past the end of standard support for that engine version. For more information, see [Using Amazon RDS Extended Support](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/extended-support.html) in the *Amazon RDS User Guide* .\n\nValid Values: `open-source-rds-extended-support | open-source-rds-extended-support-disabled`\n\nDefault: `open-source-rds-extended-support`", "EngineVersion": "The version number of the database engine to use.\n\nFor a list of valid engine versions, use the `DescribeDBEngineVersions` action.\n\nThe following are the database engines and links to information about the major and minor versions that are available with Amazon RDS. Not every database engine is available for every AWS Region.\n\n*Amazon Aurora*\n\nNot applicable. 
The version number of the database engine to be used by the DB instance is managed by the DB cluster.\n\n*Db2*\n\nSee [Amazon RDS for Db2](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Db2.html#Db2.Concepts.VersionMgmt) in the *Amazon RDS User Guide.*\n\n*MariaDB*\n\nSee [MariaDB on Amazon RDS Versions](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MariaDB.html#MariaDB.Concepts.VersionMgmt) in the *Amazon RDS User Guide.*\n\n*Microsoft SQL Server*\n\nSee [Microsoft SQL Server Versions on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html#SQLServer.Concepts.General.VersionSupport) in the *Amazon RDS User Guide.*\n\n*MySQL*\n\nSee [MySQL on Amazon RDS Versions](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.VersionMgmt) in the *Amazon RDS User Guide.*\n\n*Oracle*\n\nSee [Oracle Database Engine Release Notes](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.PatchComposition.html) in the *Amazon RDS User Guide.*\n\n*PostgreSQL*\n\nSee [Supported PostgreSQL Database Versions](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.DBVersions) in the *Amazon RDS User Guide.*", "Iops": "The number of I/O operations per second (IOPS) that the database provisions. The value must be equal to or greater than 1000.\n\nIf you specify this property, you must follow the range of allowed ratios of your requested IOPS rate to the amount of storage that you allocate (IOPS to allocated storage). For example, you can provision an Oracle database instance with 1000 IOPS and 200 GiB of storage (a ratio of 5:1), or specify 2000 IOPS with 200 GiB of storage (a ratio of 10:1). For more information, see [Amazon RDS Provisioned IOPS Storage to Improve Performance](https://docs.aws.amazon.com/AmazonRDS/latest/DeveloperGuide/CHAP_Storage.html#USER_PIOPS) in the *Amazon RDS User Guide* .\n\n> If you specify `io1` for the `StorageType` property, then you must also specify the `Iops` property. \n\nConstraints:\n\n- For RDS for Db2, MariaDB, MySQL, Oracle, and PostgreSQL - Must be a multiple between .5 and 50 of the storage amount for the DB instance.\n- For RDS for SQL Server - Must be a multiple between 1 and 50 of the storage amount for the DB instance.", "KmsKeyId": "The ARN of the AWS KMS key that's used to encrypt the DB instance, such as `arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef` . If you enable the StorageEncrypted property but don't specify this property, AWS CloudFormation uses the default KMS key. If you specify this property, you must set the StorageEncrypted property to true.\n\nIf you specify the `SourceDBInstanceIdentifier` or `SourceDbiResourceId` property, don't specify this property. The value is inherited from the source DB instance, and if the DB instance is encrypted, the specified `KmsKeyId` property is used. However, if the source DB instance is in a different AWS Region, you must specify a KMS key ID.\n\nIf you specify the `SourceDBInstanceAutomatedBackupsArn` property, don't specify this property. The value is inherited from the source DB instance automated backup, and if the automated backup is encrypted, the specified `KmsKeyId` property is used.\n\nIf you create an encrypted read replica in a different AWS Region, then you must specify a KMS key for the destination AWS Region. 
KMS encryption keys are specific to the region that they're created in, and you can't use encryption keys from one region in another region.\n\nIf you specify the `DBSnapshotIdentifier` property, don't specify this property. The `StorageEncrypted` property value is inherited from the snapshot. If the DB instance is encrypted, the specified `KmsKeyId` property is also inherited from the snapshot.\n\nIf you specify `DBSecurityGroups` , AWS CloudFormation ignores this property. To specify both a security group and this property, you must use a VPC security group. For more information about Amazon RDS and VPC, see [Using Amazon RDS with Amazon VPC](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html) in the *Amazon RDS User Guide* .\n\n*Amazon Aurora*\n\nNot applicable. The KMS key identifier is managed by the DB cluster.", @@ -37932,10 +38044,10 @@ "MasterUsername": "The user name associated with the admin user account for the cluster that is being created.\n\nConstraints:\n\n- Must be 1 - 128 alphanumeric characters or hyphens. The user name can't be `PUBLIC` .\n- Must contain only lowercase letters, numbers, underscore, plus sign, period (dot), at symbol (@), or hyphen.\n- The first character must be a letter.\n- Must not contain a colon (:) or a slash (/).\n- Cannot be a reserved word. A list of reserved words can be found in [Reserved Words](https://docs.aws.amazon.com/redshift/latest/dg/r_pg_keywords.html) in the Amazon Redshift Database Developer Guide.", "MultiAZ": "A boolean indicating whether Amazon Redshift should deploy the cluster in two Availability Zones. The default is false.", "NamespaceResourcePolicy": "The policy that is attached to a resource.", - "NodeType": "The node type to be provisioned for the cluster. For information about node types, go to [Working with Clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#how-many-nodes) in the *Amazon Redshift Cluster Management Guide* .\n\nValid Values: `ds2.xlarge` | `ds2.8xlarge` | `dc1.large` | `dc1.8xlarge` | `dc2.large` | `dc2.8xlarge` | `ra3.xlplus` | `ra3.4xlarge` | `ra3.16xlarge`", + "NodeType": "The node type to be provisioned for the cluster. For information about node types, go to [Working with Clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#how-many-nodes) in the *Amazon Redshift Cluster Management Guide* .\n\nValid Values: `dc2.large` | `dc2.8xlarge` | `ra3.xlplus` | `ra3.4xlarge` | `ra3.16xlarge`", "NumberOfNodes": "The number of compute nodes in the cluster. This parameter is required when the *ClusterType* parameter is specified as `multi-node` .\n\nFor information about determining how many nodes you need, go to [Working with Clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#how-many-nodes) in the *Amazon Redshift Cluster Management Guide* .\n\nIf you don't specify this parameter, you get a single-node cluster. When requesting a multi-node cluster, you must specify the number of nodes that you want in the cluster.\n\nDefault: `1`\n\nConstraints: Value must be at least 1 and no more than 100.", "OwnerAccount": "The AWS account used to create or copy the snapshot. Required if you are restoring a snapshot you do not own, optional if you own the snapshot.", - "Port": "The port number on which the cluster accepts incoming connections.\n\nThe cluster is accessible only via the JDBC and ODBC connection strings. 
Part of the connection string requires the port on which the cluster will listen for incoming connections.\n\nDefault: `5439`\n\nValid Values:\n\n- For clusters with ra3 nodes - Select a port within the ranges `5431-5455` or `8191-8215` . (If you have an existing cluster with ra3 nodes, it isn't required that you change the port to these ranges.)\n- For clusters with ds2 or dc2 nodes - Select a port within the range `1150-65535` .", + "Port": "The port number on which the cluster accepts incoming connections.\n\nThe cluster is accessible only via the JDBC and ODBC connection strings. Part of the connection string requires the port on which the cluster will listen for incoming connections.\n\nDefault: `5439`\n\nValid Values:\n\n- For clusters with ra3 nodes - Select a port within the ranges `5431-5455` or `8191-8215` . (If you have an existing cluster with ra3 nodes, it isn't required that you change the port to these ranges.)\n- For clusters with dc2 nodes - Select a port within the range `1150-65535` .", "PreferredMaintenanceWindow": "The weekly time range (in UTC) during which automated cluster maintenance can occur.\n\nFormat: `ddd:hh24:mi-ddd:hh24:mi`\n\nDefault: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week. For more information about the time blocks for each region, see [Maintenance Windows](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-maintenance-windows) in Amazon Redshift Cluster Management Guide.\n\nValid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun\n\nConstraints: Minimum 30-minute window.", "PubliclyAccessible": "If `true` , the cluster can be accessed from a public network.", "ResourceAction": "The Amazon Redshift operation to be performed. Supported operations are `pause-cluster` , `resume-cluster` , and `failover-primary-compute` .", @@ -38049,7 +38161,7 @@ "ScheduledActionDescription": "The description of the scheduled action.", "ScheduledActionName": "The name of the scheduled action.", "StartTime": "The start time in UTC when the schedule is active. Before this time, the scheduled action does not trigger.", - "TargetAction": "A JSON format string of the Amazon Redshift API operation with input parameters.\n\n\" `{\\\"ResizeCluster\\\":{\\\"NodeType\\\":\\\"ds2.8xlarge\\\",\\\"ClusterIdentifier\\\":\\\"my-test-cluster\\\",\\\"NumberOfNodes\\\":3}}` \"." + "TargetAction": "A JSON format string of the Amazon Redshift API operation with input parameters.\n\n\" `{\\\"ResizeCluster\\\":{\\\"NodeType\\\":\\\"ra3.4xlarge\\\",\\\"ClusterIdentifier\\\":\\\"my-test-cluster\\\",\\\"NumberOfNodes\\\":3}}` \"." }, "AWS::Redshift::ScheduledAction PauseClusterMessage": { "ClusterIdentifier": "The identifier of the cluster to be paused." }, @@ -39139,7 +39251,7 @@ "ObjectOwnership": "Specifies an object ownership rule." }, "AWS::S3::Bucket PartitionedPrefix": { - "PartitionDateSource": "Specifies the partition date source for the partitioned prefix. PartitionDateSource can be EventTime or DeliveryTime." + "PartitionDateSource": "Specifies the partition date source for the partitioned prefix. `PartitionDateSource` can be `EventTime` or `DeliveryTime` .\n\nFor `DeliveryTime` , the time in the log file names corresponds to the delivery time for the log files.\n\nFor `EventTime` , the logs delivered are for a specific day only. The year, month, and day correspond to the day on which the event occurred, and the hour, minutes, and seconds are set to 00 in the key."
}, "AWS::S3::Bucket PublicAccessBlockConfiguration": { "BlockPublicAcls": "Specifies whether Amazon S3 should block public access control lists (ACLs) for this bucket and objects in this bucket. Setting this element to `TRUE` causes the following behavior:\n\n- PUT Bucket ACL and PUT Object ACL calls fail if the specified ACL is public.\n- PUT Object calls fail if the request includes a public ACL.\n- PUT Bucket calls fail if the request includes a public ACL.\n\nEnabling this setting doesn't affect existing policies or ACLs.", @@ -39605,9 +39717,13 @@ "DimensionName": "The name of an Amazon CloudWatch dimension associated with an email sending metric. The name must meet the following requirements:\n\n- Contain only ASCII letters (a-z, A-Z), numbers (0-9), underscores (_), dashes (-), or colons (:).\n- Contain 256 characters or fewer.", "DimensionValueSource": "The place where Amazon SES finds the value of a dimension to publish to Amazon CloudWatch. To use the message tags that you specify using an `X-SES-MESSAGE-TAGS` header or a parameter to the `SendEmail` / `SendRawEmail` API, specify `messageTag` . To use your own email headers, specify `emailHeader` . To put a custom tag on any link included in your email, specify `linkTag` ." }, + "AWS::SES::ConfigurationSetEventDestination EventBridgeDestination": { + "EventBusArn": "" + }, "AWS::SES::ConfigurationSetEventDestination EventDestination": { "CloudWatchDestination": "An object that contains the names, default values, and sources of the dimensions associated with an Amazon CloudWatch event destination.", "Enabled": "Sets whether Amazon SES publishes events to this destination when you send an email with the associated configuration set. Set to `true` to enable publishing to this destination; set to `false` to prevent publishing to this destination. The default value is `false` .", + "EventBridgeDestination": "", "KinesisFirehoseDestination": "An object that contains the delivery stream ARN and the IAM role ARN associated with an Amazon Kinesis Firehose event destination.", "MatchingEventTypes": "The type of email sending events to publish to the event destination.\n\n- `send` - The send request was successful and SES will attempt to deliver the message to the recipient\u2019s mail server. (If account-level or global suppression is being used, SES will still count it as a send, but delivery is suppressed.)\n- `reject` - SES accepted the email, but determined that it contained a virus and didn\u2019t attempt to deliver it to the recipient\u2019s mail server.\n- `bounce` - ( *Hard bounce* ) The recipient's mail server permanently rejected the email. ( *Soft bounces* are only included when SES fails to deliver the email after retrying for a period of time.)\n- `complaint` - The email was successfully delivered to the recipient\u2019s mail server, but the recipient marked it as spam.\n- `delivery` - SES successfully delivered the email to the recipient's mail server.\n- `open` - The recipient received the message and opened it in their email client.\n- `click` - The recipient clicked one or more links in the email.\n- `renderingFailure` - The email wasn't sent because of a template rendering issue. This event type can occur when template data is missing, or when there is a mismatch between template parameters and data. 
(This event type only occurs when you send email using the [`SendTemplatedEmail`](https://docs.aws.amazon.com/ses/latest/APIReference/API_SendTemplatedEmail.html) or [`SendBulkTemplatedEmail`](https://docs.aws.amazon.com/ses/latest/APIReference/API_SendBulkTemplatedEmail.html) API operations.)\n- `deliveryDelay` - The email couldn't be delivered to the recipient\u2019s mail server because a temporary issue occurred. Delivery delays can occur, for example, when the recipient's inbox is full, or when the receiving email server experiences a transient issue.\n- `subscription` - The email was successfully delivered, but the recipient updated their subscription preferences by clicking on an *unsubscribe* link as part of your [subscription management](https://docs.aws.amazon.com/ses/latest/dg/sending-email-subscription-management.html) .", "Name": "The name of the event destination. The name must meet the following requirements:\n\n- Contain only ASCII letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-).\n- Contain 64 characters or fewer.", @@ -39817,7 +39933,7 @@ "QueueName": "A name for the queue. To create a FIFO queue, the name of your FIFO queue must end with the `.fifo` suffix. For more information, see [FIFO queues](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html) in the *Amazon SQS Developer Guide* .\n\nIf you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the queue name. For more information, see [Name type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) in the *AWS CloudFormation User Guide* .\n\n> If you specify a name, you can't perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.", "ReceiveMessageWaitTimeSeconds": "Specifies the duration, in seconds, that the ReceiveMessage action call waits until a message is in the queue in order to include it in the response, rather than returning an empty response if a message isn't yet available. You can specify an integer from 1 to 20. Short polling is used as the default or when you specify 0 for this property. For more information, see [Consuming messages using long polling](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html#sqs-long-polling) in the *Amazon SQS Developer Guide* .", "RedriveAllowPolicy": "The string that includes the parameters for the permissions for the dead-letter queue redrive permission and which source queues can specify dead-letter queues as a JSON object. The parameters are as follows:\n\n- `redrivePermission` : The permission type that defines which source queues can specify the current queue as the dead-letter queue. Valid values are:\n\n- `allowAll` : (Default) Any source queues in this AWS account in the same Region can specify this queue as the dead-letter queue.\n- `denyAll` : No source queues can specify this queue as the dead-letter queue.\n- `byQueue` : Only queues specified by the `sourceQueueArns` parameter can specify this queue as the dead-letter queue.\n- `sourceQueueArns` : The Amazon Resource Names (ARN)s of the source queues that can specify this queue as the dead-letter queue and redrive messages. You can specify this parameter only when the `redrivePermission` parameter is set to `byQueue` . You can specify up to 10 source queue ARNs. 
To allow more than 10 source queues to specify dead-letter queues, set the `redrivePermission` parameter to `allowAll` .", - "RedrivePolicy": "The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:\n\n- `deadLetterTargetArn` : The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of `maxReceiveCount` is exceeded.\n- `maxReceiveCount` : The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the `ReceiveCount` for a message exceeds the `maxReceiveCount` for a queue, Amazon SQS moves the message to the dead-letter-queue.\n\n> The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue. \n\n*JSON*\n\n`{ \"deadLetterTargetArn\" : *String* , \"maxReceiveCount\" : *Integer* }`\n\n*YAML*\n\n`deadLetterTargetArn : *String*`\n\n`maxReceiveCount : *Integer*`", + "RedrivePolicy": "The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:\n\n- `deadLetterTargetArn` : The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of `maxReceiveCount` is exceeded.\n- `maxReceiveCount` : The number of times a message is received by a consumer of the source queue before being moved to the dead-letter queue. When the `ReceiveCount` for a message exceeds the `maxReceiveCount` for a queue, Amazon SQS moves the message to the dead-letter queue.\n\n> The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue. \n\n*JSON*\n\n`{ \"deadLetterTargetArn\" : *String* , \"maxReceiveCount\" : *Integer* }`\n\n*YAML*\n\n`deadLetterTargetArn : *String*`\n\n`maxReceiveCount : *Integer*`", "SqsManagedSseEnabled": "Enables server-side queue encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (for example, [SSE-KMS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sse-existing-queue.html) or [SSE-SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sqs-sse-queue.html) ). When `SqsManagedSseEnabled` is not defined, `SSE-SQS` encryption is enabled by default.", "Tags": "The tags that you attach to this queue. For more information, see [Resource tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) in the *AWS CloudFormation User Guide* .", "VisibilityTimeout": "The length of time during which a message will be unavailable after a message is delivered from the queue. This blocks other components from receiving the same message and gives the initial component time to process and delete the message from the queue.\n\nValues must be from 0 to 43,200 seconds (12 hours). If you don't specify a value, AWS CloudFormation uses the default value of 30 seconds.\n\nFor more information about Amazon SQS queue visibility timeouts, see [Visibility timeout](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) in the *Amazon SQS Developer Guide* ." @@ -42146,11 +42262,6 @@ "Key": "The key for the tag.", "Value": "The value for the tag."
}, - "AWS::SecretsManager::ResourcePolicy": { - "BlockPublicPolicy": "Specifies whether to block resource-based policies that allow broad access to the secret. By default, Secrets Manager blocks policies that allow broad access, for example those that use a wildcard for the principal.", - "ResourcePolicy": "A JSON-formatted string for an AWS resource-based policy. For example policies, see [Permissions policy examples](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html) .", - "SecretId": "The ARN or name of the secret to attach the resource-based policy.\n\nFor an ARN, we recommend that you specify a complete ARN rather than a partial ARN." - }, "AWS::SecretsManager::RotationSchedule": { "HostedRotationLambda": "Creates a new Lambda rotation function based on one of the [Secrets Manager rotation function templates](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_available-rotation-templates.html) . To use a rotation function that already exists, specify `RotationLambdaARN` instead.\n\nFor Amazon RDS master user credentials, see [AWS::RDS::DBCluster MasterUserSecret](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-dbcluster-masterusersecret.html) .", "RotateImmediatelyOnUpdate": "Specifies whether to rotate the secret immediately or wait until the next scheduled rotation window. The rotation schedule is defined in `RotationRules` .\n\nIf you don't immediately rotate the secret, Secrets Manager tests the rotation configuration by running the [`testSecret` step](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_how.html) of the Lambda rotation function. The test creates an `AWSPENDING` version of the secret and then removes it.\n\nIf you don't specify this value, then by default, Secrets Manager rotates the secret immediately.\n\nRotation is an asynchronous process. For more information, see [How rotation works](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_how.html) .", @@ -42621,7 +42732,7 @@ "TargetRoleArn": "The Amazon Resource Name (ARN) of the EventBridge API destinations IAM role that you created. For more information about ARNs and how to use them in policies, see [Managing data access](https://docs.aws.amazon.com///security-lake/latest/userguide/subscriber-data-access.html) and [AWS Managed Policies](https://docs.aws.amazon.com//security-lake/latest/userguide/security-iam-awsmanpol.html) in the *Amazon Security Lake User Guide* ." }, "AWS::SecurityLake::SubscriberNotification NotificationConfiguration": { - "HttpsNotificationConfiguration": "The configurations for HTTPS subscriber notification.", + "HttpsNotificationConfiguration": "The configurations used for HTTPS subscriber notification.", "SqsNotificationConfiguration": "The configurations for SQS subscriber notification. The members of this structure are context-dependent." }, "AWS::ServiceCatalog::AcceptedPortfolioShare": { diff --git a/schema_source/cloudformation.schema.json b/schema_source/cloudformation.schema.json index 804b85262..5ff6f9dcc 100644 --- a/schema_source/cloudformation.schema.json +++ b/schema_source/cloudformation.schema.json @@ -39447,12 +39447,12 @@ "type": "array" }, "CloudWatchLogsLogGroupArn": { - "markdownDescription": "Specifies a log group name using an Amazon Resource Name (ARN), a unique identifier that represents the log group to which CloudTrail logs are delivered. 
You must use a log group that exists in your account.\n\nNot required unless you specify `CloudWatchLogsRoleArn` .", + "markdownDescription": "Specifies a log group name using an Amazon Resource Name (ARN), a unique identifier that represents the log group to which CloudTrail logs are delivered. You must use a log group that exists in your account.\n\nTo enable CloudWatch Logs delivery, you must provide values for `CloudWatchLogsLogGroupArn` and `CloudWatchLogsRoleArn` .\n\n> If you previously enabled CloudWatch Logs delivery and want to disable CloudWatch Logs delivery, you must set the values of the `CloudWatchLogsRoleArn` and `CloudWatchLogsLogGroupArn` fields to `\"\"` .", "title": "CloudWatchLogsLogGroupArn", "type": "string" }, "CloudWatchLogsRoleArn": { - "markdownDescription": "Specifies the role for the CloudWatch Logs endpoint to assume to write to a user's log group. You must use a role that exists in your account.", + "markdownDescription": "Specifies the role for the CloudWatch Logs endpoint to assume to write to a user's log group. You must use a role that exists in your account.\n\nTo enable CloudWatch Logs delivery, you must provide values for `CloudWatchLogsLogGroupArn` and `CloudWatchLogsRoleArn` .\n\n> If you previously enabled CloudWatch Logs delivery and want to disable CloudWatch Logs delivery, you must set the values of the `CloudWatchLogsRoleArn` and `CloudWatchLogsLogGroupArn` fields to `\"\"` .", "title": "CloudWatchLogsRoleArn", "type": "string" }, @@ -42002,7 +42002,7 @@ "type": "string" }, "Type": { - "markdownDescription": "The type of webhook filter. There are nine webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. 
Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only.", + "markdownDescription": "The type of webhook filter. There are ten webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , `REPOSITORY_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events.
Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression `pattern` .\n\n> Works with GitHub global or organization webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only.", "title": "Type", "type": "string" } @@ -46473,7 +46473,7 @@ "type": "string" }, "DefaultRedirectURI": { - "markdownDescription": "The default redirect URI. Must be in the `CallbackURLs` list.\n\nA redirect URI must:\n\n- Be an absolute URI.\n- Be registered with the authorization server.\n- Not include a fragment component.\n\nSee [OAuth 2.0 - Redirection Endpoint](https://docs.aws.amazon.com/https://tools.ietf.org/html/rfc6749#section-3.1.2) .\n\nAmazon Cognito requires HTTPS over HTTP except for http://localhost for testing purposes only.\n\nApp callback URLs such as myapp://example are also supported.", + "markdownDescription": "The default redirect URI. In app clients with one assigned IdP, replaces `redirect_uri` in authentication requests. Must be in the `CallbackURLs` list.\n\nA redirect URI must:\n\n- Be an absolute URI.\n- Be registered with the authorization server.\n- Not include a fragment component.\n\nFor more information, see [Default redirect URI](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-client-apps.html#cognito-user-pools-app-idp-settings-about) .\n\nAmazon Cognito requires HTTPS over HTTP except for http://localhost for testing purposes only.\n\nApp callback URLs such as myapp://example are also supported.", "title": "DefaultRedirectURI", "type": "string" }, @@ -69927,7 +69927,7 @@ "type": "array" }, "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": { - "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types.
To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", "title": "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice", "type": "number" }, @@ -72340,7 +72340,7 @@ "type": "string" }, "PreserveClientIp": { - "markdownDescription": "Indicates whether your client's IP address is preserved as the source. The value is `true` or `false` .\n\n- If `true` , your client's IP address is used when you connect to a resource.\n- If `false` , the elastic network interface IP address is used when you connect to a resource.\n\nDefault: `true`", + "markdownDescription": "Indicates whether the client IP address is preserved as the source. The following are the possible values.\n\n- `true` - Use the client IP address as the source.\n- `false` - Use the network interface IP address as the source.\n\nDefault: `false`", "title": "PreserveClientIp", "type": "boolean" }, @@ -73028,7 +73028,7 @@ "type": "array" }, "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": { - "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. 
When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", "title": "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice", "type": "number" }, @@ -73291,7 +73291,7 @@ "type": "array" }, "UserData": { - "markdownDescription": "The user data to make available to the instance. You must provide base64-encoded text. User data is limited to 16 KB. For more information, see [Run commands on your Linux instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) (Linux) or [Work with instance user data](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instancedata-add-user-data.html) (Windows) in the *Amazon EC2 User Guide* .\n\nIf you are creating the launch template for use with AWS Batch , the user data must be provided in the [MIME multi-part archive format](https://docs.aws.amazon.com/https://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive) . For more information, see [Amazon EC2 user data in launch templates](https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html) in the *AWS Batch User Guide* .", + "markdownDescription": "The user data to make available to the instance. You must provide base64-encoded text. User data is limited to 16 KB. 
For more information, see [Run commands on your Amazon EC2 instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) in the *Amazon EC2 User Guide* .\n\nIf you are creating the launch template for use with AWS Batch , the user data must be provided in the [MIME multi-part archive format](https://docs.aws.amazon.com/https://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive) . For more information, see [Amazon EC2 user data in launch templates](https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html) in the *AWS Batch User Guide* .", "title": "UserData", "type": "string" } @@ -77548,7 +77548,7 @@ "type": "array" }, "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": { - "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `DesiredCapacityType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .", + "markdownDescription": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from the lowest priced current generation instance types, and failing that, from the lowest priced previous generation instance types that match your attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price exceeds your specified threshold.\n\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.\n\nIf you set `TargetCapacityUnitType` to `vcpu` or `memory-mib` , the price protection threshold is based on the per vCPU or per memory price instead of the per instance price.\n\n> Only one of `SpotMaxPricePercentageOverLowestPrice` or `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` can be specified. If you don't specify either, Amazon EC2 will automatically apply optimal price protection to consistently select from a wide range of instance types. 
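As a sketch of the base64 requirement described above, a launch template can wrap a plain-text script in the `Fn::Base64` intrinsic so that the encoded form is what reaches the instance. The script content here is illustrative only:

```json
"LaunchTemplateData": {
  "UserData": {
    "Fn::Base64": "#!/bin/bash\nyum update -y\n"
  }
}
```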
To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as `999999` .",
 "title": "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice",
 "type": "number"
 },
@@ -77916,7 +77916,7 @@
 "type": "string"
 },
 "IamFleetRole": {
- "markdownDescription": "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that grants the Spot Fleet the permission to request, launch, terminate, and tag instances on your behalf. For more information, see [Spot Fleet Prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) in the *Amazon EC2 User Guide for Linux Instances* . Spot Fleet can terminate Spot Instances on your behalf when you cancel its Spot Fleet request or when the Spot Fleet request expires, if you set `TerminateInstancesWithExpiration` .",
+ "markdownDescription": "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that grants the Spot Fleet the permission to request, launch, terminate, and tag instances on your behalf. For more information, see [Spot Fleet Prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) in the *Amazon EC2 User Guide* . Spot Fleet can terminate Spot Instances on your behalf when you cancel its Spot Fleet request or when the Spot Fleet request expires, if you set `TerminateInstancesWithExpiration` .",
 "title": "IamFleetRole",
 "type": "string"
 },
@@ -84281,7 +84281,7 @@
 "title": "EphemeralStorage"
 },
 "ExecutionRoleArn": {
- "markdownDescription": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. The task execution IAM role is required depending on the requirements of your task. For more information, see [Amazon ECS task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) in the *Amazon Elastic Container Service Developer Guide* .",
+ "markdownDescription": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. For information about the required IAM roles for Amazon ECS, see [IAM roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-ecs-iam-role-overview.html) in the *Amazon Elastic Container Service Developer Guide* .",
 "title": "ExecutionRoleArn",
 "type": "string"
 },
@@ -84353,7 +84353,7 @@
 "type": "array"
 },
 "TaskRoleArn": {
- "markdownDescription": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task permission to call AWS APIs on your behalf. For more information, see [Amazon ECS Task Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nIAM roles for tasks on Windows require that the `-EnableTaskIAMRole` option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration code to use the feature. For more information, see [Windows IAM roles for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows_task_IAM_roles.html) in the *Amazon Elastic Container Service Developer Guide* .",
+ "markdownDescription": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task permission to call AWS APIs on your behalf. For information about the required IAM roles for Amazon ECS, see [IAM roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-ecs-iam-role-overview.html) in the *Amazon Elastic Container Service Developer Guide* .",
 "title": "TaskRoleArn",
 "type": "string"
 },
@@ -91438,7 +91438,7 @@
 "title": "CacheUsageLimits"
 },
 "DailySnapshotTime": {
- "markdownDescription": "The daily time that a cache snapshot will be created. Default is NULL, i.e. snapshots will not be created at a specific time on a daily basis. Available for Redis only.",
+ "markdownDescription": "The daily time that a cache snapshot will be created. Default is NULL, i.e. snapshots will not be created at a specific time on a daily basis. Available for Redis and Serverless Memcached only.",
 "title": "DailySnapshotTime",
 "type": "string"
 },
@@ -91499,7 +91499,7 @@
 "type": "array"
 },
 "SnapshotRetentionLimit": {
- "markdownDescription": "The current setting for the number of serverless cache snapshots the system will retain. Available for Redis only.",
+ "markdownDescription": "The current setting for the number of serverless cache snapshots the system will retain. Available for Redis and Serverless Memcached only.",
 "title": "SnapshotRetentionLimit",
 "type": "number"
 },
@@ -91780,7 +91780,7 @@
 "items": {
 "$ref": "#/definitions/Tag"
 },
- "markdownDescription": "",
+ "markdownDescription": "The list of tags.",
 "title": "Tags",
 "type": "array"
 },
@@ -91889,7 +91889,7 @@
 "items": {
 "$ref": "#/definitions/Tag"
 },
- "markdownDescription": "",
+ "markdownDescription": "The list of tags.",
 "title": "Tags",
 "type": "array"
 },
@@ -105356,7 +105356,7 @@
 "type": "object"
 },
 "ConnectionType": {
- "markdownDescription": "The type of the connection. Currently, these types are supported:\n\n- `JDBC` - Designates a connection to a database through Java Database Connectivity (JDBC).\n\n`JDBC` Connections use the following ConnectionParameters.\n\n- Required: All of ( `HOST` , `PORT` , `JDBC_ENGINE` ) or `JDBC_CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- Optional: `JDBC_ENFORCE_SSL` , `CUSTOM_JDBC_CERT` , `CUSTOM_JDBC_CERT_STRING` , `SKIP_CUSTOM_JDBC_CERT_VALIDATION` . These parameters are used to configure SSL with JDBC.\n- `KAFKA` - Designates a connection to an Apache Kafka streaming platform.\n\n`KAFKA` Connections use the following ConnectionParameters.\n\n- Required: `KAFKA_BOOTSTRAP_SERVERS` .\n- Optional: `KAFKA_SSL_ENABLED` , `KAFKA_CUSTOM_CERT` , `KAFKA_SKIP_CUSTOM_CERT_VALIDATION` . These parameters are used to configure SSL with `KAFKA` .\n- Optional: `KAFKA_CLIENT_KEYSTORE` , `KAFKA_CLIENT_KEYSTORE_PASSWORD` , `KAFKA_CLIENT_KEY_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD` . These parameters are used to configure TLS client configuration with SSL in `KAFKA` .\n- Optional: `KAFKA_SASL_MECHANISM` . Can be specified as `SCRAM-SHA-512` , `GSSAPI` , or `AWS_MSK_IAM` .\n- Optional: `KAFKA_SASL_SCRAM_USERNAME` , `KAFKA_SASL_SCRAM_PASSWORD` , `ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD` .
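To make the serverless cache snapshot settings above concrete, a minimal `AWS::ElastiCache::ServerlessCache` sketch follows; the resource name, engine, and snapshot values are illustrative assumptions, not part of this patch:

```json
"OrdersCache": {
  "Type": "AWS::ElastiCache::ServerlessCache",
  "Properties": {
    "ServerlessCacheName": "orders-cache",
    "Engine": "redis",
    "DailySnapshotTime": "04:00",
    "SnapshotRetentionLimit": 7
  }
}
```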
These parameters are used to configure SASL/SCRAM-SHA-512 authentication with `KAFKA` .\n- Optional: `KAFKA_SASL_GSSAPI_KEYTAB` , `KAFKA_SASL_GSSAPI_KRB5_CONF` , `KAFKA_SASL_GSSAPI_SERVICE` , `KAFKA_SASL_GSSAPI_PRINCIPAL` . These parameters are used to configure SASL/GSSAPI authentication with `KAFKA` .\n- `MONGODB` - Designates a connection to a MongoDB document database.\n\n`MONGODB` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `NETWORK` - Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).\n\n`NETWORK` Connections do not require ConnectionParameters. Instead, provide a PhysicalConnectionRequirements.\n- `MARKETPLACE` - Uses configuration settings contained in a connector purchased from AWS Marketplace to read from and write to data stores that are not natively supported by AWS Glue .\n\n`MARKETPLACE` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTOR_TYPE` , `CONNECTOR_URL` , `CONNECTOR_CLASS_NAME` , `CONNECTION_URL` .\n- Required for `JDBC` `CONNECTOR_TYPE` connections: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `CUSTOM` - Uses configuration settings contained in a custom connector to read from and write to data stores that are not natively supported by AWS Glue .\n\n`SFTP` is not supported.\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue , consult [AWS Glue connection properties](https://docs.aws.amazon.com/glue/latest/dg/connection-defining.html) .\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue Studio, consult [Using connectors and connections](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html) .", + "markdownDescription": "The type of the connection. Currently, these types are supported:\n\n- `JDBC` - Designates a connection to a database through Java Database Connectivity (JDBC).\n\n`JDBC` Connections use the following ConnectionParameters.\n\n- Required: All of ( `HOST` , `PORT` , `JDBC_ENGINE` ) or `JDBC_CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- Optional: `JDBC_ENFORCE_SSL` , `CUSTOM_JDBC_CERT` , `CUSTOM_JDBC_CERT_STRING` , `SKIP_CUSTOM_JDBC_CERT_VALIDATION` . These parameters are used to configure SSL with JDBC.\n- `KAFKA` - Designates a connection to an Apache Kafka streaming platform.\n\n`KAFKA` Connections use the following ConnectionParameters.\n\n- Required: `KAFKA_BOOTSTRAP_SERVERS` .\n- Optional: `KAFKA_SSL_ENABLED` , `KAFKA_CUSTOM_CERT` , `KAFKA_SKIP_CUSTOM_CERT_VALIDATION` . These parameters are used to configure SSL with `KAFKA` .\n- Optional: `KAFKA_CLIENT_KEYSTORE` , `KAFKA_CLIENT_KEYSTORE_PASSWORD` , `KAFKA_CLIENT_KEY_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD` , `ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD` . These parameters are used to configure TLS client configuration with SSL in `KAFKA` .\n- Optional: `KAFKA_SASL_MECHANISM` . Can be specified as `SCRAM-SHA-512` , `GSSAPI` , or `AWS_MSK_IAM` .\n- Optional: `KAFKA_SASL_SCRAM_USERNAME` , `KAFKA_SASL_SCRAM_PASSWORD` , `ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD` . These parameters are used to configure SASL/SCRAM-SHA-512 authentication with `KAFKA` .\n- Optional: `KAFKA_SASL_GSSAPI_KEYTAB` , `KAFKA_SASL_GSSAPI_KRB5_CONF` , `KAFKA_SASL_GSSAPI_SERVICE` , `KAFKA_SASL_GSSAPI_PRINCIPAL` . 
These parameters are used to configure SASL/GSSAPI authentication with `KAFKA` .\n- `MONGODB` - Designates a connection to a MongoDB document database.\n\n`MONGODB` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTION_URL` .\n- Required: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `SALESFORCE` - Designates a connection to Salesforce using OAuth authentication.\n\n- Requires the `AuthenticationConfiguration` member to be configured.\n- `NETWORK` - Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).\n\n`NETWORK` Connections do not require ConnectionParameters. Instead, provide a PhysicalConnectionRequirements.\n- `MARKETPLACE` - Uses configuration settings contained in a connector purchased from AWS Marketplace to read from and write to data stores that are not natively supported by AWS Glue .\n\n`MARKETPLACE` Connections use the following ConnectionParameters.\n\n- Required: `CONNECTOR_TYPE` , `CONNECTOR_URL` , `CONNECTOR_CLASS_NAME` , `CONNECTION_URL` .\n- Required for `JDBC` `CONNECTOR_TYPE` connections: All of ( `USERNAME` , `PASSWORD` ) or `SECRET_ID` .\n- `CUSTOM` - Uses configuration settings contained in a custom connector to read from and write to data stores that are not natively supported by AWS Glue .\n\n`SFTP` is not supported.\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue , consult [AWS Glue connection properties](https://docs.aws.amazon.com/glue/latest/dg/connection-defining.html) .\n\nFor more information about how optional ConnectionProperties are used to configure features in AWS Glue Studio, consult [Using connectors and connections](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html) .",
 "title": "ConnectionType",
 "type": "string"
 },
@@ -105374,13 +105374,13 @@
 "type": "array"
 },
 "Name": {
- "markdownDescription": "The name of the connection. Connection will not function as expected without a name.",
+ "markdownDescription": "The name of the connection.",
 "title": "Name",
 "type": "string"
 },
 "PhysicalConnectionRequirements": {
 "$ref": "#/definitions/AWS::Glue::Connection.PhysicalConnectionRequirements",
- "markdownDescription": "A map of physical connection requirements, such as virtual private cloud (VPC) and `SecurityGroup` , that are needed to successfully make this connection.",
+ "markdownDescription": "The physical connection requirements, such as virtual private cloud (VPC) and `SecurityGroup` , that are needed to successfully make this connection.",
 "title": "PhysicalConnectionRequirements"
 }
 },
@@ -105393,7 +105393,7 @@
 "additionalProperties": false,
 "properties": {
 "AvailabilityZone": {
- "markdownDescription": "The connection's Availability Zone. This field is redundant because the specified subnet implies the Availability Zone to be used. Currently the field must be populated, but it will be deprecated in the future.",
+ "markdownDescription": "The connection's Availability Zone.",
 "title": "AvailabilityZone",
 "type": "string"
 },
@@ -108724,7 +108724,7 @@
 "items": {
 "type": "string"
 },
- "markdownDescription": "Specifies whether this workspace uses SAML 2.0, AWS IAM Identity Center , or both to authenticate users for using the Grafana console within a workspace.
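Tying together the `ConnectionType`, `Name`, and `PhysicalConnectionRequirements` properties documented above, here is a minimal JDBC connection sketch; the endpoint, secret name, subnet, and security group IDs are placeholders:

```json
"OrdersDbConnection": {
  "Type": "AWS::Glue::Connection",
  "Properties": {
    "CatalogId": { "Ref": "AWS::AccountId" },
    "ConnectionInput": {
      "Name": "orders-jdbc",
      "ConnectionType": "JDBC",
      "ConnectionProperties": {
        "JDBC_CONNECTION_URL": "jdbc:postgresql://db.example.internal:5432/orders",
        "SECRET_ID": "orders-db-secret"
      },
      "PhysicalConnectionRequirements": {
        "SubnetId": "subnet-EXAMPLE",
        "SecurityGroupIdList": ["sg-EXAMPLE"]
      }
    }
  }
}
```

Using `SECRET_ID` rather than `USERNAME` and `PASSWORD` keeps credentials out of the template, per the required-parameter alternatives listed above.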
For more information, see [User authentication in Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/authentication-in-AMG.html) .", + "markdownDescription": "Specifies whether this workspace uses SAML 2.0, AWS IAM Identity Center , or both to authenticate users for using the Grafana console within a workspace. For more information, see [User authentication in Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/authentication-in-AMG.html) .\n\n*Allowed Values* : `AWS_SSO | SAML`", "title": "AuthenticationProviders", "type": "array" }, @@ -108765,7 +108765,7 @@ "items": { "type": "string" }, - "markdownDescription": "The AWS notification channels that Amazon Managed Grafana can automatically create IAM roles and permissions for, to allow Amazon Managed Grafana to use these channels.", + "markdownDescription": "The AWS notification channels that Amazon Managed Grafana can automatically create IAM roles and permissions for, to allow Amazon Managed Grafana to use these channels.\n\n*AllowedValues* : `SNS`", "title": "NotificationDestinations", "type": "array" }, @@ -113106,7 +113106,7 @@ "type": "array" }, "Name": { - "markdownDescription": "Name of the feature.", + "markdownDescription": "Name of the feature. For a list of allowed values, see [DetectorFeatureConfiguration](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_DetectorFeatureConfiguration.html#guardduty-Type-DetectorFeatureConfiguration-name) in the *GuardDuty API Reference* .", "title": "Name", "type": "string" }, @@ -113190,12 +113190,12 @@ "additionalProperties": false, "properties": { "Key": { - "markdownDescription": "The tag value.", + "markdownDescription": "The tag key.", "title": "Key", "type": "string" }, "Value": { - "markdownDescription": "The tag key.", + "markdownDescription": "The tag value.", "title": "Value", "type": "string" } @@ -113389,7 +113389,7 @@ "properties": { "Criterion": { "additionalProperties": false, - "markdownDescription": "Represents a map of finding properties that match specified conditions and values when querying findings.\n\nFor information about JSON criterion mapping to their console equivalent, see [Finding criteria](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_filter-findings.html#filter_criteria) . 
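A minimal workspace sketch using the allowed values just listed (`AWS_SSO` for authentication, `SNS` for notifications); the account access and permission types shown are illustrative choices, not mandated by this patch:

```json
"MonitoringWorkspace": {
  "Type": "AWS::Grafana::Workspace",
  "Properties": {
    "AccountAccessType": "CURRENT_ACCOUNT",
    "AuthenticationProviders": ["AWS_SSO"],
    "PermissionType": "SERVICE_MANAGED",
    "NotificationDestinations": ["SNS"]
  }
}
```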
The following are the available criterion:\n\n- accountId\n- id\n- region\n- severity\n\nTo filter on the basis of severity, API and CFN use the following input list for the condition:\n\n- *Low* : `[\"1\", \"2\", \"3\"]`\n- *Medium* : `[\"4\", \"5\", \"6\"]`\n- *High* : `[\"7\", \"8\", \"9\"]`\n\nFor more information, see [Severity levels for GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html#guardduty_findings-severity) .\n- type\n- updatedAt\n\nType: ISO 8601 string format: YYYY-MM-DDTHH:MM:SS.SSSZ or YYYY-MM-DDTHH:MM:SSZ depending on whether the value contains milliseconds.\n- resource.accessKeyDetails.accessKeyId\n- resource.accessKeyDetails.principalId\n- resource.accessKeyDetails.userName\n- resource.accessKeyDetails.userType\n- resource.instanceDetails.iamInstanceProfile.id\n- resource.instanceDetails.imageId\n- resource.instanceDetails.instanceId\n- resource.instanceDetails.tags.key\n- resource.instanceDetails.tags.value\n- resource.instanceDetails.networkInterfaces.ipv6Addresses\n- resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress\n- resource.instanceDetails.networkInterfaces.publicDnsName\n- resource.instanceDetails.networkInterfaces.publicIp\n- resource.instanceDetails.networkInterfaces.securityGroups.groupId\n- resource.instanceDetails.networkInterfaces.securityGroups.groupName\n- resource.instanceDetails.networkInterfaces.subnetId\n- resource.instanceDetails.networkInterfaces.vpcId\n- resource.instanceDetails.outpostArn\n- resource.resourceType\n- resource.s3BucketDetails.publicAccess.effectivePermissions\n- resource.s3BucketDetails.name\n- resource.s3BucketDetails.tags.key\n- resource.s3BucketDetails.tags.value\n- resource.s3BucketDetails.type\n- service.action.actionType\n- service.action.awsApiCallAction.api\n- service.action.awsApiCallAction.callerType\n- service.action.awsApiCallAction.errorCode\n- service.action.awsApiCallAction.remoteIpDetails.city.cityName\n- service.action.awsApiCallAction.remoteIpDetails.country.countryName\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.awsApiCallAction.remoteIpDetails.organization.asn\n- service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg\n- service.action.awsApiCallAction.serviceName\n- service.action.dnsRequestAction.domain\n- service.action.networkConnectionAction.blocked\n- service.action.networkConnectionAction.connectionDirection\n- service.action.networkConnectionAction.localPortDetails.port\n- service.action.networkConnectionAction.protocol\n- service.action.networkConnectionAction.remoteIpDetails.city.cityName\n- service.action.networkConnectionAction.remoteIpDetails.country.countryName\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV4\n- service.action.networkConnectionAction.remoteIpDetails.organization.asn\n- service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg\n- service.action.networkConnectionAction.remotePortDetails.port\n- service.action.awsApiCallAction.remoteAccountDetails.affiliated\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.kubernetesApiCallAction.requestUri\n- service.action.networkConnectionAction.localIpDetails.ipAddressV4\n- service.action.networkConnectionAction.protocol\n- service.action.awsApiCallAction.serviceName\n- service.action.awsApiCallAction.remoteAccountDetails.accountId\n- service.additionalInfo.threatListName\n- service.resourceRole\n- resource.eksClusterDetails.name\n- 
resource.kubernetesDetails.kubernetesWorkloadDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.namespace\n- resource.kubernetesDetails.kubernetesUserDetails.username\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.image\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.imagePrefix\n- service.ebsVolumeScanDetails.scanId\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.name\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.severity\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.filePaths.hash\n- resource.ecsClusterDetails.name\n- resource.ecsClusterDetails.taskDetails.containers.image\n- resource.ecsClusterDetails.taskDetails.definitionArn\n- resource.containerDetails.image\n- resource.rdsDbInstanceDetails.dbInstanceIdentifier\n- resource.rdsDbInstanceDetails.dbClusterIdentifier\n- resource.rdsDbInstanceDetails.engine\n- resource.rdsDbUserDetails.user\n- resource.rdsDbInstanceDetails.tags.key\n- resource.rdsDbInstanceDetails.tags.value\n- service.runtimeDetails.process.executableSha256\n- service.runtimeDetails.process.name\n- service.runtimeDetails.process.name\n- resource.lambdaDetails.functionName\n- resource.lambdaDetails.functionArn\n- resource.lambdaDetails.tags.key\n- resource.lambdaDetails.tags.value", + "markdownDescription": "Represents a map of finding properties that match specified conditions and values when querying findings.\n\nFor information about JSON criterion mapping to their console equivalent, see [Finding criteria](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_filter-findings.html#filter_criteria) . The following are the available criterion:\n\n- accountId\n- id\n- region\n- severity\n\nTo filter on the basis of severity, the API and AWS CLI use the following input list for the `FindingCriteria` condition:\n\n- *Low* : `[\"1\", \"2\", \"3\"]`\n- *Medium* : `[\"4\", \"5\", \"6\"]`\n- *High* : `[\"7\", \"8\", \"9\"]`\n\nFor more information, see [Severity levels for GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html#guardduty_findings-severity) in the *Amazon GuardDuty User Guide* .\n- type\n- updatedAt\n\nType: ISO 8601 string format: `YYYY-MM-DDTHH:MM:SS.SSSZ` or `YYYY-MM-DDTHH:MM:SSZ` depending on whether the value contains milliseconds.\n- resource.accessKeyDetails.accessKeyId\n- resource.accessKeyDetails.principalId\n- resource.accessKeyDetails.userName\n- resource.accessKeyDetails.userType\n- resource.instanceDetails.iamInstanceProfile.id\n- resource.instanceDetails.imageId\n- resource.instanceDetails.instanceId\n- resource.instanceDetails.tags.key\n- resource.instanceDetails.tags.value\n- resource.instanceDetails.networkInterfaces.ipv6Addresses\n- resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress\n- resource.instanceDetails.networkInterfaces.publicDnsName\n- resource.instanceDetails.networkInterfaces.publicIp\n- resource.instanceDetails.networkInterfaces.securityGroups.groupId\n- resource.instanceDetails.networkInterfaces.securityGroups.groupName\n- resource.instanceDetails.networkInterfaces.subnetId\n- resource.instanceDetails.networkInterfaces.vpcId\n- resource.instanceDetails.outpostArn\n- resource.resourceType\n- resource.s3BucketDetails.publicAccess.effectivePermissions\n- resource.s3BucketDetails.name\n- resource.s3BucketDetails.tags.key\n- resource.s3BucketDetails.tags.value\n- resource.s3BucketDetails.type\n- 
service.action.actionType\n- service.action.awsApiCallAction.api\n- service.action.awsApiCallAction.callerType\n- service.action.awsApiCallAction.errorCode\n- service.action.awsApiCallAction.remoteIpDetails.city.cityName\n- service.action.awsApiCallAction.remoteIpDetails.country.countryName\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.awsApiCallAction.remoteIpDetails.ipAddressV6\n- service.action.awsApiCallAction.remoteIpDetails.organization.asn\n- service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg\n- service.action.awsApiCallAction.serviceName\n- service.action.dnsRequestAction.domain\n- service.action.dnsRequestAction.domainWithSuffix\n- service.action.networkConnectionAction.blocked\n- service.action.networkConnectionAction.connectionDirection\n- service.action.networkConnectionAction.localPortDetails.port\n- service.action.networkConnectionAction.protocol\n- service.action.networkConnectionAction.remoteIpDetails.city.cityName\n- service.action.networkConnectionAction.remoteIpDetails.country.countryName\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV4\n- service.action.networkConnectionAction.remoteIpDetails.ipAddressV6\n- service.action.networkConnectionAction.remoteIpDetails.organization.asn\n- service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg\n- service.action.networkConnectionAction.remotePortDetails.port\n- service.action.awsApiCallAction.remoteAccountDetails.affiliated\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV4\n- service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV6\n- service.action.kubernetesApiCallAction.namespace\n- service.action.kubernetesApiCallAction.remoteIpDetails.organization.asn\n- service.action.kubernetesApiCallAction.requestUri\n- service.action.kubernetesApiCallAction.statusCode\n- service.action.networkConnectionAction.localIpDetails.ipAddressV4\n- service.action.networkConnectionAction.localIpDetails.ipAddressV6\n- service.action.networkConnectionAction.protocol\n- service.action.awsApiCallAction.serviceName\n- service.action.awsApiCallAction.remoteAccountDetails.accountId\n- service.additionalInfo.threatListName\n- service.resourceRole\n- resource.eksClusterDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.name\n- resource.kubernetesDetails.kubernetesWorkloadDetails.namespace\n- resource.kubernetesDetails.kubernetesUserDetails.username\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.image\n- resource.kubernetesDetails.kubernetesWorkloadDetails.containers.imagePrefix\n- service.ebsVolumeScanDetails.scanId\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.name\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.severity\n- service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.filePaths.hash\n- service.malwareScanDetails.threats.name\n- resource.ecsClusterDetails.name\n- resource.ecsClusterDetails.taskDetails.containers.image\n- resource.ecsClusterDetails.taskDetails.definitionArn\n- resource.containerDetails.image\n- resource.rdsDbInstanceDetails.dbInstanceIdentifier\n- resource.rdsDbInstanceDetails.dbClusterIdentifier\n- resource.rdsDbInstanceDetails.engine\n- resource.rdsDbUserDetails.user\n- resource.rdsDbInstanceDetails.tags.key\n- resource.rdsDbInstanceDetails.tags.value\n- service.runtimeDetails.process.executableSha256\n- service.runtimeDetails.process.name\n- service.runtimeDetails.process.name\n- 
resource.lambdaDetails.functionName\n- resource.lambdaDetails.functionArn\n- resource.lambdaDetails.tags.key\n- resource.lambdaDetails.tags.value",
 "patternProperties": {
 "^[a-zA-Z0-9]+$": {
 "$ref": "#/definitions/AWS::GuardDuty::Filter.Condition"
 }
 },
@@ -113405,12 +113405,12 @@
 "additionalProperties": false,
 "properties": {
 "Key": {
- "markdownDescription": "",
+ "markdownDescription": "The tag key.",
 "title": "Key",
 "type": "string"
 },
 "Value": {
- "markdownDescription": "",
+ "markdownDescription": "The tag value.",
 "title": "Value",
 "type": "string"
 }
@@ -113521,12 +113521,12 @@
 "additionalProperties": false,
 "properties": {
 "Key": {
- "markdownDescription": "",
+ "markdownDescription": "The tag key.",
 "title": "Key",
 "type": "string"
 },
 "Value": {
- "markdownDescription": "",
+ "markdownDescription": "The tag value.",
 "title": "Value",
 "type": "string"
 }
@@ -113578,7 +113578,7 @@
 "type": "string"
 },
 "InvitationId": {
- "markdownDescription": "The ID of the invitation that is sent to the account designated as a member account. You can find the invitation ID by using the ListInvitation action of the GuardDuty API.",
+ "markdownDescription": "The ID of the invitation that is sent to the account designated as a member account. You can find the invitation ID by running the [ListInvitations](https://docs.aws.amazon.com/guardduty/latest/APIReference/API_ListInvitations.html) API operation, which is described in the *GuardDuty API Reference* .",
 "title": "InvitationId",
 "type": "string"
 },
@@ -113807,12 +113807,12 @@
 "additionalProperties": false,
 "properties": {
 "Key": {
- "markdownDescription": "",
+ "markdownDescription": "The tag key.",
 "title": "Key",
 "type": "string"
 },
 "Value": {
- "markdownDescription": "",
+ "markdownDescription": "The tag value.",
 "title": "Value",
 "type": "string"
 }
@@ -133697,12 +133697,12 @@
 "type": "object"
 },
 "KeySpec": {
- "markdownDescription": "Specifies the type of KMS key to create. The default value, `SYMMETRIC_DEFAULT` , creates a KMS key with a 256-bit symmetric key for encryption and decryption. In China Regions, `SYMMETRIC_DEFAULT` creates a 128-bit symmetric key that uses SM4 encryption. You can't change the `KeySpec` value after the KMS key is created. For help choosing a key spec for your KMS key, see [Choosing a KMS key type](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose.html) in the *AWS Key Management Service Developer Guide* .\n\nThe `KeySpec` property determines the type of key material in the KMS key and the algorithms that the KMS key supports. To further restrict the algorithms that can be used with the KMS key, use a condition key in its key policy or IAM policy. For more information, see [AWS KMS condition keys](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms) in the *AWS Key Management Service Developer Guide* .\n\n> If you change the value of the `KeySpec` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. > [AWS services that are integrated with AWS KMS](https://docs.aws.amazon.com/kms/features/#AWS_Service_Integration) use symmetric encryption KMS keys to protect your data. These services do not support encryption with asymmetric KMS keys.
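Returning to the `Criterion` map documented above, the severity mapping can be exercised with a minimal filter sketch; the detector ID is a placeholder, and only the `severity` criterion is shown:

```json
"HighSeverityFilter": {
  "Type": "AWS::GuardDuty::Filter",
  "Properties": {
    "DetectorId": "detector-id-EXAMPLE",
    "Name": "high-severity-findings",
    "FindingCriteria": {
      "Criterion": {
        "severity": { "Eq": ["7", "8", "9"] }
      }
    }
  }
}
```

Per the list above, `["7", "8", "9"]` selects the *High* severity band.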
For help determining whether a KMS key is asymmetric, see [Identifying asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide* . \n\nAWS KMS supports the following key specs for KMS keys:\n\n- Symmetric encryption key (default)\n\n- `SYMMETRIC_DEFAULT` (AES-256-GCM)\n- HMAC keys (symmetric)\n\n- `HMAC_224`\n- `HMAC_256`\n- `HMAC_384`\n- `HMAC_512`\n- Asymmetric RSA key pairs\n\n- `RSA_2048`\n- `RSA_3072`\n- `RSA_4096`\n- Asymmetric NIST-recommended elliptic curve key pairs\n\n- `ECC_NIST_P256` (secp256r1)\n- `ECC_NIST_P384` (secp384r1)\n- `ECC_NIST_P521` (secp521r1)\n- Other asymmetric elliptic curve key pairs\n\n- `ECC_SECG_P256K1` (secp256k1), commonly used for cryptocurrencies.\n- SM2 key pairs (China Regions only)\n\n- `SM2`", + "markdownDescription": "Specifies the type of KMS key to create. The default value, `SYMMETRIC_DEFAULT` , creates a KMS key with a 256-bit symmetric key for encryption and decryption. In China Regions, `SYMMETRIC_DEFAULT` creates a 128-bit symmetric key that uses SM4 encryption. You can't change the `KeySpec` value after the KMS key is created. For help choosing a key spec for your KMS key, see [Choosing a KMS key type](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose.html) in the *AWS Key Management Service Developer Guide* .\n\nThe `KeySpec` property determines the type of key material in the KMS key and the algorithms that the KMS key supports. To further restrict the algorithms that can be used with the KMS key, use a condition key in its key policy or IAM policy. For more information, see [AWS KMS condition keys](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms) in the *AWS Key Management Service Developer Guide* .\n\n> If you change the value of the `KeySpec` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. > [AWS services that are integrated with AWS KMS](https://docs.aws.amazon.com/kms/features/#AWS_Service_Integration) use symmetric encryption KMS keys to protect your data. These services do not support encryption with asymmetric KMS keys. For help determining whether a KMS key is asymmetric, see [Identifying asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide* . 
\n\nAWS KMS supports the following key specs for KMS keys:\n\n- Symmetric encryption key (default)\n\n- `SYMMETRIC_DEFAULT` (AES-256-GCM)\n- HMAC keys (symmetric)\n\n- `HMAC_224`\n- `HMAC_256`\n- `HMAC_384`\n- `HMAC_512`\n- Asymmetric RSA key pairs (encryption and decryption *or* signing and verification)\n\n- `RSA_2048`\n- `RSA_3072`\n- `RSA_4096`\n- Asymmetric NIST-recommended elliptic curve key pairs (signing and verification *or* deriving shared secrets)\n\n- `ECC_NIST_P256` (secp256r1)\n- `ECC_NIST_P384` (secp384r1)\n- `ECC_NIST_P521` (secp521r1)\n- Other asymmetric elliptic curve key pairs (signing and verification)\n\n- `ECC_SECG_P256K1` (secp256k1), commonly used for cryptocurrencies.\n- SM2 key pairs (encryption and decryption *or* signing and verification *or* deriving shared secrets)\n\n- `SM2` (China Regions only)", "title": "KeySpec", "type": "string" }, "KeyUsage": { - "markdownDescription": "Determines the [cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations) for which you can use the KMS key. The default value is `ENCRYPT_DECRYPT` . This property is required for asymmetric KMS keys and HMAC KMS keys. You can't change the `KeyUsage` value after the KMS key is created.\n\n> If you change the value of the `KeyUsage` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. \n\nSelect only one valid value.\n\n- For symmetric encryption KMS keys, omit the property or specify `ENCRYPT_DECRYPT` .\n- For asymmetric KMS keys with RSA key material, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For asymmetric KMS keys with ECC key material, specify `SIGN_VERIFY` .\n- For asymmetric KMS keys with SM2 (China Regions only) key material, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For HMAC KMS keys, specify `GENERATE_VERIFY_MAC` .", + "markdownDescription": "Determines the [cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations) for which you can use the KMS key. The default value is `ENCRYPT_DECRYPT` . This property is required for asymmetric KMS keys and HMAC KMS keys. You can't change the `KeyUsage` value after the KMS key is created.\n\n> If you change the value of the `KeyUsage` property on an existing KMS key, the update request fails, regardless of the value of the [`UpdateReplacePolicy` attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html) . This prevents you from accidentally deleting a KMS key by changing an immutable property value. 
\n\nSelect only one valid value.\n\n- For symmetric encryption KMS keys, omit the parameter or specify `ENCRYPT_DECRYPT` .\n- For HMAC KMS keys (symmetric), specify `GENERATE_VERIFY_MAC` .\n- For asymmetric KMS keys with RSA key pairs, specify `ENCRYPT_DECRYPT` or `SIGN_VERIFY` .\n- For asymmetric KMS keys with NIST-recommended elliptic curve key pairs, specify `SIGN_VERIFY` or `KEY_AGREEMENT` .\n- For asymmetric KMS keys with `ECC_SECG_P256K1` key pairs, specify `SIGN_VERIFY` .\n- For asymmetric KMS keys with SM2 key pairs (China Regions only), specify `ENCRYPT_DECRYPT` , `SIGN_VERIFY` , or `KEY_AGREEMENT` .",
 "title": "KeyUsage",
 "type": "string"
 },
@@ -227361,7 +227361,7 @@
 "type": "object"
 },
 "NodeType": {
- "markdownDescription": "The node type to be provisioned for the cluster. For information about node types, go to [Working with Clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#how-many-nodes) in the *Amazon Redshift Cluster Management Guide* .\n\nValid Values: `ds2.xlarge` | `ds2.8xlarge` | `dc1.large` | `dc1.8xlarge` | `dc2.large` | `dc2.8xlarge` | `ra3.xlplus` | `ra3.4xlarge` | `ra3.16xlarge`",
+ "markdownDescription": "The node type to be provisioned for the cluster. For information about node types, go to [Working with Clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#how-many-nodes) in the *Amazon Redshift Cluster Management Guide* .\n\nValid Values: `dc2.large` | `dc2.8xlarge` | `ra3.xlplus` | `ra3.4xlarge` | `ra3.16xlarge`",
 "title": "NodeType",
 "type": "string"
 },
@@ -227376,7 +227376,7 @@
 "type": "string"
 },
 "Port": {
- "markdownDescription": "The port number on which the cluster accepts incoming connections.\n\nThe cluster is accessible only via the JDBC and ODBC connection strings. Part of the connection string requires the port on which the cluster will listen for incoming connections.\n\nDefault: `5439`\n\nValid Values:\n\n- For clusters with ra3 nodes - Select a port within the ranges `5431-5455` or `8191-8215` . (If you have an existing cluster with ra3 nodes, it isn't required that you change the port to these ranges.)\n- For clusters with ds2 or dc2 nodes - Select a port within the range `1150-65535` .",
+ "markdownDescription": "The port number on which the cluster accepts incoming connections.\n\nThe cluster is accessible only via the JDBC and ODBC connection strings. Part of the connection string requires the port on which the cluster will listen for incoming connections.\n\nDefault: `5439`\n\nValid Values:\n\n- For clusters with ra3 nodes - Select a port within the ranges `5431-5455` or `8191-8215` .
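Picking up the `KeySpec` and `KeyUsage` pairing described above, a signing-key sketch that combines one valid pair follows; the key policy is a common root-delegation statement, included here as an illustrative assumption rather than anything this patch prescribes:

```json
"SigningKey": {
  "Type": "AWS::KMS::Key",
  "Properties": {
    "KeySpec": "RSA_2048",
    "KeyUsage": "SIGN_VERIFY",
    "KeyPolicy": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowAccountAdministration",
          "Effect": "Allow",
          "Principal": { "AWS": { "Fn::Sub": "arn:aws:iam::${AWS::AccountId}:root" } },
          "Action": "kms:*",
          "Resource": "*"
        }
      ]
    }
  }
}
```

Because both properties are immutable, changing either on an existing key fails the update, as the note above explains.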
(If you have an existing cluster with ra3 nodes, it isn't required that you change the port to these ranges.)\n- For clusters with dc2 nodes - Select a port within the range `1150-65535` .",
 "title": "Port",
 "type": "number"
 },
@@ -228291,7 +228291,7 @@
 },
 "TargetAction": {
 "$ref": "#/definitions/AWS::Redshift::ScheduledAction.ScheduledActionType",
- "markdownDescription": "A JSON format string of the Amazon Redshift API operation with input parameters.\n\n\" `{\\\"ResizeCluster\\\":{\\\"NodeType\\\":\\\"ds2.8xlarge\\\",\\\"ClusterIdentifier\\\":\\\"my-test-cluster\\\",\\\"NumberOfNodes\\\":3}}` \".",
+ "markdownDescription": "A JSON format string of the Amazon Redshift API operation with input parameters.\n\n\" `{\\\"ResizeCluster\\\":{\\\"NodeType\\\":\\\"ra3.4xlarge\\\",\\\"ClusterIdentifier\\\":\\\"my-test-cluster\\\",\\\"NumberOfNodes\\\":3}}` \".",
 "title": "TargetAction"
 }
 },
@@ -236293,7 +236293,7 @@
 "additionalProperties": false,
 "properties": {
 "PartitionDateSource": {
- "markdownDescription": "Specifies the partition date source for the partitioned prefix. PartitionDateSource can be EventTime or DeliveryTime.",
+ "markdownDescription": "Specifies the partition date source for the partitioned prefix. `PartitionDateSource` can be `EventTime` or `DeliveryTime` .\n\nFor `DeliveryTime` , the time in the log file names corresponds to the delivery time for the log files.\n\nFor `EventTime` , the logs delivered are for a specific day only. The year, month, and day correspond to the day on which the event occurred, and the hour, minutes and seconds are set to 00 in the key.",
 "title": "PartitionDateSource",
 "type": "string"
 },
@@ -240925,7 +240925,7 @@
 "type": "object"
 },
 "RedrivePolicy": {
- "markdownDescription": "The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:\n\n- `deadLetterTargetArn` : The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of `maxReceiveCount` is exceeded.\n- `maxReceiveCount` : The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the `ReceiveCount` for a message exceeds the `maxReceiveCount` for a queue, Amazon SQS moves the message to the dead-letter-queue.\n\n> The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue. \n\n*JSON*\n\n`{ \"deadLetterTargetArn\" : *String* , \"maxReceiveCount\" : *Integer* }`\n\n*YAML*\n\n`deadLetterTargetArn : *String*`\n\n`maxReceiveCount : *Integer*`",
+ "markdownDescription": "The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:\n\n- `deadLetterTargetArn` : The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of `maxReceiveCount` is exceeded.\n- `maxReceiveCount` : The number of times a message is received by a consumer of the source queue before being moved to the dead-letter queue. When the `ReceiveCount` for a message exceeds the `maxReceiveCount` for a queue, Amazon SQS moves the message to the dead-letter-queue.\n\n> The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
\n\n*JSON*\n\n`{ \"deadLetterTargetArn\" : *String* , \"maxReceiveCount\" : *Integer* }`\n\n*YAML*\n\n`deadLetterTargetArn : *String*`\n\n`maxReceiveCount : *Integer*`", "title": "RedrivePolicy", "type": "object" }, @@ -254457,18 +254457,12 @@ "additionalProperties": false, "properties": { "BlockPublicPolicy": { - "markdownDescription": "Specifies whether to block resource-based policies that allow broad access to the secret. By default, Secrets Manager blocks policies that allow broad access, for example those that use a wildcard for the principal.", - "title": "BlockPublicPolicy", "type": "boolean" }, "ResourcePolicy": { - "markdownDescription": "A JSON-formatted string for an AWS resource-based policy. For example policies, see [Permissions policy examples](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html) .", - "title": "ResourcePolicy", "type": "object" }, "SecretId": { - "markdownDescription": "The ARN or name of the secret to attach the resource-based policy.\n\nFor an ARN, we recommend that you specify a complete ARN rather than a partial ARN.", - "title": "SecretId", "type": "string" } },
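Finally, the `RedrivePolicy` shape spelled out above drops into a template as a JSON object. A minimal source-queue and dead-letter-queue pair, with illustrative resource names:

```json
"SourceQueue": {
  "Type": "AWS::SQS::Queue",
  "Properties": {
    "RedrivePolicy": {
      "deadLetterTargetArn": { "Fn::GetAtt": ["DeadLetterQueue", "Arn"] },
      "maxReceiveCount": 5
    }
  }
},
"DeadLetterQueue": {
  "Type": "AWS::SQS::Queue"
}
```

Both queues here are standard; per the note above, a FIFO source queue would need a FIFO dead-letter queue.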